[THIN] Re: OT: VMWare ESX 3.x Internal / DMZ networks on same physical server

  • From: "Steve Greenberg" <steveg@xxxxxxxxxxxxxx>
  • To: <thin@xxxxxxxxxxxxx>
  • Date: Sat, 24 Feb 2007 10:51:58 -0700

Great points Rick.

 

I love this list, what an incredible level of discussion goes on here!!
Thanks to all..

 

We have all been doing this long enough to know that each situation is a
little different, and the priorities of the organization tend to pull the
architect/designer more toward one solution than another. I am working on a
system design now in which the servers will SAN boot and run standard
Windows services and PS servers. The desktops will move to diskless thin
clients. In this type of case you really are eliminating spindles across the
organization and gaining all the efficiencies that Joe was pointing to. I am
very excited to see how well this all works out. The point about eggs in one
basket is a very good one and needs to be seriously considered before
committing. In this one case, it is the right choice, again for
organizational reasons.

 

I love having all these options available; that's what keeps it fun and
interesting!!

 

Steve Greenberg

Thin Client Computing

34522 N. Scottsdale Rd D8453

Scottsdale, AZ 85262

(602) 432-8649

www.thinclient.net

steveg@xxxxxxxxxxxxxx

 

  _____  

From: thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx] On Behalf
Of Rick Mack
Sent: Friday, February 23, 2007 10:55 PM
To: thin@xxxxxxxxxxxxx
Subject: [THIN] Re: OT: VMWare ESX 3.x Internal / DMZ networks on same
physical server

 

Hi Joe,

 

I hope this isn't interpreted as a religious discussion because it's not
meant to be. SANs have an important role to play in business and VMWare
rocks. SAN disks give you much better throughput than local hard disks.
Period, no argument. SAN storage is good, my customers use it and I promote
it like crazy, in the right places.

 

BUT we were talking about boot on SAN and the advisability of using SAN
disks as system disks.

 

The first point to emphasize is that throughput and latency are different
things. It's the difference between disk seek time and disk transfer rate:
different units (time vs. data per unit time), different meanings.
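
To put rough numbers on that (illustrative figures of mine, not measurements
from anyone's setup here), a quick Python sketch shows why per-I/O latency,
not link bandwidth, dominates small random reads:

# Rough, illustrative numbers only: they show why a couple of extra
# milliseconds of per-I/O latency hurts small random reads far more than
# any difference in raw transfer rate.

def effective_mb_per_sec(io_size_kb, latency_ms, transfer_mb_per_sec):
    """Effective throughput when every I/O pays a fixed latency cost."""
    transfer_time_ms = (io_size_kb / 1024.0) / transfer_mb_per_sec * 1000.0
    time_per_io_ms = latency_ms + transfer_time_ms
    ios_per_sec = 1000.0 / time_per_io_ms
    return ios_per_sec * io_size_kb / 1024.0

# 8 KB random reads: a local disk (~6 ms seek) vs the same disk behind a
# fabric that adds ~2 ms per I/O. Both paths can move 100 MB/s sequentially.
print(effective_mb_per_sec(8, 6.0, 100.0))     # ~1.3 MB/s
print(effective_mb_per_sec(8, 8.0, 100.0))     # ~1.0 MB/s
# Large sequential reads: the latency is amortised and throughput wins.
print(effective_mb_per_sec(1024, 6.0, 100.0))  # ~62 MB/s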

 

I guess I'd like to dispute a couple of the bullet points you made and maybe
concede some stuff as well.

 

(1) If we're talking system disks (Boot off SAN vs local disks), SAN disks
waste less disk space. 

 

(2) SAN volumes typically have higher latency than local disks. There are
more players between the ball and the goal.

 

(3) For large data volumes, SANs use the same hard disks we use as local
disks (unless of course you got a really "good" deal and are using SATA or
parallel IDE disks). Same power draw, same heat production. Since we've got
lots of redundancy built in, there are extra power supplies, controllers,
fans, cache electronics, etc. There's no way a 72 GB SAN volume could use
less power or produce less heat than a 72 GB local disk. How can it, when
the underlying technology is exactly the same and you've got more supporting
infrastructure? Also, let's not forget about those spare drives that are in
the SAN and powered up just in case.

 

(4) HP and IBM, as an example, are using 2.5" SAS disks on-board on blades
and in the front of 1RU systems. They don't take up any extra rack space.
SAN disks are in big cabinets and pizza boxes, generally in their own
rack/cabinet.

 

SANs have much greater hardware redundancy, and that means they ought to be
a lot more reliable than a bunch of local disks. This is generally true, but
how often have you had both disks in a mirrored pair fail?

 

Admittedly, these days if a SAN dies it really doesn't matter whether your
servers stay up or not, but that's not the point. If your SAN dies because
some bozo screwed up a firmware upgrade or decided to re-arrange the LUNs,
you've just discovered that you've got all your eggs in one basket. If we've
used boot on SAN extensively, we've got no domain controller, no terminal
servers, no file server, no Exchange, no SQL, nothing.

 

Even if you've got a fully replicated SAN on your DR site with up-to-date
synchronized data, you can't use the data unless you do a complete failover
to the DR site with all your systems. And if that isn't an option, how long
will it take before you've restored everything once the SAN is reconfigured
and running again? 

 

Don't get me wrong, I don't have a better solution, and having everything on
disparate sets of local disks is a total nightmare to support compared to
SAN storage. I'm not biased against SANs, I just think it's important to
use technology appropriately and as efficiently as possible. SANs give you
tremendous flexibility in data storage, good redundancy etc., but as I've
stated above, it comes at a cost. You don't save power or cooling costs, and
you don't save space.

 

Google have shown us that there are other possible architectural models that
don't need SANs. Operating system partitioning has been around for a long
time, and products like Virtuozzo are going to start eroding the VMWare
market because they're just that much more efficient. PlateSpin lets you do
P2P migrations (not that efficient yet, but just wait) that can be used for
DR redundancy etc. Heck, most of the time we use pitiful active/passive
clustering when there's stuff around like PolyServe that makes clustering
actually work.

 

There really is no technology solution that is a 100% fit for all problems.
VMWare isn't the answer to everything, and SANs aren't the answer to
everything. We have to stay open-minded and try to use what's available in
the best way possible.

 

regards,

 

Rick 

I'm going to have to disagree with you and Neil on this one...

The latency issue with FC vs SCSI is negligible depending on what your setup
is.  Sure, FC can have higher latency if all you're doing is mapping 2
physical disks (mirrored) to one LUN.  That's old-school technology.  Modern
SANs aggregate disks into pools that can be carved out.  Much of SAN and
SCSI performance depends on the hardware used to implement it.   I've seen a
1Gb SAN push a sustained 97MB/s for reads while two mirrored 10k SAS drives
could only sustain 11MB/s for reads.  Extremes perhaps, but real-world
results.
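
For context on those figures (and assuming the "1Gb SAN" link is 1 Gb/s
Fibre Channel or gigabit iSCSI, which isn't stated above), a quick
back-of-the-envelope check suggests 97 MB/s is essentially the link running
at line rate:

# Back-of-the-envelope sanity check on the 97 MB/s figure. Assumptions are
# mine, not Joe's: the "1Gb SAN" link is either 1 Gb/s Fibre Channel
# (1.0625 Gbaud, 8b/10b encoded) or gigabit Ethernet iSCSI (1.0 Gb/s raw).
fc_payload_mb_s = 1.0625e9 * (8 / 10) / 8 / 1e6  # ~106 MB/s before FC framing
gbe_raw_mb_s = 1.0e9 / 8 / 1e6                   # 125 MB/s before TCP/IP/iSCSI overhead
print(round(fc_payload_mb_s), round(gbe_raw_mb_s))
# Either way the ceiling is roughly 100-125 MB/s, so a sustained 97 MB/s read
# is basically a saturated link, while 11 MB/s from a mirrored SAS pair points
# at small/random I/O rather than the drives' sequential limit.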

Yes, SANs are more expensive than local disks, but there are other
considerations to be made:
  Heat and Cooling costs
  Power Draw
  Space/Real Estate
  Business Continuity and Recoverability
  Blades or Pizza Boxes 

Just like the blades vs pizza boxes debate, each has its advantages and
disadvantages.  However, there is so much more I can do with a SAN
infrastructure than I can do with local storage.

Add VMware and BAM! (I borrowed this term from Emeril) the value that can be
added to an organization skyrockets.  It's no mystery why VMware and
virtualization have taken off.

As far as cost effectiveness goes, that is debatable... When you factor in
cooling, power draw and business continuity, it's hard to argue. Again, it
all depends on what you're virtualizing and how important it is to the
business.

If you want a cost-effective virtualization solution, then I would suggest
looking at Virtuozzo from SWsoft.

Joe

On 2/23/07, Rick Mack <ulrich.mack@xxxxxxxxx> wrote:



Hi Steve,

 

VMs aside, there are still a couple of significant areas where SAN disks
just don't hack it as a system disk. 

 

The first is latency, which can be 4-5 times worse on a SAN "disk" (overhead
of the fabric switch and other infrastructure) compared to local disks. I
know that DR etc. is a lot easier with SAN disks than local hard disks, but
if you decide to go SAN boot and still want real performance, then you'd
better at least consider using the local hard disks for paging, spooling and
user profiles.
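
As a rough sketch of the paging part of that suggestion (this assumes a
Windows server with a spare local volume; the "P:" drive letter and the
4096 MB sizes are placeholders, not recommendations), the pagefile can be
repointed at a local disk via the standard Memory Management registry value:

# Hedged sketch only: move the Windows pagefile to a local volume on a server
# that boots from SAN. "P:" and the 4096 MB initial/maximum sizes are
# placeholders for your own local disk and sizing policy. Run as an
# administrator and reboot for the change to take effect.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
    # PagingFiles is REG_MULTI_SZ: one "path initial-size max-size" string
    # per pagefile.
    winreg.SetValueEx(k, "PagingFiles", 0, winreg.REG_MULTI_SZ,
                      ["P:\\pagefile.sys 4096 4096"])

The spooler directory and user profile paths can be relocated to the same
local volume in a similar way.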

 

The second issue is price. Even with 72 GB disks where most of the disk
space is wasted, SAN disk space still costs quite a bit more than RAID
mirrored local drives. 

 

I have a suspicion that there will be a time in the near future when people
will start realising that VMWare isn't nearly as cost effective as everyone
argues. Please don't get me wrong, I love the idea of VMWare and just
wouldn't do without it. It's just that VMWare isn't really about saving
money once we get away from a development environment.

 

And until we can overcome disk and network I/O bottlenecks, having more CPU
power to play with just isn't all that critical. Of course there are things
like Vista/Longhorn's flash drive read/write caching that even things up a
bit, but what we really need is the next generation of hard disks with
obscenely large on-board caches. That'll let them run at close to interface
speeds (e.g. up to 6 Gb/s per disk on SAS).

 

regards,

 

Rick

 

On 2/23/07, Steve Greenberg <steveg@xxxxxxxxxxxxxx> wrote:

Nice! This is one of those mind-set changes that we periodically have to go
through. I am going through one right now with the idea of booting servers
off the SAN; in the old days this was flaky, but I have to update my
thinking and accept that it works and is trustworthy!

 

Steve Greenberg

Thin Client Computing

34522 N. Scottsdale Rd D8453

Scottsdale, AZ 85262

(602) 432-8649

www.thinclient.net

steveg@xxxxxxxxxxxxxx 

 




-- 
Ulrich Mack
Commander Australia 
