[THIN] Re: OT: VMWare ESX 3.x Internal / DMZ networks on same physical server

  • From: "Joe Shonk" <joe.shonk@xxxxxxxxx>
  • To: thin@xxxxxxxxxxxxx
  • Date: Fri, 23 Feb 2007 13:10:18 -0700

I'm going to have to disagree with you and Neil on this one...

The latency issue with FC vs SCSI is negligible depending on what your setup
is.  Sure, FC can have higher latency if all you're doing is mapping 2 physical
disks (mirrored) to one LUN.  That's old-school technology.  Modern SANs
aggregate disks into pools that can be carved out.  Much of SAN and SCSI
performance depends on the hardware used to implement it.  I've seen a 1Gb SAN
push a sustained 97MB/s for reads while two mirrored 10k SAS drives could
only sustain 11MB/s for reads.  Extremes perhaps, but real-world results.
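
(If you want to sanity-check numbers like that on your own gear, a
quick-and-dirty sequential read test gets you in the ballpark.  The Python
sketch below is just an illustration, not the tool I used; the file path,
block size and total size are placeholders, and because it reads through the
OS cache you'd want a test file much larger than RAM, or a real benchmark
tool like Iometer, for honest numbers.)

# Rough sequential-read throughput check (illustrative only).
# PATH, BLOCK and TOTAL are placeholders -- point PATH at a large file on
# the disk or LUN you want to test.  Reads go through the OS cache, so use
# a file much bigger than RAM for meaningful results.
import time

PATH = r"D:\testfile.bin"    # placeholder: big file on the volume under test
BLOCK = 1024 * 1024          # 1 MB per read
TOTAL = 2 * 1024 ** 3        # stop after 2 GB

read = 0
start = time.time()
with open(PATH, "rb") as f:
    while read < TOTAL:
        chunk = f.read(BLOCK)
        if not chunk:
            break
        read += len(chunk)
elapsed = time.time() - start
print("sustained read: %.1f MB/s" % (read / elapsed / 1024 ** 2))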

Yes, SANs are more expensive than local disks, but there are other
considerations to be made:
 Heat and Cooling costs
 Power Draw
 Space/Real Estate
 Business Continuity and Recoverability
 Blades or Pizza Boxes

Just like the Blades vs. Pizza Boxes debate, each has its advantages and
disadvantages.  However, there is so much more I can do with a SAN
infrastructure than I can do with local storage.

Add VMware and BAM! (borrowed this term from Emeril) the value that can be
added to an organization skyrockets.  It's no mystery why VMware and
virtualization have taken off.

As far as cost effectiveness goes, that is debatable... When you factor in
cooling, power draw and business continuity, it's hard to argue.  Again, it
all depends on what you're virtualizing and how important it is to the business.

If you want a cost-effective virtualization solution, then I would suggest
looking at Virtuozzo from SWSoft.

Joe


On 2/23/07, Rick Mack <ulrich.mack@xxxxxxxxx> wrote:

Hi Steve,

VMs aside, there are still a couple of significant areas where SAN disks
just don't hack it as system disks.

The first is latency, which can be 4-5 times worse on a SAN "disk"
(overhead of the fabric switch and other infrastructure) compared to local
disks. I know that DR etc. is a lot easier with SAN disks than local hard
disks, but if you decide to go SAN boot and still want real performance,
then you'd better at least consider using the local hard disks for paging,
spooling and user profiles.
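
(As a rough illustration of that latency gap, the Python sketch below times
random 4 KB reads from a test file; run it once against a file on local disk
and once against a file on a SAN LUN and compare the per-read times.  The
path and read count are placeholders, and the test file needs to be much
larger than RAM or the OS cache will hide the fabric overhead.)

# Rough random-read latency check (illustrative only; PATH is a placeholder).
# Run once against a local disk and once against a SAN LUN and compare.
# Use a test file much larger than RAM, otherwise the OS cache hides the
# real disk/fabric latency.
import os
import random
import time

PATH = r"E:\testfile.bin"    # placeholder: file on the disk under test
READS = 2000
SIZE = os.path.getsize(PATH)

times = []
with open(PATH, "rb") as f:
    for _ in range(READS):
        f.seek(random.randrange(0, SIZE - 4096))
        t0 = time.time()
        f.read(4096)
        times.append(time.time() - t0)

times.sort()
print("avg %.2f ms, 95th percentile %.2f ms" %
      (sum(times) / len(times) * 1000, times[int(len(times) * 0.95)] * 1000))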

The second issue is price. Even with 72 GB disks where most of the disk
space is wasted, SAN disk space still costs quite a bit more than RAID
mirrored local drives.

I have a suspicion that there will be a time in the near future when
people will start realising that VMware isn't nearly as cost-effective
as everyone argues. Please don't get me wrong, I love the idea of VMware and
just wouldn't do without it. It's just that VMware isn't really about saving
money once we get away from a development environment.

And until we can overcome disk and network I/O bottlenecks, having more
CPU power to play with just isn't all that critical. Of course there are
things like Vista/Longhorn's flash drive read/write caching that even things
up a bit, but what we really need is the next generation of hard disks that
have obscenely large on-board caches. That'll let them run at close to
interface speeds (e.g. up to 6 Gb/s per disk on SAS).

regards,

Rick

On 2/23/07, Steve Greenberg <steveg@xxxxxxxxxxxxxx> wrote:

>  Nice! This is one of those mindset changes that we periodically have
> to go through. I am going through one right now with the idea of booting
> servers off the SAN; in the old days this was flaky, but I have to update my
> thinking and accept that it works and is trustworthy!
>
>
>
> Steve Greenberg
>
> Thin Client Computing
>
> 34522 N. Scottsdale Rd D8453
>
> Scottsdale, AZ 85262
>
> (602) 432-8649
>
> www.thinclient.net
>
> steveg@xxxxxxxxxxxxxx
>
