[THIN] Re: VMWare Farm

  • From: "Greg Reese" <gareese@xxxxxxxxx>
  • To: thin@xxxxxxxxxxxxx
  • Date: Tue, 1 Aug 2006 09:38:55 +1200

I think your cost estimates are a little too conservative for putting in two
VMware ESX boxes.  It's your server room and budget, though.

Consider dual HBAs in the servers for fault tolerance and redundancy.
Consider more than 8 GB of RAM per VMware host, especially if you want to
run your entire farm virtual.  Those two things alone will eat up that cost
difference.

My VMware hosts are quad dual-core Opterons with 32 GB of RAM, and I still
don't consider them good enough for my main Citrix servers.  I have a couple
of PS 4 machines hosted there that get light use for screwball apps I don't
want on my main servers.  Two VMware servers in my environment were well
over $100k.

For running Citrix PS4, you can do better than $4,000 per server.  I paid
$2,500 per server for the last ones I bought, and they were maxed out for
performance.

I don't see VMware as a money saver until you scale big.  Really big.  500
servers big.  It has its place and is a great thing, but I would recommend
against hosting your entire Citrix farm on it.  Even then, unless you have
a burning need for the benefits of VMotion, the more basic versions of
VMware are good enough.

Greg


On 8/1/06, Selinger, Stephen <SSelinger@xxxxxxxxx> wrote:

Hmm… good points, but let's take a further look at what has worked for me. It might not work in your environment, but it has worked great here!



Please note that I am not talking about Citrix on VMWARE in the following
message.



Well, I have 20 virtual machines running across three Dell 2850 servers in
my environment. They are all connected to the SAN and have run without any
performance problem for the last two years.  They haven't been rebooted for
228 days. These are production virtual machines that are used every day. I
have not yet had a single issue with the virtualization platform, and no
vendor has refused to provide support (including MS). Let's compare some
quick numbers.



If I had purchased 20 servers for this environment, it would have cost the
following. (Please note Canadian dollars, which are pretty much US dollars
now :-))





*Plan A - Physical Servers*

20 - DELL 1850        - $4,000 each = $80,000 (with 3-year warranty)
1  - Rack for servers - $5,000 (I was out of rack space in my environment)
1  - Blade for switch - $5,000 (network guys are out of ports)

Total Cost = $90,000.00



** Note: I would probably have needed to purchase more AC and power, but I
will not include those numbers in my calculations, as I do not directly pay
for them.





*Plan B - ESX Servers*

2 - DELL 2850, 8 GB RAM   - $15,000 each = $30,000
2 - HBA for SAN           - $ 1,000 each = $ 2,000
2 - SAN ports             - $ 2,500 each = $ 5,000
2 - ESX license           - $ 7,500 each = $15,000
2 - ESX support, 3 years  -                $ 9,000
1 - Disk for SAN (300 GB) -                $10,000

Total over 3 years = $71,000.00





OK, so you say: wow, you only saved $19,000, big deal?



Well, it is a big deal when you consider that none of the physical servers
in Plan A are connected to the SAN. They are standalone, and they live and
die alone. If I wanted to compare apples to apples, I would put HBAs into
each server and attach them to two SAN switch ports. This is not cheap. So,
assuming that an HBA costs $1,000 and a SAN port costs $2,500 (which are the
costs in my environment), these would be the updated numbers:



*Plan A - Physical Servers with SAN Disk*

20 - Standard dual-proc servers = $ 80,000.00
1  - Rack for servers           = $  5,000.00 (I was out of rack space in my environment)
1  - Blade for switch           = $  5,000.00 (network guys are out of ports)
20 - HBAs for servers           = $ 20,000.00
20 - SAN switch ports           = $ 50,000.00
1  - Disk for SAN (300 GB)      = $ 10,000.00

Total Cost = $170,000







*Plan B - ESX Servers*

2 - DELL 2850, 8 GB RAM   - $15,000 each = $30,000
2 - HBA for SAN           - $ 1,000 each = $ 2,000
2 - SAN ports             - $ 2,500 each = $ 5,000
2 - ESX license           - $ 7,500 each = $15,000
2 - ESX support, 3 years  -                $ 9,000
1 - Disk for SAN (300 GB) -                $10,000

Total over 3 years = $71,000.00



Potential Cost Savings = $99,000



So you can see that if (and I say IF) the requirement was that all servers
be connected to the SAN, then the ESX solution would be cheaper by nearly
$100,000. You might not agree exactly with my numbers, but I think this
illustrates that there are tremendous cost savings to be made in an ESX
environment. We IT people have always been taught to overbuild servers just
to ensure that no one complains. I think this is the only industry where you
can overbuild everything and not get fired! If you were building office
buildings for your company that were only 5% utilized, and you wanted to
build a new building each time you hired a few more staff, you would be
fired so fast your head would spin.
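The two cost models above boil down to a simple line-item sum. Here is a
minimal sketch of that arithmetic, using only the figures quoted in this
message (your prices will certainly differ):

```python
# Rough cost comparison from the figures quoted above (CAD, 3-year horizon).
# These are one environment's numbers, not list prices.

def total(items):
    """Sum (quantity, unit_cost) line items."""
    return sum(qty * cost for qty, cost in items)

# Plan A: 20 physical servers, all SAN-attached (apples to apples).
plan_a = total([
    (20, 4_000),   # DELL 1850 servers
    (1,  5_000),   # rack
    (1,  5_000),   # switch blade
    (20, 1_000),   # HBAs
    (20, 2_500),   # SAN switch ports
    (1, 10_000),   # 300 GB of SAN disk
])

# Plan B: 2 ESX hosts sharing the same SAN disk.
plan_b = total([
    (2, 15_000),   # DELL 2850 hosts, 8 GB RAM
    (2,  1_000),   # HBAs
    (2,  2_500),   # SAN ports
    (2,  7_500),   # ESX licenses
    (1,  9_000),   # 3 years of ESX support (quoted as a lump sum above)
    (1, 10_000),   # 300 GB of SAN disk
])

print(plan_a, plan_b, plan_a - plan_b)  # 170000 71000 99000
```

Note that the bulk of the Plan A total is the per-server SAN attachment
($3,500 per box), which is exactly why the comparison swings so hard once
SAN connectivity becomes a requirement.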



There are also many soft benefits to virtualization with ESX. I will
quickly name a few:





1)       VMotion



Absolutely cool technology. Imagine moving a live production server in the
middle of the day to another piece of hardware without any downtime. Let's
say you want to install more RAM in a host server: simply VMotion all the
guests over to another server, install the RAM, and then VMotion the guests
back!



2)       Snapshots.



Take a snapshot of your server before you install the MS patch of the week.
If the patch bombs your server, simply roll back to the snapshot and all is
well!



3)       VMWARE HA



If one of your ESX servers dies, all of the VMs on that server will restart
on another ESX server. Yes, the VMs will go down, but within a few minutes
they will be back up on another system.



4)       Distributed Resource Scheduler (DRS)



You basically set up a cluster of ESX servers that your VMs live on. The
VMs are load-balanced across this hardware to ensure enough resources are
available across the farm.



I am not going to claim that this will work in every environment, as
everyone and every company is different. I would just ask that you keep your
options open and at least look at this solution. Start small with a few
test servers and build from there.



You can download a VMware ESX 3.0 evaluation from
http://www.vmware.com/download/vi/eval.html.  Remember, the eval is free,
but once you start you will never go back!



------------------------------

*From:* thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx] *On
Behalf Of *Greg Reese
*Sent:* July 31, 2006 1:50 PM

*To:* thin@xxxxxxxxxxxxx
*Subject:* [THIN] Re: VMWare Farm



I'm with Jeff on this one.  VMware has its place, but I wouldn't trust my
entire farm to it.  Web Interface, license server, test environments, etc.:
small things that don't need lots of attention or power.

5 VMware servers and licensing for ESX will also cost a small fortune, not
to mention the SAN space, etc.  If you spent $3k per server on 15 HP DL360
servers, you would be looking at $45k for the farm.  5 VMware servers, plus
support, plus VMware ESX, HBAs, SAN space, etc. will cost a lot more than
$45k and not perform as well.  It will perform OK, but once you get loaded
up with lots of users, the physical boxes will outperform.

I like the fact that running on physical boxes is a known commodity.  When
you are running things on VMware and there is a problem, VMware gets brought
up: the vendors point fingers at it, management wonders about it.  Then you
have to move the workload to a physical box to prove whether or not it is a
VMware issue.  It just becomes a hassle.  A hassle you paid more money to
have.

Greg

On 8/1/06, *Jeff Pitsch* <jepitsch@xxxxxxxxx> wrote:

This all depends on what your DR needs are but Zone Preference and
Failover will allow for automatic redirecting of clients to the DR site
without any need for you to get involved.



As for the 15 VMs, that depends on many factors.  What hardware are you
moving from and moving to?  How many users?  How much load was on your old
servers?  Have you looked at 64-bit at all?  How did you determine that 5
servers running VMware would meet your needs?  Have you done any real
testing to see if this solution would work?



Jeff Pitsch
Microsoft MVP - Terminal Server

Forums not enough?
Get support from the experts at your business
http://jeffpitschconsulting.com





On 7/31/06, *Eldon* <u2htdaab@xxxxxxxxx> wrote:

Thanks to all for providing very good info so far.  Now, which features
specifically in PS 4 would resolve my DR needs (I'm not totally up to speed
with PS 4)?  Also, isn't running 15 VMs on only 5 servers improving my farm
through consolidation?



On 7/31/06, *Jeff Pitsch* <jepitsch@xxxxxxxxx > wrote:

Hmm, I would argue that you probably have room to consolidate anyway, but
moving to a 64-bit OS and hardware would allow you to consolidate very
easily, and the features in PS4 would allow for exactly what you're looking
for from a DR perspective.  Just remember that you will not get the same
performance out of a VM that you would out of pure hardware.  Now, there are
obvious considerations here (you may be using really old hardware, etc.),
but do not be surprised if you end up running more than 15 VMs to handle the
same number of users in a virtualized environment.



As well, don't agree to anything until you can actually test all of this
out.  Anyone can promise the world; it's up to you to make sure it's
actually the world you want.  I've seen too many people fall into this trap
and only listen to what they are being told, then sign the agreements, then
live to regret it because they didn't do the due diligence to make sure the
solution would actually work.



Jeff Pitsch
Microsoft MVP - Terminal Server

Forums not enough?
Get support from the experts at your business
http://jeffpitschconsulting.com





On 7/31/06, *Eldon* <u2htdaab@xxxxxxxxx> wrote:

Being the OP, the lure of VMware to me is twofold: 1 - to consolidate
hardware in my current deployment of HP G1 hardware (15 servers) supporting
250 concurrent connections to a published desktop and other siloed apps; and
2 - to allow failover to our DR site by using our current EMC SAN in our
main site and a future EMC SAN (Centera) at our DR location.  The
portability of moving VMs between SANs in a DR scenario is very appealing.



I am in the process of waiting on a quote from a Solutions Architect, but
the way it was explained to me is that I would be looking at consolidation
of 3 current servers into 1 (3 VMs per server).





On 7/31/06, *Jeff Pitsch* <jepitsch@xxxxxxxxx > wrote:

Hence my statement about lightly used servers.  Most companies care about
getting more users on a system, not fewer.  Now, granted, the OP didn't say
how many users or how many servers, but in the end, if you try to take an
entire farm and port it to VMs, you will typically end up using more VMs
than physical boxes.  VMs simply cannot host the same number of users as
physical hardware can at this point in time.  If you aren't utilizing your
servers to their full potential, or even close to it, then yes, you could
move to VMs and not notice much of a difference.  But let's be realistic for
a moment: most people move to VMs to consolidate servers.  As well, many,
many companies that do this with Presentation Server aren't using their
boxes to nearly their potential anyway, so moving to VMs for that reason is
simply ridiculous.  I would be willing to bet that many PS implementations
have never taken the time to benchmark or stress test their servers to see
how many users they can get on a system.  They have no idea what their
systems can handle and therefore overbuy on the systems required.  Now,
overbuying isn't necessarily a bad thing (for redundancy), but I've been
into many, many companies that do it simply because they don't know what
their systems can handle.
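The density argument here can be put in rough numbers. This is a
hypothetical sketch only: the per-server capacity and the 20% virtualization
penalty are assumptions for illustration, not benchmarks from this thread.

```python
import math

# Hypothetical sizing: how many boxes to host a fixed user population if
# each VM carries only a fraction of a physical server's user load.
users = 250              # concurrent users (from the OP's environment)
physical_capacity = 17   # users per physical PS box (assumed, ~250/15)
overhead = 0.20          # assumed virtualization penalty on user capacity

vm_capacity = physical_capacity * (1 - overhead)

physical_needed = math.ceil(users / physical_capacity)
vms_needed = math.ceil(users / vm_capacity)

print(physical_needed, vms_needed)  # 15 19
```

Under these assumptions you would need 19 VMs to replace 15 physical boxes,
which is the point above: consolidation only pays off if the physical
servers were underutilized to begin with.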



Whew, gotta get off that soapbox.  Sorry, everyone.



Jeff Pitsch
Microsoft MVP - Terminal Server

Forums not enough?
Get support from the experts at your business
http://jeffpitschconsulting.com





On 7/31/06, *Selinger, Stephen* <SSelinger@xxxxxxxxx > wrote:

Jeff,



Respectfully, I hope that you are only talking about highly utilized
production Citrix servers and not other servers as VMs. There are many
companies, including where I work, that have production VMs of various
sorts and flavours. ESX is absolutely a production-ready product that is
capable of running production VMs. Yes, there will be servers whose
utilization is too high to run on ESX, but there are tons of overpowered,
underutilized servers out there.






------------------------------

*From:* thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx] *On
Behalf Of *Jeff Pitsch
*Sent:* July 31, 2006 11:29 AM
*To:* thin@xxxxxxxxxxxxx
*Subject:* [THIN] Re: VMWare Farm



I believe the general consensus is that for production, VMs are not the
way.  Lightly used servers are fine, but for an entire farm, the
performance is simply not there yet.



Jeff Pitsch
Microsoft MVP - Terminal Server

Forums not enough?
Get support from the experts at your business
http://jeffpitschconsulting.com





On 7/31/06, *Eldon* <u2htdaab@xxxxxxxxx> wrote:

Currently running FR3 on 2000 SP4, and am beginning to evaluate and look
at building a separate Windows 2003 CPS 4.0 Farm on the VMWare ESX
platform.  Just wanted to get an idea if anyone on the list has something
similar in production today, what hardware you deployed to support published
apps on ESX and VMotion, and how you designed your farm (including Data
Collector and Database).  Also looking for Best Practices and Things to
Avoid!



Thanks!!



*This communication is intended for the use of the recipient to which it
is addressed, and may contain confidential, personal and or privileged
information. Please contact us immediately if you are not the intended
recipient. Do not copy, distribute or take action relying on it. Any
communication received in error, or subsequent reply, should be deleted or
destroyed. *




Other related posts: