Re: Why 4 Procs

  • From: Rick Boza <rickb@xxxxxxxxxxxxxxx>
  • To: Exchange List <exchangelist@xxxxxxxxxxxxx>
  • Date: Mon, 31 Jan 2005 09:36:42 -0500

One added note (a consideration that Al alludes to but doesn't call out
enough, IMHO): the additional cost of training your admins to run clustered
environments.

For example, around upgrades and minimizing downtime: it sounds great, but
how do you actually take this on?
Do you drop a node and upgrade it offline?  What about the IIS Metabase? In
what order do upgrades need to happen? Can you do this with OS patches, or
just application patches?  What if the answer is 'sometimes'? (It is!)
Does your antivirus support clustered servers?  Does your backup solution?

Etcetera
Etcetera
Etcetera

All I'm saying is, there are many hidden costs to high-availability
operations.  The rewards are very often worth it, but I've worked with at
least one client that had clustered their systems and decided to un-cluster
because they found it to be 'less robust and led to too many problems.'  The
real problem was poorly documented (and poorly followed) policies and
practices, as well as undertrained staff.

Clustering is a fantastic and viable solution for some business problems,
but it shouldn't be undertaken lightly.


On 1/31/05 9:19 AM, "Mulnick, Al" <Al.Mulnick@xxxxxxxxxx> wrote:

> A cluster is a high-availability solution more than a load-balancing
> solution.  It tends to be more expensive (capital costs) than a stand-alone
> system, and offers higher availability mainly against hardware failure.
> 
> For your efforts, you basically get an automated "computer restarter" should
> hardware fail.   
> 
> To achieve this, Microsoft recommends having at least one passive node in
> the cluster. For example, if you deploy a four-node cluster, you should
> deploy it A/A/A/P; for an eight-node cluster, there's some thought that you
> should run 6x2 (six active, two passive). Running all the nodes active
> brings the PM's thoughts to mind: "For the love of God, don't do A/A
> clustering with Exchange on 32-bit platforms!!"   Why?  Plenty of reasons,
> such as memory limitations (32-bit platform limits, etc.).  Suffice it to
> say, they have good reason to recommend this configuration.
> 
> Additionally, downtime during upgrades can be avoided with clusters, since,
> as you mentioned, you have the option of adding an upgraded node, evicting a
> node, etc., when upgrades are needed.  A so-called "rolling upgrade" can be
> done with little resulting downtime.  Of course, if that's all I wanted, I
> could just move mailboxes over to a new server with little impact as well.
> It depends on what you need to accomplish.
> 
> From a diminishing-returns perspective, this means that instead of a single
> 4-way machine, you'd deploy three 2-way machines, one of which would be
> "dark" at all times waiting for a failure: six processors, with four of them
> active at any given moment.
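> 
> Spelled out as a back-of-the-envelope calculation (a throwaway Python
> sketch; the only inputs are the node and processor counts above):
> 
>     # Single 4-way box vs. a three-node A/A/P cluster of 2-way boxes.
>     standalone_procs = 4                  # one server, all processors active
> 
>     cluster_nodes  = 3                    # two active nodes plus one passive
>     procs_per_node = 2
>     active_nodes   = cluster_nodes - 1    # at least one node stays "dark"
> 
>     total_procs  = cluster_nodes * procs_per_node  # 6 processors purchased
>     active_procs = active_nodes * procs_per_node   # 4 processors doing work
>     idle_pct     = 100.0 * (total_procs - active_procs) / total_procs
> 
>     print("stand-alone: %d bought, %d active"
>           % (standalone_procs, standalone_procs))
>     print("cluster:     %d bought, %d active (%.0f%% of the hardware idle)"
>           % (total_procs, active_procs, idle_pct))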
> 
> Given the added expense of hardware (you'll need particular hardware and
> storage to make this work), the added expense of cluster-aware software
> (third-party cluster-aware apps), plus the associated learning curve,
> disaster-recovery differences, etc., I'd say the 4-way box looks pretty
> cheap in this scenario.
> 
> If you need geographical availability, it gets much more complex and
> expensive.
> 
> Clustering in Microsoft technology is for high availability; it can also do
> some load balancing, but you have to balance the load yourself (it's not
> automated in MSCS, unlike web load balancing with NLB).
> 
> My thoughts anyway.
> 
> Al
> 
> 
> 
> -----Original Message-----
> From: Paul_Lemonidis@xxxxxxxxxxx [mailto:Paul_Lemonidis@xxxxxxxxxxx]
> Sent: Friday, January 28, 2005 1:15 PM
> To: [ExchangeList]
> Subject: [exchangelist] Re: Why 4 Procs
> 
> Hi all
> 
> Considering the likely cost of a 4-way box (the law of diminishing returns
> sets in fast once you go beyond two processors on a single box, from what I
> can discern) and the disk space that would be required, would a cluster of
> two 2-way boxes not be more cost-effective?
> 
> A cluster has the advantage of spreading the load as well as redundancy.
> Also, many changes can be made to a cluster installation with minimal or no
> downtime to users. I am thinking primarily of hardware maintenance and O/S
> updates (such as new software installation, hotfixes, Windows updates,
> Service Packs, etc.).
> 
> I would be interested to know people's thoughts. Many thanks in advance.
> 
> Regards,
> 
> Paul Lemonidis.
> 
> ----- Original Message -----
> From: "Mulnick, Al" <Al.Mulnick@xxxxxxxxxx>
> To: "[ExchangeList]" <exchangelist@xxxxxxxxxxxxx>
> Sent: Friday, January 28, 2005 3:47 PM
> Subject: [exchangelist] Re: Why 4 Procs
> 
> 
>> Mike, you highlight a great point.  For a small implementation, a single
>> proc (if you can find one in a server chassis) is likely fine for an
>> Exchange-only deployment.  100 users would likely be fine on a laptop if
>> not for the power-save functions :)
>> 
>> Each deployment will differ greatly.  For example, some will have 100
>> users per server, and within that, the 80-10-10 rule will apply for usage,
>> as well as the 75/25 concept: 10% of the users will reply, "Email? I have
>> email?  I didn't know"; 10% will be taking 80% of the resource
>> utilization; and the other 80% of the user population will use the server
>> on a normal basis, similar to the benchmark specs.
>> 
>> After that, you'll have to consider that not everybody uses a server at
>> the same instant, so you might expect that 75% of the users would be
>> active (consuming resources) at a given time while 25% are making the
>> company money in other ways.  75% is likely high, but I like to include
>> the incoming traffic that arrives even when they do nothing.
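>> 
>> In rough numbers for a 100-user server (a quick Python sketch of the
>> 80-10-10 and 75% figures above; nothing here is benchmark data):
>> 
>>     users = 100
>> 
>>     light  = int(users * 0.10)      # "Email?  I have email?" crowd
>>     heavy  = int(users * 0.10)      # drives ~80% of resource utilization
>>     normal = users - light - heavy  # behaves like the benchmark specs
>> 
>>     active = int(users * 0.75)      # ~75% concurrent (deliberately high)
>> 
>>     print("light=%d normal=%d heavy=%d concurrently active=%d"
>>           % (light, normal, heavy, active))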
>> 
>> On a 100-user machine, a single proc would most likely be fine.  A PDA
>> might be enough if not for the storage requirements.  On a 1000-user
>> machine, your odds of seeing it more heavily utilized, with larger DBs,
>> are higher.  On a 10000-user machine, your odds are even greater.
>> 
>> There's another angle to consider.  Are all your users MAPI users?  Or are
>> some of them using internet protocols?  If mixed, your resource
>> requirements
>> change yet again.  It all needs to be considered.
>> 
>> So you highlight a great point about the sizing of Exchange servers: it
>> depends.  (sounds like something a consultant might say, doesn't it?)
>> 
>> I believe the original poster mentioned 7000 users across 2-4 machines (or
>> was it 5000 users?). That would be a density of about 3500 - 1750 per
>> machine, depending on the final design decision.  At a 3500-user density I
>> can tell you that in most cases you won't want a dual-proc machine.  It
>> might work if you have a light or highly geographically dispersed user
>> population consuming the services and no other apps that suck the life out
>> of the procs (like AV solutions tend to do). If you go 1750 per server,
>> you're much closer to the borderline.  You may want to deploy with a
>> 2-proc solution and, if that doesn't work, upgrade to 4-way machines if
>> the need shows you require it.
>> 
>> Keep in mind what happens if you take Exchange to sustained processor
>> utilization over 75%: it doesn't behave as well as you'd like, and any
>> hiccup will result in even longer recovery times.  Is that important?  I
>> think so, because what's the point of having email if you can't use it for
>> days at a time? It needs to be as reliable as the door systems, or else it
>> may as well go away.
>> 
>> DR/BC requirements play a part in the decision process, since you may at
>> some point want to use RSGs (Recovery Storage Groups) to put mail back for
>> some bozo who lost it and has to have it.
>> 
>> On a 100-user system, you can likely tell them they'll be without mail for
>> a little while while you do the restore and the processor takes their
>> resources.  Maybe during the lunch hour? On a 3500-user system, you have
>> much more utilization around the clock in most cases.
>> 
>> Al
>> 
>> 
>> 
>> -----Original Message-----
>> From: A. M. Salim [mailto:msalim@xxxxxxxxxxxx]
>> Sent: Friday, January 28, 2005 9:33 AM
>> To: [ExchangeList]
>> Subject: [exchangelist] Re: Why 4 Procs
>> 
>> Hi,
>> 
>>> Bottom line: you get better performance when scaling Exchange with
>>> four-processor machines.  Fact. You may get acceptable performance on
>>> a two-way machine.  If you're really a small shop and can find a
>>> single-processor server-class machine (I'm sure they're out there, but I
>>> don't see them as frequently), then you may do just fine with that.  In
>>> fact, I run Exchange on a single processor because it's a test lab in a
>>> VS environment.  VS 2005 only supports 1 processor per VM.  Not a choice
>>> at this point no matter how much hardware is presented.
>> 
>> In an earlier email I asked why even 2-proc, let alone 4-proc, and
>> suggested that perhaps there may be a tendency to over-spec as a CYA
>> measure.  Let me give you some specifics.  Of the Exchange servers we
>> manage, two are single-CPU servers running a P4/2.4 GHz with 512 MB of
>> RAM.  Each of these two servers has about 100 users on it, with moderate
>> traffic and mailbox sizes (limited to 100 MB or less in most cases).
>> 
>> The servers perform just fine.  I routinely monitor the following
>> performance specs:  CPU load, memory percent use, response speed,
>> complaints
>> of slowness.
>> 
>> Results:
>> 
>> CPU load: hardly a blip (generally under 5% or 10% load at any time, even
>> at the peak time of day).
>> 
>> Memory: well below 512 MB usage.  Generally around 200 MB or less.
>> 
>> Bandwidth/network traffic: low usage.  Well below 5% utilization.
>> 
>> Response speed: zero speed complaints in the last 12 months (compared to
>> other mail servers we have, particularly a Windows-based iMail server).
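>> 
>> For anyone who wants to spot-check the same kind of counters, something
>> along these lines works (a Python sketch that just shells out to typeperf,
>> which ships with Windows Server 2003; the counter paths are the standard
>> Perfmon names, and the sample count is arbitrary):
>> 
>>     import subprocess
>> 
>>     counters = [
>>         r"\Processor(_Total)\% Processor Time",
>>         r"\Memory\Available MBytes",
>>         r"\Network Interface(*)\Bytes Total/sec",
>>     ]
>> 
>>     # -sc 5: take five one-second samples and print them as CSV
>>     subprocess.call(["typeperf"] + counters + ["-sc", "5"])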
>> 
>> Hence my comment about over-spec'd servers.  From the emails on this
>> topic,
>> the consensus seems to be that a minimum 2-proc server is necessary for an
>> Exchange installation, and I just don't see that based on the data I have.
>> 
>> Best regards
>> Mike
>> 
>> 
> 


