RE: Dual or Quad for 4000 users

  • From: "Teo De Las Heras" <teoheras@xxxxxxxxx>
  • To: "[ExchangeList]" <exchangelist@xxxxxxxxxxxxx>
  • Date: Tue, 7 Mar 2006 12:22:05 -0500

I've definitely been re-reviewing all my notes on Exchange design and
sizing.  Good tip!

Teo


On 3/7/06, Douglas M. Long <admindoug@xxxxxxxxx> wrote:
>
> http://www.MSExchange.org/
>
> This is exactly what I was thinking. Start by looking at IOPS. Determine the
> IOPS characteristics of your users and then build a disk subsystem to
> accommodate them. You generally have three levels of users, with a percentage
> of the population at each level (a rough worked example follows at the end of
> this post). Also make sure you have enough GCs in your environment. It sounds
> like there should be a lot more homework done before any decisions are made.
> Make sure you watch the Exchange webcasts that are available. Don't assume
> they are for dummies; a couple of them have some really top-notch information
> in them.
>
>
>  ------------------------------
>
> And what is the calculated IOPS per user?
>
>
>
> Based on what I know about other large deployments, I would guess that you
> are probably OK, assuming your disk subsystem can meet the IOPS load.
>
>
>
> In the general case, as I'm sure you know, Exchange is I/O bound, not
> processor bound.
>
>
>
> Are you going to have a dedicated bridgehead server plus a couple of FE
> servers? It seems like that would be a good idea too in a deployment of this
> size.
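To make the IOPS homework above concrete, here is a minimal sizing sketch in
Python. The three user profiles, their percentages, the per-user IOPS figures,
the per-disk IOPS, the read/write mix, and the RAID-10 write penalty are all
illustrative assumptions, not measured values; substitute numbers gathered from
your own performance counters.

    # Rough Exchange disk-sizing sketch. All numbers below are illustrative
    # assumptions -- replace them with measured values for your own users.

    MAILBOXES = 4000

    # Three levels of users, each with an assumed share of the population
    # and an assumed average IOPS per mailbox.
    user_profiles = {
        "light":  {"share": 0.50, "iops_per_user": 0.3},
        "medium": {"share": 0.35, "iops_per_user": 0.6},
        "heavy":  {"share": 0.15, "iops_per_user": 1.0},
    }

    # Aggregate database IOPS the disk subsystem must sustain.
    total_iops = sum(
        MAILBOXES * p["share"] * p["iops_per_user"]
        for p in user_profiles.values()
    )

    # Assumed drive capability and workload mix (again, illustrative).
    IOPS_PER_DISK = 150       # e.g. a 10k RPM spindle
    READ_RATIO = 0.66         # roughly two reads for every write
    RAID10_WRITE_PENALTY = 2  # each logical write costs two disk writes

    # Back-end disk IOPS after accounting for the RAID write penalty.
    backend_iops = (total_iops * READ_RATIO
                    + total_iops * (1 - READ_RATIO) * RAID10_WRITE_PENALTY)

    disks_needed = -(-backend_iops // IOPS_PER_DISK)  # ceiling division

    print(f"Host IOPS required:     {total_iops:.0f}")
    print(f"Back-end IOPS (RAID10): {backend_iops:.0f}")
    print(f"Spindles needed (min):  {disks_needed:.0f}")

With these assumed figures the result is roughly 2,040 host IOPS and about 19
spindles; the point is the shape of the calculation, not the specific numbers.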
