[THIN] Re: OT: Blade Servers

  • From: "Jim Kenzig http://ThinHelp.com" <jkenzig@xxxxxxxxx>
  • To: thin@xxxxxxxxxxxxx
  • Date: Tue, 19 Sep 2006 10:16:01 -0700 (PDT)

Actually there are a whole lot more considerations. We use HP blades.
   
  1. Power. You have to have enough of it to power the blade center, and you need the 
proper connectors.
   
  2. Computer room floor. There must be enough ventilation in front of and behind the 
chassis, and the floor must be able to support an enormous amount of weight.
   
  3. Computer room temperature.  You could heat a five-story apartment complex with 
the heat these things throw off. You have to make sure that your computer 
room can handle it. 
   
  4. Network considerations...you are adding a massive number of servers to it 
all at once...be sure your network is up to the task.  Our blade system uses 
multiple (like 50 total) internal management and external IPs.  Think it 
through carefully. 
   
  5.  I would wait until Longhorn comes out in Q1 07 to implement. 
   
  6. Buy 64-bit.
   
  Just my .02.
   
  Jim Kenzig
  

Evan Mann <emann@xxxxxxxxxxxxxxxxxxxxx> wrote:
      It looks like I'm going to be moving into the land of blade servers.  
We're a Dell shop, so Dell 1955s are what is being looked at right now. I want 
to put together a list of key items to make sure these things 
have/support.  Memory-backed RAID cache and power issues are the only things I 
have on the list now, since those were the main issues I've seen come across the 
list.  Obviously, management of the blade chassis is important.
   
  If it's useful, here is what we are planning to do with the Blades:
  There is no intention of moving our Citrix farm to blades, but we are 
deploying a new business-level app using VMware ESX 3.  This new app will 
utilize web servers and SQL servers.  The web farms will be in ESX and will 
utilize an application load balancer; the SQL servers (starting with one) will 
likely not be in ESX, but that is undecided.
   
  We will have a fiber-connected SAN as well, but the plan isn't to boot off 
the SAN (right now at least).  It is unknown whether we will connect the entire 
blade chassis to the SAN or the servers individually; it depends on the cost of 
the fiber switches.
  
We are doing a lot of server consolidation as well onto 2 existing 2850s (dual 
3.4GHz Xeons) running ESX 3.  As we need more capacity, we will use additional 
blades for ESX 3 and consolidation.
   
  The 10-blade chassis specs out to about $6k.  Each 3GHz dual-core (Woodcrest) 
blade with 16GB of RAM, 2x75GB SAS drives, and dual Broadcom TOE GigE/dual 
QLogic fiber HBAs will run us about $6,600 (including warranty/support).
