[THIN] Re: OT: Blade Servers

  • From: Jeremy Saunders <jeremy.saunders@xxxxxxxxxxx>
  • To: thin@xxxxxxxxxxxxx
  • Date: Thu, 21 Sep 2006 22:41:02 +0800

Sure...I can't comment on the specifics of the Dell blades, as I am not
sure of all the options, but single-slot blades only have two NICs on
board. Whether your blade centre uses passthrough or a switch is
irrelevant. You can only map two physical NICs to each blade, so you hit
that 2Gb limitation. Typically with ESX, the Service Console needed a
dedicated NIC, but this is no longer the case with ESX 3. However, because
you still only have two physical NICs, you need to do your sums to ensure
that the servers hosted on the blade do not run into bandwidth issues. In
other words, you don't want to oversubscribe the network bandwidth
requirements. The VMware architects say don't push it past 60%, or
thereabouts. So when you virtualise your servers, you just need to
understand how to group them. For example, you may only get 5 guests per
blade instead of 8 or 10 per rack server. The blade has the processing
power to handle all 10, but you can't address the network bandwidth issue
with the blade, whereas you can add more physical NICs to a rack server to
overcome this.
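
To put some rough numbers on that, here is the sort of back-of-the-envelope
check I mean. It's only a sketch; the 240 Mb/s per guest is an assumed
figure for illustration, not a measurement from any real environment:

# Rough guests-per-blade check: two on-board GigE NICs, ~60% ceiling.
# The per-guest bandwidth figure is an assumption for illustration only.

NIC_COUNT = 2                # single-slot blade: two on-board NICs
NIC_SPEED_MBPS = 1000        # 1 Gb/s each
CEILING = 0.60               # "don't push it past 60%" guideline

usable_mbps = NIC_COUNT * NIC_SPEED_MBPS * CEILING   # 1200 Mb/s

per_guest_mbps = 240         # assumed average sustained demand per guest
guests_by_network = int(usable_mbps // per_guest_mbps)
guests_by_cpu = 10           # the blade's CPUs could host this many

print(f"Usable bandwidth: {usable_mbps:.0f} Mb/s")
print(f"Guests the NICs can support: {guests_by_network}")
print(f"Guests per blade: {min(guests_by_network, guests_by_cpu)}")

That's where the 5 guests per blade instead of 10 comes from: the network,
not the CPU, sets the limit.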

Now, I have no idea about your environment and the new line-of-business
application servers you wish to virtualise, but you just need to understand
that trade-off. Getting someone in to have a chat with your team is the best
advice I can give you at this stage.

Cheers.

 Kind regards,

 Jeremy Saunders
 Senior Technical Specialist

 Infrastructure Technology Services
 (ITS) & Cerulean
 Global Technology Services (GTS)
 IBM Australia
 Level 2, 1060 Hay Street
 West Perth  WA  6005

 Visit us at
 http://www.ibm.com/services/au/its

 P:  +61 8 9261 8412                F:  +61 8 9261 8486
 M:  TBA                            E-mail:
                                    jeremy.saunders@xxxxxxxxxxx










                                                                       
From: Evan Mann <emann@pinnaclefinancial.com>
Sent by: thin-bounce@freelists.org
To: thin@xxxxxxxxxxxxx
Date: 21/09/2006 10:09 PM
Subject: [THIN] RE: [THIN] Re: OT: Blade Servers
Please respond to: thin@xxxxxxxxxxxxx
                                                                       
                                                                       




I'm a blade n00b, so I was hoping you could expand a little on the
limitations you have seen using ESX and not being able to split up NICs.
The Dell chassis has a few different I/O options: PowerConnect switches,
Ethernet passthrough, Fibre Channel passthrough, etc., so my thought was
that you can still have individual NICs for each blade?

Dell also has a daughter card option for the blades themselves for a dual
GigE NIC or a dual GigE TOE NIC, so I see this as a way to get additional
networking into your blade. But there is only 1 daughter card option, so
you get dual GigE or dual Fibre HBAs, not both, and both is probably what
you'd want for an ESX box.

-----Original Message-----
From: thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx] On
Behalf Of Jeremy Saunders
Sent: Thursday, September 21, 2006 8:41 AM
To: thin@xxxxxxxxxxxxx
Subject: [THIN] Re: OT: Blade Servers

Yeh...you had to go there....didn't you Joe :) I assume you are talking
about the lack of write-back cache on the LSI 1030 controller. This can
be an issue for some environments. However, the new HS21s have write-back
cache. My understanding, from asking lots of questions internally, is that
the original design of the HS20s was more focused around booting from SAN.
I've used plenty of HS20s as Citrix servers, and have personally found
that the lack of write-back cache has not caused performance issues.

I know you and your customer had a bad IBM experience from some disgraceful
customer service in the past, but we are not all bad. Out of 320,000
employees, you found the one or two bad apples that did not step up and
help you. I am very sorry about what happened. But you need to get over it,
move on, and stop bagging IBM on this forum at every opportunity you get.

Anyway...

In the real world, there is no single server that has every feature you
could ask for without any quirks. You will always find some issue somewhere
along the line that relates to a specific application server, or a
limitation introduced within your own environment. At the end of the day
they will all do the same thing, and everyone has their own opinions and
experiences. So I think rather than trying to play the 3 vendors against
each other, it's more constructive to understand why you want to go down
the blade path and not stick to traditional servers, especially when Evan
was talking about using some of them as ESX hosts. Even though we've had
this conversation a few times on this forum, I'm not convinced that blades
suit all environments.

Once you go down the blade path you lock yourself in. In other words, you
need to fill the blade centre with blades in order to achieve your ROI. I
guess many of you in the larger countries and environments don't have this
problem (just like Rusty ordering 30 at a time), but I see this as being an
issue in Australia, where the environments are not always as big. And then
there are the limitations of blades, especially when adding NICs, HBAs,
etc.
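
To see why, run the chassis cost through a quick sketch. The prices are
roughly the figures Evan quoted for the Dell kit (about $6k for a 10-blade
chassis and about $6,600 per blade); treat them as illustrative only:

# Chassis amortisation sketch. Prices are the rough figures quoted in this
# thread for the Dell kit; the fill levels are just scenarios.

chassis_cost = 6000      # ~10-blade chassis
blade_cost = 6600        # per blade, including warranty/support

for blades in (3, 5, 10):
    total = chassis_cost + blades * blade_cost
    print(f"{blades:>2} blades: ${total:,} total, ${total / blades:,.0f} each")

The chassis overhead per blade only shrinks once the thing is close to
full, which is exactly the lock-in I'm talking about.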

Blades are great for Citrix servers, web servers, domain controllers, etc.,
but once you start using them as ESX hosts, you start hitting limitations.

You can't divvy up the NICs. Sure, in ESX version 3 the Service Console can
now share a NIC (and there was an unsupported hack for 2.5.x), and you can
also trunk (VLAN) them, but there is still the issue of available bandwidth
to the blade. So when using blades as ESX hosts, your bottleneck will
almost always be the NICs. And then of course, if your servers are old and
don't have a PCI Express bus, the NICs will share a PCI-X bus and compete
against each other for bandwidth.
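
To make the "do your sums" point concrete, here is a minimal sketch of the
kind of check I mean. All of the per-port-group figures (including the
VMotion allowance) are assumptions for illustration only:

# Can these port groups share the blade's two trunked NICs?
# Every demand figure below is an assumed number, not a measurement.

uplinks_mbps = 2 * 1000        # two on-board GigE NICs trunked together
ceiling = 0.60                 # keep peak utilisation around 60%

port_groups = {
    "Service Console": 50,     # now allowed to share a NIC in ESX 3
    "VMotion": 400,            # assumption: migrations during busy hours
    "VM traffic": 800,         # assumption: sum of the guests' needs
}

demand = sum(port_groups.values())
budget = uplinks_mbps * ceiling

print(f"Peak demand {demand} Mb/s vs budget {budget:.0f} Mb/s")
if demand > budget:
    print("Oversubscribed - trim the guest list or use a rack server")
else:
    print("Fits within the two on-board NICs")

In that example the blade is already over the line, which is exactly the
sort of thing you won't see until you add it all up.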

With a standard rack-mount server, you can add NICs, HBAs, etc.

So the answer would be to use the bigger (double-size) blades, which take
up two slots. Well, these are expensive, so what's the point of filling your
blade centre up with those? You will blow out your ROI.

Some time ago I advised a couple of customers to go down the blade centre
path because it seemed like the right way to go. Sure, they filled it, and
have achieved ROI, but now that we are introducing ESX into their
environment, and after doing the sums, we are hitting some
architectural/design limitations when considering using some of the blades
as ESX hosts. And now I'm annoyed with myself that I didn't understand how
these limitations would affect their environment once we started looking
into using ESX for DR and Business Continuity.

Don't get me wrong, I love working with blades, and this issue will not
concern everybody, but you need to make sure they will provide you with
what you need.

If you are purchasing Dell, then get the Dell Server SE in for a whiteboard
session, and likewise for HP and IBM.

Sorry for getting carried away and going off topic a bit, but I don't
think people often look at the bigger picture when purchasing blade
hardware.

Cheers.

 Kind regards,

 Jeremy Saunders
 Senior Technical Specialist

 Infrastructure Technology Services
 (ITS) & Cerulean
 Global Technology Services (GTS)
 IBM Australia
 Level 2, 1060 Hay Street
 West Perth  WA  6005

 Visit us at
 http://www.ibm.com/services/au/its

 P:  +61 8 9261 8412                F:  +61 8 9261 8486
 M:  TBA                            E-mail:
                                    jeremy.saunders@xxxxxxxxxxx











             "Joe Shonk"
             <joe.shonk@gmail.
             com>                                                       To
             Sent by:                  thin@xxxxxxxxxxxxx
             thin-bounce@freel                                          cc
             ists.org
                                                                   Subject
                                       [THIN] Re: OT: Blade Servers
             21/09/2006 08:23
             AM


             Please respond to
             thin@xxxxxxxxxxxx
                     g






Actually, all three vendors have tight integration with Altiris (for HP
it's called RDP, but it's just Altiris with the HP logo). I'm not quite
sure why you think IBM is better than Dell? The Dells at least come with a
real RAID controller... HP's SAS controller is crap and IBM, well, let's
just say the HS20s (8843) were nothing short of a cluster ......  On the
bright side, HP blades do support Opteron processors, unlike Dell... If
you're planning to run anything like VMware ESX on blades, the Opterons
are the way to go.

Joe

On 9/20/06, Rusty Yates <rusty27@xxxxxxxxx> wrote:
  We are using Dell OpenManage right now and the on-board management for
  the blade chassis (which is OK), but it's nothing like IBM's.  When we
  were looking at IBM, HP and Dell, we knew IBM and HP were better, but for
  what we needed Dell was the better fit and we got more servers for the
  money.  This year we are looking at Altiris, which has a piece that is
  designed just for Dell.  Looks sweet, and pricing isn't that bad.


  On 9/19/06, Evan Mann < emann@xxxxxxxxxxxxxxxxxxxxx> wrote:
    Are you using OpenManage or a 3rd-party tool for management? I've never
    been impressed with OpenManage for non-blade servers, but I'd imagine
    the blade variant is much different.

   From: thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx] On
   Behalf Of Rusty Yates
   Sent: Tuesday, September 19, 2006 12:56 PM
   To: thin@xxxxxxxxxxxxx
   Subject: [THIN] Re: OT: Blade Servers

   In our environment we are running the Dell 1855 Blades and haven't run
   into any problems.  Next year we will buy the 1955 models.

   On 9/19/06, Evan Mann <emann@xxxxxxxxxxxxxxxxxxxxx> wrote:
      It looks like I'm going to be moving into the land of blade servers.
      We're a Dell shop, so Dell 1955s are what is being looked at right
      now. I want to put together a list of key items to make sure
      these things have/support.  Memory-backed RAID cache and power issues
      are the only things I have on the list now, since those were the main
      issues I've seen come across the list.  Obviously, management of the
      blade chassis is important.

      If it's useful, here is what we are planning to do with the blades:
      There is no intention of moving our Citrix farm to blades, but we are
      deploying a new business-level app using VMware ESX 3.  This new app
      will utilize web servers and SQL servers.  The web farms will be in
      ESX and will utilize an application load balancer; the SQL servers
      (starting with 1) will likely not be in ESX, but that is undecided.

      We will have a fiber-connected SAN as well, but the plan isn't to boot
      off the SAN (right now at least).  It is unknown if we will connect
      the entire blade chassis to the SAN or connect servers individually;
      it depends on the cost of the fiber switches.

      We are doing a lot of server consolidation as well, onto 2 existing
      2850s (dual 3.4 GHz Xeons) running ESX 3.  As we need more capacity,
      we will use additional blades for ESX 3 and consolidation.

      The 10-blade chassis specs out to about $6k.  Each 3 GHz dual-core
      (Woodcrest) blade with 16 GB of RAM, 2x75 GB SAS drives and dual
      Broadcom TOE GigE/dual QLogic Fibre HBAs will run us about $6,600
      (including warranty/support).




************************************************
For Archives, RSS, to Unsubscribe, Subscribe or set Digest or Vacation mode
use the below link:
//www.freelists.org/list/thin
************************************************
