[THIN] Re: OT: Blade Servers

  • From: "Joe Shonk" <joe.shonk@xxxxxxxxx>
  • To: thin@xxxxxxxxxxxxx
  • Date: Fri, 22 Sep 2006 07:52:53 -0700

Perhaps, but I was a bit harsh/crude in how I presented the information
and got a few people worked up over it.  For that, I apologize.

The analogy with cars is quite true...  People have different needs, wants,
and expectations of a car.  It's the same with technology: each customer
has different business and technical requirements.  The options they choose
to implement vary, so two customers can have drastically different results.

Joe

On 9/22/06, Steve Greenberg <steveg@xxxxxxxxxxxxxx> wrote:

Since I was integrally involved with the project Joe has referred to, it is only fair for me to add that he is not exaggerating or bashing; it was an absolute nightmare that took months to resolve. It was a very high-profile customer with C-level ties to IBM. IBM responded quickly to each issue, so it was not a lack of customer service. They simply were not able to resolve the issues, even months into it. This is not bashing IBM; these simply are the facts.



This does not mean that everyone will have this experience, but it did
happen exactly as Joe related. I know for a fact that this was the first
shipment of these units to a US customer, so I am sure these were early
adopter/beta-type product issues.



As another person said, one person swears by Chevy because of a good
experience, another will never buy their product again. Understandable, but
don't blame the messenger!!!



Steve Greenberg

Thin Client Computing

34522 N. Scottsdale Rd D8453

Scottsdale, AZ 85262

(602) 432-8649

www.thinclient.net

steveg@xxxxxxxxxxxxxx


------------------------------

*From:* thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx] *On
Behalf Of *Jeremy Saunders
*Sent:* Thursday, September 21, 2006 10:29 PM
*To:* thin@xxxxxxxxxxxxx
*Subject:* [THIN] Re: OT: Blade Servers





The power domain issue was only with the older chassis, and with a fully
populated blade centre. When this problem occurred, it just came up with a
warning, but didn't stop the blades from working/booting. Perhaps the
chassis that we got here in ANZ were slightly different to the ones shipped
in the US. I am aware of the particular experience Joe is referring to, and
I believe it was a one-off that was fixed by a chassis replacement. So it
was a faulty chassis... and he's generalising based on that.

As for the drive failures, the maintenance guys here do very few
replacements. Maybe we've just been lucky.

Yeh...firmware upgrades fix a lot of stuff. But that's with anything, not
just the blades and their options.

Many of the problems that Joe has listed have actually been addressed. He's
just listed them for the sake of making a list. You could do the same with
HP and Dell stuff too.

As an example from Joe's list...
"System would allocate 25% of the memory for itself so you're left with
3.1
gigs out 4 gigs installed. (Yes there is a fix for this too)"

Joe...even though this was true, it's a slight exaggeration. If you have
exactly 4GB of RAM installed, the OS will only see about 3.5GB. Roughly
512MB is reserved for the PCI bus. This is not really a design issue with
the blades; it's a "feature" of the Intel E7520 chipset and is also
documented by Microsoft here:
http://support.microsoft.com/default.aspx?scid=kb;en-us;555458. You will
also find this problem documented by HP and Dell.
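
As an aside, a quick way to see that gap for yourself is something like the
sketch below (my own illustration, not anything from IBM or Microsoft). It
assumes a Windows host where the legacy wmic utility is available, and simply
compares the DIMM capacity reported by the BIOS with what the OS says it can
use.

# Minimal sketch: compare installed DIMM capacity with OS-visible memory.
# Assumes Windows with the legacy `wmic` command-line tool present.
import subprocess

def wmic_values(args):
    """Run a wmic query and return its non-empty value lines (header stripped)."""
    out = subprocess.run(["wmic"] + args, capture_output=True,
                         text=True, check=True).stdout
    lines = [line.strip() for line in out.splitlines() if line.strip()]
    return lines[1:]  # the first non-empty line is the column header

# Sum the capacity (bytes) of every physical DIMM reported via SMBIOS.
installed = sum(int(v) for v in wmic_values(["memorychip", "get", "capacity"]))

# Physical memory the OS reports as usable.
visible = int(wmic_values(["computersystem", "get", "totalphysicalmemory"])[0])

print(f"Installed DIMMs : {installed / 2**30:.2f} GiB")
print(f"Visible to OS   : {visible / 2**30:.2f} GiB")
print(f"Reserved (PCI hole etc.): {(installed - visible) / 2**30:.2f} GiB")

On a 4GB E7520 box you would expect the last line to show roughly the 0.5GB
mentioned above.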

I like the DL360s, Tony. I like the blade centre cos it looks cool, and
it's nice to be able to brag to your industry friends that you've deployed
them, especially those that work for the opposition. But I think a DL360,
etc., is often a more practical beast to have. You can't redeploy a blade
somewhere else to make your investment last longer. It just becomes a door
stop :)

But every case is different.

Cheers.

Kind regards,

Jeremy Saunders
Senior Technical Specialist

Infrastructure Technology Services
(ITS) & Cerulean
Global Technology Services (GTS)
IBM Australia
Level 2, 1060 Hay Street
West Perth  WA  6005

Visit us at
http://www.ibm.com/services/au/its

P:  +61 8 9261 8412 F:  +61 8 9261 8486
M:  TBA E-mail:
jeremy.saunders@xxxxxxxxxxx











"Tony Lyne"

co.nz> To
Sent by: thin@xxxxxxxxxxxxx
thin-bounce@freel cc
ists.org
Subject
[THIN] Re: OT: Blade Servers
22/09/2006 10:40
AM


Please respond to thin@xxxxxxxxxxxx g

Gee Joe, anyone would think you had a problem with their blades....



I've lost count of how many HS20 implementations I've done and have never
had a failure (apart from one where building extensions were going on in
the server room and dust caused a contact failure; reseating the HS20
fixed the issue).



I've had IBM's competition here bag the power issues with the HS20s and the
enclosures, but I've never encountered them, and that includes fully
populated HS20 enclosures maxed out.



Give me an IBM BladeCenter over those sideways DL360s any day...



T.


------------------------------

*From:* thin-bounce@xxxxxxxxxxxxx on behalf of Joe Shonk
*Sent:* Fri 22/09/2006 5:27 a.m.
*To:* thin@xxxxxxxxxxxxx
*Subject:* [THIN] Re: OT: Blade Servers

Sorry, I suppose you're right that I'm bashing IBM too much...  They are
not
all that bad.  As far as the people go, they are great. ;o)

To me, the HS20 was just a poorly designed, poorly implemented product.
Funny, since IBM doesn't have that type of reputation.  From what I was
told, the boot-from-SAN thing was the direction IBM ended up recommending
because of the high disk failure rates, as there were some cooling issues
with the HDs.

Problems with the HS20:
        Power draw exceeds 2000W per domain, so the domain cannot be
redundant (IBM engineers blame this on Intel for not providing the correct
wattage of their EM64T processors, but the product still went out and the
problem wasn't corrected; see the rough sums after this list).
        I've already mentioned the performance of the LSI 1030 controller.
        USB 1.x instead of USB 2.0?
        The passive backplane feeds the switch a signal even if it's disabled
on the blade side (this causes issues with GEC and port aggregation; we
physically have to disable the port on the switch to reimage a server).
        The embedded Cisco switches couldn't be configured to allow
redundancy in the chassis.  They basically acted like two independent
switches.
        Excessive chassis vibration (now fixed).
        The original version of the 8843 blades can't go in slots 9 or
higher on the replacement chassis.
        High incidence of disk failures.
        High incidence of memory failures (this issue is new).
        System would allocate 25% of the memory for itself so you're left
with 3.1 gigs out of 4 gigs installed. (Yes there is a fix for this too)
        If too many servers in the same power domain are restarted at the
same time, then several will shut down and cannot be powered on until they
are physically pulled from the chassis and reseated.
        Just got done upgrading the firmware and drivers on one chassis (4
more to go).  This was not fun.  Dell and HP both have nice tools.  I tried
using the UpdateXpress CD 3, but the latest version, 4.04, is out of date.
        Web management of the blades could be a lot better.  Only one admin
at a time?
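
To put rough numbers on the power point above (the per-blade draw and domain
size here are my own assumptions for illustration, not IBM's published
figures):

# Rough illustration of why a fully loaded power domain can blow the 2000W
# budget and lose redundancy.  All numbers below are assumptions.
domain_limit_w = 2000       # redundant budget per power domain (from the list above)
blades_per_domain = 7       # assumed: roughly half of a 14-slot chassis
watts_per_blade = 330       # assumed draw for a dual-EM64T HS20 under load

draw = blades_per_domain * watts_per_blade
print(f"Estimated domain draw: {draw} W against a {domain_limit_w} W budget")
print("Redundancy holds" if draw <= domain_limit_w
      else "Over budget: the domain cannot fail over to a single power module")

Swap in measured wattages if you have them; the point is just how quickly a
full domain overruns the budget.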

Given that, I do have high hopes for the HS21.  I haven't seen one yet,
but
will shortly.

Funny you mention the shortcomings of blades in general.  InfiniBand has
been around for a long time.  It solves most of the problems (bandwidth and
otherwise) you've mentioned, and it's cheap, but it's the most widely
unknown/under-utilized technology out there.  It (and its cousins) seems to
only find a place in grid computing.

Joe

-----Original Message-----
From: thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx]
On Behalf Of Jeremy Saunders
Sent: Thursday, September 21, 2006 5:41 AM
To: thin@xxxxxxxxxxxxx
Subject: [THIN] Re: OT: Blade Servers

Yeh...you had to go there....didn't you Joe :) I assume you are talking
about the lack of write-back cache on the LSI 1030 controller. This can
be an issue for some environments. However, the new HS21s have write-back
cache. My understanding, from asking lots of questions internally, is that
the original design of the HS20s was more focused around booting from SAN.
I've used plenty of HS20s as Citrix servers, and have personally found
that the lack of write-back cache has not caused performance issues.

I know you and your customer had a bad IBM experience from some disgraceful
customer service in the past, but we are not all bad. Out of 320,000
employees, you found the one or two bad apples that did not step up and
help you. I am very sorry about what happened. But you need to get over it,
move on, and stop bagging IBM on this forum at every opportunity you get.

Anyway...

In the real world, there is no single server that has every feature you
could ask for without any quirks. You will always find some issue somewhere
along the line that may relate to a specific application server, or a
limitation introduced within your own environment. At the end of the day
they will all do the same thing, and everyone has their own opinions and
experiences. So I think that rather than trying to play the three vendors
against each other, it's more constructive to understand why you want to go
down the blade path and not stick to traditional servers, especially when
Evan was talking about using some of them as ESX hosts. Even though we've
had this conversation a few times on this forum, I'm not convinced that
blades suit all environments.

Once you go down the blade path you lock yourself in. In other words, you
need to fill the blade centre with blades in order to achieve your ROI. I
guess many of you in the larger countries and environments don't have this
problem (just like Rusty ordering 30 at a time), but I see this as being an
issue in Australia, where the environments are not always as big. And then
there are the limitations of blades, especially when adding NICs, HBAs,
etc.

Blades are great for Citrix servers, web servers, Domain Controllers, etc.,
but once you start using them as ESX hosts, you start hitting
limitations.

You can't divvy up the NICs. Sure, in ESX version 3 the Service Console can
now share a NIC (and there was an unsupported hack for 2.5x), and you can
also trunk (VLAN) them, but there is still the issue of available bandwidth
to the blade. So when using blades as ESX hosts, your bottleneck will
almost always be the NICs. And then of course, if your servers are old and
don't have a PCI Express bus, the NICs will share a PCI-X bus and compete
against each other for bandwidth.
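
A quick back-of-the-envelope sketch of that bottleneck (the NIC count,
overhead share and consolidation ratio below are my own illustrative
assumptions, not figures from anyone on the list):

# Why the onboard NICs become the ceiling on a blade used as an ESX host.
# All numbers are assumptions for illustration only.
nic_count = 2           # typical blade: two onboard gigabit NICs, nowhere to add more
nic_gbps = 1.0          # gigabit Ethernet per NIC
overhead_gbps = 0.5     # assumed share lost to Service Console, VMotion, management
vms_per_host = 12       # assumed consolidation ratio

usable_gbps = nic_count * nic_gbps - overhead_gbps
per_vm_mbps = usable_gbps * 1000 / vms_per_host
print(f"Usable uplink for guests: {usable_gbps:.1f} Gbit/s")
print(f"Average per VM: {per_vm_mbps:.0f} Mbit/s, before any contention")
# A rack-mount host can simply take another quad-port NIC; the blade cannot.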

With a standard rack-mount server, you can add NICs, HBAs, etc.

So the answer would be to use the bigger (double-size) blades, which take
up two slots. Well, these are expensive, so what's the point of filling your
blade centre up with them? You will blow out your ROI.

Some time ago I advised a couple of customers to go down the blade centre
path because it seemed like the right way to go. Sure, they filled it, and
have achieved ROI, but now that we are introducing ESX into their
environment, and after doing the sums, we are hitting some
architectural/design limitations when considering using some of the blades
as ESX hosts. And now I'm annoyed with myself that I didn't understand how
these limitations would affect their environment once we started looking
into using ESX for DR and Business Continuity.

Don't get me wrong, I love working with blades, and this issue will not
concern everybody, but you need to make sure they will provide you with
what you need.

If you are purchasing Dell, then get the Dell Server SE in for a whiteboard
session, and likewise for HP and IBM.

Sorry for getting carried away and going off topic a bit, but I don't
think people often look at the bigger picture when purchasing blade
hardware.

Cheers.

 Kind regards,

 Jeremy Saunders
 Senior Technical Specialist

 Infrastructure Technology Services
 (ITS) & Cerulean
 Global Technology Services (GTS)
 IBM Australia
 Level 2, 1060 Hay Street
 West Perth  WA  6005

 Visit us at
 http://www.ibm.com/services/au/its

 P:  +61 8 9261 8412                F:  +61 8 9261 8486
 M:  TBA                            E-mail:
                                    jeremy.saunders@xxxxxxxxxxx











             "Joe Shonk"
             <joe.shonk@gmail.
             com>                                                       To
             Sent by:                  thin@xxxxxxxxxxxxx
             thin-bounce@freel                                          cc
             ists.org
                                                                   Subject
                                       [THIN] Re: OT: Blade Servers
             21/09/2006 08:23
             AM


Please respond to thin@xxxxxxxxxxxx g






Actually, all three vendors have tight integration with Altiris (for HP it's called RDP, but it's just Altiris with the HP logo). I'm not quite sure why you think IBM is better than Dell? The Dells at least come with a real RAID controller... HP's SAS controller is crap and IBM, well, let's just say the HS20s (8843) were nothing short of a cluster ...... On the bright side, HP blades do support Opteron processors, unlike Dell... If you're planning to run anything like VMware ESX on blades, the Opterons are the way to go.

Joe

On 9/20/06, Rusty Yates <rusty27@xxxxxxxxx> wrote:
  We are using Dell OpenManage right now and the on-board management for
  the blade chassis (which is OK), but nothing like IBM's.  When we were
  looking at IBM, HP and Dell, we knew IBM and HP were better, but for what
  we needed Dell was better and we got more servers for the money.  This
  year we are looking at Altiris, which has a piece that is designed just
  for Dell.  Looks sweet and pricing isn't that bad.


On 9/19/06, Evan Mann <emann@xxxxxxxxxxxxxxxxxxxxx> wrote: Are you using OpenManage or a 3rd-party tool for management? I've never been impressed with OpenManage for non-blade servers, but I'd imagine the blade variant is much different.

   From: thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx]
   On Behalf Of Rusty Yates
   Sent: Tuesday, September 19, 2006 12:56 PM
   To: thin@xxxxxxxxxxxxx
   Subject: [THIN] Re: OT: Blade Servers

   In our environment we are running the Dell 1855 Blades and haven't run
   into any problems.  Next year we will buy the 1955 models.

   On 9/19/06, Evan Mann <emann@xxxxxxxxxxxxxxxxxxxxx> wrote:
      It looks like I'm going to be moving into the land of blade servers.
      We're a Dell shop, so Dell 1955's are what is being looked at right
      now. I want to put together a list of key items to make sure these
      things have/support.  Memory-backed RAID cache and power issues are
      the only things I have on the list now, since those were the main
      issues I've seen come across the list.  Obviously, management of the
      blade chassis is important.

      If it's useful, here is what we are planning to do with the blades:
      There is no intention of moving our Citrix farm to blades, but we are
      deploying a new business-level app using VMware ESX 3.  This new app
      will utilize web servers and SQL servers.  The web farms will be in
      ESX and will utilize an application load balancer; the SQL servers
      (starting with 1) will likely not be in ESX, but that is undecided.

      We will have a fiber-connected SAN as well, but the plan isn't to boot
      off the SAN (right now at least).  It is unknown if we will connect
      the entire blade chassis to the SAN, or the servers individually.  It
      depends on the cost of the fiber switches.

      We are also doing a lot of server consolidation to 2 existing 2850s
      (dual 3.4GHz Xeons) running ESX 3.  As we need more capacity, we will
      use additional blades for ESX 3 and consolidation.

      The 10-blade chassis specs out to about $6k.  Each 3GHz dual-core
      (Woodcrest) blade with 16 gigs of RAM, 2x75GB SAS drives, and dual
      Broadcom TOE GigE/dual QLogic fiber HBAs will run us about $6,600
      (including warranty/support).
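
Quick sums on the figures quoted above (a roughly $6k chassis plus roughly
$6,600 per blade, with a 10-slot enclosure assumed): the chassis cost only
amortises once the enclosure fills up, which is the ROI point raised earlier
in the thread.

# Rough sums: chassis ~$6,000 plus ~$6,600 per blade, as quoted above.
chassis_cost = 6_000
blade_cost = 6_600
for blades in (3, 6, 10):                  # partially vs fully populated
    total = chassis_cost + blades * blade_cost
    print(f"{blades:>2} blades: total ${total:,}  ->  ${total / blades:,.0f} per server")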





************************************************
For Archives, RSS, to Unsubscribe, Subscribe or
set Digest or Vacation mode use the below link:
//www.freelists.org/list/thin
************************************************

