[THIN] Re: OT: Blade Servers

  • From: "Joe Shonk" <joe.shonk@xxxxxxxxx>
  • To: thin@xxxxxxxxxxxxx
  • Date: Wed, 20 Sep 2006 17:23:42 -0700

Actually, all three vendors have tight integration with Altiris (for HP
it's called RDP, but it's just Altiris with the HP logo).   I'm not quite sure
why you think IBM is better than Dell.  The Dells at least come with a real
RAID controller... HP's SAS controller is crap, and IBM, well, let's just say
the HS20s (8843) were nothing short of a cluster ......  On the bright side,
HP blades do support Opteron processors, unlike Dell...  If you're planning to
run anything like VMware ESX on blades, the Opterons are the way to go.

Joe

On 9/20/06, Rusty Yates <rusty27@xxxxxxxxx> wrote:

We are using Dell OpenManage right now along with the on-board management for the blade chassis, which is OK but nothing like IBM's. When we were looking at IBM, HP and Dell, we knew IBM and HP were better, but for what we needed Dell was the better fit and we got more servers for the money. This year we are looking at Altiris, which has a piece designed just for Dell. Looks sweet, and the pricing isn't that bad.

On 9/19/06, Evan Mann <emann@xxxxxxxxxxxxxxxxxxxxx> wrote:
>
>  Are you using OpenManage or 3rd party for management? I've never been
> impressed with OpenManage for non-Blade servers, but I'd imagine the Blade
> variant is much different.
>
>  ------------------------------
> *From:* thin-bounce@xxxxxxxxxxxxx [mailto:thin-bounce@xxxxxxxxxxxxx] *On
> Behalf Of *Rusty Yates
> *Sent:* Tuesday, September 19, 2006 12:56 PM
> *To:* thin@xxxxxxxxxxxxx
> *Subject:* [THIN] Re: OT: Blade Servers
>
> In our environment we are running the Dell 1855 Blades and haven't run
> into any problems.  Next year we will buy the 1955 models.
>
> On 9/19/06, Evan Mann <emann@xxxxxxxxxxxxxxxxxxxxx> wrote:
> >
> >  It looks like I'm going to be moving into the land of Blade servers.
> > We're a Dell shop, so Dell 1955s are what is being looked at right now.  I
> > want to put together a short list of key items to make sure these things
> > have/support.  Memory-backed RAID cache and power issues are the only things
> > I have on the list now, since those were the main issues I've seen come
> > across the list.  Obviously, management of the blade chassis is important.
> >
> > If it's useful, here is what we are planning to do with the Blades:
> > There is no intention of moving our Citrix farm to blades, but we are
> > deploying a new business-level app using VMware ESX 3.  This new app will
> > utilize web servers and SQL servers.  The web farms will be in ESX and will
> > utilize an application load balancer; the SQL servers (starting with 1) will
> > likely not be in ESX, but that is undecided.
> >
> > We will have a fiber connected SAN as well, but the plan isn't to boot
> > off the SAN (right now at least).  It is unknown if we will connect the
> > entire blade chassis to the SAN, or servers individually. It depends on the
> > cost of the fiber switches.
> >
> > We are doing a lot of server consolidation as well to 2 existing 2850s
> > (dual 3.4GHz Xeons) running ESX 3.  As we need more capacity, we will
> > use additional Blades for ESX 3 and consolidation.
> >
> > The 10-blade chassis specs out to about $6k.  Each 3GHz dual-core
> > (Woodcrest) blade with 16GB of RAM, 2x75GB SAS drives and dual Broadcom
> > TOE GigE / dual QLogic fiber HBAs will run us about $6,600 (including
> > warranty/support).
> >
>
>
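
For rough budgeting, here is a quick back-of-the-envelope sketch (Python) using the figures Evan quoted above. The assumption that all 10 chassis slots get populated is mine, purely for illustration:

    # Rough cost sketch for a fully populated 10-blade chassis,
    # using the prices quoted in the thread (illustrative only).
    chassis_cost = 6_000   # "about 6k" for the 10-blade chassis
    blade_cost = 6_600     # per blade, including warranty/support
    blade_count = 10       # assumes every slot is filled

    total = chassis_cost + blade_count * blade_cost
    per_blade_effective = total / blade_count

    print(f"Fully populated chassis: ${total:,}")                    # $72,000
    print(f"Effective cost per blade: ${per_blade_effective:,.0f}")  # $7,200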
