I can vouch for the x330s and x335s; that's what we've got here. In the config you just mentioned, this is what you'd have with 28 1U boxes:

3 KVM cables (max 12 servers chainable per KVM breakout cable)
28 power cables
56 network cables
1 PDU (APC makes a really nice 42-outlet vertical PDU that mounts in the back of the cabinet)

I am the network guy, so yes, I like me. :)

As for remote control with the 1Us, you have TS built into the OS, and there's also the RSA or RSA 2 adapter (depending on the model of the server). One RSA card can support up to 24 servers, allows monitoring at the hardware level (ambient temp, CPU temp, fan speeds, hardware/POST/BIOS errors, etc. etc.), and can allow remote control of the console. It also allows both soft (OS-driven) and hard (cut power) shutdowns, and can power on a server if it's off.

But so far, the "management-wise" advantages you have cited are only cabling issues, which I can see reducing install time, but that's a one-time deal. How often do you really have to re-cable your racks?? That's not to say I don't like the solution, but... when you already have an infrastructure in place and only need to purchase one or two additional servers to add to a farm, it's a tough sell. The blades seem to cost, at best, a TINY bit less than a 1U box. But then you have the cost of the chassis itself. Sure, divided across the 14 blades it can hold, it's much easier to swallow, but when you're only buying servers one or two at a time, that's a lot to swallow for the initial investment... :(

Power consumption??? Last time I looked, the chassis for the HS20 had FOUR 1800-watt power supplies! And if a power supply goes in a single server, isn't that the point of a farm? Lots of smaller servers instead of one GIANT server? I could afford to lose one of my servers with no noticeable impact, and business could continue with two down, albeit with performance complaints (I only have 6)...
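[Editor's note: a quick back-of-the-envelope check of the cable counts argued over in this thread. The figures (28 servers, 14 blades per chassis, 12 servers per KVM breakout chain, 4 power cords and 4 network uplinks per chassis) are taken from the numbers quoted in the messages below; treat the per-chassis counts as assumptions.]

```python
import math

SERVERS = 28
BLADES_PER_CHASSIS = 14   # BladeCenter capacity cited in the thread
KVM_CHAIN_MAX = 12        # max 1U servers per KVM breakout cable, as quoted

# 1U rack with daisy-chained KVM (the x330/x335 config described above)
one_u = {
    "kvm": math.ceil(SERVERS / KVM_CHAIN_MAX),  # 3 breakout chains
    "power": SERVERS,                           # 28 cords, single PSU each
    "network": SERVERS * 2,                     # 56 with redundant NICs
}

# Blade config: 2 chassis, per-chassis counts as quoted in George's message
chassis = math.ceil(SERVERS / BLADES_PER_CHASSIS)  # 2
blades = {
    "kvm": chassis,          # 1 cable per chassis via the management module
    "power": chassis * 4,    # 8 cords
    "network": chassis * 4,  # 8 (16 with redundancy)
}

print(one_u)   # {'kvm': 3, 'power': 28, 'network': 56}
print(blades)  # {'kvm': 2, 'power': 8, 'network': 8}
```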
-----Original Message-----
From: Taylor, George [mailto:gtaylor@xxxxxxxx]
Sent: Wednesday, June 16, 2004 3:04 PM
To: thin@xxxxxxxxxxxxx
Subject: [THIN] Re: Hardware - Blades, SAN?

I'm not sure I can justify it cost-wise compared to 1U servers, but management-wise I believe I can. Imagine a rack with 28 1U servers in it...

28 KVM cables
28 power cords (hope a power supply doesn't pop; I can't think of a 1U that has dual PSUs, but there may be one)
28 network cables (56 for redundancy)

Just basic cable management would be a nightmare. You've now used 28 ports on your KVM switch and 56 ports on your network switch (hope your networking guys like you...), and how many PDUs did you have to mount in that rack?

Now let's use 28 blades; that's 2 chassis.

2 KVM cables
8 power cords
8 network cables (16 if your networking guys like you)
28 Fibre Channel cables if you're using SAN-based storage
1/2 the rack space used.

Yes, I know you can daisy-chain the KVM together on servers like the x335, but I can't say how it works; we never got the management module to do that. I VPN into our network from home, point my browser at the management module, and have remote control over all my blades without anything like pcAnywhere or such. I haven't looked at the math lately, but power consumption is also reduced dramatically.

_____

From: Jeff Malczewski [mailto:jmalczewski@xxxxxxxx]
Sent: Wednesday, June 16, 2004 11:49 AM
To: 'thin@xxxxxxxxxxxxx'
Subject: [THIN] Re: Hardware - Blades, SAN?

The cost of the FastT-700 and the associated switch fabric was what I was most interested in. I'm rather unfamiliar with SANs and was just curious what the price of an entire solution would be if I were to build a new facility from the ground up with this solution instead of individual boxes. What administrative benefits have you seen from this as opposed to 5 individual 1U boxes??
It seems to me that the individual servers are still WAY cheaper, so there must be some real-world justification; just trying to find it...

-----Original Message-----
From: Taylor, George [mailto:gtaylor@xxxxxxxx]
Sent: Wednesday, June 16, 2004 1:36 PM
To: thin@xxxxxxxxxxxxx
Subject: [THIN] Re: Hardware - Blades, SAN?

I can't give you the full price, since our infrastructure was already in place, but I can give you a close idea. Here is what I remember about the latest project; keep in mind that one BladeCenter was already in place, as well as the FastT-700 and associated switching fabric.

The BladeCenter with optical pass-thru, Ethernet switch, bigger power supplies, Acoustic Attenuation Module (muffler), 3yr 24x7x4 support, and all associated fiber cables, etc.: about $11K
5 blades (dual 2.8 GHz Xeons, 4 GB RAM, HBA, 40 GB IDE drive) w/ 3yr 24x7x4: about $35K
5 145 GB Fibre Channel drives for the FastT: about $10K
New storage shelf and shortwave GBICs for the FastT: about $7K

Along with that were things like server CALs, TSM CALs, etc. The whole project, HW and SW, turned in right around $80K.

_____

From: Jeff Malczewski [mailto:jmalczewski@xxxxxxxx]
Sent: Wednesday, June 16, 2004 10:26 AM
To: 'thin@xxxxxxxxxxxxx'
Subject: [THIN] Re: Hardware - Blades, SAN?

How many HS20s do you have, what was the cost for JUST the hardware for that solution (including the FastT), and how much storage do you have available?? How many U does the complete solution take up, including the FastT? I currently use 6 x330s (dual P3, 4 GB RAM, 1U) for my TS farm...

-----Original Message-----
From: Taylor, George [mailto:gtaylor@xxxxxxxx]
Sent: Wednesday, June 16, 2004 11:37 AM
To: thin@xxxxxxxxxxxxx
Subject: [THIN] Re: Hardware - Blades, SAN?

We're implementing this right now. You give up one of the on-board drives to make space for the HBA. What I've decided to do is boot from the SAN and use the little on-board IDE drive for nothing but the swap and temp files.
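[Editor's note: tallying the approximate project figures George quotes above. All numbers come straight from the thread; the software remainder is only implied by the ~$80K total, and the per-blade amortization uses the 14-blade chassis capacity mentioned elsewhere in the thread.]

```python
# Approximate hardware costs as quoted (USD)
costs = {
    "bladecenter_chassis": 11_000,  # chassis, pass-thru, switch, PSUs, AAM, support
    "five_blades": 35_000,          # dual 2.8 GHz Xeon, 4 GB, HBA, 40 GB IDE
    "five_fc_drives": 10_000,       # 145 GB Fibre Channel drives for the FastT
    "shelf_and_gbics": 7_000,       # storage shelf + shortwave GBICs
}

hardware = sum(costs.values())          # 63,000
software_etc = 80_000 - hardware        # ~17K implied for CALs, TSM, licensing

per_blade = costs["five_blades"] / 5            # $7,000 per blade
chassis_amortized = costs["bladecenter_chassis"] / 14  # ~$786/blade at full capacity

print(hardware, software_etc)                   # 63000 17000
print(per_blade, round(chassis_amortized))      # 7000.0 786
```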
You can get SCSI drives on the blades, but it's an attached cage that takes up a second slot. My testing with the HS20s (dual Xeons, 4 GB RAM, booting from the FastT) has shown them to be very robust machines. Currently we only have 1 optical pass-thru and 1 Ethernet switch installed, but you can double up on both for redundancy if need be. The Ethernet switch gives you 4 ports that can be trunked and plays very well with our Cisco 6500 gear; the throughput is good, and from what our networking guys say, we are hardly touching the 4-gig limit. The optical pass-thru gives you a Fibre Channel link from each and every blade; wire management with that much fiber coming from such a little space is a chore, but it does work.

_____

From: SMREKAR, JACK [mailto:SMREKAR@xxxxxxxxxxxxxx]
Sent: Wednesday, June 16, 2004 4:59 AM
To: thin@xxxxxxxxxxxxx
Subject: [THIN] Re: Hardware - Blades, SAN?

While we do not have any blades, I have also been thinking about using them for some things. Depending on the company you use for your blades, I would question whether you need a SAN for the storage of your apps. They are coming with upwards of 80 GB drives, and I think some of them are now just starting to come with SCSI drives so you can mirror the drives. I would look at purchasing them as they are and not worry about attaching them to a SAN. Besides, I am not sure you could get an HBA inside one of them to attach to the SAN.

Jack Smrekar
Appleton Area School District

_____

From: Chris Grecsek [mailto:grecsek@xxxxxxxxxxx]
Sent: Wednesday, June 16, 2004 4:31 AM
To: thin@xxxxxxxxxxxxx
Subject: [THIN] Hardware - Blades, SAN?

We're trying to determine which way to go with our backend hardware for our Citrix farm... I've been hearing a lot about these blade servers and was wondering if we should set up our farm on a slew of blades and tie it into a SAN. Seems like the optimal configuration for a ton of users running basic apps like Office, IE, Acrobat, etc.
Was wondering if anyone had any feedback regarding a setup like this - have you done it, advisable or not, pros/cons, etc.

This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. This message contains confidential information and is intended only for the individual named. If you are not the named addressee you should not disseminate, distribute or copy this e-mail.