  1. #1
    Join Date
    Nov 2001
    Location
    The South
    Posts
    5,408

    Colomachine.com/Rackmount.com Blades?

    Just got a blade housing and 10 empty blade cases, gotta say this looks very promising. The housing has lots of airflow and 6 BIG fans in the back that look like they'll move some serious air across those blades.

I built a single P4 3.0E so far, and it looks real good.

    Anyone used any of these ATXBlade systems before? Thoughts?
    Gary Harris - the artist formerly known as Dixiesys
    resident grumpy redneck

  2. #2
    Quote Originally Posted by Dixiesys
    Just got a blade housing and 10 empty blade cases, gotta say this looks very promising. The housing has lots of airflow and 6 BIG fans in the back that look like they'll move some serious air across those blades.

I built a single P4 3.0E so far, and it looks real good.

    Anyone used any of these ATXBlade systems before? Thoughts?

Rackmount.com is a customer here. They put an ATXBlade system in about 18 months ago while they were still prototyping and testing. The thing just runs - zero problems. We even have a few of our own blades running in it. It started as a test, and it works so well we left them in there!
    The Gotham Bus Company, Inc.
    Colocation and Shared Hosting for Smart Webmasters
    Long Island, NY

  3. #3
    Join Date
    Nov 2001
    Location
    The South
    Posts
    5,408
    Glad to hear they're working

I am building a couple of Athlon XPs, a couple of Sempron 2800s, and a P4 3.06E in mine; hope they all work out like champs.
    Gary Harris - the artist formerly known as Dixiesys
    resident grumpy redneck

  4. #4
    Join Date
    Apr 2001
    Posts
    1,045
You can build the actual blades with your own specs? Kinda like a pizza: just add your own toppings, as long as you have the main housing and individual blades?
    » ReliableServers.com
    » Dedicated Servers | Colocation | VPS
    » 973-849-0535

  5. #5
    Join Date
    Nov 2001
    Location
    The South
    Posts
    5,408
Yes, I ordered the blade housing and 10 empty blade cases with 250W P4-ready power supplies ($2,150, shipping and all), and pretty much any mobo/CPU fan you can put in a 1U server you can put in the blade cases. So far we've started building 2 Semprons, 2 Athlon XPs, and a P4 monster (3.06 GHz, 2 MB cache, 2 GB RAM, 2x 74 GB 10K RPM SATA drives).
    Gary Harris - the artist formerly known as Dixiesys
    resident grumpy redneck

  6. #6
    Join Date
    Dec 2001
    Location
    127.0.0.1
    Posts
    3,642
    Gary,

Do keep us updated - I'd be interested to hear how it all turns out.

  7. #7
    Hello,

    I'm very interested in your deployment of this too. Keep us updated on performance and reliability. Sounds very promising.

  8. #8
    Join Date
    Nov 2001
    Location
    The South
    Posts
    5,408
The main issue that might be a problem is cooling, but looking at how these are made, I'm thinking they're gonna stay nice and cool. With those 6 big fans in the back and the openings all around for airflow, I think they'll cool a good bit better than "typical" 1U cases since they're way more open and free-flowing (in my non-mechanical-engineer's opinion). As long as they stay cool and the power supplies aren't prone to dying, it should be a darn good setup.
    Gary Harris - the artist formerly known as Dixiesys
    resident grumpy redneck

  9. #9
I also like how they use non-proprietary parts. Just buy the housing and the sled and configure as you want (and re-use later on). I had looked at Dell/HP/IBM blades but was turned off by their price. This appears to be a fantastic solution. Yes, those fans sound like they should handle cooling just fine. Now I wonder how the noise will be. Do you have earplugs already?

  10. #10
    Join Date
    Nov 2001
    Location
    The South
    Posts
    5,408
I doubt they'll be noticed in the data center; that place is a cacophony of WHOOSHING air and whirring computers, and those air conditioners at GNAX are similar to standing near an airplane lifting off on the noise level meter. But cold air = GOOD, so noise = who cares! haha

I could sleep like a baby in the data center; with all that whirring and the cold air, I'd sleep for days.
    Gary Harris - the artist formerly known as Dixiesys
    resident grumpy redneck

  11. #11
    Join Date
    Feb 2002
    Location
    Australia
    Posts
    24,027
    Gary, how about a few pics of your blade setup?
    WLVPN.com NetProtect owned White Label VPN provider
    Increase your hosting profits by adding VPN to your product line up

  12. #12
One thing though which a lot of people like is, you cannot use a remote reboot system with these, as there are only 2 power cords for the whole system it seems.
    Jay

  13. #13
    Quote Originally Posted by jayglate
One thing though which a lot of people like is, you cannot use a remote reboot system with these, as there are only 2 power cords for the whole system it seems.
    Each blade/carrier has its own power cord, so you can still use a remote power controller. The power on the chassis is just to run the cooling fans.

Yes, they are loud, but it's just another contributor to the continuous whooshing sound in our datacenter. Between server fans and AC, it's LOUD in here!
    The Gotham Bus Company, Inc.
    Colocation and Shared Hosting for Smart Webmasters
    Long Island, NY

  14. #14
    Quote Originally Posted by gothambus
    Each blade/carrier has its own power cord, so you can still use a remote power controller. The power on the chassis is just to run the cooling fans.

Yes, they are loud, but it's just another contributor to the continuous whooshing sound in our datacenter. Between server fans and AC, it's LOUD in here!

Really?? Umm, very interesting. Now I have to look at them.
    Jay

  15. #15
    Join Date
    Dec 2001
    Location
    127.0.0.1
    Posts
    3,642
    Quote Originally Posted by jayglate
One thing though which a lot of people like is, you cannot use a remote reboot system with these, as there are only 2 power cords for the whole system it seems.
    Jay,

1. You sure you don't mean "One thing though which a lot of people don't like is"?
    2. Usually blades can be rebooted either the conventional way or via an interface within the blade management software (if the option exists). I've never seen any kind of server that cannot be rebooted remotely - if you know of one, I'd like to see it.

  16. #16
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by Dixiesys
    Just got a blade housing and 10 empty blade cases, gotta say this looks very promising. The housing has lots of airflow and 6 BIG fans in the back that look like they'll move some serious air across those blades.

I built a single P4 3.0E so far, and it looks real good.

    Anyone used any of these ATXBlade systems before? Thoughts?
Can you give us a cost analysis? The ATXBlade enclosure itself is an 8U (14" high), so it only saves you 2U of space vs. 10 stacked 1Us, but you need to pay extra for the enclosure and perhaps extra for each proprietary 1U blade housing vs. a standard 1U chassis. Also, can you really put 10x blades in there without running into the typical 20-amp limitation? If not, then the big 8U unit can cost you the same or even more space than stacking standard 1U chassis.

Despite the 6x 12cm fans on the back of the blade center enclosure, the way they designed the individual blade housings is really not quite 1U friendly. Basically, CPU cooling will be limited to active CPU cooling fans, which should really be avoided for serious servers. They are probably still good for a low-speed single P4/Athlon64 CPU given the less-optimized CPU cooling and the limited 250W power supply per blade. I'm sure the newer generations of super-hot, power-hungry P4/Xeon won't be happy in there....
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  17. #17
These are not blades in the true sense of the word. It's just a chassis with a bunch of fans, and up to 10 1U carriers. You build a plain old server on each 1U carrier, and mount them vertically in the chassis. Each server is an independent, stand-alone machine. There isn't even a common power supply.

    Basically, you get to put 10 servers in an 8U space, and the cost of the chassis and 10 blades with good power supplies is less than if you were to purchase 10 1U cases to start building in.

    Maintenance is also much easier as sliding a server out of the chassis is generally faster and easier than de-racking and cracking open a traditional 1U case.
    The Gotham Bus Company, Inc.
    Colocation and Shared Hosting for Smart Webmasters
    Long Island, NY

  18. #18
    Quote Originally Posted by cwl@apaqdigital
Can you give us a cost analysis? The ATXBlade enclosure itself is an 8U (14" high), so it only saves you 2U of space vs. 10 stacked 1Us, but you need to pay extra for the enclosure and perhaps extra for each proprietary 1U blade housing vs. a standard 1U chassis. Also, can you really put 10x blades in there without running into the typical 20-amp limitation? If not, then the big 8U unit can cost you the same or even more space than stacking standard 1U chassis.
Last I looked, the chassis and 10 carriers with 250W power supplies was retailing for $1800. The least expensive 1U case made by the same guys is retailing for $160 with a 250W power supply. That puts the chassis/carrier combo $200 more than buying 10 traditional cases.
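To put rough numbers on that (the $1800 bundle and $160 case prices are the ones quoted above; the $20-per-server rail figure is just a ballpark guess, so treat this as a sketch rather than a quote):

    # Rough cost comparison: ATXBlade chassis + 10 carriers vs. 10 traditional 1U cases.
    # Prices are the list prices mentioned in the post above; the per-server rail
    # cost is an assumed ballpark figure, not a quoted price.
    CHASSIS_BUNDLE = 1800      # chassis + 10 carriers with 250W supplies
    SINGLE_1U_CASE = 160       # least expensive comparable 1U case, 250W supply
    RAILS_PER_1U = 20          # rails or shelf per server, rough estimate

    traditional = 10 * (SINGLE_1U_CASE + RAILS_PER_1U)
    print("10x 1U cases + rails:", traditional)                    # 1800
    print("Blade chassis bundle:", CHASSIS_BUNDLE)                 # 1800
    print("Extra for the blades:", CHASSIS_BUNDLE - traditional)   # roughly a wash once rails are counted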

    The 2U space savings does have some value, though that varies by datacenter. The real value in my eyes is the ease with which an individual server is accessed for maintenance. If you're stacking/railing 10 1U chassis to fill exactly 10U of space, you've got some work to do to get one out of the rack and opened up. Then there's the cost of rails or shelves.

One definite minus is the inability to use either a floppy drive or CD-ROM without either using a USB solution or sliding a carrier out of the chassis and kludging a temporary IDE connection.

As far as a 20 amp limit goes, if your datacenter will not give you more power than that, then it's a problem no matter which way you go. 10 servers draw X amps regardless of the housing/racking system.
    The Gotham Bus Company, Inc.
    Colocation and Shared Hosting for Smart Webmasters
    Long Island, NY

  19. #19
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
Also, unless these blades start offering hot-swap drive bays, it will take more time to service a blade than a standard 1U. Why? Because nowadays the #1 issue for crashed servers is a defective HDD. Pulling out a blade housing and then changing out the fixed-mount HDD will cost more time than just swapping out a small HDD tray from a traditional 1U chassis with hot-swap bays.

An empty blade chassis w/250W is $160 with 2x fixed HDD bays and no CD bay, meanwhile you can buy a 14" deep Supermicro mini 1U chassis (w/260W, 2x HDDs, no CD) for about $110, and some of my clients even put a pair of these mini 1Us in a single 1U of space, so 10x servers only use 5U! These mini 1Us can even accommodate a dual-core Opteron or Athlon64 X2 for some serious serving business.
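Putting rough numbers on that comparison (the ~$110 mini 1U price and two-cases-per-U packing are from this post, the ~$1800 blade bundle is from earlier in the thread; power, cooling, and cabling are deliberately ignored here):

    # Rough density/cost tally: ATXBlade enclosure vs. paired Supermicro mini 1U cases.
    # Case prices and the two-per-U packing come from the posts above; this sketch
    # ignores power, cooling, and cabling entirely.
    SERVERS = 10

    blade_u = 8                    # 8U enclosure holds 10 carriers
    blade_cost = 1800              # chassis + 10 carriers bundle (earlier post)

    mini_cost = SERVERS * 110      # ~$110 per mini 1U case
    mini_u = SERVERS // 2          # two 14"-deep cases share each 1U -> 5U total

    print(f"ATXBlade: {SERVERS} servers in {blade_u}U for ${blade_cost}")
    print(f"Mini 1U : {SERVERS} servers in {mini_u}U for ${mini_cost}")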
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  20. #20
    Join Date
    May 2002
    Location
    Sunny California
    Posts
    1,679
    Quote Originally Posted by gothambus
One definite minus is the inability to use either a floppy drive or CD-ROM without either using a USB solution or sliding a carrier out of the chassis and kludging a temporary IDE connection.
We have over 200 servers online and we haven't built a server with a CD or floppy drive in it for years. A USB 2.0 CD drive is bootable, as fast as a regular IDE drive, and saves thousands of dollars over the long haul ($20 for a CD drive x 200 servers = $4000 saved for my company). I don't see this as a drawback at all.

    Quote Originally Posted by gothambus
As far as a 20 amp limit goes, if your datacenter will not give you more power than that, then it's a problem no matter which way you go. 10 servers draw X amps regardless of the housing/racking system.
10 P4 servers draw about 8A during regular usage and about 12A during power-up (again, according to our calculations). A 20A circuit should be able to hold 1 of these 10-server enclosures with no problem. 2 would probably be pushing it, but you can always fill the circuit with regular 1U/2U boxes to take up the slack.
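A quick sanity check on those figures (the per-server draws are simply the estimates above divided by 10; the 80% continuous-load derating is a common rule of thumb I'm assuming here, not something measured in this thread):

    # Sanity check: how many of these 10-server enclosures fit on a 20A circuit?
    # Per-server draws are the estimates from the post above; the 80% rule is a
    # common derating guideline for continuous loads, assumed rather than measured.
    AMPS_RUNNING_PER_SERVER = 0.8   # ~8A for 10 P4 servers under normal load
    AMPS_STARTUP_PER_SERVER = 1.2   # ~12A for 10 servers during power-up
    CIRCUIT_AMPS = 20
    USABLE_AMPS = CIRCUIT_AMPS * 0.8    # 16A continuous after derating

    for enclosures in (1, 2):
        running = enclosures * 10 * AMPS_RUNNING_PER_SERVER
        startup = enclosures * 10 * AMPS_STARTUP_PER_SERVER
        verdict = "fits" if running <= USABLE_AMPS and startup <= CIRCUIT_AMPS else "pushing it"
        print(f"{enclosures} enclosure(s): {running:.0f}A running, "
              f"{startup:.0f}A at power-up -> {verdict}")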
    Erica Douglass, Founder, Simpli Hosting, Inc.
    »»» I founded Simpli Hosting, and sold it in 2007 to Silicon Valley Web Hosting after over 6 years in the business.
    Now I'm blogging at erica.biz!

  21. #21
    Quote Originally Posted by Simpli-Erica
We have over 200 servers online and we haven't built a server with a CD or floppy drive in it for years. A USB 2.0 CD drive is bootable, as fast as a regular IDE drive, and saves thousands of dollars over the long haul ($20 for a CD drive x 200 servers = $4000 saved for my company). I don't see this as a drawback at all.
    I totally agree, but not everyone sends us servers for colo that can boot via USB. You'd be surprised at the lack of thought some people put into the machines that run hosting operations!
    The Gotham Bus Company, Inc.
    Colocation and Shared Hosting for Smart Webmasters
    Long Island, NY

  22. #22
    Join Date
    Dec 2001
    Location
    Toronto, Ontario, Canada
    Posts
    6,896
I also fail to see the cost savings here. For around $210/chassis you can pick up the really nice Supermicro 811 chassis, black, with a 420W PSU and rails. If you want to go cheaper, as cwl@apaqdigital noted, you can go with the Supermicro CSE-512LB (I think it was), though I don't particularly like them.

Personally I find the Evercase 9131 and 9138 chassis a nice go-between. Rails are cheap (~$20); the chassis themselves vary depending on whether you want hot-swap, what size of PSU, etc., but generally run around $120-140 without rails, and they come with awesome fans.

Bottom line, I think the "blades" we're talking about here aren't really blades. They don't save you on power, wiring, or switch ports, and they don't aggregate anything in any manner; all they really do is reduce your physical diversity in several ways (e.g. losing 1-2 fans in that chassis could be catastrophic, especially if it's loaded with high-end "blades"). While I love the idea, the implementation just doesn't make sense: there's no cost savings, the space savings are near nonexistent, there's no aggregation, no reduction in management requirements, etc.
    Myles Loosley-Millman - admin@prioritycolo.com
    Priority Colo Inc. - Affordable Colocation & Dedicated Servers.
    Two Canadian facilities serving Toronto & Markham, Ontario
    http://www.prioritycolo.com

  23. #23
    Quote Originally Posted by cwl@apaqdigital
Also, unless these blades start offering hot-swap drive bays, it will take more time to service a blade than a standard 1U. Why? Because nowadays the #1 issue for crashed servers is a defective HDD. Pulling out a blade housing and then changing out the fixed-mount HDD will cost more time than just swapping out a small HDD tray from a traditional 1U chassis with hot-swap bays.
What $180 1U chassis with a good power supply includes hot-swap drive bays?

An empty blade chassis w/250W is $160 with 2x fixed HDD bays and no CD bay, meanwhile you can buy a 14" deep Supermicro mini 1U chassis (w/260W, 2x HDDs, no CD) for about $110, and some of my clients even put a pair of these mini 1Us in a single 1U of space, so 10x servers only use 5U! These mini 1Us can even accommodate a dual-core Opteron or Athlon64 X2 for some serious serving business.
    An excellent analysis of the Supermicro mini 1U can be found in this thread:

    http://www.webhostingtalk.com/archiv.../375407-1.html

    I'm not saying that the rackblade system is the end-all for all people. For entry level to midrange servers, it makes sense for some folks, not for others. I don't think the guys at PCWnet designed it for the dual Xeon/Opteron RAID5 hot-swap application.

    Funny story about the mini 1U form factor. We had a client here that rolled in a full rack of stuff about a year ago. They tried to save space by mounting two of those in the same 1U, but they mounted them so that the hot air out of the front server was discharged directly into the intake of the rear box. You guys can do the math on this one.
    The Gotham Bus Company, Inc.
    Colocation and Shared Hosting for Smart Webmasters
    Long Island, NY

  24. #24
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by gothambus
What $180 1U chassis with a good power supply includes hot-swap drive bays?...
The Supermicro SC811T-260 is under $190. It does come with 2x hot-swap SATA bays, a rail set, and a truly reliable 260W power supply that just won't break! (Well, maybe 1 in the last few hundred we have shipped...) In fact, the identical 260W power supply can also be found in the SC512L-260 mini 1U case.

With a nice pair of sliding rails installed, you don't have to de-rack these standard 1Us to do service work, such as swapping a bad HDD or bad PSU. Plus, you have the full range of options to install a more up-to-date platform, from low-end Celeron/Sempron to high-end P4/Athlon64 or dual-core chips cooled by a true passive cooling arrangement.

The bottom line is that, like porcupine said, these ATXBlade systems are not true "blades" like those offered by Dell/HP/IBM. With limited, less reliable active cooling and a less desirable board layout, they can only accommodate outdated, low-heat-output platforms such as the Northwood P4 and Prestonia Xeon, which are just about impossible to buy new nowadays. Also, it's really unknown how reliable those small-footprint, proprietary power supplies they put in these "blade" units are. Do they have a good track record? Can you get replacements fast and easily?
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  25. #25
    Quote Originally Posted by cwl@apaqdigital
The bottom line is that, like porcupine said, these ATXBlade systems are not true "blades" like those offered by Dell/HP/IBM. With limited, less reliable active cooling and a less desirable board layout, they can only accommodate outdated, low-heat-output platforms such as the Northwood P4 and Prestonia Xeon, which are just about impossible to buy new nowadays. Also, it's really unknown how reliable those small-footprint, proprietary power supplies they put in these "blade" units are. Do they have a good track record? Can you get replacements fast and easily?
    I'm not privy to the engineering at PCW, but those guys have been designing and selling this stuff longer than most people on WHT have been in this business. I don't doubt that the power supplies are solid, and I know they are easily replaceable. PCW is a reputable shop.

That being said, I have no vested interest in the ATXBlade system, or in PCW Microsystems. Dixiesys asked. I responded with our experience. A discussion ensued. Nothing more, nothing less. The ATXBlade is good for some, not so good for others. It has pros and cons like everything else, as do traditional case servers.

    We're neutral here. Send us whatever you want. Doesn't matter if it comes from PCW or Apaq Digital. We'll just rack it up and treat it all the same.
    The Gotham Bus Company, Inc.
    Colocation and Shared Hosting for Smart Webmasters
    Long Island, NY
