-
11-05-2005, 12:54 AM #1 Web Hosting Master
- Join Date
- Nov 2001
- Location
- The South
- Posts
- 5,408
Colomachine.com/Rackmount.com Blades?
Just got a blade housing and 10 empty blade cases, gotta say this looks very promising. The housing has lots of airflow and 6 BIG fans in the back that look like they'll move some serious air across those blades.
I've built a single P4 3.0E so far, and it looks real good.
Anyone used any of these ATXBlade systems before? Thoughts?
Gary Harris - the artist formerly known as Dixiesys
resident grumpy redneck
-
11-05-2005, 04:15 AM #2 WHT Addict
- Join Date
- Dec 2004
- Posts
- 109
Originally Posted by Dixiesys
Rackmount.com is a customer here. They put an ATXBlade system in about 18 months ago, while they were still prototyping and testing. The thing just runs - zero problems. We even have a few of our own blades running in it. It started as a test, and it works so well that we left them in there!
-
11-05-2005, 06:52 PM #3 Web Hosting Master
- Join Date
- Nov 2001
- Location
- The South
- Posts
- 5,408
Glad to hear they're working
I am building a couple Athlon XPs, a couple Sempron 2800s, and a P4 3.06E in mine; hope they all work out like a champ.
Gary Harris - the artist formerly known as Dixiesys
resident grumpy redneck
-
11-05-2005, 09:11 PM #4 Web Hosting Master
- Join Date
- Apr 2001
- Posts
- 1,045
You can build the actual blades w/ your own specs? Kinda like a pizza: just add your own toppings, as long as you have the main housing and individual blades?
-
11-05-2005, 09:41 PM #5 Web Hosting Master
- Join Date
- Nov 2001
- Location
- The South
- Posts
- 5,408
Yes, I ordered the blade housing and 10 empty blade cases with 250W P4-ready power supplies ($2150, shipping and all), and pretty much any mobo/CPU fan you can put in a 1U server, you can put in the blade cases. So far we've started building 2 Semprons, 2 Athlon XPs, and a P4 monster (3.06, 2M cache, 2G RAM, 2x74G 10K RPM SATA drives).
Gary Harris - the artist formerly known as Dixiesys
resident grumpy redneck
-
11-05-2005, 09:53 PM #6 Web Hosting Rockstar
- Join Date
- Dec 2001
- Location
- 127.0.0.1
- Posts
- 3,642
Gary,
Do keep us updated - I'd be interested to hear how it all turns out
-
11-06-2005, 07:12 PM #7 Newbie
- Join Date
- Jun 2005
- Posts
- 16
Hello,
I'm very interested in your deployment of this too. Keep us updated on performance and reliability. Sounds very promising.
-
11-06-2005, 07:26 PM #8 Web Hosting Master
- Join Date
- Nov 2001
- Location
- The South
- Posts
- 5,408
The main issue that might be a problem is cooling, but looking at how these are made, I'm thinking they're gonna stay nice and cool. With those 6 big fans in the back and the openings all around for airflow, I think they'll cool a good bit better than "typical" 1U cases, since they're way more open and free-flowing (in my non-mechanical-engineer's opinion). As long as they stay cool and the power supplies aren't prone to dying, it should be a darn good setup.
Gary Harris - the artist formerly known as Dixiesys
resident grumpy redneck
-
11-06-2005, 07:33 PM #9 Newbie
- Join Date
- Jun 2005
- Posts
- 16
I also like how they use non-proprietary parts. Just buy the housing and the sled and configure as you want (and re-use later on). I had looked at Dell/HP/IBM blades and was turned off by their price. This appears to be a fantastic solution. Yes, those fans sound like they should handle cooling just fine. Now I wonder how the noise will be. Do you have earplugs already?
-
11-06-2005, 07:38 PM #10 Web Hosting Master
- Join Date
- Nov 2001
- Location
- The South
- Posts
- 5,408
I doubt they'll be noticed in the data center; that place is a cacophony of WHOOSHING air and whirring computers, and those air conditioners at GNAX rate like standing near an airplane lifting off on the noise level meter. But cold air = GOOD, so noise = who cares! haha
I could sleep like a baby in the data center. All that whirring and the cold air, I'd sleep for days.
Gary Harris - the artist formerly known as Dixiesys
resident grumpy redneck
-
11-06-2005, 08:02 PM #11 Web Hosting Master
- Join Date
- Feb 2002
- Location
- Australia
- Posts
- 24,027
Gary, how about a few pics of your blade setup?
• WLVPN.com • NetProtect owned White Label VPN provider •
• Increase your hosting profits by adding VPN to your product line up •
-
11-07-2005, 07:10 PM #12 Doh!!
- Join Date
- Jan 2001
- Location
- NJ
- Posts
- 2,343
One thing though which a lot of people like is, you cannot use a remote reboot system with these, as there are only 2 power cords for the whole system, it seems.
Jay
-
11-07-2005, 07:35 PM #13 WHT Addict
- Join Date
- Dec 2004
- Posts
- 109
Originally Posted by jayglate
Yes, they are loud, but it's just another contributor to the continuous whooshing sound in our datacenter. Between server fans and AC, it's LOUD in here!
-
11-07-2005, 07:36 PM #14 Doh!!
- Join Date
- Jan 2001
- Location
- NJ
- Posts
- 2,343
Originally Posted by gothambus
Really?? Umm, very interesting. Now I have to look at them.
Jay
-
11-07-2005, 08:15 PM #15 Web Hosting Rockstar
- Join Date
- Dec 2001
- Location
- 127.0.0.1
- Posts
- 3,642
Originally Posted by jayglate
1. You sure you don't mean "One thing though which a lot of people don't like is"?
2. Usually blades can be rebooted either the conventional way or via an interface within the blade management software (if the option exists). I've never seen any kind of server that cannot be rebooted remotely - if you know of one, I'd like to see it.
-
11-07-2005, 08:16 PM #16 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Originally Posted by Dixiesys
Despite the 6x 12cm fans on the back of the blade center enclosure, the way they design the individual blade housing is really not quite 1U friendly. Basically, CPU cooling will be limited to active CPU cooling fans, which should really be avoided for serious servers. They are prolly still good for a low-speed single P4/Athlon 64 CPU, given the less-optimized CPU cooling and the limited 250W power supply per blade. I'm sure the newer generations of super-hot, power-hungry P4/Xeon won't be happy in there....
-
11-07-2005, 08:55 PM #17 WHT Addict
- Join Date
- Dec 2004
- Posts
- 109
These are not blades in the true sense of the word. It's just a chassis with a bunch of fans and up to 10 1U carriers. You build a plain old server on each 1U carrier and mount them vertically in the chassis. Each server is an independent, stand-alone machine. There isn't even a common power supply.
Basically, you get to put 10 servers in an 8U space, and the cost of the chassis and 10 blades with good power supplies is less than if you were to purchase 10 1U cases to start building in.
Maintenance is also much easier, as sliding a server out of the chassis is generally faster and easier than de-racking and cracking open a traditional 1U case.
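The density claim is easy to sanity-check. A quick sketch of the rack math (the 42U rack size is my assumption; the 8U-chassis-holds-10-carriers figure is from this thread):

```python
# Servers per rack: 8U blade chassis (10 carriers each) vs. plain 1U boxes.
rack_u = 42                  # assumed full-height rack
chassis_u = 8                # chassis size per this thread
carriers_per_chassis = 10

chassis_per_rack = rack_u // chassis_u                   # 5 chassis fit
blade_servers = chassis_per_rack * carriers_per_chassis  # 50 servers
plain_1u_servers = rack_u                                # one server per U

print(blade_servers, plain_1u_servers)  # 50 vs 42
```

So across a whole rack the chassis buys roughly 8 extra servers, assuming power and cooling allow it.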
-
11-07-2005, 09:09 PM #18 WHT Addict
- Join Date
- Dec 2004
- Posts
- 109
Originally Posted by cwl@apaqdigital
The 2U space savings does have some value, though that varies by datacenter. The real value in my eyes is the ease with which an individual server is accessed for maintenance. If you're stacking/railing 10 1U chassis to fill exactly 10U of space, you've got some work to do to get one out of the rack and opened up. Then there's the cost of rails or shelves.
One definite minus is the inability to use either a floppy drive or CD-ROM without either using a USB solution or sliding a carrier out of the chassis and kludging a temporary IDE connection.
As far as a 20 amp limit, if your datacenter will not give you more power than that, then it's a problem no matter which way you go. 10 servers draw X amps regardless of the housing/racking system.
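For what it's worth, the amperage arithmetic is quick to sketch. The 60% load factor and 120V circuit below are illustrative assumptions, not measured figures; actual draw depends entirely on the hardware:

```python
# Rough aggregate draw for 10 servers; the chassis doesn't change this.
psu_rating_w = 250    # per-blade PSU rating mentioned in this thread
load_factor = 0.6     # assumed: servers rarely pull the full PSU rating
volts = 120           # assumed North American circuit
n_servers = 10

total_watts = n_servers * psu_rating_w * load_factor
total_amps = total_watts / volts

print(f"{total_watts:.0f} W, about {total_amps:.1f} A at {volts} V")
```

Under those assumptions the 10 servers land comfortably under a 20A circuit, but heavier builds or higher load factors could change that.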
-
11-07-2005, 09:23 PM #19 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Also, unless these blades start offering hot-swap drive bays, it will take more time to service a blade than a standard 1U. Why? Because nowadays the #1 issue for crashed servers is a defective HDD. Pulling out a blade housing and then changing out the fixed-mount HDD will cost more time than just swapping out a small HDD tray from a traditional 1U chassis with hot-swap bays.
An empty blade chassis w/250W is $160, with 2x fixed HDD bays and no CD bay; meanwhile you can buy a 14"-deep Supermicro mini 1U chassis (w/260W, 2x HDDs, no CD) for about $110, and some of my clients even put a pair of these mini 1Us in a single 1U spacing, so 10 servers only use 5U of space! These mini 1Us can even accommodate a dual-core Opteron or Athlon 64 X2 for some serious serving business.
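Comparing per-server case cost with the prices quoted above (2005 list prices from this thread; the totals ignore rails, rack hardware, and the blade housing itself):

```python
# Case cost for 10 servers: blade carriers vs. mini 1U chassis.
blade_carrier = 160   # empty blade case w/250W PSU, per this thread
mini_1u = 110         # 14"-deep Supermicro mini 1U w/260W PSU

n = 10
blade_total = n * blade_carrier   # $1600 in cases; the 8U housing is extra
mini_total = n * mini_1u          # $1100; pairs share 1U, so roughly 5U total

print(blade_total, mini_total, blade_total - mini_total)
```

That's a $500 gap across 10 servers before counting the housing, which is the core of the cost argument being made here.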
-
11-07-2005, 09:33 PM #20 Web Hosting Master
- Join Date
- May 2002
- Location
- Sunny California
- Posts
- 1,679
Originally Posted by gothambus
Erica Douglass, Founder, Simpli Hosting, Inc.
»»» I founded Simpli Hosting, and sold it in 2007 to Silicon Valley Web Hosting after over 6 years in the business.
Now I'm blogging at erica.biz!
-
11-07-2005, 10:03 PM #21 WHT Addict
- Join Date
- Dec 2004
- Posts
- 109
Originally Posted by Simpli-Erica
-
11-07-2005, 11:37 PM #22 Web Hosting Master
- Join Date
- Dec 2001
- Location
- Toronto, Ontario, Canada
- Posts
- 6,896
I also fail to see the cost savings here. For around $210/chassis you can pick up the really nice Supermicro 811 chassis: black, 420W PSU, and rails. If you're cheaper, as cwl@apaqdigital noted, you can go with the Supermicro CSE-512LBs (I think it was), though I don't particularly like them.
Personally I find the Evercase 9131 and 9138 chassis a nice go-between. Rails are cheap (~$20); the chassis themselves vary depending on whether you want hotswap, what size of PSU, etc., but generally run around $120-140 without rails, and they come with awesome fans.
Bottom line, I think the "blades" we're talking about here aren't really blades. They don't save you on power, wiring, or switch ports, or aggregate anything in any manner, except for reducing your physical diversity in several ways (e.g. losing 1-2 fans in that chassis could be catastrophic, especially if it's loaded with high-end "blades"). While I love the idea, the implementation just doesn't make sense: there's no cost savings, the space savings are near nonexistent, there's no aggregation, no reduction in management requirements, etc.
Myles Loosley-Millman - admin@prioritycolo.com
Priority Colo Inc. - Affordable Colocation & Dedicated Servers.
Two Canadian facilities serving Toronto & Markham, Ontario
http://www.prioritycolo.com
-
11-08-2005, 03:57 AM #23 WHT Addict
- Join Date
- Dec 2004
- Posts
- 109
Originally Posted by cwl@apaqdigital
An empty blade chassis w/250W is $160, with 2x fixed HDD bays and no CD bay; meanwhile you can buy a 14"-deep Supermicro mini 1U chassis (w/260W, 2x HDDs, no CD) for about $110, and some of my clients even put a pair of these mini 1Us in a single 1U spacing, so 10 servers only use 5U of space! These mini 1Us can even accommodate a dual-core Opteron or Athlon 64 X2 for some serious serving business.
http://www.webhostingtalk.com/archiv.../375407-1.html
I'm not saying that the rackblade system is the end-all for all people. For entry-level to midrange servers it makes sense for some folks, not for others. I don't think the guys at PCWnet designed it for the dual Xeon/Opteron RAID5 hot-swap application.
Funny story about the mini 1U form factor: we had a client roll in a full rack of stuff about a year ago. They tried to save space by mounting two of those in the same 1U, but they mounted them so that the hot air out of the front server discharged directly into the intake of the rear box. You guys can do the math on this one.
-
11-08-2005, 11:26 AM #24 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Originally Posted by gothambus
With a nice pair of sliding rails installed, you don't have to de-rack these standard 1Us to do service work such as swapping a bad HDD or bad PS, plus you get the full range of options to install a more updated platform, from low-end Celeron/Sempron to high-end P4/Athlon 64 or dual-core chips cooled by a true passive cooling arrangement.
The bottom line is that, like porcupine said, these ATXBlade systems are not true "blades" like those offered by Dell/HP/IBM. With limited, less reliable active cooling and a less desirable board layout, they can only accommodate outdated low-heat-output platforms such as the Northwood P4 and Prestonia Xeon, which are just about impossible to buy new nowadays. Also, it's really unknown how reliable those small-footprint, proprietary power supplies they put in these "blade" units are. Do they have a good track record? Can you get a replacement fast and easy?
-
11-08-2005, 12:10 PM #25 WHT Addict
- Join Date
- Dec 2004
- Posts
- 109
Originally Posted by cwl@apaqdigital
That being said, I have no vested interest in the ATXBlade system or in PCW Microsystems. Dixiesys asked. I responded with our experience. A discussion ensued. Nothing more, nothing less. The ATXBlade is good for some, not so good for others. It has pros and cons like everything else, as do traditional case servers.
We're neutral here. Send us whatever you want. Doesn't matter if it comes from PCW or Apaq Digital. We'll just rack it up and treat it all the same.