  1. #1
    Join Date
    Jul 2009
    Posts
    71

    Efficient Rack Cooling - Hardware Placement Advice

    We are not rack dense in a standard 42U rack.
    We currently put 2U solid spacers on the front. I also run the switches with the ports facing forward and use a cable manager above and below the switches.

    I can see from some of your pics in the other thread that you place the networking equipment facing the rear. I suppose it depends on which way the fans on the networking equipment are facing, as I don't like having fans exhaust toward the inlet side of the rack.

    Any theories or practices y'all follow?

  2. #2
    Join Date
    Aug 2009
    Location
    Orange County, CA
    Posts
    25
    You need to make sure you keep all your racks oriented uniformly: one side is the "hot" aisle and the other is the "cold" aisle. You want to concentrate the cold-air supply on the intake side and keep supply to the opposite side to a minimum.

    The way I am designing my new data center is to have all the supply vents on the front side pushing air into the front of the machines, with the A/C returns on the opposite side to pull the hot air out, which is how most datacenters work. Some datacenters supply the cold air up through a raised floor instead.
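    To put rough numbers on how much cold air the front side has to supply, here's a minimal Python sketch using the standard sensible-heat rule of thumb; the 5 kW rack load and 20 F rise are assumed numbers for illustration, not anything from this thread:

    # Rough airflow sizing sketch, using assumed numbers and the standard sea-level air constant.
    # Sensible heat: BTU/hr = 1.08 * CFM * delta_T(F), and 1 W = 3.412 BTU/hr.
    rack_load_watts = 5000   # assumed 5 kW rack
    delta_t_f = 20           # assumed cold-aisle to hot-aisle temperature rise, in F

    btu_per_hr = rack_load_watts * 3.412
    required_cfm = btu_per_hr / (1.08 * delta_t_f)
    print(f"~{required_cfm:.0f} CFM of cold air needed at the rack face")  # ~790 CFM

    The point is just that the vents in front of a rack need to deliver on that order of airflow, or the servers start pulling makeup air around from the hot side.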

  3. #3
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Most switches will vent air side-to-side, not front-to-back, so they don't really fit into a proper hot-aisle/cold-aisle setup no matter what. We generally blank them off in the front and just have the ports face the back, so at least they aren't hurting the airflow of anything else. The switches will run a little warmer than everything else, but the Cisco switches are also spec'd to run at higher temperatures than most other systems. They still run at temps well under Cisco spec and have been perfectly reliable in that configuration.

    Note: If you have open-sided racks/cabinets, you might want to make sure you don't have your switches right next to each other with nothing in between; otherwise you're looking at the exhaust air of one going straight into the intake of the switch in the next cabinet over. In that case, some companies make brackets that direct the exhaust air to the back; otherwise a piece of plastic or metal that simply directs the air backward instead of straight into the next switch should work.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  4. #4
    Join Date
    May 2003
    Posts
    1,664
    Most providers face their switches toward the back of the rack to help with cable management. I know of at least 5 other major server providers in the DCs we all share that do the same.

  5. #5
    Join Date
    Aug 2009
    Location
    Orlando, FL
    Posts
    1,063
    Quote Originally Posted by KarlZimmer View Post
    The switches will run a little warmer than everything else, but the Cisco switches are also spec'd to run at higher temperatures than most other systems. They still run at temps well under Cisco spec and have been perfectly reliable in that configuration.
    We saw this firsthand. All our switches are at the top of the rack on the rear side; we did it this way for cable management, as mentioned. However, I used a Dell PowerConnect switch which didn't have as many cooling fans in it. You could tell just by looking at it that it wasn't the same quality. It ended up overheating and I had to replace it with a Cisco.

    I've also heard that it's "best practice" to leave 1U of space between each server, but you don't always have that ability. I usually balance my racks by power requirements, not cooling. I'd rather have them run hotter than pop a breaker.
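    To show what balancing by power rather than cooling looks like in numbers, here's a minimal sketch; the 208V/20A circuit and the per-server wattages are assumptions for illustration only:

    # Power-first rack balancing sketch (assumed circuit size and server draws).
    circuit_volts = 208
    circuit_amps = 20
    usable_amps = circuit_amps * 0.8               # typical 80% continuous-load derating
    budget_watts = circuit_volts * usable_amps     # ~3328 W usable on this circuit

    server_draw_watts = [350, 350, 450, 500, 500]  # assumed measured draw per server
    used_watts = sum(server_draw_watts)            # 2150 W
    print(f"{used_watts} W used of {budget_watts:.0f} W, {budget_watts - used_watts:.0f} W headroom")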

    As mentioned in a thread last week, we would all like to keep operating temperatures around 70 degrees Fahrenheit. However, we ran plenty of cabinets in the mid-80s for years without any problems. You may not like the idea, but it can work.

  6. #6
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by skullbox View Post
    I've also heard that it's "best practice" to leave 1U of space between each server, but you don't always have that ability. I usually balance my racks by power requirements, not cooling. I'd rather have them run hotter than pop a breaker.

    As mentioned in a thread last week, we would all like to keep operating temperatures around 70 degrees Fahrenheit. However, we ran plenty of cabinets in the mid-80s for years without any problems. You may not like the idea, but it can work.
    On the first point, who says leaving space between them is best practice? If the systems are properly engineered for front-to-back airflow, whatever is above or below them shouldn't really matter. Especially don't leave the 1U gap if you're not using blanking panels, which is how I've seen most people do it. You do not want the hot and cold air to mix: the more air that bypasses between the servers, the less cold air goes through the servers, and that is what really matters.

    As far as running DCs in the mid-80s, that is perfectly within spec for most providers. As long as it is consistent and you don't have hot spots higher than that, you'll be perfectly fine. HP, Google, Intel, Sun, etc. have all run in-depth studies showing that keeping a DC around 86 degrees causes no noticeable difference in overall performance or reliability, but will save you a lot of money on cooling. The one danger is that you're not far from the point where you will start seeing steep degradation, so you need to make sure there are NO hot spots and that all cooling is uniform. You're also close to the limits in case of a cooling system failure, though if you're properly redundant there that shouldn't be an issue either.

    Note: We run our facility at 72 degrees, and I go through weekly with an infrared temperature sensor to make sure we're keeping things within the old recommended ASHRAE standards for a Class 1 environment, even though they now allow significantly higher temperatures. Things are very consistent, but people still complain that it is warm and that other facilities they've been in are 64 degrees. Going down to 64 degrees is actually outside of the recommended standards, simply because a temperature that low offers no benefit at all and just wastes energy. The only reason to run a facility that cold is if you're compensating for poor design, with a lot of hot spots, etc., or you do not have a redundant configuration and need to buy time before heat builds up.
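    For anyone doing the same kind of walkthrough, here's a minimal sketch of checking spot readings against a band; the cabinet names, the readings, and the 68-77 degree band (roughly the older Class 1 recommended range) are my assumptions, not our actual numbers:

    # Weekly spot-check sketch: flag inlet readings outside an assumed recommended band.
    RECOMMENDED_F = (68.0, 77.0)   # assumed band, roughly the old ASHRAE Class 1 recommendation

    readings_f = {                 # hypothetical inlet temperatures by cabinet, degrees F
        "cab-01": 71.8,
        "cab-02": 73.4,
        "cab-03": 79.1,            # this one would get flagged as a hot spot
    }

    for cabinet, temp_f in readings_f.items():
        low, high = RECOMMENDED_F
        status = "OK" if low <= temp_f <= high else "OUT OF RANGE"
        print(f"{cabinet}: {temp_f:.1f} F  {status}")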
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  7. #7
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    If you have high-velocity air being blown past a rack, be very careful of what I like to call the car window effect. Think of driving down a highway at 80 mph and opening a window: everything in the car flies out.

    If you have high-velocity air rushing past an opening, it will draw the air within the opening out. So if you leave a space between servers and rush air past that space, it will pull the hot air out of the hot row into the cold row, and then the server fans will draw it right back in.
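    As a rough back-of-envelope for that effect: by Bernoulli, the fast-moving bypass air has lower static pressure than the still air sitting in the open gap, so air gets pulled out of the gap (and hot air from behind it follows) into the cold row. The velocity here is an assumed round number, not a measurement:

    # "Car window" effect sketch: static pressure drop next to fast-moving bypass air.
    rho_air = 1.2           # kg/m^3, air at roughly room temperature
    bypass_velocity = 5.0   # m/s, assumed speed of air rushing past the open 1U gap

    pressure_drop_pa = 0.5 * rho_air * bypass_velocity ** 2   # Bernoulli: 1/2 * rho * v^2
    print(f"~{pressure_drop_pa:.0f} Pa lower static pressure at the gap")  # ~15 Pa

    Even on the order of 15 Pa is enough to push air through an open 1U slot, which is why blanking the gap matters.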

    Leaving a space between the servers is a BAD idea IMHO.

    My advice would be to start from the bottom, fill up as high as you need to, then blank out the rest of the rack.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  8. #8
    Join Date
    Aug 2009
    Location
    Orlando, FL
    Posts
    1,063
    I never understood the space between the servers. Although I did notice that if you aren't using rails and the servers are touching each other, the chassis temperature does increase.
