  1. #1

    * Higher Temperatures - is it a real danger for servers or just hysteria?

    I recently read an article about a former VP of Sun Microsystems who said he supervised a data center (in the Middle East) running at ~103 degrees Fahrenheit (about 39 degrees Celsius) for an extended period, and that the data center ultimately concluded it was cheaper overall to keep running at that temperature (even with a slightly higher failure rate).

    I tried to research the data center he was referring to, but couldn't find anything about it. The only other DCs I could find doing something comparable (and without a billion-dollar backbone like Google or Microsoft) run at a maximum of 81 degrees Fahrenheit (27 degrees Celsius).



    I'm curious - what is your opinion about this issue?
    But if what the big companies say is true (that it's actually beneficial to run at higher temperatures), why are so few DCs doing it?
    Do you know of examples of "normal" DCs running at significantly higher temperatures? How and why are they doing that?

    Looking forward to your answers!

  2. #2
    Join Date
    Feb 2011
    Posts
    669
    This is an interesting thread; I too would be interested in what temps people keep their data centers at.

    Back in the '80s we used to keep our machine rooms (Amdahl mainframes etc.) at 67 degrees. Now I keep my small data center at 75 degrees; perhaps this is still too low?

    Dave

  3. #3
    Join Date
    May 2006
    Location
    NJ, USA
    Posts
    6,456
    103F? What? I can't even go outside when it's that warm, let alone function inside.
    simplywww: directadmin and cpanel hosting that will rock your socks
    Need some work done in a datacenter in the NYC area? NYC Remote Hands can do it.

    Follow my "deals" Twitter for hardware specials.. @dougysdeals

  4. #4
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,571
    For a private datacenter with 100% known hardware (e.g. well designed, efficient) you can get away with higher ambient temps... even 100+. The problem with general colo centers is that there's all kinds of equipment, and not all of it is going to take kindly to the high temps. It takes one guy next to you with a reverse-mounted switch blowing 130° air into the 'cold' row and it's game over. Something else to consider is a cooling failure... lower ambient temps give you much more time to get things fixed before thermal shutdown. It's also hard to get new clients into a 100° DC. That's probably why you don't see it much... mid 70s, MAYBE low 80s, is the most I'd be comfortable with.
    Fast Serv Networks, LLC | AS29889 | Fully Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  5. #5
    Join Date
    Jun 2002
    Location
    Waco, TX
    Posts
    5,292
    I've been in a couple in the last 1.5 years running at 78-80F.

    That is too warm for me. Needing a fan just to do the fairly heavy lifting of servers and racks is just not good.

  6. #6
    As I said - 80 F is not that uncommon nowadays.
    But higher temperatures? So far you've all said it would be possible - so why is nobody doing it?
    If it's a matter of comfort, I guess you could just turn on the AC when you need to work in the server room, no?

  7. #7
    Join Date
    Apr 2011
    Posts
    74
    My opinion is that data centers with known hardware can play with the temps. The most important thing is to keep the temperature stable. Temperature swings have a worse effect on electronic parts than a somewhat high temperature. This is why... I guess... data centers stick with a reasonable temperature (one that also allows humans to work in the room) and keep it "stable" forever.

  8. #8
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by FastServ View Post
    For a private datacenter with 100% known hardware (e.g. well designed, efficient) you can get away with higher ambient temps... even 100+. The problem with general colo centers is that there's all kinds of equipment, and not all of it is going to take kindly to the high temps. It takes one guy next to you with a reverse-mounted switch blowing 130° air into the 'cold' row and it's game over. Something else to consider is a cooling failure... lower ambient temps give you much more time to get things fixed before thermal shutdown. It's also hard to get new clients into a 100° DC. That's probably why you don't see it much... mid 70s, MAYBE low 80s, is the most I'd be comfortable with.
    Exactly, in private corporate data centers this sort of thing is done all the time. They know the costs of hardware failures (most of the cost is not the hardware itself) and can weigh that against the cost savings. They can specifically design the airflow throughout the whole facility as they know exactly what equipment goes where and can specifically select hardware they know to be more reliable at the higher temperatures. Not to mention, if it is all their hardware and they have a cooling failure, they can quickly and easily shut everything down to prevent further damage.

    In a shared facility, those sorts of measures simply aren't possible. You don't know what gear any customer is going to put where, you can't select their gear for them, and customers likely won't like it if you have to hard-kill all their gear because of a temporary chiller failure, etc. In a shared colocation environment, if customers see higher hardware failure rates they're going to leave, you're going to lose them, and those sorts of customer losses cost a lot more than whatever your savings are going to be, especially when you consider the reputation hit from all the "sky-high data center temperature" reviews. No matter how clear and upfront you are with customers, they're not going to fully understand until they experience it. And that isn't even considering the comfort of the customers regularly going in and out of the facility, customer perception for marketing, etc.

    We keep our facilities at 74F, which seems to be a happy medium. People don't notice it is warm at all, and it is better than keeping it at 66-68F like I've seen at some facilities. It also gives us enough time in case of cooling failures to correct the issue before temperatures get to an uncomfortable level.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  9. #9
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by KarlZimmer View Post
    We keep our facilities at 74F, which seems to be a happy medium. People don't notice it is warm at all, and it is better than keeping it at 66-68F like I've seen at some facilities. It also gives us enough time in case of cooling failures to correct the issue before temperatures get to an uncomfortable level.
    Ditto, we hover around the 73-74 range. As we implement full isolation we'll likely raise the ambient/hot temp further. However, we can't even begin to try something like 90 deg or higher. Simply impossible with mixed client gear.

  10. #10
    Join Date
    Apr 2003
    Location
    San Jose, CA.
    Posts
    1,622
    This sort of question makes me think of the reports Google published regarding failure rates and temperatures... I'm not sure if they were limited to just HD failure rates, or covered whole systems.

    As well, this discussion makes me wonder if they were comparing standard hardware... as Google has their motherboards (and likely other parts) custom designed.
    Daved @ Lightwave Networking, LLC.
    AS1426 https://www.lightwave.net
    Primary Bandwidth: EGIHosting (NLayer, NTT, HE, Cogent)
    Xen PV VPS Hosting

  11. #11
    Join Date
    Jun 2001
    Location
    Denver, CO
    Posts
    3,301
    Quote Originally Posted by FastServ View Post
    Something else to consider is a cooling failure... lower ambient temps give much more time to get things fixed before thermal.
    Some time, but not much time. A room with 150 W/sq ft of critical load online will go from 70 to 100+ in about 15-25 minutes, depending on floor height, ceiling height, etc.
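    A rough back-of-envelope (my own assumed numbers, not Jay's) shows why the window is so short: the air itself stores very little heat, so most of those 15-25 minutes come from the thermal mass of the racks, slab, and plenum rather than from the air.

    Code:
    # Assumed figures for illustration only: 10 ft effective air column per
    # sq ft of floor, standard air density and specific heat.
    load_w_per_sqft = 150.0
    air_volume_m3 = 10 * 0.0283168                 # 10 cubic feet of air per sq ft
    heat_capacity = air_volume_m3 * 1.2 * 1005.0   # J/K (density * specific heat)

    rise_rate = load_w_per_sqft / heat_capacity    # ~0.44 K per second
    delta_k = (100 - 70) * 5.0 / 9.0               # 70 F -> 100 F is ~16.7 K
    print(round(delta_k / rise_rate))              # ~38 s if only the air absorbed the heat

    In other words, the air alone would be cooked in well under a minute; everything beyond that is the building and hardware soaking up heat, which is why floor and ceiling heights matter so much.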
    Jay Sudowski // Handy Networks LLC // Co-Founder & CTO
    AS30475 - Level(3), HE, Telia, XO and Cogent. Noction optimized network.
    Offering Dedicated Server and Colocation Hosting from our SSAE 16 SOC 2, Type 2 Certified Data Center.
    Current specials here. Check them out.

  12. #12
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by Jay Suds View Post
    Some time, but not much time. A room that has a 150w/sq ft critical load online will go from 70 to 100+ in about 15 - 25 minutes, depending on floor height, ceiling height, etc.
    Though if you have hot-air exhaust, outside-air inlets, etc., you can certainly extend that timeframe. Most of the failures we've seen also get resolved very quickly with redundant paths available, etc., so 10-15 minutes is all you really need.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  13. #13
    Join Date
    Oct 2007
    Location
    United States
    Posts
    1,175
    I don't see why that temperature isn't attainable at a private datacenter where you have designed infrastructure to operate at those temperatures. I'm sure there is a lot more we don't know about how they were cooling the servers/hardware that allowed such extreme temperatures.

    In a general-purpose datacenter with a large mix of hardware configurations, clients, etc., it isn't possible to operate at such extreme temperatures. There would need to be some kind of standard in place for all server hardware and infrastructure to achieve those running temperatures. It would be cool to know which datacenter did that, and how.
    www.DMEHosting.com - DME Hosting LLC | Servers, KVM/OpenVZ VPS's, Email Hosting, Web Hosting

  14. #14
    How many people do you think are going to walk into that "hot" data center and go "oh, this is great, it's pretty warm in here"? That's insane.

    In a data center that isn't servicing the public / other people's hardware, they can just do whatever works better.

    @OP
    You're completely out of touch with reality if you don't know that the public perception of a data center is that it's a place that can keep your hardware cold. A lot of data centers operate by housing the public's hardware.

  15. #15
    Join Date
    Mar 2011
    Location
    Graz, Austria
    Posts
    298
    We are at around 20 Celsius, even in summer (which is mostly down to how our building is built).

    30 C would still be fine imo; 40+ C is clearly too much for shared space, as others said.

    I have privately seen DCs running at 45-55 C (Hong Kong) and way lower than usual (5-11 C, Ukraine, with broken heating in winter). They told me they never had problems at those temperatures. Personally I'd say it is overrated; IT equipment generally runs OK at most temps between 0 and 60 C.

    It's just a basic calculation: if cooling costs more than some downtime (even with the DC completely down), as in the Middle East or Africa (where power is unreliable and very expensive, so you'd rather spend it on UPSes than on better cooling), most would take the cheaper route and finance something better with the saved money (new/better UPSes, more uplinks, etc.).

    If all else fails you can still cool with argon lol
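    To put very rough numbers on that trade-off (every figure below is made up purely for illustration, not taken from any real facility):

    Code:
    # Hypothetical cooling-vs-failure comparison
    it_load_kw = 500.0
    kwh_price = 0.20                      # $/kWh, assumed
    pue_cool, pue_warm = 1.5, 1.3         # assumed PUE at the two setpoints

    saved_kwh = it_load_kw * (pue_cool - pue_warm) * 8760
    power_savings = saved_kwh * kwh_price                # ~$175,000 per year

    servers = 1000
    extra_failures_per_yr = 0.02 * servers               # assume 2% more units fail per year
    failure_cost = 1500.0                                # parts + labour + downtime, assumed
    extra_failure_cost = extra_failures_per_yr * failure_cost   # ~$30,000 per year

    print(round(power_savings), round(extra_failure_cost))

    With numbers like these the warm room wins easily; with a strict SLA or far more expensive downtime the balance flips, which is exactly the point about who can afford to run hot.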

  16. #16
    Quote Originally Posted by KarlZimmer View Post
    Though if you have hot-air exhaust, outside-air inlets, etc., you can certainly extend that timeframe. Most of the failures we've seen also get resolved very quickly with redundant paths available, etc., so 10-15 minutes is all you really need.
    +1 to this. We operate a (very) small private facility in a hot climate (Australia). We run the floor at around 27 degrees Celsius and have seen no noticeable increase in failures (but we do refresh hardware every 2-3 years).

    If cooling fails, we have evaporative coolers (without the water washers, i.e. run dry) to extract hot air from the plenum. Depending on atmospheric conditions, we can keep the facility around 2-3 degrees above ambient air, all for under 10% of the compute load (i.e. 100 W for every kW), which is nice for ride-through should automatic transfer to a generator fail!
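    A tiny sketch of that ride-through arithmetic (the UPS size and IT load are hypothetical; only the ~10% fan fraction comes from the post above):

    Code:
    # Hypothetical figures to show how cheap the exhaust fans are to carry on UPS
    it_load_kw = 20.0
    fan_kw = 0.10 * it_load_kw           # ~100 W of fans per kW of compute
    ups_kwh = 10.0                       # assumed usable UPS energy

    print(round(ups_kwh / it_load_kw * 60, 1))             # 30.0 min on IT load alone
    print(round(ups_kwh / (it_load_kw + fan_kw) * 60, 1))  # ~27.3 min with the fans too

    Carrying the fans costs only a couple of minutes of runtime, so the room keeps moving air even while a failed generator transfer gets sorted out.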

  17. #17
    Join Date
    Jun 2008
    Posts
    40
    I've experimented with running hot and had a single fan fail after 2 years at 100% CPU load. Disks are still OK on SMART. Negligible power consumption change.

  18. #18
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Ah, that does bring up one point: running at higher facility temperatures will increase the power usage of the systems themselves. This means you'll see a much greater improvement in PUE than in actual power usage. The increased power usage comes from the higher fan speeds, etc.

    As an example, say you use 10 kW of IT load at normal temperatures and 5 kW for cooling, distribution, UPS, etc. You have a total power load of 15 kW and an IT load of 10 kW, for a PUE of 1.5. Now say you raise your temperatures and drop the cooling/distribution overhead to 4 kW, but you're now using 11 kW of IT load. You'll then have a PUE of about 1.36 while using the same amount of power. I believe this is one of the major flaws with the PUE metric, since you'll actually get a higher (worse-looking) PUE by running servers with no fans in them, even though you might be using less power overall.
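    A minimal sketch of that arithmetic (same illustrative figures as in the post, nothing measured):

    Code:
    def pue(it_kw, overhead_kw):
        # PUE = total facility power / IT power
        return (it_kw + overhead_kw) / it_kw

    # cooler room: 10 kW IT, 5 kW cooling/distribution/UPS overhead
    print(round(pue(10, 5), 2))   # 1.5  -> total draw 15 kW

    # warmer room: server fans add ~1 kW of IT load, overhead drops by 1 kW
    print(round(pue(11, 4), 2))   # 1.36 -> total draw still 15 kW

    Same 15 kW at the meter either way, but the second case reports the "better" PUE.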
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  19. #19
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,571
    Electronics in general will become less efficient at higher temps, not just from increased fan usage.
    Fast Serv Networks, LLC | AS29889 | Fully Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  20. #20
    Join Date
    Mar 2010
    Location
    Germany
    Posts
    681
    I'd say most won't do it because it leaves you less headroom before you hit really problematic temperatures if the AC fails.
    The value of another hour of fixing time is closely tied to the value of the data in the DC. So, as "a Google" with no SLA on anything, multiple redundancy, and the best engineers designing my hardware, I wouldn't care as much as a financial institution would, because if they get any issues... well, I figure we all still remember?

    As long as the business people decide that it's OK, I don't see an issue with it.
    Someone will probably have to remind them of their responsibility if a massive failure occurs.
    Check out my SSD guides for Samsung, HGST (Hitachi Global Storage) and Intel!
