  1. #1
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808

    Things to Consider When Choosing a Datacenter

    Top 10 Things to Consider When Choosing a Datacenter

There are many things to consider when choosing a datacenter that are often overlooked, because people tend to prioritize a single facet rather than weighing all of the crucial aspects of a given facility. Here are some things to consider when selecting a datacenter host:

    1. Redundant Power
A minimum of N+1 power on critical systems (UPS and generators) should be an absolute requirement for your business; however, this doesn't mean there aren't points of failure. Not all power distribution is the same, so demand a copy of your provider's power map showing what each feed is truly redundant to. Using a 2N or greater system is the only practical way to prevent failure, and true B power should be redundant all the way to the street.
    2. Redundant Cooling
Redundant means more than just N+1 CRAH (computer room air handler) or CRAC (computer room air conditioner) units. If the facility has chilled water, demand either a loop-fed bi-directional system or a completely redundant pipe; this allows for maintenance on the pipe without taking the system down. Other considerations include redundant chillers, pumps, valves, controls, and electrical.
    3. Network Carriers
At a minimum you should require a carrier-neutral datacenter. Competition drives pricing; by being in a carrier-neutral facility with access to multiple providers, you improve your bottom line and decrease risk.
    4. Location
The risk of a system outage is significantly reduced by placing your servers in a datacenter located in a disaster-free area. The threat of natural disasters such as tornadoes, hurricanes, earthquakes, and wildfires can be easily thwarted by choosing a datacenter that does not reside in a coastal or storm-prone region. Nevada, Utah, and Arizona are generally deemed among the safest areas from such disasters in the U.S.
    5. Security
It is important to demand accountability from your datacenter operator. While two-factor authentication is good, the most secure datacenters enforce three-factor authentication: something you have, something you are, and something you know. Mantraps to prevent pass-back and tailgating at all points of ingress and egress should also be high on your list of requirements. Your datacenter operator should be able to tell you every person who is in the datacenter at a given time and when they entered and exited.
    6. Support
Do not risk your business on an unmanned facility. Require a minimum of two remote-hands engineers on site at all times. You should also demand that onsite personnel be certified at a minimum with CCNP, CISSP, and MCSE. Don't be fooled by datacenters that hire "button pushers". Remember that your infrastructure lies in their hands during critical moments.
    7. Flexibility to meet your business needs
    Don’t pay for a datacenter that is everything to everyone; in other words, avoid paying for services you don’t require. And do plan for growth. As your business grows, you want a datacenter that grows with you.
    8. Vendors and Partners of the Datacenter
Oftentimes the datacenter operator has established relationships with vendors. Leveraging these relationships can save you time and money compared to working with solution providers on your own.
    9. Service
    Be sure to consider any other services the datacenter may offer you with regard to office space, engineering services, consulting services, customer accessibility, remote hands, etc.
    10. Cost
    Look beyond monthly fees and consider the cost implications downtime would have on your business. The right colocation facility reduces downtime associated with mission critical facilities problems. Some colocation providers offer carrier neutral datacenters (you choose from multiple carriers to find your best price/value on your circuits). Carrier neutral datacenters give you more choices and better pricing, and if the datacenter charges no cross-connect fees, you can save even more in monthly costs. Carrier neutral datacenter facilities can be the ideal place for your network hub.
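To make the downtime trade-off concrete, here is a rough back-of-the-envelope sketch (the availability figures and dollar amounts are hypothetical, not from any particular facility):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_downtime_hours(availability: float) -> float:
    """Hours of downtime per year implied by an availability figure."""
    return HOURS_PER_YEAR * (1 - availability)

def cheaper_facility_worth_it(monthly_savings: float,
                              revenue_per_hour: float,
                              avail_good: float,
                              avail_cheap: float) -> bool:
    """True if the yearly savings exceed the expected extra downtime cost."""
    extra_hours = (annual_downtime_hours(avail_cheap)
                   - annual_downtime_hours(avail_good))
    return monthly_savings * 12 > extra_hours * revenue_per_hour
```

For example, "three nines" (99.9%) implies about 8.8 hours of downtime a year versus roughly 0.9 hours for "four nines", so a facility that saves you $50 a month but costs you $1,000 per downtime hour is a bad trade.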
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  2. #2
    Join Date
    Dec 2001
    Location
    Toronto, Ontario, Canada
    Posts
    5,954
    Quote Originally Posted by JordanJ View Post
    Using a 2N or greater system is the only practical way to prevent failure.
Can you explain this theory, for everyone's benefit?

    Quote Originally Posted by JordanJ View Post
    Your datacenter operator should be able to tell you every person who is in the datacenter at a given time and when they entered and exited.
    Can you also explain this one to me, specifically with regards to how this makes a facility more (or rather less) secure?

    Quote Originally Posted by JordanJ View Post
a minimum with CCNP, CISSP, and MCSE. Don't be fooled by datacenters that hire "button pushers". Remember that your infrastructure lies in their hands during critical moments.
This one too. It's widely recognized that such certificates (amongst many others) aren't held in high regard. How do any of these certificates give data centre personnel an edge in a carrier-neutral environment with scenarios such as HVAC leaks, UPS problems, PDU harmonic issues, proper cabling methods, etc.?

  3. #3
    Join Date
    Jul 2005
    Location
    New Jersey, US
    Posts
    1,507
Nice work putting together an informative guide.

    Many people just choose by how a site looks or by price, so this will give them a good checklist to go by.
    PlatinumServerManagement (also known as PSM)
    The OLDEST and LARGEST and MOST TRUSTED server management provider in the USA, with 15+ employees and growing!
    Providing quality support for OVER 18 years! Currently supporting over 3,000+ servers monthly!

    www.PlatinumServerManagement.com Proud member of the NJ BBB & Chamber of Commerce & Authorized cPanel Partner.

  4. #4
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by porcupine View Post
Can you explain this theory, for everyone's benefit?
Absolutely. No matter how much redundancy you have built into any one system, there is always a single point of failure. Even if you go crazy with double-ended substations and main-tie-mains, you're still ending up with a single point of failure, as well as the inability to take parts down for maintenance.


Can you also explain this one to me, specifically with regards to how this makes a facility more (or rather less) secure?
    Facilities that use a simple biometric door or key card door cannot control tailgating. By using a proper man trap, facilities can control access to the datafloor.



This one too. It's widely recognized that such certificates (amongst many others) aren't held in high regard. How do any of these certificates give data centre personnel an edge in a carrier-neutral environment with scenarios such as HVAC leaks, UPS problems, PDU harmonic issues, proper cabling methods, etc.?
I am not sure who says it's widely recognized; however, I know that 95% of the large RFPs we receive require networking certifications at the datacenter. Also, 9/10 of the job postings for "network engineer" I just clicked on Monster REQUIRED certs.



As for your maintenance issues: HVAC leaks are handled by your maintenance staff, UPS problems by your onsite electrician/electrical engineer, and your PDUs really shouldn't have any harmonic issues unless you're trying to power motor load from them while at the same time skimping on a decent K-rated transformer. As for proper cabling methods, that would be deferred to your cable tech, hopefully with at least a basic certification in that.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  5. #5
    Join Date
    Nov 2005
    Posts
    352
    Quote Originally Posted by JordanJ View Post
    4. Location
The risk of a system outage is significantly reduced by placing your servers in a datacenter located in a disaster-free area. The threat of natural disasters such as tornadoes, hurricanes, earthquakes, and wildfires can be easily thwarted by choosing a datacenter that does not reside in a coastal or storm-prone region. Nevada, Utah, and Arizona are generally deemed among the safest areas from such disasters in the U.S.
The desert southwest part of the country is probably not the best place for hosting your equipment (it's really hot, too), especially if you need good connectivity to Europe. You have to host your equipment in the "scary places" along the east coast. Not to mention that those data connections have to pass through all kinds of dangerous parts of the country.

    Quote Originally Posted by JordanJ View Post
    6. Support
Do not risk your business on an unmanned facility. Require a minimum of two remote-hands engineers on site at all times. You should also demand that onsite personnel be certified at a minimum with CCNP, CISSP, and MCSE. Don't be fooled by datacenters that hire "button pushers". Remember that your infrastructure lies in their hands during critical moments.
A CCNP is not very useful in a facility that only uses Juniper or Foundry equipment. An MCSE isn't useful at all in a Linux shop. Even an RHCE is useless in a Debian or SUSE shop.

    A CISSP looks like it is only useful in large organizations (not in hosting facilities) due to the amount of information you need to know about the infrastructure (software and hardware) that you would be supporting/protecting. In order for an on-site CISSP to be able to offer you support, they would need to know intimate details about your particular operation.

    Just because someone doesn't have a certification in something doesn't mean they are clueless about it. And just because they have 3 or 4 certs doesn't automatically make them a genius. Also, never underestimate the need for "button pushers".

  6. #6
    Join Date
    Dec 2001
    Location
    Toronto, Ontario, Canada
    Posts
    5,954
    Quote Originally Posted by JordanJ View Post
Absolutely. No matter how much redundancy you have built into any one system, there is always a single point of failure. Even if you go crazy with double-ended substations and main-tie-mains, you're still ending up with a single point of failure, as well as the inability to take parts down for maintenance.
Problem here is, 2N is not a redundant configuration. It's a statement of requiring 2x of something to run the load (e.g., 2 x 1MW generators to run a 2MW load). 2N+2 would represent full redundancy in this case.

    Quote Originally Posted by JordanJ View Post
    Facilities that use a simple biometric door or key card door cannot control tailgating. By using a proper man trap, facilities can control access to the datafloor.
I was referring to the concept of the data centre *telling* you who was in their facility at any given time: a blatant disregard for security and privacy.

    Quote Originally Posted by JordanJ View Post
I am not sure who says it's widely recognized; however, I know that 95% of the large RFPs we receive require networking certifications at the datacenter. Also, 9/10 of the job postings for "network engineer" I just clicked on Monster REQUIRED certs.
I don't know anyone who shops Monster for network engineers. Any large data centre, telco, CLEC, etc. around here shops for staff through the grapevine. Anyone I've ever talked to has outright stated that relying on any of the above certs as a measurement of skill is as accurate as rolling dice.

  7. #7
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by dexxtreme View Post
The desert southwest part of the country is probably not the best place for hosting your equipment (it's really hot, too), especially if you need good connectivity to Europe. You have to host your equipment in the "scary places" along the east coast. Not to mention that those data connections have to pass through all kinds of dangerous parts of the country.
Obviously, if you're running an application that is latency-sensitive to Europe, you're going to want to locate it as close to Europe as possible.

As for the "hot" southwest not being a great place for your equipment, I would have to disagree. Remember, less important than the temperature is the wet bulb temperature, of which the desert has a surprisingly low one.
    http://www.wrcc.dri.edu/htmlfiles/westcomp.wb.html

As for certifications, I do agree with what you're saying. I will re-write it to say: make sure your datacenter has certified professionals applicable to your environment.

I will say, however, that while you can find fantastic employees who are not certified, a customer has no way to verify that without certifications.
    Last edited by JordanJ; 06-26-2009 at 08:26 PM.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  8. #8
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by porcupine View Post
    Problem here is, 2N is not a redundant configuration. It's a statement of requiring 2x of something, to run the load (eg. 2 x 1MW generators, to run a 2MW load). 2N+2 would represent full redundancy in this case.
    I think you misunderstand my point. What I am trying to say is:

A dual-corded machine plugged into 2 PDUs, fed by 2 UPSes, fed by 2 distribution panels, protected by 2 gensets, should be a minimum, rather than the more standard N+1 where:

A dual-corded machine is plugged into 2 PDUs fed by 1 UPS, fed by 1 distribution panel, protected by 2 gensets.

    Is taking it a step further to 2N+1 or 2N+2 a bad thing? Absolutely not.
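The two chains above can be compared numerically. A minimal sketch, assuming each PDU, UPS, and distribution panel fails independently with the same made-up probability over some incident window (the 1% figure is purely illustrative):

```python
P = 0.01  # assumed per-component failure probability (illustrative only)

def chain_fails(p: float = P) -> float:
    """One full path (PDU -> UPS -> distribution panel) fails if any part fails."""
    return 1 - (1 - p) ** 3

def outage_2n(p: float = P) -> float:
    """2N: two fully independent paths; an outage needs both to fail."""
    return chain_fails(p) ** 2

def outage_n_plus_1(p: float = P) -> float:
    """The 'standard N+1' above: 2 PDUs, but one shared UPS and one shared panel."""
    shared_fails = 1 - (1 - p) ** 2  # the single UPS or the single panel fails
    both_pdus_fail = p ** 2
    # Outage if the shared gear fails, or if both PDUs fail.
    return 1 - (1 - shared_fails) * (1 - both_pdus_fail)
```

With these numbers the 2N chain is roughly 20x less likely to drop the load, because the shared UPS and panel dominate the N+1 failure probability.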

    Quote Originally Posted by porcupine View Post
    I was referring to the concept of the data centre *telling* you who was in their facility at any given time. A blatant disregard for security, and privacy.
You're absolutely correct; this was badly written, and I absolutely did not mean the datacenter should "tell you" who was in their facility.

When I get some time next week I will incorporate the improvements you guys have helped me with, re-word it, and re-post the changes.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  9. #9
    Join Date
    Aug 2008
    Location
    Vancouver, Canada
    Posts
    650
    I would also add to check if the datacenter is currently in any court battles. One datacenter I was planning to colocate with was in a court battle over shares of the company and could be liquidated if it wasn't settled.
    Tailored VPS offers fully customizable VPS Hosting
    Powered by OpenVZ | Servers located in the USA | 99.9% Uptime

  10. #10
    Join Date
    Mar 2008
    Location
    X.400
    Posts
    501
    Hello AHN-Jay,

    If you don't mind me asking which company was this?

Also, I wanted to add some information in regard to porcupine's points. Personally, I know some folks who have worked in this industry for 15-20 years, and I am certain that they could without doubt educate and coach new CCNP and CISSP graduates from their work experience, so I definitely agree with you on this point.

Of course, we need to recognize individuals for taking the effort and time to obtain specific certifications and post-secondary education. However, if I had to bring on a new hire, I would favor the person with practical hands-on experience vs. a graduate with a certificate or diploma/degree. The reason is, in the real world you get a true feeling for how things work vs. an environment created for a specific scenario.

    Either way, this turned into a great thread. Keep it going
    LevelHosting Inc. - Unlimited Web Hosting at your level
    cPanel, CloudLinux, LiteSpeed, RVSiteBuilder Pro, Softaculous and 24/7 Support
    Meeting the demands of high traffic web sites at an affordable price!

  11. #11
    Join Date
    Aug 2008
    Location
    Vancouver, Canada
    Posts
    650
    Quote Originally Posted by LevelHosting Inc View Post
    If you don't mind me asking which company was this?
    It is rackster.com. I learned about the court battle when I was asking for reviews about them:
    http://www.webhostingtalk.com/showthread.php?t=868692
    Tailored VPS offers fully customizable VPS Hosting
    Powered by OpenVZ | Servers located in the USA | 99.9% Uptime

  12. #12
There are two parts to this, though. I know a lot of people can pick a good datacenter, but then when they go to configure their system, they configure it wrong.

They might be running 18 amps on each 30-amp A&B circuit ("oh, we'll buy a new circuit later" and never do it), forgetting that if one side fails, the surviving circuit has to carry the combined 36 amps and will trip. Or they have redundant routers and firewalls (VERY common) but do not configure the "track IP" properly, so the failovers never kick in when needed. Or they will connect switches to switches with multiple ports and not properly configure LAG (many times LAG or LACP is not configured at all, and it just blows me away to see all the traffic going through 1 port).

I think a lot of people wake up one day and decide "I need redundancy, I need a good datacenter, and I will buy redundant equipment," but then they fail to properly configure it. Just an addendum for those who read this. I think the article is fantastic and agree with it completely for picking a datacenter, but make sure your hardware guys know what they are doing too.
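The 30-amp A&B mistake described above can be sketched as a simple check (the 80% continuous-load derating is an assumed NEC-style rule of thumb):

```python
def ab_pair_is_safe(amps_a: float, amps_b: float,
                    breaker_amps: float = 30.0,
                    derate: float = 0.8) -> bool:
    """If one side of an A/B pair fails, the survivor must carry BOTH loads,
    and a breaker should only run at 80% of its rating continuously."""
    limit = breaker_amps * derate  # e.g. 24 A continuous on a 30 A breaker
    return amps_a + amps_b <= limit
```

`ab_pair_is_safe(18, 18)` comes back False: each side looks lightly loaded day to day, but on failover one circuit would see 36 amps and trip.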

  13. #13
    Join Date
    Oct 2008
    Location
    Singapore
    Posts
    4,521
The risk of a system outage is significantly reduced by placing your servers in a datacenter located in a disaster-free area. The threat of natural disasters such as tornadoes, hurricanes, earthquakes, and wildfires can be easily thwarted by choosing a datacenter that does not reside in a coastal or storm-prone region. Nevada, Utah, and Arizona are generally deemed among the safest areas from such disasters in the U.S.
I agree with the risk part, but I don't agree with looking only for the safest zones, as it is pretty meaningless if you need a specific location. Moreover, if your facility is built properly, a general disaster shouldn't affect the building other than causing some power outages.
    LIMENEX WEB HOSTING
    Affordable High Performance Web Hosting in United States & United Kingdom
    Web Hosting | Reseller Hosting | Managed VPS | Managed Dedicated Servers | Cheap SSL Certificates

  14. #14
    Join Date
    Dec 2001
    Location
    Atlanta
    Posts
    4,419
    Quote Originally Posted by JordanJ View Post
    I think you misunderstand my point. What I am trying to say is:

A dual-corded machine plugged into 2 PDUs, fed by 2 UPSes, fed by 2 distribution panels, protected by 2 gensets, should be a minimum, rather than the more standard N+1 where:

A dual-corded machine is plugged into 2 PDUs fed by 1 UPS, fed by 1 distribution panel, protected by 2 gensets.

    Is taking it a step further to 2N+1 or 2N+2 a bad thing? Absolutely not.



You're absolutely correct; this was badly written, and I absolutely did not mean the datacenter should "tell you" who was in their facility.

    When I get some time next week I will take the improvements you guys have helped me re-word and re-post the changes.
I would agree with you. I prefer 2(N+1), meaning that all of your equipment has dual power and is fed by A/B systems that are each N+1 in their own right. This is the most efficient and cost-effective manner to get more 9's on the uptime.
When in a single-powered environment, at least have N+1; otherwise we all know what happens when the "only generator" fails to start or goes offline.

I am glad you point out that the user has a responsibility to have dual-powered equipment; it would be even better if they kept cold spares, or some hot-spare / active-passive redundancy, of all their key equipment if they really want to be up all the time.

    Quote Originally Posted by JordanJ View Post
    As for the "hot" southwest not being a great place for your equipment I would have to disagree. Remember, less important then the temperature is the wet bulb temperature of which the "desert" has a surprising low one.
    http://www.wrcc.dri.edu/htmlfiles/westcomp.wb.html
Raw heat actually does matter, but you also have to look at the 24-hour temps, not just the daytime temps. Wet bulb helps when dissipating heat, but this really relates to the efficiency of the outside heat-exchanger unit in getting rid of the heat. The heat being there in the first place is the issue, and a hotter climate will drive up your costs due to the amount of building load that has to be dissipated. It's a good indicator for selecting the right kind of heat exchanger as well, since a high-humidity area is not going to be as efficient at rejecting heat through evaporative cooling processes as a dry cooler will be. It is also important to look at the availability and cost of water in the desert areas: they may be great for running an evaporative chiller, but if the water is too costly, or not plentiful enough to run millions of gallons a year through it in a lost-water cycle, then it does not matter, and you will have to use the more expensive dry cooling system.

Also, you can run the water at a slightly higher temp for your heat exchangers based on the outside temp, because you are mainly worried about the raw exchange; i.e., if you send out 100-degree water from your units, you want at least 88-degree or cooler water back, generally speaking, and in the winter you can run it lower, which can give you a more efficient exchange at your internal exchangers.

Something else many don't remember is that a colder climate will also drive up the costs, or at least even them out, since many power companies will charge more in extreme climates due to heating load on the electrical system in the winter and the run-up in energy costs to power the plants that occurs in extreme climates. This relates more to the power costs of the DC. I find spring and fall to have the cheapest power and best operations.


It looks like you are learning some good stuff on data center operations and seeing that there is a lot to cover. It's amazing the costs that can be incurred to satisfy the casual inquiry about fully redundant space that comes in from time to time.
    Last edited by sailor; 06-27-2009 at 06:53 AM.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000

  15. #15
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,571
Fully redundant colo isn't cost-prohibitive at all; it just takes careful planning on the customer's part. I personally wouldn't choose a datacenter unless they provided true A+B redundancy and N+1 by default, and I haven't had any issues finding them.
    Fast Serv Networks, LLC | AS29889 | Fully Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  16. #16
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by sailor View Post
Raw heat actually does matter, but you also have to look at the 24-hour temps, not just the daytime temps. Wet bulb helps when dissipating heat, but this really relates to the efficiency of the outside heat-exchanger unit in getting rid of the heat. The heat being there in the first place is the issue, and a hotter climate will drive up your costs due to the amount of building load that has to be dissipated.

It's a good indicator for selecting the right kind of heat exchanger as well, since a high-humidity area is not going to be as efficient at rejecting heat through evaporative cooling processes as a dry cooler will be.
This is not really true. In a more efficient facility using chillers, the only impact outside heat has is on building load, and with R-30 insulation that is VERY minimal in the scope of things.

As for chilled water: when the return water enters the plant on the chilled water loop, the heat is transferred to the condenser water loop and rejected at the air exchanger. The lower the wet bulb temperature, the lower the water in that condenser loop remains, and the less work your chillers have to do. That being said, chillers built more recently are VERY efficient; I am seeing part loads in the 0.22 kW/ton range.
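For a sense of scale on that 0.22 kW/ton figure, a quick arithmetic sketch (1 ton of refrigeration removes about 3.517 kW of heat; the 1 MW load in the example is hypothetical):

```python
KW_PER_TON_OF_COOLING = 3.517  # heat removed by 1 ton of refrigeration

def cooling_tons_for_it_load(it_kw: float) -> float:
    """Refrigeration tons needed to reject a given IT heat load."""
    return it_kw / KW_PER_TON_OF_COOLING

def chiller_power_kw(cooling_tons: float, kw_per_ton: float = 0.22) -> float:
    """Electrical input for a chiller running at a given part-load efficiency."""
    return cooling_tons * kw_per_ton
```

A 1 MW IT load works out to roughly 284 tons, so at 0.22 kW/ton the chiller itself draws only about 63 kW.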


    Quote Originally Posted by sailor View Post
It is also important to look at the availability and cost of water in the desert areas: they may be great for running an evaporative chiller, but if the water is too costly, or not plentiful enough to run millions of gallons a year through it in a lost-water cycle, then it does not matter, and you will have to use the more expensive dry cooling system.
Absolutely, water cost is something to look at, but water cost would need to increase AT LEAST fiftyfold before dry cooling would work out cheaper. It really is a horrible way to cool a datacenter. What some facilities are doing is putting both coils in the units, making them combined CRAH/CRACs, because there are some days (and I mean 5-6 a year) where the wet bulb is high enough that dry cooling does turn out more efficient, but the extra cost there just isn't worth it.

    Quote Originally Posted by sailor View Post
Also, you can run the water at a slightly higher temp for your heat exchangers based on the outside temp, because you are mainly worried about the raw exchange; i.e., if you send out 100-degree water from your units, you want at least 88-degree or cooler water back, generally speaking, and in the winter you can run it lower, which can give you a more efficient exchange at your internal exchangers.
You can certainly change the temperature of your CONDENSER loop, but your chilled water loop needs to stay at its constant 44-46 degrees.

    Quote Originally Posted by sailor View Post
Something else many don't remember is that a colder climate will also drive up the costs, or at least even them out, since many power companies will charge more in extreme climates due to heating load on the electrical system in the winter and the run-up in energy costs to power the plants that occurs in extreme climates. This relates more to the power costs of the DC. I find spring and fall to have the cheapest power and best operations.
You're 100% correct here. Sometimes people forget that even though they might be able to do air-to-air exchangers and save a ton of money on their cooling, the additional power cost due to being in a cold climate will eat them alive.


    Quote Originally Posted by sailor View Post
    It looks like you are learning some good stuff on data center operations and seeing that there is a lot to cover.
Absolutely, that's the best part about this industry. Jason Schafer from Tier 1 Research and I were just discussing this the other day: no matter how much you think you know or how long you have been doing this, you're always learning with datacenters.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  17. #17
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by LaptopFreak View Post
    I agree with the risk part, but I don't agree looking for the safest zones, as it is pretty meaningless if you are looking for a specific location. Moreover, if your facility is built properly, a general disaster shouldn't affect the building other than some power outages.
Three things happen when you're in an area highly affected by storms.

1) You get more power outages, and every time you hit batteries it's a risk.
In a disaster:
2) Your facility has a harder time getting fuel.
3) Your facility has a harder time getting water.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  18. #18
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by LevelHosting Inc View Post
    Hello AHN-Jay,

    If you don't mind me asking which company was this?

    Also, I wanted to append some additional information in regards to porcupine. Personally, I know some folks that have worked in this industry for 15-20 years and I am certain that they can without doubt educate and coach new CCNP and CISSP graduates from their work experience so I definitely agree with you on this point.

    Of course, we need to recognize individuals for taking the effort and time to obtain a specific certifications and post secondary education. However, if I had to bring on a new hire I would consider the person with the practical hands experience Vs. a graduate with a certificate or diploma/degree. The reason is, in the real world you would be able get a true feeling for how things work Vs. an environment created for a specific scenario.

    Either way, this turned into a great thread. Keep it going
I agree 100% with you; however, looking at it as a CUSTOMER, how do you know that they hired a quality employee? That is really where the certs come into play.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  19. #19
    Join Date
    Jun 2001
    Location
    Kalamazoo
    Posts
    33,190
    Quote Originally Posted by LevelHosting Inc View Post
    Either way, this turned into a great thread. Keep it going
It'd be a good WHT Wiki article once a consensus is reached.
    There is no best host. There is only the host that's best for you.

  20. #20
    Join Date
    Nov 2001
    Location
    London
    Posts
    4,857
    Jordan,

    Outside standards.

    1. ISO9001
    2. SAS70
    3. ISO27001

The ISO standards are much harder to get than SAS70, but for larger, corporate customers they will certainly add weight to your sales proposition.
    Matthew Russell | Namecheap
    Twitter: @mattdrussell

    www.namecheap.com - hosting from a registrar DONE RIGHT!

  21. #21
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Matt, Fantastic Point! Can't believe I missed that.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  22. #22
    Join Date
    Dec 2001
    Location
    Atlanta
    Posts
    4,419
    Quote Originally Posted by JordanJ View Post
    This is not really true, in a more efficient facility using chillers, the only impact heat has is on building load, and with R-30 insulation it is VERY minimal in the scope of things.

    As for chilled water when the return water enters the plant on the chilled water loop, the heat is transferred to the condenser water loop and rejected to the air exchanger - the lower the wet bulb temperature, the lower the water in that condenser loop remains and the less work your chillers have to do. That being said, chillers built more recently are VERY efficient, I am seeing part loads in the .22KW/ton range.
We will have to agree to disagree on the heat load on a building having minimal impact. I have been working in, around, owning, and operating data centers for over 20 years now, with a lot of real-world pragmatic experience to apply and go by, vs. simply theory. I know you have been doing the data center thing for a year now, but you cannot go by books alone, and you will see as you gain years of experience that heat load does have a significant impact on building operations.

Chilled water: that is, if you are using a chilled water plant vs. CRAC units with compressors in the units, in which case you are delivering your outside exchanger water directly to the unit. Either way, the loop exchanger water does have an impact, as does gross water/cooling flow to and from your towers. You can run your towers harder to send colder water from your external cooling towers, and you can run your pumps harder and overpump, and I think you will find, as we did, that overall power consumption goes down, since pumps and cooling towers running colder water take load off the compressors.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000

  23. #23
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by sailor View Post
    We will have to agree to disagree on the heat load on a building having minimal impact.

    I agree it does make an impact; my point was simply that while total cooling requirements have increased year after year, the heat through the building shell is roughly constant. What was once a HUGE percentage shrinks down to a much smaller number in today’s newer facilities, which carry as much as 10 times the power of facilities built as little as 3 years ago. I.e., 80 tons in a 1,000-ton system is a bigger deal than 80 tons in a 6,000-ton system.
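    To put numbers on that scaling argument, here is a quick sketch. The 80-ton shell load and the system sizes come from the example above; the arithmetic is just the ratio:

    ```python
    def shell_load_fraction(shell_tons: float, total_tons: float) -> float:
        """Fraction of total cooling capacity consumed by the building-shell heat load."""
        return shell_tons / total_tons

    # The same 80-ton shell load in systems of different sizes:
    print(f"{shell_load_fraction(80, 1000):.1%}")  # 8.0% of a 1,000-ton system
    print(f"{shell_load_fraction(80, 6000):.1%}")  # 1.3% of a 6,000-ton system
    ```

    The shell load doesn't shrink; the denominator grows, which is why it matters less in a high-density facility.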

    Quote Originally Posted by sailor View Post
    Chilled water - that is if you are using a chilled water plant vs crac units with compressors in the units in which case you are delivering your outside exchanger water directly to the unit. either was - the loop exchanger water does have an impact as well as gross water / cooling flow to and from your towers - you can run your towers harder to run colder water to your external cooling towers and you can run your pumps harder and overpump and I think you will find as we did your overall power consumption will go down since pumps and cooling towers running colder water will take load off the compressors and overall power consumption will go down.
    You're correct that there are a number of ways to improve efficiency in the evaporative heat rejection; however, you cannot ever lower the temperature of the condenser water below the wet bulb temperature, which brings me back to my initial point: the outside temperature is less important to the operation of a data facility than the wet bulb temperature. I think from your responses you agree with me, and we are arguing the same point.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  24. #24
    Join Date
    Feb 2007
    Posts
    184
    I'm surprised cost is ranked 10... if anyone were to follow that table they would be paying a fortune.

  25. #25
    Join Date
    Dec 2001
    Location
    Atlanta
    Posts
    4,419
    Quote Originally Posted by JordanJ View Post
    Quote Originally Posted by sailor View Post
    We will have to agree to disagree on the heat load on a building having minimal impact.

    I agree it does make an impact; my point was simply that while total cooling requirements have increased year after year, the heat through the building shell is roughly constant. What was once a HUGE percentage shrinks down to a much smaller number in today’s newer facilities, which carry as much as 10 times the power of facilities built as little as 3 years ago. I.e., 80 tons in a 1,000-ton system is a bigger deal than 80 tons in a 6,000-ton system.



    You're correct that there are a number of ways to improve efficiency in the evaporative heat rejection; however, you cannot ever lower the temperature of the condenser water below the wet bulb temperature, which brings me back to my initial point: the outside temperature is less important to the operation of a data facility than the wet bulb temperature. I think from your responses you agree with me, and we are arguing the same point.

    Heat is always an issue; in 3 years loads have not gone up 10 times.

    Wet bulb is important, and certainly a dry climate helps, but there are many other factors.

    So I agree with you on certain points but not on others.

    I am sure you will learn over time; trial and error is a good thing. Jordan, you are a smart guy, and I have full confidence that over time, with experience, you will be very good at it. You have a lot to learn though.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000

  26. #26
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Jeff,

    Simply put, datacenters run differently now than they did years ago; technology is advancing, and more and more large-scale facilities of 15MW+ are being created.

    <<snipped>>

    That means on DESIGN day (the worst possible case) the static load makes up a whopping 3.1% of our total cooling. If I take an average system cooling number (including pumps, cooling tower, water treatment - everything) of 0.70 kW/ton (pretty average) and the Phoenix power rate, the total cost of running the chillers to handle the building load would be a whopping $6,392.29/month at the worst possible time.

    This $6,392.29 is out of a max power bill of $981,920.
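    The arithmetic behind a figure like that is straightforward: load in tons, times all-in plant efficiency in kW/ton, times hours, times the power rate. A sketch with hypothetical inputs (the post does not give the exact tonnage or Phoenix rate, so these numbers are illustrative only):

    ```python
    def monthly_chiller_cost(tons: float, kw_per_ton: float,
                             rate_per_kwh: float, hours: float = 730.0) -> float:
        """Monthly cost of rejecting a given cooling load.

        tons         -- cooling load handled by the plant (refrigeration tons)
        kw_per_ton   -- all-in system efficiency (chillers, pumps, towers, ...)
        rate_per_kwh -- electricity rate in $/kWh
        hours        -- hours in an average billing month (~730)
        """
        return tons * kw_per_ton * hours * rate_per_kwh

    # Hypothetical 150-ton static load at 0.70 kW/ton and $0.08/kWh:
    cost = monthly_chiller_cost(tons=150, kw_per_ton=0.70, rate_per_kwh=0.08)
    print(f"${cost:,.2f}/month")
    ```

    Plug in your own facility's static load and local rate to see what the building shell actually costs you.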

    This is more than just math I have done: there are APC white papers on the subject, calculations by my engineering teams, and four different chiller manufacturers. All ran the numbers; all ended up with the same result.

    Does heat play other roles besides cooling? Yes, it sure does, but saying the "desert" is a bad place for a datacenter raises the question of why:

    IBM
    American Express
    FedEx
    GoDaddy
    eBay
    Monster
    Wells Fargo
    Banner Health

    have all chosen one of the hottest parts of the country for their datacenters.

    When a fab plant for Intel goes down, they have to scrap EVERYTHING on the floor, certify all of their gear at a cost of millions - yet the vast majority of chip makers including Intel choose to locate their plants in the desert southwest.

    Just like we had Web 1.0 and have now progressed to Web 2.0, things have changed; what's important has changed. With the work of The Green Grid, large facilities are changing the way they look at power and efficiency: a PUE of 2.0 is simply unacceptable, room-based cooling is out, and hot-aisle containment and plenum systems are in. What I am saying is simply not something I am making up; it's the result of many months of planning and research by multiple engineers, designers, and commissioning planners, plus CFD modeling and load-profile tests.
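    For anyone new to the PUE metric mentioned above: it is simply total facility power divided by IT equipment power, so 2.0 means half your power bill is overhead. A minimal sketch (the megawatt figures are made up for illustration):

    ```python
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_load_kw

    # A facility drawing 3 MW total to run 1.5 MW of IT load:
    print(pue(3000, 1500))  # 2.0 -- half the power goes to cooling and losses
    # A tighter design drawing 1.95 MW for the same IT load:
    print(pue(1950, 1500))  # 1.3
    ```

    The closer PUE gets to 1.0, the less you pay for power that never reaches a server.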

    We will just have to agree to disagree.
    Last edited by anon-e-mouse; 06-28-2009 at 06:56 PM.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  27. #27
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by digitalpioneer View Post
    I'm surprised cost is ranked 10... if anyone were to follow that table they would be paying a fortune.
    What's the cost of going down? How much is your business worth? What's more important: saving $50/month, or still having a business in a year?

    Plenty of datacenters out there offer the items on this list at reasonable prices.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  28. #28
    Join Date
    Apr 2008
    Location
    TX (home), CO (college)
    Posts
    385
    Slightly off-topic, but I'd think that N+1 redundancy would mean "no single point of failure" unless you specifically order, say, only one circuit for a full rack. Otherwise I'd expect A+B power per rack, and on up the line for anything in which there's only one discrete "system". For generators I'd expect an extra one to handle a failure, and dual power circuits to each. Maybe this is unrealistic, but I'm just brainstorming here.

    Also, while the ten tips are nice, the good man's signature seems to belie his intent. If the person handing out the info was a large DC customer or a DC whose main business wasn't colo I'd think differently...
    I see your bandwidth and raise you a gigabit
    Recommendations: MDDHosting shared, Virpus high-BW VPS, 100TB/SoftLayer for awesome servers

  29. #29
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by iansltx View Post
    Slightly offtopic, but I'd think that N+1 redundancy would mean "no single point of failure" unless you specifically order, say, only one circuit for a full rack. Otherwise I'd expect A+B power per rack, and on up the line for stuff in which there's only one discrete "system".
    Well, this is part of the reason I made the post: that is not the case. Normally there are about 4-6 main breakers between the street and your rack, each one a single point of failure. Let me give you an example.
    When you come off your UPS (N+1 UPS) at 480V, you normally enter a distribution panel with one main; that main then powers a couple of 225-amp 3-phase breakers that feed your PDUs/panels. Each of those breakers (a single point of failure) then feeds a 480>208/120 transformer (another single point of failure) that has 1-6 panels hanging off it. Your power would come from a single breaker on one of those panels.

    Let's isolate the transformer. Could you feed 2 transformers from the main distribution panel and connect them with a main-tie-main? You sure can, but now you're paying a LOT more than you would pay just to have 2N, and you're still single-fed to your rack, single-fed from the UPS to the main distribution panel, etc. Simply put, 2N is the best way to go. If anyone really wants me to go into it more and draw it all out, I am happy to. I really want to make sure everyone is an educated buyer.
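    To make the "count the single points of failure" exercise concrete, here is a toy model of the chain described above. The stage names and unit counts are illustrative, not a map of any real facility:

    ```python
    # Each stage of the power path, with how many independent units serve the rack.
    # Any stage with only one unit is a single point of failure (SPOF).
    chain_n_plus_1 = [
        ("utility feed",         1),
        ("UPS (N+1)",            2),  # redundant at the unit level
        ("main distribution",    1),
        ("225A feeder breaker",  1),
        ("480>208/120 xfmr",     1),
        ("panel breaker",        1),
    ]

    def single_points_of_failure(chain):
        """Return the names of every stage with no redundant unit."""
        return [name for name, units in chain if units == 1]

    print(single_points_of_failure(chain_n_plus_1))  # five SPOFs despite N+1 UPS

    # A true 2N design duplicates every stage end to end, so the list is empty:
    chain_2n = [(name, max(units, 2)) for name, units in chain_n_plus_1]
    print(single_points_of_failure(chain_2n))  # []
    ```

    The point the post makes falls out immediately: "N+1" on the UPS alone still leaves the rest of the path single-fed.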

    <<snipped>>
    Last edited by anon-e-mouse; 06-28-2009 at 06:54 PM.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  30. #30
    Join Date
    Apr 2008
    Location
    TX (home), CO (college)
    Posts
    385
    Interesting. I've always thought of N+1 as "no single point of failure". That is to say, if N = 1 then there would be two components in an N+1 system.
    Last edited by iansltx; 06-29-2009 at 10:25 AM.
    I see your bandwidth and raise you a gigabit
    Recommendations: MDDHosting shared, Virpus high-BW VPS, 100TB/SoftLayer for awesome servers

  31. #31
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by iansltx View Post
    Interesting. I've always thought of N+1 as "no single point of failure". That is to say, if N = 1 then there would be two components in an N+1 system.
    It's actually a very common misconception. It means the high-risk items, gensets and UPS, are N+1.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  32. #32
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by JordanJ View Post
    Its actually a very common misconception. It means the high risk items gensets and UPS are N+1.
    Correct; unless you have 2N at the UPS/generator level you will have a single point of failure somewhere: cabling, PDU, etc. Note: to fully utilize a 2N configuration, you also need to be using dual-corded equipment with redundant power supplies. I'm personally amazed at how many people say uptime is of utmost importance, yet only get a single power drop, to a single in-cabinet PDU, to a single power supply...
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  33. #33
    Join Date
    Dec 2001
    Location
    Atlanta
    Posts
    4,419
    Quote Originally Posted by KarlZimmer View Post
    Correct, unless you have 2N at the UPS/Generator level you will have a single point of failure somewhere, cabling, PDU, etc. Note: To fully utilize a 2N configuration, you also need to be using dual corded equipment with redundant power supplies. I'm personally amazed at how many people say uptime is of utmost importance, yet only get a single power drop, to a single in-cabinet PDU, to a single power supply...
    Very true.

    And they run the network into a single switch that is fed by only one power supply.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000

  34. #34
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,208
    The most important factors, I would say, are network carriers and support. Those need to be up to standard!

  35. #35
    Join Date
    Sep 2008
    Location
    Dallas, TX
    Posts
    4,552
    Thanks, really appreciate it. Used this when I was lookin'

  36. #36
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    I am glad we could help!
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  37. #37
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by JordanJ View Post
    3. Network Carriers
    At a minimum you should require a carrier neutral datacenter. Competition drives pricing. Therefore, by being in a carrier neutral facility with access to multiple providers, you increase your bottom line and decrease risk.
    With the carriers, it seems you skipped some things. First, the more carriers the better. It doesn't matter if it is a carrier neutral facility if there is only one network on-site. In addition, you would want diverse entry locations into the building and diverse paths to the building. We have seen it very commonly where a provider will have diverse entrances, but they'll just meet back together somewhere under the street, etc.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  38. #38
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Karl, great call, you're 100% correct! I will make sure to make that change!
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

