  1. #26
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Jeff,

Simply put, datacenters run differently now than they did years ago - technology is advancing, and more and more large-scale facilities of 15MW+ are being built.

    <<snipped>>

That means on DESIGN day (i.e., the worst possible case), the static load makes up a whopping 3.1% of our total cooling. If I take an average all-in system cooling figure of 0.70 kW/ton (which includes pumps, cooling tower, water treatment - everything) and the Phoenix power rate, the total cost of running the chillers to handle the building load would be all of $6,392.29/month at the worst possible time.

That $6,392.29 comes out of a maximum power bill of $981,920.
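To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch of the kind of calculation described above. The plant tonnage and the utility rate below are illustrative assumptions, not figures from this post - only the 0.70 kW/ton efficiency and the ~3.1% envelope share come from the discussion.

    # Rough sketch of the chiller-cost math described above.
    # Tonnage, utility rate, and hours are illustrative assumptions;
    # only the 0.70 kW/ton efficiency and the 3.1% envelope-load share
    # come from the post itself.

    SYSTEM_KW_PER_TON = 0.70    # all-in plant efficiency (pumps, towers, treatment)
    ENVELOPE_SHARE = 0.031      # building (static) load as a fraction of total cooling
    RATE_PER_KWH = 0.09         # assumed blended utility rate, $/kWh
    HOURS_PER_MONTH = 730

    def monthly_chiller_cost(total_cooling_tons: float,
                             load_share: float = 1.0,
                             kw_per_ton: float = SYSTEM_KW_PER_TON,
                             rate: float = RATE_PER_KWH) -> float:
        """Cost of running the chiller plant for a given slice of the cooling load."""
        kw = total_cooling_tons * load_share * kw_per_ton
        return kw * HOURS_PER_MONTH * rate

    # Example: a hypothetical 4,500-ton plant. The building-envelope slice is
    # tiny next to the IT load, which is the point being made above.
    total_tons = 4500
    print(f"Envelope-only cost: ${monthly_chiller_cost(total_tons, ENVELOPE_SHARE):,.2f}/mo")
    print(f"Full plant cost:    ${monthly_chiller_cost(total_tons):,.2f}/mo")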

This is more than just my own math: there are APC white papers on the subject, analyses by my engineering teams, and numbers from four different chiller manufacturers. All ran the figures, and all ended up with the same result.

Does heat play other roles besides cooling? Yes, it sure does. But if the "desert" were such a bad place for a datacenter, you would have to ask why:

    IBM
    American Express
FedEx
GoDaddy
eBay
    Monster
    Wells Fargo
    Banner Health

have all chosen one of the hottest parts of the country for their datacenters.

When an Intel fab goes down, they have to scrap EVERYTHING on the floor and recertify all of their gear at a cost of millions - yet the vast majority of chip makers, including Intel, choose to locate their plants in the desert Southwest.

Just like we progressed from Web 1.0 to Web 2.0, things have changed, and what's important has changed. With the work of The Green Grid, large facilities are changing the way they look at power and efficiency: a PUE of 2.0 is simply unacceptable, room-based cooling is out, and hot-aisle containment and plenum systems are in. What I am saying is not something I am making up - it's the result of many months of planning and research by multiple engineers, designers, and commissioning planners, plus CFD modeling and load-profile tests.
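For readers unfamiliar with the PUE metric referenced above, a minimal sketch follows; the wattages are invented purely for illustration.

    # PUE is total facility power divided by IT equipment power, so a PUE of
    # 2.0 means one watt of overhead (cooling, UPS losses, lighting) is burned
    # for every watt that actually reaches the servers.

    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        return total_facility_kw / it_load_kw

    print(pue(total_facility_kw=10_000, it_load_kw=5_000))  # 2.0 - "simply unacceptable"
    print(pue(total_facility_kw=6_500,  it_load_kw=5_000))  # 1.3 - containment/plenum territory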

    We will just have to agree to disagree.
    Last edited by anon-e-mouse; 06-28-2009 at 06:56 PM.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  2. #27
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by digitalpioneer View Post
    I'm surprised cost is ranked 10... if anyone were to follow that table they would be paying a fortune.
What's the cost of going down? How much is your business worth? What's more important: saving $50/month, or still having a business in a year?

Plenty of datacenters out there offer the items on this list at reasonable prices.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  3. #28
    Join Date
    Apr 2008
    Location
    TX (home), CO (college)
    Posts
    385
Slightly off-topic, but I'd think that N+1 redundancy would mean "no single point of failure" unless you specifically order, say, only one circuit for a full rack. Otherwise I'd expect A+B power per rack, and on up the line for anything with only one discrete "system". For generators I'd expect an extra unit to handle a failure, and dual power circuits to each. Maybe this is unrealistic, but I'm just brainstorming here.

Also, while the ten tips are nice, the good man's signature seems to belie his intent. If the person handing out the info were a large DC customer, or a DC whose main business wasn't colo, I'd think differently...
    I see your bandwidth and raise you a gigabit
    Recommendations: MDDHosting shared, Virpus high-BW VPS, 100TB/SoftLayer for awesome servers

  4. #29
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by iansltx View Post
Slightly off-topic, but I'd think that N+1 redundancy would mean "no single point of failure" unless you specifically order, say, only one circuit for a full rack. Otherwise I'd expect A+B power per rack, and on up the line for anything with only one discrete "system".
Well, this is part of the reason I made the post - that is not the case. Normally there are about 4-6 main breakers between the street and your rack, and each one is a single point of failure. Let me give you an example.
When you come off your UPS (N+1 UPS) at 480V, you normally enter a distribution panel with one main; that main then powers a couple of 225A three-phase breakers that feed your PDUs/panels. Each of those breakers (a single point of failure) then feeds a 480V-to-208/120V transformer (another single point of failure) with 1-6 panels hanging off it. Your power would come from a single breaker on one of those panels.

Let's isolate the transformer. Could you feed two transformers from the main distribution panel and connect them with a main-tie-main? You sure can, but now you're paying a LOT more than you would to just have 2N, and you're still single-fed to your rack, single-fed from the UPS to the main distribution panel, and so on. Simply put, 2N is the best way to go. If anyone really wants me to go into it more and draw it all out, I am happy to - I really want to make sure everyone is an educated buyer.
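As a rough illustration of this point, here is a minimal sketch that mirrors the component chain in the example above (UPS, distribution main, 225A breaker, transformer, panel breaker). The data structure and the counting are my own simplification for illustration, not a real design tool.

    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        feeds: int  # how many independent feeds exist at this stage

    def single_points_of_failure(chain: list[Stage]) -> list[str]:
        """Any stage with only one feed takes the rack down if it fails."""
        return [s.name for s in chain if s.feeds < 2]

    n_plus_1 = [
        Stage("UPS modules (N+1)", 2),           # redundant at the module level
        Stage("480V distribution main", 1),
        Stage("225A three-phase breaker", 1),
        Stage("480V-to-208/120V transformer", 1),
        Stage("panel breaker to rack", 1),
    ]

    two_n = [Stage(s.name, 2) for s in n_plus_1]  # every stage duplicated on an A and B path

    print("N+1 SPOFs:", single_points_of_failure(n_plus_1))
    print("2N  SPOFs:", single_points_of_failure(two_n))   # -> none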

    <<snipped>>
    Last edited by anon-e-mouse; 06-28-2009 at 06:54 PM.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  5. #30
    Join Date
    Apr 2008
    Location
    TX (home), CO (college)
    Posts
    385
    Interesting. I've always thought of N+1 as "no single point of failure". That is to say, if N = 1 then there would be two components in an N+1 system.
    Last edited by iansltx; 06-29-2009 at 10:25 AM.
    I see your bandwidth and raise you a gigabit
    Recommendations: MDDHosting shared, Virpus high-BW VPS, 100TB/SoftLayer for awesome servers

  6. #31
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by iansltx View Post
    Interesting. I've always thought of N+1 as "no single point of failure". That is to say, if N = 1 then there would be two components in an N+1 system.
It's actually a very common misconception. It just means the high-risk items - the gensets and UPS - are N+1.
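For anyone following along, a minimal sketch of the difference, using generator sizing as an example; the load and unit ratings are invented for illustration.

    import math

    def units_required(load_kw: float, unit_kw: float, redundancy: str) -> int:
        n = math.ceil(load_kw / unit_kw)   # N = units needed to carry the load
        if redundancy == "N":
            return n
        if redundancy == "N+1":
            return n + 1                   # one spare unit, shared distribution
        if redundancy == "2N":
            return 2 * n                   # a fully duplicated second system
        raise ValueError(redundancy)

    # e.g. a 4 MW critical load on 2 MW gensets
    for scheme in ("N", "N+1", "2N"):
        print(scheme, units_required(4000, 2000, scheme))
    # N=2, N+1=3, 2N=4. N+1 protects against a unit failure, but the
    # distribution path downstream can still be a single point of failure.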
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  7. #32
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,957
    Quote Originally Posted by JordanJ View Post
It's actually a very common misconception. It just means the high-risk items - the gensets and UPS - are N+1.
Correct - unless you have 2N at the UPS/generator level, you will have a single point of failure somewhere: cabling, PDU, etc. Note: to fully utilize a 2N configuration, you also need to be using dual-corded equipment with redundant power supplies. I'm personally amazed at how many people say uptime is of utmost importance, yet only get a single power drop, to a single in-cabinet PDU, to a single power supply...
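A back-of-the-envelope sketch of why the dual-cord point matters; the 99.9% per-path availability is an assumed figure, purely for illustration, and it assumes the two paths fail independently.

    def dual_path_availability(single_path: float) -> float:
        """Two independent paths: the rack is down only if both fail at once."""
        return 1 - (1 - single_path) ** 2

    a = 0.999                                           # assumed availability of one power path
    print(f"single cord: {a:.5%}")                      # ~8.8 hours of downtime/year
    print(f"dual cord:   {dual_path_availability(a):.5%}")  # ~30 seconds/year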
    Karl Zimmerman - Founder & CEO of Steadfast
    VMware Virtual Data Center Platform

    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation

  8. #33
    Join Date
    Dec 2001
    Location
    Houston Texas
    Posts
    4,420
    Quote Originally Posted by KarlZimmer View Post
Correct - unless you have 2N at the UPS/generator level, you will have a single point of failure somewhere: cabling, PDU, etc. Note: to fully utilize a 2N configuration, you also need to be using dual-corded equipment with redundant power supplies. I'm personally amazed at how many people say uptime is of utmost importance, yet only get a single power drop, to a single in-cabinet PDU, to a single power supply...
    Very true.

And then they run the network into a single switch that is fed by only one power supply.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000

  9. #34
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,225
The most important factors, I would say, are network carriers and support. Those need to be up to standard!

  10. #35
    Join Date
    Sep 2008
    Location
    Dallas, TX
    Posts
    4,568
Thanks, really appreciate it. I used this when I was looking.

  11. #36
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    I am glad we could help!
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  12. #37
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,957
    Quote Originally Posted by JordanJ View Post
    3. Network Carriers
    At a minimum you should require a carrier neutral datacenter. Competition drives pricing. Therefore, by being in a carrier neutral facility with access to multiple providers, you increase your bottom line and decrease risk.
With the carriers, it seems you skipped some things. First, the more carriers the better - it doesn't matter if it is a carrier-neutral facility if there is only one network on-site. In addition, you want diverse entry locations into the building and diverse paths to the building. We very commonly see providers with diverse entrances whose paths just meet back together somewhere under the street, etc.
    Karl Zimmerman - Founder & CEO of Steadfast
    VMware Virtual Data Center Platform

    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation

  13. #38
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
Karl, great call - you're 100% correct! I will make sure to make that change!
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services
