Results 26 to 38 of 38
-
06-27-2009, 10:51 PM #26Web Hosting Master
- Join Date
- Apr 2006
- Location
- Phoenix
- Posts
- 808
Jeff,
Simply put, datacenters run differently now than they did years ago - technology is advancing, and more and more large-scale facilities of 15 MW+ are being built.
<<snipped>>
That means on DESIGN day (that means worst possible case) the static load makes up a whopping 3.1% of our total cooling. If I take an average system cooling number (includes pumps, cooling tower, water treatment - everything) of 0.70 kW/ton (pretty average) and the Phoenix power rate, the total cost of running the chillers to handle the building load would be a whopping $6,392.29/month at the worst possible time.
That $6,392.29 is out of a max power bill of $981,920.
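For anyone who wants to sanity-check that kind of number, the arithmetic is just tons x kW/ton x hours x rate. Here is a quick back-of-envelope sketch; the tonnage and utility rate below are placeholders I made up, since the post gives neither, so it illustrates the method rather than reproducing the exact $6,392.29 figure.

```python
# Back-of-envelope monthly chiller operating cost.
# NOTE: the tonnage and $/kWh rate are illustrative assumptions,
# not the actual figures behind the numbers quoted above.

KW_PER_TON = 0.70        # total system efficiency (chillers, pumps, towers)
HOURS_PER_MONTH = 730    # average hours in a month
RATE_PER_KWH = 0.08      # assumed commercial rate in $/kWh

def monthly_cooling_cost(load_tons: float) -> float:
    """Monthly electricity cost to cool a given load, in dollars."""
    kw_draw = load_tons * KW_PER_TON
    return kw_draw * HOURS_PER_MONTH * RATE_PER_KWH

# e.g. a hypothetical 150-ton static (building envelope) load:
print(f"${monthly_cooling_cost(150):,.2f}/month")  # $6,132.00/month
```

The point of the exercise is the ratio: even on design day, the envelope load is a small slice of the total cooling bill.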
This is more than just math I have done: there are APC white papers on the subject, math my engineering teams have done, and numbers from 4 different chiller manufacturers. All ran the numbers - all ended up with the same result.
Does heat play other roles besides cooling? Yes, it sure does, but saying the "desert" is a bad place for a datacenter raises the question of why:
IBM
American Express
FedEx
GoDaddy
eBay
Monster
Wells Fargo
Banner Health
have all chosen one of the hottest parts of the country for their datacenters.
When a fab plant for Intel goes down, they have to scrap EVERYTHING on the floor and re-certify all of their gear at a cost of millions - yet the vast majority of chip makers, including Intel, choose to locate their plants in the desert Southwest.
Just like we had Web 1.0 and have now progressed to Web 2.0, things have changed; what's important has changed. With the work of The Green Grid, large facilities are changing the way they look at power and efficiency: a PUE of 2.0 is simply unacceptable, room-based cooling is out, and hot-aisle containment and plenum systems are in. What I am saying is simply not something I am making up - it's the result of many months of planning and research: multiple engineers, designers, commissioning planners, CFD modeling, and load profile tests.
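For readers who haven't run into the term, PUE (Power Usage Effectiveness) is The Green Grid's ratio of total facility power to IT equipment power; lower is better, and 1.0 is the theoretical floor. A minimal sketch with made-up loads:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

# A legacy room-cooled site drawing 2 MW to deliver 1 MW of IT load
# sits at the PUE of 2.0 called unacceptable above:
print(pue(2000, 1000))  # 2.0

# Containment/plenum designs aim to shrink the non-IT overhead:
print(pue(1300, 1000))  # 1.3
```

Every watt above the IT load is overhead (cooling, UPS losses, lighting), which is why cooling design dominates the efficiency conversation.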
We will just have to agree to disagree.
Last edited by anon-e-mouse; 06-28-2009 at 06:56 PM.
Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
Managed Dedicated Servers | Bare-Metal Servers | Cloud Services
-
06-28-2009, 03:27 PM #28Aspiring Evangelist
- Join Date
- Apr 2008
- Location
- TX (home), CO (college)
- Posts
- 385
Slightly off-topic, but I'd think that N+1 redundancy would mean "no single point of failure" unless you specifically order, say, only one circuit for a full rack. Otherwise I'd expect A+B power per rack, and on up the line for anything in which there's only one discrete "system". For generators I'd expect an extra one to handle a failure, and dual power circuits to each. Maybe this is unrealistic, but I'm just brainstorming here.
Also, while the ten tips are nice, the good man's signature seems to belie his intent. If the person handing out the info were a large DC customer, or a DC whose main business wasn't colo, I'd think differently...
I see your bandwidth and raise you a gigabit
Recommendations: MDDHosting shared, Virpus high-BW VPS, 100TB/SoftLayer for awesome servers
-
06-28-2009, 04:35 PM #29Web Hosting Master
- Join Date
- Apr 2006
- Location
- Phoenix
- Posts
- 808
Well, this is part of the reason I made the post - that is not the case. Normally there are about 4-6 main breakers between the street and your rack, each one a single point of failure. Let me give you an example.
When you come off your UPS (N+1 UPS) at 480V, you normally enter a distribution panel with 1 main; that 1 main then powers a couple of 225-amp 3-phase breakers that power your PDUs/panels. Each of those breakers (a single point of failure) then feeds a 480>208/120V transformer (another single point of failure) that has 1-6 panels hanging off it. Your power would come from a single breaker on one of those panels.
Let's isolate the transformer. Could you feed 2 transformers from the main distribution panel and connect them with a main-tie-main? You sure can, but now you're paying a lot more than you would pay just to have 2N, and you're still single-fed to your rack, single-fed from the UPS to the main distribution panel, etc. Simply put, 2N is the best way to go. If anyone really wants me to go into it more and draw it all out, I am happy to. I really want to make sure everyone is an educated buyer.
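To see why those stacked breakers and transformers matter, treat the street-to-rack chain as components in series: every one of them must be up for the rack to have power, so their availabilities multiply. The per-component numbers below are hypothetical, purely to show the effect:

```python
# Series chain: the rack has power only if EVERY component is up,
# so individual availabilities multiply (hypothetical numbers).

def series_availability(availabilities) -> float:
    """Combined availability of components wired in series."""
    total = 1.0
    for a in availabilities:
        total *= a
    return total

# Roughly 6 single points of failure between street and rack,
# each assumed to be 99.99% available:
chain = [0.9999] * 6
print(f"{series_availability(chain):.6f}")  # ~0.999400
```

Each extra single-fed stage can only lower the product, which is the arithmetic behind "2N is the best way to go."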
<<snipped>>
Last edited by anon-e-mouse; 06-28-2009 at 06:54 PM.
Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
Managed Dedicated Servers | Bare-Metal Servers | Cloud Services
-
06-29-2009, 10:22 AM #30Aspiring Evangelist
- Join Date
- Apr 2008
- Location
- TX (home), CO (college)
- Posts
- 385
Interesting. I've always thought of N+1 as "no single point of failure". That is to say, if N = 1 then there would be two components in an N+1 system.
Last edited by iansltx; 06-29-2009 at 10:25 AM.
I see your bandwidth and raise you a gigabit
Recommendations: MDDHosting shared, Virpus high-BW VPS, 100TB/SoftLayer for awesome servers
-
06-30-2009, 02:49 AM #32THE Web Hosting Master
- Join Date
- Jan 2003
- Location
- Chicago, IL
- Posts
- 6,957
Correct - unless you have 2N at the UPS/generator level, you will have a single point of failure somewhere: cabling, PDU, etc. Note: to fully utilize a 2N configuration, you also need to be using dual-corded equipment with redundant power supplies. I'm personally amazed at how many people say uptime is of utmost importance, yet only get a single power drop, to a single in-cabinet PDU, to a single power supply...
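Karl's point about dual-corded gear can be put in the same back-of-envelope terms: with 2N, the load loses power only if both independent paths fail at once, while a single-corded server gets no benefit from the second path at all. The path availability below is an illustrative assumption, and the math assumes the A and B paths really are independent:

```python
# 2N: two independent A/B power paths; the load fails only if BOTH do.
# (Assumes true independence and dual-corded equipment on both feeds.)

def dual_feed_availability(single_path: float) -> float:
    """Availability of a load fed by two independent paths."""
    return 1.0 - (1.0 - single_path) ** 2

path = 0.999  # hypothetical availability of one complete power path
print(f"single feed: {path:.4%}")
print(f"2N dual feed: {dual_feed_availability(path):.4%}")
```

The unavailability gets squared, so even a modest single-path figure improves dramatically - but only for equipment actually plugged into both feeds.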
Karl Zimmerman - Founder & CEO of Steadfast
VMware Virtual Data Center Platform
karl @ steadfast.net - Sales/Support: 312-602-2689
Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
-
07-01-2009, 02:21 AM #34Not so experienced
- Join Date
- Jul 2008
- Location
- New Zealand
- Posts
- 1,225
The most important factors, I would say, are network carriers and support. Those need to be up to standard!
-
07-03-2009, 12:56 PM #35Web Hosting Master
- Join Date
- Sep 2008
- Location
- Dallas, TX
- Posts
- 4,568
Thanks, really appreciate it. I used this when I was looking.
-
07-03-2009, 02:07 PM #36Web Hosting Master
- Join Date
- Apr 2006
- Location
- Phoenix
- Posts
- 808
I am glad we could help!
Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
Managed Dedicated Servers | Bare-Metal Servers | Cloud Services
-
07-03-2009, 02:36 PM #37THE Web Hosting Master
- Join Date
- Jan 2003
- Location
- Chicago, IL
- Posts
- 6,957
With the carriers, it seems you skipped some things. First, the more carriers the better: it doesn't matter if it is a carrier-neutral facility if there is only one network on-site. In addition, you would want diverse entry locations into the building and diverse paths to the building. We have very commonly seen a provider with diverse entrances whose paths just meet back together somewhere under the street.
Karl Zimmerman - Founder & CEO of Steadfast
VMware Virtual Data Center Platform
karl @ steadfast.net - Sales/Support: 312-602-2689
Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
-
07-03-2009, 09:50 PM #38Web Hosting Master
- Join Date
- Apr 2006
- Location
- Phoenix
- Posts
- 808
Karl, great call - you're 100% correct! I will make sure to make that change!
Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
Managed Dedicated Servers | Bare-Metal Servers | Cloud Services