  1. #1
    Join Date
    Feb 2001
    Posts
    57

    Peer 1 Seattle (Westin) Planned power outage. What?

    So I got an email from peer 1 detailing a 30 minute planned power outage on Mar 16th " to replace the transformer and bypass panel feeding our Seattle 7th Floor Data Center."

    "There is also a potential for short periods of power loss during a 90 minute testing phase that follows the 30 minute shut down."

    I have never heard of intentionally powering down a data centre. Is this common? I would think that in a properly designed data centre one would have multiple power feeds and have the ability to work on them separately without a service interruption.

    They go on to say "Due to the impact this will cause we are offering you the option to have a Data Center Technician power off their equipment prior to the 30 minute shut down window. You can then choose to have your equipment powered on after the 30 minute shut down or wait until the 90 minute testing is complete."

  2. #2
    Is it common? Sure. It's common to power everything down on one feed, once every 3 to 5 years, for inspection and cleaning purposes at the very minimum. That's why some facilities operate dual independent A/B feeds with shared-nothing architecture - if you need to take one down, the other one remains perfectly viable.

  3. #3
    Join Date
    May 2006
    Location
    NJ, USA
    Posts
    6,456
    Quote Originally Posted by funkee View Post
    So I got an email from peer 1 detailing a 30 minute planned power outage on Mar 16th " to replace the transformer and bypass panel feeding our Seattle 7th Floor Data Center."

    "There is also a potential for short periods of power loss during a 90 minute testing phase that follows the 30 minute shut down."

    I have never heard of intentionally powering down a data centre. Is this common? I would think that in a properly designed data centre one would have multiple power feeds and have the ability to work on them separately without a service interruption.

    They go on to say "Due to the impact this will cause we are offering you the option to have a Data Center Technician power off their equipment prior to the 30 minute shut down window. You can then choose to have your equipment powered on after the 30 minute shut down or wait until the 90 minute testing is complete."
    I usually see a full shutdown with financial data centers since they are dead to the world on weekends.. but not normal datacenters..
    simplywww: directadmin and cpanel hosting that will rock your socks
    Need some work done in a datacenter in the NYC area? NYC Remote Hands can do it.

    Follow my "deals" Twitter for hardware specials.. @dougysdeals

  4. #4
    Join Date
    Aug 2007
    Posts
    351
    Sounds like 2011 is going to be the year of electrical issues in data centers. Both intentional and unintentional.

  5. #5
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,706
    Quote Originally Posted by funkee View Post
    So I got an email from peer 1 detailing a 30 minute planned power outage on Mar 16th " to replace the transformer and bypass panel feeding our Seattle 7th Floor Data Center."

    "There is also a potential for short periods of power loss during a 90 minute testing phase that follows the 30 minute shut down."

    I have never heard of intentionally powering down a data centre. Is this common? I would think that in a properly designed data centre one would have multiple power feeds and have the ability to work on them separately without a service interruption.

    They go on to say "Due to the impact this will cause we are offering you the option to have a Data Center Technician power off their equipment prior to the 30 minute shut down window. You can then choose to have your equipment powered on after the 30 minute shut down or wait until the 90 minute testing is complete."
    Uh, unless you yourself are also running A+B circuits with redundant power supplies in your servers, you are more than likely plugged into ONE UPS, ONE ATS, ONE BREAKER PANEL, etc.
    If any of those need maintenance or experience an outage, there is absolutely nothing you can do to avoid downtime. Unless your datacenter specifically sold you some other type of agreement, that's how it most likely is.

  6. #6
    Join Date
    Feb 2001
    Posts
    57
    Quote Originally Posted by CGotzmann View Post
    Uh, unless you yourself are also running A+B circuits with redundant power supplies in your servers, you are more than likely plugged into ONE UPS, ONE ATS, ONE BREAKER PANEL, etc.
    If any of those need maintenance or experience an outage, there is absolutely nothing you can do to avoid downtime. Unless your datacenter specifically sold you some other type of agreement, that's how it most likely is.
    Shouldn't the UPS sit between all that stuff and your servers? If equipment upstream from the UPS needs servicing, that shouldn't be a problem, and if the UPS needs servicing it can be bypassed temporarily.

    I've had space in three data centres for almost 10 years and have had notifications of all kinds of electrical work, but it has never been service impacting.

    FYI Luckily I don't actually have any equipment @ peer1 so this is just a philosophical discussion.

  7. #7
    Join Date
    Aug 2007
    Posts
    351
    @funkee

    I agree with your assessment and share your opinion.

    I find it shocking how non-redundant an ever-growing number of data centers are in reality (as opposed to in their self-promotional marketing materials).

    When people come down on cheap facilities and DIY non-traditional data centers, it's for things exactly like this - single-homed, non-redundant power.

    30 minute shutdown + 90 minutes of testing = 2 HOURS of downtime.

    That would take that facility's uptime for the entire year (absent any other outages) down to roughly 99.97%.
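
    A quick sanity check on that arithmetic (a rough sketch, assuming the full 2-hour window is consumed and a non-leap year):

    ```python
    # Rough uptime math for a single 2-hour planned outage in a year.
    HOURS_PER_YEAR = 365 * 24            # 8,760 hours, ignoring leap years

    downtime_hours = 0.5 + 1.5           # 30 min shutdown + 90 min testing window
    uptime_pct = 100 * (1 - downtime_hours / HOURS_PER_YEAR)

    print(f"{uptime_pct:.3f}%")          # ~99.977% -- close to "three nines", not 100%
    ```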

    Peer1 boasts a 100% uptime SLA, seemingly as standard. I suspect this work window falls under some sort of exemption in the legalese.

    There are going to be a number of folks who never got the email about this work, and a number of servers that don't restart right, services that don't start, and config issues. There always are.

    FYI: I don't have gear with Peer1 and won't any time in the future.

  8. #8
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,706
    Quote Originally Posted by funkee View Post
    Shouldn't the UPS sit between all that stuff and your servers? If equipment upstream from the UPS needs servicing, that shouldn't be a problem, and if the UPS needs servicing it can be bypassed temporarily.

    I've had space in three data centres for almost 10 years and have had notifications of all kinds of electrical work, but it has never been service impacting.

    FYI Luckily I don't actually have any equipment @ peer1 so this is just a philosophical discussion.
    The UPS can power equipment for at most 5-10 minutes, depending on load levels. UPSes are just there to ride through the temporary interruption of power between a utility feed drop and generator spin-up.
    Though they said transformer, so they could possibly bypass over to generator power while they perform the maintenance on the transformer... but they also mentioned the bypass panel, which could mean the ATS bypass panel, which would rule out actually transferring between utility and generator power.
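
    For a rough sense of why UPS runtime is so short, here is a back-of-the-envelope estimate (a sketch only; the battery capacity, load, and efficiency figures below are invented for illustration, not Peer1's numbers):

    ```python
    def ups_runtime_minutes(battery_wh: float, load_w: float, inverter_eff: float = 0.9) -> float:
        """Very rough runtime estimate: usable stored energy divided by the load."""
        return battery_wh * inverter_eff / load_w * 60

    # Example: a 200 kWh battery string carrying a 1.5 MW critical load
    print(round(ups_runtime_minutes(battery_wh=200_000, load_w=1_500_000), 1))  # ~7.2 minutes
    ```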

    It happens... sometimes things just have to go down or break on their own, no matter how redundant

  9. #9
    Quote Originally Posted by funkee View Post
    Shouldn't the UPS sit between all that stuff and your servers?
    Not necessarily. There are stepdown transformers ahead of the UPS at some point, but the UPS output power usually runs through another transformer, because it's usually 3ph 480v and needs to step down to something usable by customers, like 3ph 208v. As they mentioned the bypass panel (presumably the UPS full-wrap bypass), I'm guessing the transformer they're referring to is the output transformer.

    Quote Originally Posted by pubcrawler
    When people come down on cheap facilities and DIY non traditional data centers it's for things like described - single homed non redundant power.
    Ok but there's nothing wrong with a facility owner choosing to operate this way, and there's nothing wrong with a customer buying this service as-is, so long as the facility is honest and forthright about their capabilities. Some people are ok paying double for A and B feeds, usually totaling some $750/mo for dual 110v 20a circuits. Others want to pay what this actually costs the facility, typically a bit under $200/mo for a single feed, and are willing to forego the increased reliability along with the price. Multiply that over 10 or 20 cabinets.. the difference is a Lexus and a big house.
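
    To put rough numbers on that trade-off (a sketch using the ballpark per-circuit prices mentioned above, which of course vary by market):

    ```python
    # Annual cost delta between dual A+B feeds and a single feed, using the
    # ballpark figures above (~$750/mo for dual circuits vs ~$200/mo for one).
    dual_feed_mo, single_feed_mo = 750, 200

    for cabinets in (1, 10, 20):
        delta = (dual_feed_mo - single_feed_mo) * cabinets * 12
        print(f"{cabinets:>2} cabinets: ${delta:,}/year extra for A+B")
    # 1 cabinet:   $6,600/year
    # 10 cabinets: $66,000/year
    # 20 cabinets: $132,000/year
    ```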

  10. #10
    Join Date
    Aug 2002
    Location
    Seattle
    Posts
    5,512
    We were down for about 45 minutes in 2009 when Peer1 lost an entire row of cabinets in Los Angeles. They didn't know about it until we opened a ticket and someone went to look at it.

    At least this time you have warning

  11. #11
    Join Date
    Jan 2005
    Location
    San Francisco/Hot Springs
    Posts
    988
    Quote Originally Posted by funkee View Post
    I have never heard of intentionally powering down a data centre. Is this common? I would think that in a properly designed data centre one would have multiple power feeds and have the ability to work on them separately without a service interruption.
    It happens quite a bit, it's called "maintenance"
    Seriously though, everything has to turn off sometimes, one way or another.
    AppliedOperations - Premium Service
    Bandwidth | Colocation | Hosting | Managed Services | Consulting
    www.appliedops.net

  12. #12
    Join Date
    Jun 2003
    Location
    Las Vegas, NV
    Posts
    842
    There are certainly issues where power has to be disabled in order to perform upgrades, maintenance, or to make other changes. Based on the info provided by Peer1 in the email you shared, I'd guess that they may be doing maintenance on a low-voltage distribution transformer that feeds a distribution panel and its related bypass - because those are downstream of any UPS gear in most cases, it's not typically possible to perform maintenance on that type of gear without taking the power offline. That's just a guess, but it would make sense as to why they can't maintain power during the maintenance.

    Even in facilities that offer a true A/B power infrastructure, in my experience many of the customers in the facility either won't elect to pay for A/B power or, if they do, make implementation errors that result in a partial outage if one of their power feeds does go down. We typically recommend that our customers utilize a rack PDU that has a built-in ATS to make sure that they are truly A/B redundant; that way they don't have to use servers and other equipment with dual PSUs in order to maintain A/B power redundancy at the equipment level.
    Rob Tyree
    Versaweb - DDoS Protected Cloud and Dedicated Server Hosting
    Fiberhub - SAS70 Type-II Colocation in Las Vegas and Seattle

  13. #13
    To the original poster, I just noticed the "bad design" tag that you placed on this thread. You did know you were buying a single feed, and that invariably you'd have no redundancy *when*, not *if* they needed to take the power down for maintenance, right?

  14. #14
    Join Date
    Nov 2004
    Location
    Chicago
    Posts
    413
    I was going to reply to this after he first posted but it is another example of people (a) not understanding what they are buying and (b) having completely unrealistic expectations of services when it comes to the prices they are paying.

    As others have alluded to here, in a shared-nothing environment where the cost would make most users lurking on WHT puke, these kinds of issues are less common. However, for just about any service that is affordable to people buying services offered here, there is a good chance these things will happen from time to time. This doesn't mean that a facility is poorly designed. Providers can spend endless amounts of money trying to make sure everything is always available. However, the fundamental problem is twofold:

    (1) Most customers are not going to want to pay the higher costs for such reliability ($$$-$,$$$/month).


    (2) It seems rather pointless to complain about a provider's lack of redundancy, especially when customers demand services for pennies and don't invest their savings into building redundancy into their applications or services.


    Therefore, the approach most end-users take is: find the least expensive provider that offers the highest levels of reliability and host my single (point-of-failure) application with said provider. When something goes wrong, point the finger at the provider.

    It would be helpful if these users remembered the motto:

    Two is one and one is none.
    Lee Evans, Owner/Operator
    LeeWare Development
    Linux Dedicated Server Grids
    http://www.leeware.com

  15. #15
    Join Date
    Feb 2003
    Location
    Detroit
    Posts
    836
    Sure, 440V UPS to transformer to service panel. This sounds like a typical setup and the only way to service it past the UPS point is to shutdown.

    This is why redundant power paths can be important. You should be able to order a second power circuit off of an independent UPS and transformer, at a premium price. This cannot be built into a single circuit feed and is much more expensive to provide.
    managedway
    WE BUILD CLOUDS

    Cloud Computing | Fiber Optic Internet | Colocation

  16. #16
    While I agree with the overall theme in this thread on the philosophy that you get what you pay for, there are various power bus designs that change the redundancy of even a single bus.

    A+B feeds alone can be the most redundant of designs, however, the manner in which the A+B busses are designed also plays a big factor in the anticipated uptime of a facility.

    For example, if a facility has A+B busses, but only 1 UPS on each bus connected to separate ATSes that all tie into a single generator, I would consider this less redundant than a single bus fed by a distributed, parallel UPS bus design backed by a bank of generators that are N+1 redundant.

  17. #17
    Join Date
    Aug 2007
    Posts
    351
    Another good thread here as the result of an unfortunate situation.

    I am now sold on in-rack PDU ATS units with A+B feeds - provided any provider we consider can deliver them in a redundant fashion, so a maintenance event like this doesn't turn all the power off.

    Datacenters like Peer1 promote their 100% uptime SLA, but exempt themselves with a 2 hour planned outage. I *wonder*, if you are A+B powered, whether they can/will be providing you power during this outage. If they can provide you with A+B to survive this outage, then they need to add an asterisk next to that 100% claim and spell out the exemptions up top, not buried in some legalese or not at all. Otherwise, stop it with the 100% uptime marketing fluff.

  18. #18
    Join Date
    Oct 2005
    Location
    Tucson AZ
    Posts
    367
    This is rather amazing to me. Having gone through 3 complete power service upgrades on our facility without a single outage (including replacing multiple transformers upstream and downstream of UPS units), it seems odd they can't avoid an outage... this is what maintenance bypass panels, paralleling gear, etc. are for... are DCs these days just not wanting to kick out the little bit of extra dough for maintenance panels? Seems silly.

  19. #19
    Join Date
    Aug 2008
    Posts
    133
    Quote Originally Posted by Rob T View Post
    Even in facilities that offer a true A/B power infrastructure, in my experience many of the customers in the facility either won't elect to pay for A/B power or, if they do, make implementation errors that result in a partial outage if one of their power feeds does go down. We typically recommend that our customers utilize a rack PDU that has a built-in ATS to make sure that they are truly A/B redundant; that way they don't have to use servers and other equipment with dual PSUs in order to maintain A/B power redundancy at the equipment level.
    Thanks for this comment. Hadn't thought of addressing the problem by "dual-homing" the PDU itself. Food for thought.

    Cheers,

    -D

  20. #20
    Quote Originally Posted by pubcrawler View Post
    Datacenters like Peer1 promote their 100% uptime SLA, but exempt themselves with a 2 hour planned outage. I *wonder*, if you are A+B powered, whether they can/will be providing you power during this outage. If they can provide you with A+B to survive this outage, then they need to add an asterisk next to that 100% claim and spell out the exemptions up top, not buried in some legalese or not at all. Otherwise, stop it with the 100% uptime marketing fluff.
    Most DCs, telcos, etc. claiming a 100% uptime SLA do so *outside of normal maintenance windows*. Why would you not exempt yourself for planned maintenance? Most people are not foolish enough to run a multinational enterprise in the manner you're alluding to..

  21. #21
    Join Date
    Feb 2001
    Posts
    57
    For the record I don't have equipment in peer1's data center. I did, however, have a very high opinion of them until I heard about this.

    I'm surprised at the general reaction. We all agree that it's possible to construct a system that would avoid an outage. Keep in mind that this facility is in the Westin Building in downtown Seattle and a single cabinet costs ~$1,000/month. What are you paying for if not reliable power and cooling? 6sqft of space?

    Also, I'd say that it's disingenuous to offer a 100% uptime SLA if the design of your electrical system makes it impossible to deliver 100% uptime even if everything goes according to plan. All legalese aside, that's a bit unethical, no?

    It would be like me offering 1 hour photo processing even though my photo processing machine takes 1hr 20min to process a roll.

    The costs at Peer1 are about the same as facilities like Equinix, FiberCloud, and Internap in the same building. Would you guys expect downtime like this from them?

  22. #22
    Join Date
    Nov 2004
    Location
    Chicago
    Posts
    413
    (1) So we've established that you don't have equipment in the facility. Which means that you are not a paying customer and are not impacted by this maintenance period.

    (2) Who said that the power and cooling is not reliable? This is an announced maintenance window. (see 1 + my previous post about application level redundancy.)

    (3) I wouldn't call it disingenuous or unethical unless you think insurance products are disingenuous. I would say that it is fairly common knowledge that insurance companies cover a lot of things that are (least) likely to happen. The opposite would cost more than anyone is willing to pay. Therefore, providers that offer any kind of SLA provide the same kinds of protections - which means, in plain English, they protect against certain kinds of disruptions while the most common ones are not covered. Furthermore, I think that you are confusing an SLA with a guarantee that something will never be down, which is an unrealistic expectation.
    Lee Evans, Owner/Operator
    LeeWare Development
    Linux Dedicated Server Grids
    http://www.leeware.com

  23. #23
    Join Date
    Feb 2011
    Posts
    584
    I am not an electrician, but I imagine replacing a bypass panel in 30 minutes would actually be quite an accomplishment.

    Datacenters do fail, and a planned outage is much better than an unannounced one. There are dependencies that people don't realize until they bite. For example, backup generators with a cooling dependency on municipal water can result in an outage when that water is diverted to take care of a fire that took out the power lines feeding the datacenter. Or whatever happens to the Westin building when the Cascadia fault releases its accumulated energy next time - you might feel lucky if you ever get access to your server again.

  24. #24
    Join Date
    Aug 2007
    Posts
    351
    I for one am tired of all the fluff marketing deception in this industry.

    It's nice that the sales literature demands attention with 100% uptime, but obviously they can't hit that mark, even over an entire year, based on this downtime - be it scheduled or not. Who knows if they could even keep power up on an A+B feed --- they'd be selling that to anyone about to be impacted if so.

    The industry really needs a 3rd party audit of companies and facilities.

    A planned outage is an outage. The game of saying it's planned and let's not count it is simply, unethical in my world.

    I considered Peer1 --- new Toronto location, but this leaves me looking elsewhere.

    Time for folks to engineer solutions and spend less time on twisty deceptive marketing and 6 point legalese with gotchas.

  25. #25
    Join Date
    Jun 2006
    Location
    Calgary, Alberta
    Posts
    688
    Quote Originally Posted by pubcrawler View Post
    I for one am tired of all the fluff marketing deception in this industry.

    It's nice that the sales literature demands attention with 100% uptime, but obviously they can't hit that mark, even over an entire year, based on this downtime - be it scheduled or not. Who knows if they could even keep power up on an A+B feed --- they'd be selling that to anyone about to be impacted if so.

    The industry really needs a 3rd party audit of companies and facilities.

    A planned outage is an outage. The game of saying it's planned and let's not count it is simply, unethical in my world.

    I considered Peer1 --- new Toronto location, but this leaves me looking elsewhere.

    Time for folks to engineer solutions and spend less time on twisty deceptive marketing and 6 point legalese with gotchas.
    Did Peer1 not cause some big power issue in downtown Vancouver not too long ago?

  26. #26
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,706
    Quote Originally Posted by pubcrawler View Post
    I for one am tired of all the fluff marketing deception in this industry.

    It's nice that the sales literature demands attention with 100% uptime, but obviously they can't hit that mark, even over an entire year, based on this downtime - be it scheduled or not. Who knows if they could even keep power up on an A+B feed --- they'd be selling that to anyone about to be impacted if so.

    The industry really needs a 3rd party audit of companies and facilities.

    A planned outage is an outage. The game of saying it's planned and let's not count it is simply, unethical in my world.

    I considered Peer1 --- new Toronto location, but this leaves me looking elsewhere.

    Time for folks to engineer solutions and spend less time on twisty deceptive marketing and 6 point legalese with gotchas.
    I guarantee if people paid enough they would get truly redundant solutions.
    $1000 is not a lot for a rack.

  27. #27
    Join Date
    Jun 2006
    Location
    Calgary, Alberta
    Posts
    688
    Quote Originally Posted by benj114 View Post
    Did Peer1 not cause some big power issue in downtown Vancouver not too long ago?
    Never mind, they were affected by the big power issue:

    http://www.peer1.com/blog/2008/07/va...-power-outage/

  28. #28
    Join Date
    Aug 2007
    Posts
    351


    Quote Originally Posted by CGotzmann View Post
    I guarantee if people paid enough they would get truly redundant solutions.
    $1000 is not a lot for a rack.
    $1000 is fine for a rack... Depending on what other upsells there are and what gets included.

    We have folks selling empty featureless racks with no bandwidth and single homed power for more than that all over the place.

    Many providers probably can't truly offer a redundant solution even with A+B power wired to your rack. Thus, auditing and real data on data centers would be a breath of fresh air. But the facility folks would hate it.

    My issue is with a place that markets heavily and claims to be a big player in the industry then has folks about to be offline for two hours.

    Sure, things do happen, but this is preventable, like most data center outages. It should be preventable by the facility. Otherwise, make no claims of a 100% uptime SLA unless it's duly noted that it requires an additional power feed and a surcharge to the customer.

    I am surprised everyone thinks Peer1 having a 2 hour outage is just swell.

  29. #29
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    If you're on a fairly typical N+1 or similar redundant power configuration and you can't stand a 30 minute planned outage, then you don't have the power redundancy you need and should pay for a 2N, or better, configuration with dual-corded gear and A+B circuits. If you're on a single run, there will be issues and maintenance that affect a single circuit, and that is well beyond the scope of what is possible with a UPS and generator.

    I think some people just need a better explanation of the systems as a whole. You'll have an ATS feeding your UPS power, which will generally pull in utility power but will switch to generator when needed. The UPS will filter that power and ride through the periods where utility or generator power is out. Now, this maintenance seems to affect points PAST the UPS system.

    When the power leaves the UPS it doesn't just plug in directly from the UPS system to your rack. A larger UPS unit will likely output at 480v 3 phase; your servers won't run at 480v 3 phase, and the UPS isn't made to have hundreds of feeds pulled off it directly anyway. This power from the UPS is distributed to separate PDUs, which act as step-down transformers, separate out the phases so that you can have 120v, 208v, or 208v 3 phase power, and distribute the power to the cabinets (these steps may be done by a single unit or split among several units). This step is required to get you power at levels you can actually use and to distribute the power to hundreds of individual circuits.

    Now, if you have a single power circuit, you're being fed from a single one of these PDUs and a single breaker. If something happens to that PDU/transformer/distribution panel or breaker, or maintenance is required on those parts of the system, you'll have an outage. That is not an issue with the way it was engineered or designed; that is simply how it works and is the service you ordered, if you have N+1 (or similar) redundancy.

    Now, if that is not acceptable to you, then get a facility with 2N or better redundancy; there you'll have an A feed and a B feed, fed off of separate UPS systems, separate PDUs, separate breakers, etc. The sides should be concurrently maintainable, so maintenance on one side will not affect the other. While your A side is down or being worked on, the B side should still be up. For most, avoiding the 30 minutes of maintenance incurred every 3-10 years on an N+1 system isn't worth the added cost, but others are willing to pay for that sort of configuration. If 2N isn't possible for you and you demand that N+1 have that reliability, just tell me how it would even be possible to be connected to one breaker, one distribution panel, one transformer, and expect zero downtime with maintenance on any of those things. Are you expecting gear that requires no maintenance EVER? What exactly do you propose as the solution?
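
    If it helps to see that difference as numbers, here is a toy availability model (a sketch with invented per-component availability figures, not measured data), treating a single feed as a chain of series components and an A+B setup as two independent paths that only fail together:

    ```python
    from functools import reduce

    def series(*avail: float) -> float:
        """Availability of components in series: all of them must be up."""
        return reduce(lambda a, b: a * b, avail)

    def parallel(a: float, b: float) -> float:
        """Availability of two independent paths: only fails if both fail."""
        return 1 - (1 - a) * (1 - b)

    # Hypothetical per-component availabilities: utility/ATS, UPS, PDU, breaker, panel
    single_path = series(0.9995, 0.9999, 0.9995, 0.9995, 0.9999)
    dual_path = parallel(single_path, single_path)

    print(f"single feed: {single_path:.5f} -> ~{(1 - single_path) * 8760:.0f} h downtime/yr")
    print(f"A+B (2N)   : {dual_path:.7f} -> ~{(1 - dual_path) * 8760 * 60:.0f} min downtime/yr")
    ```

    The parallel figure only holds if the failures really are independent and your gear is actually dual-corded (or behind a rack ATS), which is exactly the point made earlier in the thread.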

    Then, there also seems to be a misunderstanding about what an SLA is. An SLA is not a guarantee of uptime; it is a set of terms that define what happens if there is an outage. You cannot just look at it as a 100% SLA and assume you'll have 100% uptime; you need to read the specific terms of the SLA, and everyone does it differently. If they gave you a 100% SLA, covered all outages, but only gave you a 1% credit for every day they had an outage, what good is that for you if they have a 12 hour outage and you get 1% back? If they allow 2 hours a month of maintenance in their SLA, then you have to plan for 2 hours of maintenance in one month as being a possibility (in this case they said a 30 minute outage, not sure where a 2 hour outage is coming in). There is nothing misleading or dishonest; you just need to read and understand the terms. Again, an SLA is not an uptime guarantee, it simply outlines the terms for compensation when an outage does happen.
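
    As a concrete (and entirely hypothetical) illustration of why the credit terms matter more than the headline number, assume an SLA that credits 1% of the monthly fee per calendar day containing an outage:

    ```python
    def sla_credit(monthly_fee: float, outage_days: int,
                   pct_per_day: float = 1.0, cap_pct: float = 100.0) -> float:
        """Credit under a hypothetical '1% of monthly fee per outage day' SLA."""
        return monthly_fee * min(outage_days * pct_per_day, cap_pct) / 100

    # A 12-hour outage still falls within a single calendar day...
    print(sla_credit(monthly_fee=1000, outage_days=1))  # 10.0 -> $10 back on a $1,000/mo cabinet
    ```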

    LoginTech, how did you possibly replace a transformer entirely with NO downtime unless it was an A+B type configuration? To replace a transformer you'll need to de-energize it, and anything downstream from that transformer would then be powered down - unless you're using in-rack UPS systems instead of DC-wide UPSes, and even then I think it would still be unsafe.

    And this talk about auditing and standards, why? Did anyone in this case lie about their configuration? What exactly was incorrect or misleading? It should be relatively simple to see for yourself what the power redundancy is and where the single points of failure are. When you get a circuit installed, ask your provider to show you and explain the full path of the power. Something like that should help explain a lot of what all the confusion in this thread is about.
    Last edited by KarlZimmer; 03-03-2011 at 03:24 AM.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  30. #30
    Join Date
    Aug 2007
    Posts
    351
    @KarlZimmer

    Thanks for being a fountain of information. Great post and one that goes in my long term notes file.

    From your experience, how common are facilities today that can offer 2N or better redundancy, as you put it? (I always thought any real data center should be able to offer 2N+, but I haven't gone shopping for it lately. We just keep hot systems on standby at other facilities.)

  31. #31
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by pubcrawler View Post
    @KarlZimmer

    Thanks for being a fountain of information. Great post and one that goes in my long term notes file.

    From your experience, how common are facilities today that can offer 2N or better redundancy, as you put it? (I always thought any real data center should be able to offer 2N+, but I haven't gone shopping for it lately. We just keep hot systems on standby at other facilities.)
    Depends on what you're looking for. If you're in a facility that truly has highly critical environments most will be 2N or better, though those sorts of facilities often aren't used for standard web hosting, or similar. Equinix facilities, most facilities specifically targeting telcos (though sometimes only DC plants are 2N+), facilities for major corporations/financial firms, etc. To take full advantage of 2N, you're going to also need to have the A+B feeds in your cabinet, separate PDUs, and then redundant power supplies, all that has added costs as well. Now, I'd imagine that most of the facilities on WHT are N+1, or effectively N+1 facilities in the way they're typically used.

    As an example, for our 350 E Cermak facility we have a 2N build and all of our financial, medical, and major corporate customers use A+B feeds and use all dual corded gear. We noticed that most of our customers who were service providers, web hosting, general dedicated servers, small business, etc. were simply using single corded equipment and not taking advantage of A+B. When we built our new facility at 725 S Wells we used that knowledge and built the facility specifically for those types of users, simply looking for a lower cost and not needing the A+B feeds they're not going to use anyway. This let us build the facility at N+1 (still offering 2N out of 350 if needed) and pass those savings on infrastructure costs on to customers.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  32. #32
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,706
    There are not many; even CoreSite themselves have told me the demand for higher redundancy has diminished. Many corporations are now happy with N or N+1 configurations.

    Supply is a function of demand; if the demand for N+2 were there, there would be many more facilities offering it. So you can reason that the lack of N+2 facilities reflects the lack of demand for them.

  33. #33
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by CGotzmann View Post
    There are not many; even CoreSite themselves have told me the demand for higher redundancy has diminished. Many corporations are now happy with N or N+1 configurations.

    Supply is a function of demand; if the demand for N+2 were there, there would be many more facilities offering it. So you can reason that the lack of N+2 facilities reflects the lack of demand for them.
    I think it is more that the companies demanding 2N (or better) are building their own facilities or going to the wholesale model, and companies like CoreSite don't fit into either of those models. I would agree, though, that in the retail colo market N+1 has pretty much become the standard except for MAJOR hubs/interconnection points, mainly due to demands from the telcos.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  34. #34
    Join Date
    Aug 2002
    Location
    Seattle
    Posts
    5,512
    Quote Originally Posted by CGotzmann View Post
    There are not many; even CoreSite themselves have told me the demand for higher redundancy has diminished. Many corporations are now happy with N or N+1 configurations.

    Supply is a function of demand; if the demand for N+2 were there, there would be many more facilities offering it. So you can reason that the lack of N+2 facilities reflects the lack of demand for them.
    We'll do this with routers and cloud nodes; otherwise single feeds are OK.

