  1. #126

    gnax setup?

    Quote Originally Posted by WireSix View Post
    Enterprise and 'midrange' are night and day. Using a couple of cheap iSCSI and FC drive chassis with 16 SATA disks isn't anything on the scale of, say, deploying a couple of EMC/Hitachi/HP cabinets that need up to 4A @ 120V per storage shelf, plus controllers, FC switches, etc. Add on top of that blade chassis using 20-40A @ 208V each.

    Stacking 32 1U servers engineered for power efficiency into a rack to consume a dead-on 80% load on the circuits is much more efficient and less dense in terms of power consumption.
    Why 32 servers - copied the GNAX setup?

    Regarding the post - I think I answered it in my previous one. "Enterprise" doesn't just mean huge. You may remember the Sun 420 servers in our DC (we paid to throw them out ;( ) - we'd been serving off them, if you remember - and their power requirements were nowhere near those of 1U dual quad-core Xeons or blade servers.
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized - AS36820 network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  2. #127
    Join Date: Feb 2004 | Location: Atlanta, GA | Posts: 5,662
    Quote Originally Posted by tulix View Post
    Why 32 servers - copied the GNAX setup?

    Regarding the post - I think I answered it in my previous one. "Enterprise" doesn't just mean huge. You may remember the Sun 420 servers in our DC (we paid to throw them out ;( ) - we'd been serving off them, if you remember - and their power requirements were nowhere near those of 1U dual quad-core Xeons or blade servers.
    No, in fact it is nothing like GNAX's setup. I'm sure they would be happy to comment on their infrastructure if they so choose.

    32 servers of a specific configuration arranged on a specific number of APC strips with public and private switching, and physical KVM access for facility based management presents the optimal rack layout for power density, airflow, and cable management for how WE do dedicated servers.

    Is this the right method for you? Probably not. Is it the best method for everyone? No, absolutely not. However, it is the best way for us to manage our dedicated deployments.
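
    A quick way to sanity-check the "80% load on the circuits" arithmetic is sketched below. This is only an illustration: the 208 V / 30 A circuit rating and the ~310 W per-server draw are assumed for the example, not figures quoted anywhere in this thread.

        # Rough sketch of sizing a rack against an 80% continuous-load rule.
        # The circuit rating and per-server draw are illustrative assumptions only.

        def servers_per_circuit(volts, amps, watts_per_server, derate=0.80):
            """How many servers fit on one circuit at the given load derate."""
            usable_watts = volts * amps * derate  # e.g. 208 * 30 * 0.8 = 4992 W
            return int(usable_watts // watts_per_server)

        if __name__ == "__main__":
            # Hypothetical 1U dual quad-core Xeon drawing ~310 W under load,
            # fed from a pair of 208 V / 30 A circuits (one per power strip).
            per_circuit = servers_per_circuit(volts=208, amps=30, watts_per_server=310)
            print(f"Servers per 208V/30A circuit at 80% load: {per_circuit}")
            print(f"Servers across a pair of circuits: {2 * per_circuit}")

    With those assumed numbers a pair of circuits works out to 32 servers per rack; with different hardware or circuits the count obviously shifts.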

  3. #128

    The reason why I've mentioned gnax

    Quote Originally Posted by WireSix View Post
    No, in fact it is nothing like GNAX's setup. I'm sure they would be happy to comment on their infrastructure if they so choose.

    32 servers of a specific configuration arranged on a specific number of APC strips with public and private switching, and physical KVM access for facility based management presents the optimal rack layout for power density, airflow, and cable management for how WE do dedicated servers.

    Is this the right method for you? Probably not. Is it the best method for everyone? No, absolutely not. However, it is the best way for us to manage our dedicated deployments.
    The reason I've mentioned GNAX is that you didn't have this setup before GNAX, and now you do, and I know that they have this setup too. So that's why it looks so GNAX-ish to me.

    And yes, 32 servers could be a good option; it just wastes a big number of ports on two 29xx switches, or on one 48-port 29xx. I am sure you have that "optimal" setup (GNAX is/was using the same setup, which is how I know you have it too).

    It is OK, I am just picking on you
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized - AS36820 network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  4. #129
    Hey, I know WHT is not Facebook, but I have some business to do - I'll see you all later on. I have to take a break; I just got a call from a potential large customer who is interested in our raised floor environment (just kidding about the floor, the rest is true).
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized - AS36820 network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  5. #130
    Join Date: Feb 2004 | Location: Atlanta, GA | Posts: 5,662
    Quote Originally Posted by tulix View Post
    The reason I've mentioned GNAX is that you didn't have this setup before GNAX, and now you do, and I know that they have this setup too. So that's why it looks so GNAX-ish to me.

    And yes, 32 servers could be a good option; it just wastes a big number of ports on two 29xx switches, or on one 48-port 29xx. I am sure you have that "optimal" setup (GNAX is/was using the same setup, which is how I know you have it too).

    It is OK, I am just picking on you
    I'm afraid you're very misinformed about what GNAX has / does. I'm sure if you asked them what they deploy and how they deploy it, they would be happy to tell you.

    We've been deploying the same basic design for the last 3 1/2 years. Over time it evolves as CPU platforms and drive technologies evolve, but it has remained more or less the same basic configuration. That has worked well enough to take us from being a reseller of dedicated servers, to a small deployment of 14 original desktop-type servers when we first started, to hundreds of dedicated servers, core switching and routing infrastructure, storage arrays, etc., and now VMware's vSphere cloud solutions.

    As our business environment and customer demands evolve, we move with them, all the while maintaining a stable, profitable, and expanding business with a customer base we're fanatical about supporting. (Oops, hope Rackspace doesn't come after me for that one!)

    Thank you very much for your input. As always, we wish everyone in our market the best of luck, since healthy competition helps our overall industry grow and innovate.

  6. #131
    Join Date: Jul 2003 | Location: Atlanta | Posts: 337
    Quote Originally Posted by tulix View Post
    Actually, since you've asked - you are welcome to come see our DC and the space allocated for additional ACs for when we need them, including the chillers and the units themselves.

    You'll also be able to see our other DC, where we put in a completely new set of ACs - we've already built 5 DCs, so we're used to it by now.

    Regarding the generator - we have several options (right now we have 3). The one that feeds THAT DC will not be enough to support 1,000 amps, but the 2 other ones have extra capacity (Ryan didn't know that or forgot to tell you). We can just reroute power, and since we are using independent UPSes (N+1), that would be a piece of cake. Sorry for so many details, but if you need more info, maybe a private exchange would be the preferable way to communicate.

    But you guys are good - you make me feel almost sorry that I have a raised-floor environment (it looks like nobody else has one).

    Does anybody have a raised-floor environment at all?
    Well, it sounds like you have your plan for that space and that's great. Hope it all works out. Don't be sorry for having a raised floor; as I said before, it sells, and if it works for your model then who cares.
    Gary Simat
    Total Server Solutions - Bare Metal - Private Cloud - Managed Infrastructure - Colocation - US Based Support
    Atlanta - Dallas - Phoenix - Los Angeles - Seattle - Chicago - Weehawken - New York City - Vancouver - Toronto - London - Amsterdam - Tokyo - Sydney

  7. #132

    Hey, forgot to mention

    Quote Originally Posted by garysimat View Post
    Well, it sounds like you have your plan for that space and that's great. Hope it all works out. Don't be sorry for having a raised floor; as I said before, it sells, and if it works for your model then who cares.
    Actually, two of our other DCs have raised floors, but the ACs are not blowing underneath the floor - by your logic, the best of both worlds - bingo!

    Those are the DCs that SweetT was referring to.

    Looks like we're cool - we don't have to remove the raised floor from the third DC.

    Honestly, let's forget about raised floors - I'm not even going to respond to raised-floor-related issues in this thread.

    Gary, when are you going to show up in Atlanta? It would probably be good to see you!
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized - AS36820 network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  8. #133
    Join Date: Jul 2003 | Location: Atlanta | Posts: 337
    Quote Originally Posted by tulix View Post
    Gary, when are you going to show up in Atlanta? It would probably be good to see you!
    Whenever Tony sets up the event.
    Gary Simat
    Total Server Solutions - Bare Metal - Private Cloud - Managed Infrastructure - Colocation - US Based Support
    Atlanta - Dallas - Phoenix - Los Angeles - Seattle - Chicago - Weehawken - New York City - Vancouver - Toronto - London - Amsterdam - Tokyo - Sydney

  9. #134
    Join Date: Jun 2005 | Posts: 3,455
    Ok so will nobody help me build a raised floor at home now?

    In Second Life, IBM has a virtual datacenter that emulates their real-life datacenter, and they show, live in 3D, what they are doing now to cool their datacenters - you can see the airflow in it.

    It looks like a game, but you can actually control real hardware in their real-life DC. They called it the first 3D-controlled datacenter. They built it to show people about airflow and cooling.

    As for this topic, I have read it completely, and my (possibly wrong) impressions are:

    Georgia is poor, so they can't bring more power to town.
    DCs use 32 servers per rack instead of 42 because of... the same?
    Power is an issue if you want to put a full rack of 42 quad cores.
    The DCs there have outages - network, power, etc. So why did anyone choose Atlanta at all?

    I'm sorry, but that is the impression I have. I haven't had a single bit of downtime at The Planet, ServerBeach (Peer1) or a SoftLayer DC in years. Except when The Planet blew up, but that just took one old backup server with it, and it was offline for 2 days.

    Also, if a raised floor is just marketing, who came up with that stupid idea? I can't imagine someone only now discovered that a setup used in hundreds of DCs actually does nothing about the hot air, or isn't worth the money. I mean, hello!! Why did they spend so much on them? I would have just hung the racks up near the roof instead and climbed up there, so the servers would be over my head.
    Last edited by nibb; 05-11-2009 at 02:51 AM.

  10. #135
    Join Date: Jul 2003 | Location: Atlanta | Posts: 337
    Why don't you read the entire thread? Everything you have mentioned is already covered.
    Gary Simat
    Total Server Solutions - Bare Metal - Private Cloud - Managed Infrastructure - Colocation - US Based Support
    Atlanta - Dallas - Phoenix - Los Angeles - Seattle - Chicago - Weehawken - New York City - Vancouver - Toronto - London - Amsterdam - Tokyo - Sydney

  11. #136
    Join Date: Jun 2005 | Posts: 3,455
    Quote Originally Posted by garysimat View Post
    Why don't you read the entire thread? Everything you have mentioned is already covered.
    Yes, I did. Did you read my post? Have you looked at IBM's new cooling methods in Second Life?

    Colo4Dallas - the new space they have built also has a raised floor, and they built it recently, not in the '90s.

    You don't need to get sensitive; I was just making conversation, since this is WebHostingTalk.

    From a logical point of view, maybe raised floors don't make sense. Cool air always sinks, which is why air conditioners are usually installed high up in a room. A raised floor is the opposite: it blows the cool air in from below and lets the heat rise. There are probably other setups for this as well, and millions of ways to do it.

  12. #137
    Join Date: Jul 2003 | Location: Atlanta | Posts: 337
    For HIGH density, a raised floor is not the way to go. Sure, you can argue about C4D, but I don't view them as a really high-density datacenter. And can you not comment on all the providers offering hot/cold aisle containment? What about the SuperNAP, which is way higher density? If you knew the history of raised floors, you would know why they were developed in the first place. There is no reason to call them "bad", because they're not; they serve their purpose in certain applications. But with newer solutions for high density, they just don't make sense. Do some research on it and find out for yourself.
    Gary Simat
    Total Server Solutions - Bare Metal - Private Cloud - Managed Infrastructure - Colocation - US Based Support
    Atlanta - Dallas - Phoenix - Los Angeles - Seattle - Chicago - Weehawken - New York City - Vancouver - Toronto - London - Amsterdam - Tokyo - Sydney

  13. #138
    Join Date: Jun 2005 | Posts: 3,455
    I did not say C4D is high density. Whatever you mean by HIGH density - since the more units you stack per square foot, the more you save in colo and space - a lot of DCs actually try to be as high density as they can: the more power, cooling, and servers you can put per square foot, the better. You probably mean blades when you say density servers, but if I can put 42 quad cores in a rack, I would call that high density as well, as long as the facility can handle the cooling and power.
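
    For anyone who wants to put a number on "high density", here is a minimal sketch in the same spirit. The ~300 W per server and the ~20 sq ft of floor attributed to a rack are assumed values for illustration, not measurements from any facility mentioned in this thread.

        # Back-of-the-envelope power density for a full rack of 1U servers.
        # The per-server draw and the floor footprint per rack are assumptions.

        def watts_per_sq_ft(servers, watts_per_server, rack_footprint_sq_ft):
            """Total rack draw divided by the floor area attributed to the rack."""
            return servers * watts_per_server / rack_footprint_sq_ft

        if __name__ == "__main__":
            # Hypothetical: 42 x 1U quad-core servers at ~300 W each, with ~20 sq ft
            # of floor per rack once its share of the aisles is counted.
            density = watts_per_sq_ft(servers=42, watts_per_server=300, rack_footprint_sq_ft=20)
            print(f"~{density:.0f} W per square foot")  # ~630 W/sq ft with these assumptions

    Whether a given cooling design (raised floor, hot/cold aisle containment, etc.) can actually remove that many watts per square foot is the real question behind the density debate.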

    I looked at the SuperNAP. It looks very good, but the best DC I see coming is the NAP of the Capital Region. If I ever had to choose a DC, it would be that one. I agree with you on one thing: not every DC is right for every application. The NAP of the Capital Region looks like a DC built for the government or secure data. The SuperNAP looks almost the same, but higher density:
    http://www.terremark.com/datacenters/ncrmovie.aspx

    I'm not trying to start a debate about raised floors. I couldn't care less whether a DC has one or not; if they can keep things running and cool, I don't care how they do it, just that they do it.

  14. #139
    WHTers, I think this is a great place to share your experience, knowledge, questions and much more. I think it is a bad place for showing negative emotions, of which I've noticed quite a few in here. I think nibb has every right to express his opinion about the subject that I've said I won't mention in this thread. nibb has great experience in markets that are international (for the US) and can share statistics on different features of data centers in South America. A year ago I met a very nice gentleman from one of the large telcos in South America (I still have the key chain for the keys to one of my cars).

    What I've noticed is that nobody abroad knows about 55/56 Marietta St. - everybody knows only the NY and (I believe) SF Telx. We (Atlantans) might put some thoughts together on how to get into the market that only the NY Telx has for now. I would welcome GNAX and its management to be part of that team if we can come up with a plan on how to get there (to get the word out about ATL telco buildings and NAPs).
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized - AS36820 network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  15. #140
    Join Date: Aug 2003 | Location: Pittsburgh | Posts: 3,490
    This thread has gone far off its rather focused topic, so I'm going to close it.

