  1. #1
    Join Date
    Nov 2003
    Location
    Toronto, Ontario
    Posts
    641

    Server Current Draw

    I picked up some APC remote-reboot units with 24 ports each, and I have 24 servers on each one. The unit is 30A, so it can handle a sustained 24A. With 24 single-CPU Celeron/P4 servers, each with a single drive, I see a load of 18-20 amps that goes up and down depending on activity. Am I cutting it too close? This seems like the most economical and cleanest way to do it: I get 24 servers per rack, one APC per rack, and one 30A circuit per rack. However, if every server pulled a lot at once it might overload the circuit, but the chances of that are low, right? I did some tests with all the servers working really hard, and I can get to 24A and bounce above it.

    I'm not worried about all the machines rebooting at the same time; I can make sure that doesn't happen. But like anything else, if every server pushed 100 Mbit at once it would cause just as much of a problem as if they all pulled 2A of electricity at the same time. The chances seem low enough.
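
    For reference, a quick Python sketch of the headroom math (the flat 1A and 2A per-server cases are hypothetical, not measured):

        # One rack: 30A breaker, 80% sustained rule, 24 servers.
        LIMIT = 30 * 0.8          # 24A sustained
        SERVERS = 24

        for label, amps_each in [("observed avg", 19.0 / 24),
                                 ("all at ~1A", 1.0),
                                 ("all at 2A", 2.0)]:
            total = SERVERS * amps_each
            print("%-12s %5.1fA of %.0fA -> %s" %
                  (label, total, LIMIT, "ok" if total <= LIMIT else "OVER"))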


    What range of current draw do you guys see on a single machine, a single P4/Celeron with 1 or 2 drives?
    Kevin

  2. #2
    Join Date
    Dec 2001
    Location
    Toronto, Ontario, Canada
    Posts
    5,954
    You should be fairly safe given that load. Generally we find that typical P4s run around 0.6A per server at idle, and between 0.75A and 1.00A per server at full tilt. Most dual-processor configurations (dual Xeons, that is) run 1.5 - 2.0A. Inrush current is generally around 2x the normal load, so you should have more than enough headroom given those numbers.
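
    To put the ~2x inrush figure in context, a rough Python sketch (assuming 24 servers at ~1.0A full tilt on the 24A-sustained circuit from the original post):

        SERVERS = 24
        LIMIT = 24.0                     # 80% of the 30A breaker

        full_tilt = SERVERS * 1.0        # 24.0A -- right at the limit
        cold_boot = SERVERS * 2.0        # 48.0A -- simultaneous inrush

        print("Full tilt: %.0fA (limit %.0fA)" % (full_tilt, LIMIT))
        print("Cold boot: %.0fA -- stagger the power-up" % cold_boot)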

  3. #3
    Join Date
    Aug 2002
    Location
    Atlanta, GA
    Posts
    1,114
    For planning purposes we assume 1 amp per processor. For a device like an APC we plan on the average load being no more than 75% of the device rating.

    You always have to keep growth in mind when building your network. By using the above method, we've been able to move up from Celerons to P4s to Xeons without having to redo any electrical work as we grow.
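
    As a rough Python sketch, that rule of thumb works out like this (the function and values are illustrative, not a real tool):

        # Planning rule above: 1 amp per processor, and keep the average
        # load under 75% of the device's rating (assumed values).
        def max_processors(device_rating_amps, amps_per_cpu=1.0, derate=0.75):
            # How many processors the rule allows on one device.
            return int(device_rating_amps * derate / amps_per_cpu)

        print(max_processors(30))   # 22 single-CPU boxes on a 30A unit
        print(max_processors(20))   # 15 on a 20A unit
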
    SiteSouth
    Atlanta, GA and Las Vegas, NV. Colocation

  4. #4
    Join Date
    Jul 2000
    Location
    Colorado Springs, CO
    Posts
    2,280
    Originally posted by porcupine
    You should be fairly safe given that load. Generally we find that typical P4s run around 0.6A per server at idle, and between 0.75A and 1.00A per server at full tilt. Most dual-processor configurations (dual Xeons, that is) run 1.5 - 2.0A. Inrush current is generally around 2x the normal load, so you should have more than enough headroom given those numbers.
    Which is a big part of colo that most people overlook. You won't get more than 8-10 dual Xeons in a 20-amp cabinet, so it doesn't matter that you're paying for 42U. Just something for people to keep in mind in colo situations: power is a big factor.
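
    That range checks out against the per-server figures quoted above; a quick Python check (assuming 80% of the breaker is usable sustained):

        # 20A cabinet, 80% sustained rule, 1.5-2.0A per dual-Xeon box.
        usable = 20 * 0.8                                  # 16A sustained
        print(int(usable / 2.0), "servers at 2.0A each")   # 8
        print(int(usable / 1.5), "servers at 1.5A each")   # 10
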
    Greg Landis | Founder Jaguarpc - Keeping websites happy since 1998
    Managed IT Solutions - Business hosting | Virtual Private Servers | Cloud VPS Hosting | Dedicated servers | Backup service
    Follow us @ Facebook.com/Jaguarpc | Twitter: @JaguarPC | (888)-338-5261 | sales @ jaguarpc.com

  5. #5
    Join Date
    Dec 2001
    Location
    Toronto, Ontario, Canada
    Posts
    5,954
    Originally posted by Jag
    Which is a big part of colo that most people overlook. You won't get more than 8-10 dual Xeons in a 20-amp cabinet, so it doesn't matter that you're paying for 42U. Just something for people to keep in mind in colo situations: power is a big factor.
    No argument here. As far as I'm concerned, that's why blades are useless at present. When you have an entire room's worth of equipment (say 1000 sq ft) centered around wiring, powering, and cooling an area of roughly 200 sq ft, you quickly realize that something is just not right. Most blade configurations can require as much as 200A per rack at full tilt (~2 blades per U, ~90 blades per rack, dual-processor blades, etc.), yet I do not know of a *single* data center that will deploy power in that manner.
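
    Back-of-envelope in Python, the rack figure is plausible (the 1.2x overhead multiplier for chassis fans and power supplies is a guess, not a vendor number):

        # ~2 blades per U in a 42U rack, ~2A per dual-processor blade
        # at full tilt, plus an assumed 1.2x chassis overhead factor.
        blades = 2 * 42                  # ~84 blades per rack
        amps_per_blade = 2.0
        overhead = 1.2                   # guessed fans/PSU/switching overhead

        print("%.0fA per rack" % (blades * amps_per_blade * overhead))  # ~202A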

  6. #6
    Join Date
    Jul 2000
    Location
    Colorado Springs, CO
    Posts
    2,280
    Originally posted by porcupine
    No argument here. As far as I'm concerned, that's why blades are useless at present. When you have an entire room's worth of equipment (say 1000 sq ft) centered around wiring, powering, and cooling an area of roughly 200 sq ft, you quickly realize that something is just not right. Most blade configurations can require as much as 200A per rack at full tilt (~2 blades per U, ~90 blades per rack, dual-processor blades, etc.), yet I do not know of a *single* data center that will deploy power in that manner.
    Neither do I! Can you imagine what a nightmare that would be for... well, everyone. Glad you brought up blades. They're a very sweet idea on paper, but with present technology and their large power requirements they're just completely impractical.
    Greg Landis | Founder Jaguarpc - Keeping websites happy since 1998
    Managed IT Solutions - Business hosting | Virtual Private Servers | Cloud VPS Hosting | Dedicated servers | Backup service
    Follow us @ Facebook.com/Jaguarpc | Twitter: @JaguarPC | (888)-338-5261 | sales @ jaguarpc.com
