  1. #1
    Join Date
    Sep 2006
    Location
    US
    Posts
    14

    PC towers vs. servers

    I have been looking at some pictures of datacenters. Why do some web hosting companies have what look like PC towers sitting on shelves, while others have servers mounted in racks?

  2. #2
    Well, when you're referring to 'PC towers', some companies do use standard PC hardware... but there are also servers that look similar to a PC tower - if you look on Dell's site you'll see them.

    Personally, I'd never go with a company who didn't openly say 'server grade hardware' was being used.

  3. #3
    Join Date
    Jan 2003
    Location
    Texas, where else?
    Posts
    1,571


    Most often, what you think are PCs are "tower" servers as described above. They may be in the co-lo section of a datacenter, OR the datacenter isn't equipped to cool high-density hardware (rack or blade servers), OR the host needs very high-spec machines for some reason (say, 10 hard drives) that just won't fit in a rack unit and doesn't want to cluster several smaller servers (for example, big storage servers used for backups).

    All of these, or any combination, can come into play, but most datacenters will have "racks" for most of their servers. Generally speaking, the "PC"-looking tower servers are used in offices and such for workgroups.
    Datacenters normally use racks but there are uses for both.
    Again, generally speaking, "towers" are not efficient for datacenters because you can put more rack servers in the same space. However, if you need "monsters" with lots of hard drives, multiple processors, tons of RAM, etc. in one box, they generate a LOT of heat, so stacking them in racks must be done with great attention to cooling.
    In the same way, you now see "blade" servers (like rack servers turned on their side), which means more servers per sq. ft. While that can be a good deal, it also creates more heat per sq. ft., more need for backup power per sq. ft., and so on.
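    To put rough numbers on that density-versus-heat tradeoff, here is a minimal back-of-envelope sketch in Python. Every figure in it (footprint, servers per footprint, watts per machine) is an assumption chosen for illustration, not a measurement from any particular datacenter:

    ```python
    # Back-of-envelope density/heat comparison (all numbers are illustrative assumptions).
    FOOTPRINT_SQFT = 10        # assumed floor space for one rack or shelf unit, incl. aisle share
    SERVERS_PER_RACK = 42      # a full rack of 1U servers
    TOWERS_PER_SHELF = 8       # assumed number of towers fitting in the same footprint
    WATTS_PER_SERVER = 300     # assumed average draw per machine

    def heat_per_sqft(servers: int, watts: float, sqft: float) -> float:
        """Watts of heat the cooling system must remove per square foot."""
        return servers * watts / sqft

    print(f"Rack of 1U servers: {heat_per_sqft(SERVERS_PER_RACK, WATTS_PER_SERVER, FOOTPRINT_SQFT):.0f} W/sq ft")
    print(f"Towers on shelves:  {heat_per_sqft(TOWERS_PER_SHELF, WATTS_PER_SERVER, FOOTPRINT_SQFT):.0f} W/sq ft")
    ```

    Under those assumptions the rack packs roughly five times as much heat into the same floor space, which is exactly why density buys efficiency but costs cooling.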
    It's not a "good or bad" thing, as long as both are quality servers with good wiring & connections, battery backups etc.
    But most buildings constructed from the ground up to be datacenters will have mostly racks for efficiency, except possibly a co-lo section if they have shelf space to take customers' towers.
    Last edited by DDT; 09-09-2006 at 07:16 AM.
    New Idea Hosting NO Overselling-Business-Grade, Shared Only! New-In House Design Team.
    High Speed & Uptime, DIY Pro-Site Builder, Daily Backups, Custom Plans, All Dual Xeon Quad Intel servers w/ ECC DDR3 RAM, SCSI RAID minimums.
    We Concentrate on Shared Hosting ...doing one thing and doing it VERY well

  4. #4
    Join Date
    Jun 2001
    Location
    Gotham City
    Posts
    1,849
    Towers are economical compared to rackmount servers, almost half the cost sometimes. They're not an effective solution if the company is actually colocating, unless they contain some very high-spec hardware like DDT has mentioned. Generally, rackmount servers are a worthwhile expense for profitability in the long run.

    Putting high-spec hardware in a 'tower' does not make it a PC; towers do have benefits such as better cooling and the capacity to hold more hard drives.
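    To make the "cheaper up front vs. better in the long run" point concrete, here is a minimal sketch of the tradeoff in Python. All of the prices and space figures are made-up assumptions purely for illustration:

    ```python
    # Rough cost comparison of a tower vs. a 1U rackmount server over time.
    # Every number here is an assumption chosen only to illustrate the tradeoff.
    TOWER_HW_COST = 800.0         # assumed: roughly half the rackmount price
    RACKMOUNT_HW_COST = 1600.0
    TOWER_SPACE_U = 4             # assumed shelf space a tower occupies, in rack-unit equivalents
    RACKMOUNT_SPACE_U = 1
    COST_PER_U_PER_MONTH = 20.0   # assumed cost of datacenter space per U per month

    def total_cost(hardware: float, space_u: int, months: int) -> float:
        """Hardware price plus the space it occupies for the given number of months."""
        return hardware + space_u * COST_PER_U_PER_MONTH * months

    for months in (12, 24, 36):
        tower = total_cost(TOWER_HW_COST, TOWER_SPACE_U, months)
        rack = total_cost(RACKMOUNT_HW_COST, RACKMOUNT_SPACE_U, months)
        print(f"{months} months: tower ${tower:,.0f} vs rackmount ${rack:,.0f}")
    ```

    With those assumed numbers the tower is cheaper for the first year, but the rackmount pulls ahead before the second year is out, which is the "long run" argument in a nutshell.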

    Personally I think racks look cooler :-D
    [[ Reyox Communications / USA based cloud servers & support / 9 years of hosting websites ]]
    [[ Affordable ASP.NET4, ColdFusion, PHP & MS-SQL, MySQL, cPanel/WHM & Windows Reseller Hosting + Virtual Private Servers ]]
    (www.reyox.com) - Mention WHT and get a discount on your first month!

  5. #5
    Join Date
    Jan 2003
    Location
    Texas, where else?
    Posts
    1,571


    I'll only take exception to the statement "towers have better cooling". That would be true in a stand-alone situation. However, heat can be a datacenter's worst enemy and a huge expense.
    The problem with towers in datacenters is the inability to seal & direct airflow optimally.
    Usually they overcome this by just keeping the room very cold. However by their free-standing nature towers draw in and exhaust air according to their individual design.
    This is very inefficient, requiring a large room to be kept exceptionally cool.
    Nowadays racks and datacenters are designed to provide hot and cold zones. Since all rack mounts tend to take air in the front and out the back in a very similar fashion, the front side of the racks can be made a cold zone where the cold air enters the room. In a well-designed datacenter, racks are arranged in "squares" where the backs of two rows face each other, creating a "hot" area where all the exhaust comes out. Ducts located in those areas trap the hot air and carry it directly back to the cooling system, keeping the cool-hot "circle" going and, if designed properly, keeping the cool air continually flowing through the servers rather than just going all over the room, which creates a hotter overall environment and uneven cooling for some servers.
    With energy costs so high, it becomes less expensive to engineer around this than to use the "old days" method of just keeping the whole room at ~68 degrees and hoping everything stayed cool, usually bringing cool air in at the top and out through the raised floor.
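    As a rough illustration of why contained airflow matters, here is a minimal Python sketch using the common HVAC rule of thumb CFM ≈ (watts × 3.412) / (1.08 × ΔT°F). The rack load and the temperature-rise figures are assumptions for illustration only:

    ```python
    # Rough airflow needed to remove a given heat load, using the common HVAC
    # rule of thumb: CFM = BTU/hr / (1.08 * delta_T_F), with 1 W = 3.412 BTU/hr.
    def required_cfm(watts: float, delta_t_f: float) -> float:
        """Cubic feet per minute of supply air needed to carry away `watts` of heat
        when the air warms by `delta_t_f` degrees Fahrenheit passing through the gear."""
        return (watts * 3.412) / (1.08 * delta_t_f)

    rack_load_watts = 42 * 300   # assumed: a full rack of 1U servers at ~300 W each

    # Contained hot/cold aisles let the exhaust run hot (large delta-T);
    # towers mixing exhaust back into a cold room leave a much smaller usable delta-T.
    print(f"{required_cfm(rack_load_watts, 35):.0f} CFM at a 35°F rise (contained aisles)")
    print(f"{required_cfm(rack_load_watts, 15):.0f} CFM at a 15°F rise (mixed room air)")
    ```

    The smaller the usable temperature difference, the more air you have to move and chill for the same heat load, which is the inefficiency described above when towers exhaust wherever their individual designs happen to point.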
    Some datacenters are even going to totally sealed rack units with built in cooling systems that supplement the room cooling with additional "on-demand" cooling directing it by sensors to the areas that are hottest at the moment then exhausting it from the enclosed "hot zone" in the back.
    It's an ever-evolving thing for datacenters as new technology comes along and since other needed items like battery backups etc. all generate some heat also.
    It's also an evolving cost-benefit situation, since some of these new sealed rack cooling systems can cost close to $200K for a single "bank" of racks with the cooling (before a single server is installed), and how much the cooling costs versus how much the infrastructure expense saves is an ongoing equation.
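    That ongoing equation can be sketched as a simple payback calculation. The $200K figure comes from the post above; the cooling loads and electricity price below are assumptions purely for illustration:

    ```python
    # Simple payback estimate for a sealed, self-cooled rack bank.
    # The $200K capital figure is quoted above; every other number is an assumption.
    CAPITAL_COST = 200_000.0       # sealed rack bank with built-in cooling
    ROOM_COOLING_KW = 60.0         # assumed power draw of whole-room cooling for that load
    SEALED_COOLING_KW = 40.0       # assumed draw with targeted, on-demand cooling
    PRICE_PER_KWH = 0.10           # assumed electricity price

    HOURS_PER_YEAR = 24 * 365
    annual_savings = (ROOM_COOLING_KW - SEALED_COOLING_KW) * HOURS_PER_YEAR * PRICE_PER_KWH
    payback_years = CAPITAL_COST / annual_savings

    print(f"Annual cooling savings: ${annual_savings:,.0f}")   # about $17,500/year
    print(f"Simple payback: {payback_years:.1f} years")        # roughly 11 years
    ```

    With those assumed numbers the payback runs over a decade, which is why it stays an ongoing equation: higher rack densities or pricier electricity swing the answer quickly.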

    I'll also add that there have been many articles on why clustering, or even just using multiple lower-spec servers, is preferable to having one "monster" server for most web applications. It basically comes down to not putting all your eggs in one basket: 1,000 clients on 4 servers means that if a server fails, 250 people are mad; if one mega-server with 1,000 accounts fails, everybody is mad.
    Not always true, but it makes sense for most hosts. (And those are just random numbers, not meant to indicate what any one server can or cannot handle in terms of number of accounts.)
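    DDT's numbers generalize in an obvious way; here is a tiny sketch of the same arithmetic, using the 1,000-account figure from the post:

    ```python
    # "Eggs in one basket": accounts affected by a single server failure when the
    # same 1,000 accounts (the figure used above) are spread across N servers.
    TOTAL_ACCOUNTS = 1000

    for servers in (1, 2, 4, 10):
        affected = TOTAL_ACCOUNTS // servers   # assumes accounts are spread evenly
        print(f"{servers:>2} server(s): one failure upsets {affected} accounts")
    ```

    The trade, of course, is that more machines mean more things that can fail; spreading accounts out shrinks the blast radius of any one failure rather than the chance of a failure happening somewhere.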
    Last edited by DDT; 09-09-2006 at 10:52 AM.

  6. #6
    I don't recommend coming to any conclusions by looking at pictures of servers in datacenters. You should ask the host/datacenter about their server specifications and never make judgments by only looking at pictures.
    [DC'S] SINGAPORE. AUSTRALIA. JAPAN. INDIA. CHINA HK. USA. UK. NETHERLANDS. GERMANY. SOUTH AFRICA
    MyCompanyWeb™: Start Your Own Professional Web Hosting Company in 24 Hours
    SkyNetHosting.Net - SEO Hosting. Reseller Hosting. Shared Hosting & VPS - 14 Years in Business!
    Dedicated IP + SSL + WHMCS + Domain Reseller + Master Reseller + 100% SSD + End User Support and More!
