  1. #1

    Recommended data center and server configuration?

    Hi,

    We are in the process of moving our website to a colocated solution. We currently push about 50 Mbit/s of data, with 200k uniques/day. We looked at Colo4Dallas and they seem very reliable. What has your experience with them been? Do you have any tips or tricks on what package we should get?

    We plan to purchase 2 servers (web and db). So far we have settled on this model:
    Dell PowerEdge R910
    - R910 Chassis for up to Sixteen 2.5-Inch Hard Drives
    - 4 x Intel Xeon E7520 1.86GHz, 18M cache, 4.80 GT/s QPI, Turbo, HT, 4C, 800MHz Max mem
    - 32GB Memory (8x4GB), 1066MHz, Dual Ranked RDIMMs for 2 Processors

    I would appreciate if you could share your experience.

    Regards,

    Floren
    Last edited by T3CK; 03-06-2011 at 03:37 PM.

  2. #2
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by T3CK View Post
    Hi,

    We are in the process of moving our website to a colocated solution. We currently push about 50 Mbit/s of data, with 200k uniques/day. We looked at Colo4Dallas and they seem very reliable. What has your experience with them been? Do you have any tips or tricks on what package we should get?

    We plan to purchase 2 servers (web and db). So far we have settled on this model:
    Dell PowerEdge R910
    - R910 Chassis for up to Sixteen 2.5-Inch Hard Drives
    - 2 x Intel Xeon E7520 1.86GHz, 18M cache, 4.80 GT/s QPI, Turbo, HT, 4C, 800MHz Max mem
    - 32GB Memory (8x4GB), 1066MHz, Dual Ranked RDIMMs for 2 Processors

    I would appreciate if you could share your experience.

    Regards,

    Floren
    Floren,

    The big question is: what are your actual needs? Do you have I/O requirements that call for a chassis such as the R910 with its 16 x 2.5" bays? Would you not be better served by a larger number of smaller web servers behind a simple load-balancing configuration, for greater availability?

    With only a 2-server configuration, what is your strategy for availability and backups? What happens if one system or the other goes down? Since this would be a colocated environment, is going with a major vendor such as Dell/HP for warranty/parts replacement a business requirement, or can you work with a 'white-box' vendor like Supermicro and keep onsite spares instead?

    Once the hardware is out of the way, you also have to look at the rest of your infrastructure: power PDU(s), switched or not? What network equipment? Do you need public 10/100/1000? Firewalls, IPS/IDS? What about systems management and management of your network equipment; would you need a provider that offers managed colocation services?

  4. #4
    Quote Originally Posted by RyanD View Post
    Floren,

    The big question is: what are your actual needs? Do you have I/O requirements that call for a chassis such as the R910 with its 16 x 2.5" bays? Would you not be better served by a larger number of smaller web servers behind a simple load-balancing configuration, for greater availability?

    With only a 2-server configuration, what is your strategy for availability and backups? What happens if one system or the other goes down? Since this would be a colocated environment, is going with a major vendor such as Dell/HP for warranty/parts replacement a business requirement, or can you work with a 'white-box' vendor like Supermicro and keep onsite spares instead?

    Once the hardware is out of the way, you also have to look at the rest of your infrastructure: power PDU(s), switched or not? What network equipment? Do you need public 10/100/1000? Firewalls, IPS/IDS? What about systems management and management of your network equipment; would you need a provider that offers managed colocation services?
    Hi Ryan,

    IMO, smaller (but more powerful) setups let you maintain the servers much more easily, at the expense of rare (but possible) hardware failures. We are not at a stage where we should look at solutions powered by Cassandra, etc. Personally, I think Dell offers a good warranty on replacement parts, even if Supermicro makes excellent products. It is something we will go over with the colo center, to see what they offer as service, before we make any hardware purchase.

    Right now, we have 3 web + 2 database servers. The idea is that by using 2 beefy machines, we eliminate replication (even if MySQL 5.5.8 is very elegant in that area) and avoid any complex load-balancing solutions (for now).

    Recently, I set up a large forum with over 7,000 users online on a single machine like the web server listed above, powered by nginx, php-fpm and MySQL. The server load averages between 1 and 3. So 2 machines will be more than sufficient, considering that the new setup is smaller.

    I understand your concerns about availability. Nginx allows you to implement a very efficient load-balancing system. But then again, what happens if the MySQL server fails? We would be going in a totally different direction and end up purchasing the same setup we currently have.
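
    For what it's worth, the kind of nginx load balancing mentioned here can be sketched in a few lines; this is a minimal illustration, and the upstream hostnames are placeholders, not anything from this thread:

```nginx
# Minimal reverse-proxy / load-balancer sketch.
# web1/web2 are hypothetical backend hostnames.
upstream backend {
    server web1.example.com;
    server web2.example.com;
}

server {
    listen 80;
    location / {
        # Round-robin by default across the upstream pool.
        proxy_pass http://backend;
    }
}
```

    As the post notes, though, this only covers the web tier; it does nothing for a failed MySQL server.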

    The question is: how often do we actually see servers fail?
    Most failures are related to the power supply or disks, so redundant power supplies combined with a RAID10 setup should suffice, IMO. If a failure occurs, it should be corrected promptly under warranty. So we are weighing two options:
    - stick with a complex setup that requires a lot more maintenance (and additional costs), OR
    - simplify the solution and accept the chance that a major failure occurs every 1-2 years, fixable in a day at most.
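
    To put a rough number on the second option, here is a back-of-the-envelope availability estimate; the one-failure-per-year and one-day-repair figures are assumptions taken from the pessimistic end of the post's own estimate, not measurements:

```python
# Back-of-the-envelope availability for the "simple" two-server option.
# Assumptions: one major failure per 365 days (pessimistic end of the
# "every 1-2 years" estimate), repaired under warranty in 1 day.
mtbf_days = 365.0  # mean time between failures
mttr_days = 1.0    # mean time to repair
availability = mtbf_days / (mtbf_days + mttr_days)
print(f"availability ~ {availability:.2%}")  # roughly 99.7%
```

    Whether roughly a day of downtime per year is acceptable is exactly the business-risk question raised earlier in the thread.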

    I'm open to any suggestions on how we should approach this, in order to simplify our current setup.

    Looking forward to your reply.

    Regards,

    Floren
    Last edited by T3CK; 03-06-2011 at 04:21 PM.

  5. #5
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by T3CK View Post
    Hi Ryan,

    IMO, smaller (but more powerful) setups let you maintain the servers much more easily, at the expense of rare (but possible) hardware failures. We are not at a stage where we should look at solutions powered by Cassandra, etc. Personally, I think Dell offers a good warranty on replacement parts, even if Supermicro makes excellent products.

    Right now, we have 3 web + 2 database servers. The idea is that by using 2 beefy machines, we eliminate replication (even if MySQL 5.5.8 is very elegant in that area) and avoid any complex load-balancing solutions (for now).

    Recently, I set up a large forum with over 7,000 users online on a single machine like the web server listed above, powered by nginx, php-fpm and MySQL. The server load averages between 1 and 3. So 2 machines will be more than sufficient, considering that the new setup is smaller.

    I understand your concerns about availability. Nginx allows you to implement a very efficient load-balancing system. But then again, what happens if the MySQL server fails? We would be going in a totally different direction and end up purchasing the same setup we currently have.

    The question is: how often do we actually see servers fail?
    Most failures are related to the power supply or disks, so redundant power supplies combined with a RAID10 setup should suffice, IMO. If a failure occurs, it should be corrected promptly under warranty. So we are weighing two options:
    - stick with a complex setup that requires a lot more maintenance (and additional costs), OR
    - simplify the solution and accept the chance that a major failure occurs every 1-2 years, fixable in a day at most.

    For backups, a small 1U server will suffice to run nightly database backups.

    Looking forward to your reply.

    Regards,

    Floren
    Sounds like you already have everything covered on the hardware side. If the risk of a reduced server count is within an acceptable level for your business, then it is a fine implementation.

    Indeed, using something like nginx, Varnish or another engine as a reverse proxy/LB works great. Generally we prefer a clustered LVS setup for more budget-conscious deployments, but we also deploy a lot of nginx/Varnish/etc. for clients, and it is indeed great for pushing every last ounce of performance out of a piece of hardware.

    The only other thing I would look at, in terms of placing your equipment into a facility, is the actual power draw of that hardware: if you are looking to colo only 3-4 servers, a provider may not allow you to place that many amps within, say, a 10U space.

  6. #6
    Quote Originally Posted by RyanD View Post
    Sounds like you already have everything covered on the hardware side. If the risk of a reduced server count is within an acceptable level for your business, then it is a fine implementation.

    Indeed, using something like nginx, Varnish or another engine as a reverse proxy/LB works great. Generally we prefer a clustered LVS setup for more budget-conscious deployments, but we also deploy a lot of nginx/Varnish/etc. for clients, and it is indeed great for pushing every last ounce of performance out of a piece of hardware.

    The only other thing I would look at, in terms of placing your equipment into a facility, is the actual power draw of that hardware: if you are looking to colo only 3-4 servers, a provider may not allow you to place that many amps within, say, a 10U space.
    Thanks for the input Ryan, really appreciated.
    Obviously, the warranty on hardware parts will be discussed with the colocation host, and we will decide then what the best route is, before we purchase anything. If they offer great support for Supermicro parts, we will go with it.

    Can you give me more details about the power setup you mentioned earlier?
    ... a provider may not allow you to place X# of amps within a say 10U space.

    Regards,

    Floren

  7. #7
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by T3CK View Post
    Thanks for the input Ryan, really appreciated.
    Obviously, the warranty on hardware parts will be discussed with the colocation host, and we will decide then what the best route is, before we purchase anything. If they offer great support for Supermicro parts, we will go with it.

    Can you give me more details related to the power setup you mentioned earlier?
    ... a provider may not allow you to place X# of amps within a say 10U space.

    Regards,

    Floren

    Those R910s, as you have spec'd them, will most likely draw 6-10 A at 120 V, which puts a maximum of 2 of them on a single 20 A 120 V circuit (not recommended: with 4 x 750 W PSUs, 2 of them could easily trip the breaker), so a 30 A 120 V circuit would be ideal. Given that those only consume 8U of space, it is unlikely a provider would let you use 30 A in a 1/4 cabinet.
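
    The arithmetic behind that limit can be sketched as follows; the 80% continuous-load derating is the usual rule of thumb for branch circuits, and the 8 A per-server figure is just the midpoint of the 6-10 A estimate above, not a measured value:

```python
# Circuit budget for a 20 A, 120 V feed with an 80% continuous-load derating.
volts = 120
circuit_amps = 20
usable_watts = volts * circuit_amps * 0.8  # 1920 W continuously usable

server_amps = 8                 # midpoint of the 6-10 A per-server estimate
server_watts = server_amps * volts  # 960 W per server

max_servers = int(usable_watts // server_watts)
print(max_servers)  # 2 servers already fill the 20 A circuit
```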

  8. #8
    I see, thanks for explaining. What solution would you recommend?

  9. #9
    I would parallelize as much as possible (i.e., multiple 1U servers and a load balancer). That would be the cheapest, most reliable, and most scalable solution. You can start with half a dozen servers and then add new units as your website grows.

    Multiprocessor solutions are really meant for situations where it is hard (or impossible) to break a task up for processing on several weakly interconnected machines.
    For example, solving a huge partial differential equation is relatively hard to parallelize. And when it is parallelized, the interconnect quality (mainly latency) is very important, and special networking solutions (e.g., InfiniBand instead of Ethernet) are employed.
    Your task, on the contrary, is very easily parallelizable, so you should make the most of this fact.
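
    One way to quantify this argument is Amdahl's law, the standard formula for speedup with a parallel fraction p on n machines; the example fractions below are illustrative, not measurements from this thread:

```python
# Amdahl's law: speedup from running the parallel fraction p on n machines.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Independent web requests parallelize almost perfectly (p close to 1),
# so six 1U servers buy nearly a 6x speedup. A tightly coupled solver
# with only half its work parallelizable barely reaches 2x.
print(round(amdahl_speedup(0.99, 6), 2))  # ~5.71
print(round(amdahl_speedup(0.50, 6), 2))  # ~1.71
```

    This is the formal version of the point above: for an easily parallelizable workload, many small servers deliver nearly linear scaling.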

  10. #10
    Join Date
    Nov 2009
    Location
    Cincinnati
    Posts
    1,583
    Grab an R810 with 4 x quad-core CPUs and an MD1220 disk shelf loaded with 24 x 146 GB drives.

    The R910 should never have been built, IMO.
    'Ripcord'ing is the only way!

