  1. #1

    Best approach for high availability?

    We are trying to develop a design for a cluster of web servers at a single data center which will ensure that the cluster, as a whole, has very, very high availability.

    Each of the servers is basically identical. They all serve web pages, and each can act relatively independently of the other.

    Our initial thought was to put the servers behind a redundant, hardware-based, load balancing solution. The load balancers would be able to detect if any single webserver went down, and would redirect traffic to the other servers in that case.
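
    For what it's worth, here's a rough sketch in Python of the health-check and failover logic we understand such a balancer to perform (the hostnames are placeholders; a real appliance does all of this in firmware):

    Code:
        import http.client

        SERVERS = ["web1.example.com", "web2.example.com", "web3.example.com"]

        def is_healthy(host, timeout=2):
            """Probe with a cheap HTTP request; any error or 5xx marks the box down."""
            try:
                conn = http.client.HTTPConnection(host, 80, timeout=timeout)
                conn.request("HEAD", "/")
                ok = conn.getresponse().status < 500
                conn.close()
                return ok
            except (OSError, http.client.HTTPException):
                return False

        def pick_server(counter):
            """Round-robin across whichever servers currently pass the check."""
            pool = [s for s in SERVERS if is_healthy(s)]
            if not pool:
                raise RuntimeError("no healthy servers")
            return pool[counter % len(pool)]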

    An alternative -- less expensive -- solution that has been suggested to us is to use EtherChannel. We're concerned, however, that it might not be as robust an approach as the load balancer solution.

    Can anyone make any suggestions as to how we can best assure the high availability of our cluster?

    Cost is an important consideration for us.

    In addition, we'd prefer a solution that allowed for session maintenance. That is, once a user starts interacting with a particular server in our cluster, it would be nice if they could continue interacting with that particular physical server.
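
    The simplest form of that stickiness we've seen sketched is to hash something stable about the client, so the same visitor keeps landing on the same box while the pool is unchanged. Roughly (server names made up):

    Code:
        import hashlib

        SERVERS = ["web1.example.com", "web2.example.com", "web3.example.com"]

        def sticky_server(client_ip, pool=SERVERS):
            """Same client address -> same server, as long as the pool is stable."""
            digest = hashlib.md5(client_ip.encode()).digest()
            return pool[int.from_bytes(digest[:4], "big") % len(pool)]

        assert sticky_server("203.0.113.7") == sticky_server("203.0.113.7")

    The catch is that the modulo mapping reshuffles when a server drops out, which is why most balancers offer cookie-based persistence as well.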

    As an aside, I'll mention that we intend to implement this approach at two different data centers in different geographic regions, and use round robin DNS to further ensure high availability of the overall set of servers.
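
    (By round robin DNS I mean publishing one A record per data center under the same name, so resolvers spread clients across both sites. A quick way to see what a resolver gets back, using a placeholder hostname:)

    Code:
        import socket

        def resolve_all(hostname):
            """Return every A record the nameserver hands back; with round
            robin DNS successive lookups rotate the order of the list."""
            infos = socket.getaddrinfo(hostname, 80, socket.AF_INET, socket.SOCK_STREAM)
            return [sockaddr[0] for _, _, _, _, sockaddr in infos]

        print(resolve_all("www.example.com"))  # e.g. one IP per data center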

    Thoughts? Recommendations? Criticisms? ;-)

    Thanks!

  2. #2
    Hello bsimple.

    I have some experience running clusters of geographically distributed servers. Based on that experience, the best solution is:

    • Interworx panel to help you manage the cluster. Cheap, and it has everything you need for this job.

    • DNS round robin to be sure that the nameserver system is always up. DNS Made Easy is a cheap and excellent service for this.

    Hope this helps..
    Guille

  3. #3
    Thanks, Guille, for your thoughts.

    We're looking at DNS Made Easy for our round-robin DNS, and so far they look quite good. I suspect we'll go with them.

    As to Interworx, I think we're somewhat hesitant to implement a software-based solution because of the learning curve associated with that option. I'd add that we're also not so keen on a software-based approach because we have concerns about its reliability.

    We considered Pound (http://www.apsis.ch/pound/) as a potential software solution, just because it's so darn simple, and also UltraMonkey (http://www.ultramonkey.org/), but every time we came back to the learning curve and reliability issues I mentioned.

    Still, I appreciate the input. Will give it some more thought.

  4. #4
    Quote Originally Posted by bsimple
    An alternative -- less expensive -- solution that has been suggested to us is to use EtherChannel. We're concerned, however, that it might not be as robust an approach as the load balancer solution.
    Are you sure that's the right term?

    EtherChannel was Cisco's proprietary link bundling mechanism, which has since evolved into 802.3ad / LACP link aggregation. I guess you could argue that aggregating a couple of links together improves your redundancy, but in reality it's not going to be statistically significant. If your server crashes or the switch it's connected to goes down, you're still out of luck. (You can't bundle across switches, so you basically get two connections up to a single switch/point of failure.)
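
    To put numbers on it (completely made-up availability figures, purely illustrative):

    Code:
        link, switch, server = 0.999, 0.999, 0.99

        # Single link: everything is in series.
        single = link * switch * server

        # Two bundled links to ONE switch: the links are parallel, but the
        # path still runs in series through that single switch and server.
        bundled = (1 - (1 - link) ** 2) * switch * server

        print(f"single link: {single:.6f}")   # ~0.988021
        print(f"two bundled: {bundled:.6f}")  # ~0.989009 -- barely moves

    The switch and server terms dominate the product; the bundle only recovers the link's small contribution.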
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  5. #5
    My understanding is similar to yours, Spaethco: most (many?) implementations of EtherChannel-type functionality these days are actually 802.3ad-based.

    You raise a point, however, that has been in the back of our minds. Namely, an EtherChannel-type solution leaves the switch as a single point of failure. And there's not really a way around that, is there?

    A solution with redundant load balancers, on the other hand, presumably would be set up so you don't have that sort of single point of failure. Right?

  6. #6
    Will you be supporting mostly static websites, or are you going to have a lot of dynamic content? The more scripts and high-usage, database-driven dynamic sites you have, the more complicated this can get.
    H4Y Technologies LLC .. Since 2001!!
    "Smarter, Cheaper, Faster" - SMB, Reseller, VPS, Dedicated, Colo hosting done right.

    ZERO PACKETLOSS, ZERO DOWNTIME Dedicated and Colo - USA: IA, CA, NC, OR, NV
    **http://h4y.us** **http://iwfhosting.net**
    Voice: (866)435-5642. *** askus at host4yourself d0t com

  7. #7
    The answer to your question, John[H4Y], is "yes and no."

    No, the pages won't be served up from a database.

    But, yes, most of the pages will be dynamically generated in the sense that most of them are JSP pages served up by Tomcat.
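
    (Since those are Tomcat sessions, one trick we've read about, if we go the balancer route, is to set a distinct jvmRoute on each instance; Tomcat then appends it to the JSESSIONID, and the balancer can route on that suffix. A rough sketch of that routing logic, with made-up worker names:)

    Code:
        WORKERS = {"tomcat1": "web1.example.com", "tomcat2": "web2.example.com"}

        def route(jsessionid, default="web1.example.com"):
            """With jvmRoute set, Tomcat issues ids like 'A1B2C3.tomcat2';
            send the request back to the instance named in the suffix."""
            route_tag = jsessionid.rsplit(".", 1)[-1] if "." in jsessionid else None
            return WORKERS.get(route_tag, default)

        assert route("A1B2C3D4.tomcat2") == "web2.example.com"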

  8. #8
    To load balance across multiple datacentres you should probably look at anycast for good availability; DNS on its own is not a reliable solution.

    Ideally, at each location you want multiple aggregation links from the provider, multiple load balancers, and multiple switches, with multiple servers sitting behind them. This will also allow for the session persistence you're looking for, if configured properly.

    I did a quick diagram (see attached). This setup would allow for multiple component failure, which is very rare; you could still achieve very high availability with protection against single component failure by removing the additional meshing between the routers and load balancers, and between the load balancers and switches.

    We've been managing biggish clusters for years and I've never seen a multiple component failure, so in reality you can probably do without the extra links.

    Dan
    Attached Thumbnails: setup.gif
    █ Dan Kitchen | Technical Director | Razorblue
    █ ddi: (+44) (0)1748 900 680 | e: dkitchen@razorblue.com
    █ UK Intensive Managed Hosting, Clusters and Colocation.
    █ HP Servers, Cisco/Juniper Powered BGP Network (AS15692).

  9. #9
    With regard to John[H4Y]'s point, I want to say that there is something between dynamic and static pages: pages generated by PHP-MySQL, for example, that can nevertheless be converted to static. This is the case for my websites and the websites I host (90% of those pages fall into this category).
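
    The conversion is nothing exotic: render the page once, write the HTML to disk, and serve the file from then on. A minimal sketch, with a made-up cache path and a stand-in for the PHP/MySQL work:

    Code:
        import hashlib, os

        CACHE_DIR = "/var/cache/pages"  # made-up path

        def render_page(url):
            """Stand-in for the database-driven work that builds the page."""
            return f"<html><body>content for {url}</body></html>"

        def serve(url):
            """Serve the static copy if we have one; render and keep it if not."""
            path = os.path.join(CACHE_DIR, hashlib.md5(url.encode()).hexdigest() + ".html")
            if not os.path.exists(path):
                os.makedirs(CACHE_DIR, exist_ok=True)
                with open(path, "w") as f:
                    f.write(render_page(url))
            with open(path) as f:
                return f.read()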

    Anyway, I think this question has to be considered when deciding what to prioritize for a given budget (hardware or network quality), not when designing the cluster.
    Guille

  10. #10
    Thanks for the additional comments, Dan and Guille.

    The diagram you included is quite helpful, Dan. I have one follow-up question, however.

    You have the traffic from each load balancer going through switches before the traffic actually gets to the individual servers. Is that necessary? What function are the switches serving?

  11. #11
    Well, since most load balancers only come with a few Ethernet ports (usually two), you really need a switch on either side to hook up the multiple servers. Nothing fancy going on there, just a plain Ethernet switch.

    HTH

    Dan
    █ Dan Kitchen | Technical Director | Razorblue
    █ ddi: (+44) (0)1748 900 680 | e: dkitchen@razorblue.com
    █ UK Intensive Managed Hosting, Clusters and Colocation.
    █ HP Servers, Cisco/Juniper Powered BGP Network (AS15692).
