
Thread: load balancing

  1. #1
    Join Date
    Mar 2011
    Posts
    393

    load balancing

    How exactly does load balancing work?

  2. #2
    Join Date
    Jul 2008
    Location
    Minneapolis, MN
    Posts
    276
    A load balancer is assigned the IP address that accepts requests for whatever service you are load balancing.

    Then you set up multiple servers behind the load balancer that all do the same thing (Web servers, mail servers, etc.). The load balancer then continuously tests to make sure the machines are online, and redirects traffic to them based on rules you set up. You can let it know how you want to balance traffic (number of connections, to the server with the lightest load, etc.).

    That's a very basic description. There is a lot more that you can do with it.

    Do you have any more specific questions about how it works?
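    For a concrete picture, here is a minimal HAProxy-style sketch (HAProxy is just one of several load balancers you could use, and the IPs and names below are made-up examples): a frontend that owns the public IP/port, and a backend pool of two web servers with health checks and a balancing rule.
    Code:
    # minimal haproxy.cfg sketch -- addresses are examples only
    # (global/defaults sections omitted for brevity)
    frontend www_in
        bind *:80                         # the IP/port that accepts client requests
        default_backend web_pool

    backend web_pool
        balance roundrobin                # or leastconn, source, etc.
        server web1 10.0.0.11:80 check    # "check" = the periodic online test
        server web2 10.0.0.12:80 check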
    01 Networks / Hosting and Consulting Services
    Pay as you Go hosting -- the cheapest prices in town.
    Zimbra (Network Edition and Open Source) Hosting
    100% full uptime guarantee / 24x7x365 support

  3. #3
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    Good description. And to add: if you have multiple load balancers, you can also use Heartbeat and/or DNS failover to build some really good HA into your site(s). I have customers we employ this method with, and a handful of them have 7+ years of continuous uptime. Load balancing also alleviates strain on a single web server, and can even help secure your back-end machines: if they can only talk to your front-end load balancer, you don't have to allow traffic from anywhere (0.0.0.0/0) to reach them.
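    For example, on each backend node you could allow web traffic only from the load balancer's address (10.0.0.5 here is a made-up example):
    Code:
    # on each backend web node: only the load balancer may reach port 80
    iptables -A INPUT -p tcp --dport 80 -s 10.0.0.5 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j DROP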
    Hope that helps
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  4. #4
    Join Date
    Mar 2011
    Posts
    393
    Well the goal is to achieve 100% uptime. I'm wondering how we'd achieve a setup where a site would be replicated over multiple servers and yet the database would be universal across all servers so anyone accessing the site always sees the latest content.

  5. #5
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    It depends on how many nodes you need, and the type of database vs. reads/writes, but I have used master-master, master-slave-slave+ (all writes done on a master), or even mysqldump run periodically straight into the other nodes' databases (-h option). There are quite a few ways to set it up, but the best way really depends on traffic, database use, size, etc.
    One of the most popular methods I use is two masters in master-master (giving no single point of failure for writes), slaved out to other nodes. The default MySQL connection for the local site is localhost, but it fails over to a secondary and even tertiary host.
    My biggest setup includes 2-4 load balancers in failover mode, with 10-15 backend web nodes, and 4+ MySQL nodes (riding on specific www nodes for the most part).
    The site sells potassium iodide, so when the Japan reactor issues hit, it went from 100-300 uniques a day to 350K per node for almost a week. Not a second of downtime. Long story short, even just using VPSs, it was rock solid. "Divide and conquer" works well!
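    As an illustration of the mysqldump approach (the database name, host, and credentials below are placeholders, not anything from a real setup), a scheduled job on one node can push a fresh copy straight into another node's database:
    Code:
    # crude periodic copy to another node -- not real replication, just a scheduled overwrite
    # "mydb", "othernode.example.com" and the credentials are placeholders
    mysqldump -u dbuser -p'secret' --single-transaction mydb \
      | mysql -h othernode.example.com -u dbuser -p'secret' mydb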
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  6. #6
    Join Date
    Mar 2011
    Posts
    393
    Know a good host that would help us with something like this?

  7. #7
    You can check the offers section of this forum, where you will find multiple hosts providing load balancing.
    || Eminds Infotech || 9th Year of Server Management Solutions ||
    || Server Management || 24x7 Technical Support || Cloud Management ||
    || 24x7 Live Chat Support || VPS Management || Server Migrations ||
    || https://www.24x7cloudservermanagement.com||

  8. #8
    Join Date
    Mar 2011
    Posts
    393
    Thanks man. What are the known providers? The pros of the business?

  9. #9
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    To make it truly redundant, I would suggest multiple hosts, geographically dispersed.
    Many of the hosts themselves do not delve into something this complex, but really, it's not too hard to set up if you have some basic Linux skills. I just use the ol' LAMP stack and HAProxy for the load balancing, but throw in DNS or Heartbeat failover as appropriate (DNS failover normally). A third-party tech is usually the way to go.
    UNIXy is one host that does some good HA, albeit slightly different methods.
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  10. #10
    Join Date
    Mar 2011
    Posts
    393
    So what is the best load balancing setup out there? Using multiple hosts makes sense.

  11. #11
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    That really depends on what you are serving, the size you need on each server, how dynamic the files or database are...
    Can you give some specifics? Especially whether you are running something like a shopping cart or a very dynamic database, as opposed to a WordPress or Joomla install where an admin posts something every so often.
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  12. #12
    Well, no host will be able to offer you their hosting here, so it's better to check the offers section; or, the best way is to Google your requirements and you will find the right solution provider.


    Quote Originally Posted by zahirw View Post
    Thanks man. What are the known providers? The pros of the business?
    || Eminds Infotech || 9th Year of Server Management Solutions ||
    || Server Management || 24x7 Technical Support || Cloud Management ||
    || 24x7 Live Chat Support || VPS Management || Server Migrations ||
    || https://www.24x7cloudservermanagement.com||

  13. #13
    Join Date
    Mar 2011
    Posts
    393
    We're running a social website, high database activity, LAMP setup, with 500-1,000 uniques and 5,000-15,000 pageviews a day.

  14. #14
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    What is your budget, if I may ask?
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  15. #15
    Join Date
    Mar 2011
    Posts
    393
    No budget in mind yet; we've never done this before, so we're not sure.

  16. #16
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    That would easily fit into a few budget VPSs. If you want load balancer redundancy: 4 VPSs plus a DNS Made Easy account for DNS failover. Basically, create two web servers with master-master between the databases, and rsync the files between them. Set up one HAProxy load balancer with these two as your backend www nodes, then set up another HAProxy exactly the same, using it as the A-record DNS failover should your primary go down.
    The load balancers can also be web servers themselves (I just use port 81...the LB takes care of the 80>81 port redirect since you name the port on each server), which means you can even survive both web servers flaking. I commonly use the failover load balancer as a backend web server also...may as well have it doing something while it's waiting on #1 to fail, huh?
    You can get that done for about 20-50 bucks a month, including the DNS failover, IF you know Linux and can use unmanaged budget VPS servers. (Not including the config of all this...that would, of course, depend on whether you did it or hired someone, but generally it's a one-time fee. Maintenance is usually straightforward, and is just changing lines in a config file from that point forward.)
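    As a rough sketch of that layout (all the IPs and names below are made-up placeholders), each HAProxy box owns port 80 and forwards to Apache listening on port 81 on the web nodes, one of which can be the load balancer itself:
    Code:
    # haproxy.cfg sketch on the primary load balancer -- IPs are examples only
    # Apache on each web node listens on port 81; HAProxy owns port 80
    # (global/defaults sections omitted for brevity)
    frontend http_in
        bind *:80
        default_backend www_nodes

    backend www_nodes
        balance roundrobin
        server www1 127.0.0.1:81 check    # this box doubles as a web server
        server www2 10.0.0.12:81 check    # the second VPS
    The standby load balancer runs an identical config, and the DNS failover service swaps the public A record over to it if the primary stops answering.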
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  17. #17
    Join Date
    Mar 2011
    Posts
    393
    Great! Sounds good & complicated.

    Are there any load balancing services or guides for doing this out there?

  18. #18
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    It's not too awfully complicated. The HAProxy install and config are pretty straightforward.
    I have a "jump start" config I use mostly and just change node names / IPs.
    Other than that, it's just rsyncing your files on the schedule you want, and probably the hardest part is choosing your two MySQL master-master nodes and setting those up, but even that's not too hard.
    http://www.howtoforge.com/mysql_mast...er_replication

    For haproxy, one of the quickest guides I've found here on WHT:
    http://www.webhostingtalk.com/showthread.php?t=627783

    Rsync: that depends on how you configure your servers...whether you have one that's the "master", or you just copy any changed files on any machine to all the others. Using key-based SSH for this alleviates any password headaches.
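    As an illustration (the paths, hostname, and key location below are placeholders), a cron-driven rsync over key-based SSH might look like:
    Code:
    # run from cron every few minutes on the "master" node
    rsync -az --delete -e "ssh -i /root/.ssh/sync_key" \
        /var/www/html/ root@web2.example.com:/var/www/html/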
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  19. #19
    Join Date
    Nov 2009
    Posts
    544
    It seems that it is not complicated at all. We have developed several sites (Joomla! and WordPress) that can fail over to another vendor. Currently we use *NIX but anything should be possible. We use a master / slave setup without concerning ourselves with rsync or replication problems.

    Changed files are tarred and MySQL is dumped on a schedule and sent to the slave by FTP (server to server) and simple code on the slave untars files and loads the db update. Any number of DNS fail over policies / services can be used.
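    As a rough sketch of the push side (every host, path, and credential below is a placeholder; this is just one way the schedule described above could look):
    Code:
    #!/bin/bash
    # dump the database and tar up files changed in the last day,
    # then send both to the standby server over FTP
    STAMP=$(date +%Y%m%d%H%M)
    mysqldump -u backup -p'secret' --single-transaction mydb > /tmp/mydb-$STAMP.sql
    find /var/www/html -type f -mtime -1 -print0 \
      | tar -czf /tmp/files-$STAMP.tar.gz --null -T -   # paths stored relative to /
    # curl speaks FTP; upload both under fixed "latest" names
    curl -T /tmp/mydb-$STAMP.sql ftp://ftpuser:ftppass@standby.example.com/mydb-latest.sql
    curl -T /tmp/files-$STAMP.tar.gz ftp://ftpuser:ftppass@standby.example.com/files-latest.tar.gz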

    This type of setup has made me wonder what hosting support staff are for.

  20. #20
    Join Date
    Mar 2011
    Posts
    393
    Thanks mugo

    What I'm looking to set up is an active website, so essentially there would be a mirror of a fully functional website up and running in the event that the site goes down.

    We're planning on hosting one version on Rackspace and one on VPS.NET. We need to sync them in such a way that the sites always have identical DB data and files, because users are constantly updating data and uploading files. So if one server goes down, the next one comes up with all the data on it and users face zero loss.

  21. #21
    Join Date
    Mar 2011
    Posts
    393
    Also, traditionally, we've had only one host that houses all the DNS records (A, TXT & CNAME). These sort out the subdomain & email entries. So if the host goes down, so does our mail. I'm assuming that these entries would now be housed on the load balancing layer, but what if the load balancing layer goes down?

  22. #22
    Join Date
    Feb 2008
    Location
    Houston, Texas, USA
    Posts
    2,955
    Quote Originally Posted by zahirw View Post
    Also, traditionally, we've had only one host that houses all the DNS records (A, TXT & CNAME). These sort out the subdomain & email entries. So if the host goes down, so does our mail. I'm assuming that these entries would now be housed on the load balancing layer, but what if the load balancing layer goes down?
    It's best to house and/or offload DNS record hosting to a third party like DNSMadeEasy. Leverage what they have built. Plus it's very affordable. DNSME can also do LB and failover at the DNS/IP level so you'd just need to set up HTTP LB. I know Mugo's been doing this LB/failover/replication business for quite a while and has a very good understanding of the trade offs. Might as well get with him to set it up for you.

    Best
    UNIXy - Fully Managed Servers and Clusters - Established in 2006
    [ cPanel Varnish Nginx Plugin ] - Enhance LiteSpeed and Apache Performance
    www.unixy.net - Los Angeles | Houston | Atlanta | Rotterdam
    Love to help pro bono (time permitting). joe > unixy.net

  23. #23

    With that much traffic you may not need a load balancer

    Quote Originally Posted by zahirw View Post
    We're running a social website, high database activity, LAMP setup, with 500-1,000 uniques and 5,000-15,000 pageviews a day.
    Sorry, but with that amount of traffic you may not need a load balancer. If you want to have it just for the sake of it, you could put a hardware load balancer in front of two web servers with one database behind them.

    The most critical thing will be the ability to handle security (hacking attempts, which load balancers won't help with) and spikes in usage; again, load balancers will not help there.

    We had a DDoS attack that took out our load balancer in 30 minutes. We had to remove it to make the site functional.

    I am surprised to hear that a load balancer helped handle a 30-fold increase in popularity (unless I'm mistaken); the system worked not because of the load balancer but because it was underutilized during normal operation.

    I would be careful with load balancers. They are good for static content, but then caching servers are much better. With multiple web servers, sharing the static content is a problem unless you start using shared storage, and the minute you start using shared storage you get problems similar to a shared DB server. Again, for dynamic websites the DB is always a problem. Master/master and master/slaves don't work well in many cases unless the system is underutilized; the minute you add replication, the load on the server increases, and an overutilized server will crash more often than one that is not overutilized. We've been in the social media business since before the terminology even showed up, and there are a lot of challenges there. Once you really do need load balancers, etc., the cost will increase a lot, and 1,000 visitors per day won't cover the expenses. We had 500,000 visitors per day and we still preferred to run it on a single system, although we had a distributed system with load balancers and a master/master database running in parallel. We still have the first one (a single system with distributed functionality); we've disassembled the load-balanced solution (which included a load balancer, web servers, storage servers, a Fiber/NFS server, a master/master database, caching servers and much more).

    My point here is: use a load balancer for a business that brings in money and where the system will be relatively underutilized. For social media, wait until you really need it.
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  24. #24
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    Quote Originally Posted by zahirw View Post
    Also, traditionally, we've had only one host that houses all the DNS records (A, TXT & CNAME). These sort out the subdomain & email entries. So if the host goes down, so does our mail. I'm assuming that these entries would now be housed on the load balancing layer, but what if the load balancing layer goes down?
    As UNIXy said, and I do this very thing myself, DNS, mail, and any other services should be separate.
    DNSME also has an MX backup service, where they will spool mail should your main MX go down...of course, you can also configure more than one mail exchanger yourself and put in some redundancy right up front.
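    For example, mail redundancy in the zone itself is just a second MX record with a higher preference number (the hostnames here are placeholders):
    Code:
    ; zone file snippet -- mail1/mail2 are placeholder exchangers
    example.com.    IN  MX  10  mail1.example.com.   ; primary
    example.com.    IN  MX  20  mail2.example.com.   ; backup, takes mail if mail1 is down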

    The main idea, as you're catching on to, is to separate everything as much as possible and not leave a single point of failure, to whatever tolerance you can drill down to.
    Separate companies, services, networks, backbones, and even geography as much as possible. You want to make sure that if you pick two services, they aren't...say...close to each other geographically or network-wise, and *especially* make sure they aren't in the same DC! I've had someone do that before, thinking they had 3 different companies, and they had just happened to pick 3 in the same building. That's an extreme and rare case, but you get the idea of what to double-check.

    Since you are doing just a failover site that's passive until site A goes down, it's going to be a little easier, as you aren't worried about duping live data between two hot sites.

    Good luck man, sounds like you are on your way to doing some good stuff.
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  25. #25
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,001
    Quote Originally Posted by tulix View Post
    Sorry, but with that amount of traffic you may not need a load balancer. If you want to have it just for the sake of it, you could put a hardware load balancer in front of two web servers with one database behind them.

    The most critical thing will be the ability to handle security (hacking attempts, which load balancers won't help with) and spikes in usage; again, load balancers will not help there.

    We had a DDoS attack that took out our load balancer in 30 minutes. We had to remove it to make the site functional.

    I am surprised to hear that a load balancer helped handle a 30-fold increase in popularity (unless I'm mistaken); the system worked not because of the load balancer but because it was underutilized during normal operation.

    I would be careful with load balancers. They are good for static content, but then caching servers are much better. With multiple web servers, sharing the static content is a problem unless you start using shared storage, and the minute you start using shared storage you get problems similar to a shared DB server. Again, for dynamic websites the DB is always a problem. Master/master and master/slaves don't work well in many cases unless the system is underutilized; the minute you add replication, the load on the server increases, and an overutilized server will crash more often than one that is not overutilized. We've been in the social media business since before the terminology even showed up, and there are a lot of challenges there. Once you really do need load balancers, etc., the cost will increase a lot, and 1,000 visitors per day won't cover the expenses. We had 500,000 visitors per day and we still preferred to run it on a single system, although we had a distributed system with load balancers and a master/master database running in parallel. We still have the first one (a single system with distributed functionality); we've disassembled the load-balanced solution (which included a load balancer, web servers, storage servers, a Fiber/NFS server, a master/master database, caching servers and much more).

    My point here is: use a load balancer for a business that brings in money and where the system will be relatively underutilized. For social media, wait until you really need it.
    He's wanting this for redundancy, not so much for heavy load. IP session persistence gets around the dynamic / cache issue. I have quite a few sites behind load balancers that are very heavily cached...a few using Drupal, which writes sessions to the DB, and there are no problems.
    During all my years of setting up and maintaining load balancing infrastructures, I just don't see many issues at all. On the contrary, they have always "saved the day", with hardly any downside issues, other than maybe log aggregation...that's my only gripe, really: having to aggregate the log files for whatever you are balancing into meaningful data.
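    For example, in HAProxy, source-IP persistence is a one-line change in the backend (just the backend stanza as a sketch; the names and IPs are placeholders):
    Code:
    backend www_nodes
        balance source                   # same client IP always lands on the same node
        # or use cookie-based persistence instead:
        # cookie SRV insert indirect nocache
        server www1 10.0.0.11:81 check
        server www2 10.0.0.12:81 check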

    Load balancing is scalable, cheap (if done right), and I would say get on board early, so you are not scrambling when you need the backends, redundancy, or both.
    This is the best signature in the world....Tribute!
    (It is not the best signature in the world, no. This is just a tribute)

  26. #26
    Join Date
    Mar 2011
    Posts
    393
    Hey Mugo, right again: I need this setup for high availability, not so much load balancing. Like I mentioned, we just want to set up a middle layer that ensures we never face any downtime, assuming that both hosts don't go down at once, in which case we're pretty much screwed.

  27. #27
    Join Date
    Nov 2009
    Posts
    544
    zahirw;

    I think Mugo and you are overthinking the issue; throwing additional hardware into the mix really overcomplicates it.

    The simplest, most effective method I have found is:

    Changed / added files are tarred and MySQL is dumped on a schedule and sent to the slave by FTP (server to server) and simple code on the slave untars files and loads the db update. Any number of DNS fail over policies / services can be used.

    This process can be scheduled as often as you like, and it provides an always-ready site to fail over to. Until this process was figured out, the database was (as stated by others) the hardest part of the equation, since most of our sites are database-dependent. Clustering and replication always brought their own issues into the mix.
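    On the standby side, the "simple code" can be little more than a cron job like this (the paths, database name, and credentials are placeholders; it assumes the primary uploads files-latest.tar.gz and mydb-latest.sql as in the push sketch earlier in the thread):
    Code:
    #!/bin/bash
    # on the standby server: unpack the latest file archive and load the dump
    cd /home/ftpuser || exit 1
    tar -xzf files-latest.tar.gz -C /          # archive stores paths relative to /
    mysql -u backup -p'secret' mydb < mydb-latest.sql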

