  1. #1

    Software Load Balancers

    I did a quick (informal) survey of a bunch of hosting companies about how common load balancers were in that environment, and they actually seem to only be used in very large hosting facilities.

    However, a couple of them suggested that I instead look at the colo and data center market? (Duh, seems obvious now!)

    Presumably, anyone running colo or DR/failover has a load balancer... but I'm wondering if the software load balancer has made it into this market, or is it all hardware based F5 stuff (and such)? I read that DevOps was making the software based load balancers more attractive... but is that just hype?

    Thanks!

  2. #2


    Well, I can suggest some open-source alternatives for software load balancers. Not sure about the support side, though.

    L4 load balancing - LVS (Linux Virtual Server) + Keepalived (VRRP-based failover) - can be configured for N+N scaling

    L7 load balancing - HAProxy + Keepalived - using DNS RR load balancing in front, can be configured to scale linearly

    These are quite popularly used.
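    If it helps, here's roughly what the Keepalived side of an LVS setup looks like - treat it as a sketch only; the addresses and interface names below are just placeholders:

        # /etc/keepalived/keepalived.conf (placeholder addresses)
        vrrp_instance VI_1 {
            state MASTER              # BACKUP on the standby node
            interface eth0
            virtual_router_id 51
            priority 100              # lower priority on the standby
            advert_int 1
            virtual_ipaddress {
                192.0.2.100           # floating VIP that clients connect to
            }
        }

        # L4 (LVS/IPVS) virtual server managed by the same keepalived process
        virtual_server 192.0.2.100 80 {
            delay_loop 6
            lb_algo rr                # round robin across the real servers
            lb_kind DR                # direct routing; replies bypass the LB
            protocol TCP

            real_server 192.0.2.11 80 {
                weight 1
                TCP_CHECK {
                    connect_timeout 3
                }
            }
            real_server 192.0.2.12 80 {
                weight 1
                TCP_CHECK {
                    connect_timeout 3
                }
            }
        }

    For the L7 case you'd run HAProxy on the same pair of boxes and let Keepalived just move the VIP between them.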


  3. #3
    We have found it easier to use VMware's Automated Fault Tolerance as it is a part of an IT ecosystem and works in virtualized environments, instead of going for any stand-alone solutions... and we're happy about it.

  4. #4
    Quote Originally Posted by HostColor View Post
    We have found it easier to use VMware's Automated Fault Tolerance as it is a part of an IT ecosystem and works in virtualized environments, instead of going for any stand-alone solutions... and we're happy about it.
    This method may keep a VM available (if a host fails, it eliminates the outage time that would otherwise be spent restarting the VM on a new host), but it does not scale out for performance.

  5. #5
    Hi @Alec, thanks for your remarks. The topic is load balancing. Should a load-balancing feature include any scale-out functionality? What is the benefit of adding an instance to, or removing one from, a system when it comes to balancing the load for the purpose of achieving continuous availability?

  6. #6
    Quote Originally Posted by StevenAntonucci View Post
    I did a quick (informal) survey of a bunch of hosting companies about how common load balancers were in that environment, and they actually seem to only be used in very large hosting facilities.

    However, a couple of them suggested that I instead look at the colo and data center market? (Duh, seems obvious now!)

    Presumably, anyone running colo or DR/failover has a load balancer... but I'm wondering if the software load balancer has made it into this market, or is it all hardware based F5 stuff (and such)? I read that DevOps was making the software based load balancers more attractive... but is that just hype?

    Thanks!
    The "hardware" stuff is just pre-configured hardware with a fancy UI stapled on that allows you to... wait for it... configure the software

    All load balancers are software load balancers. You can buy a prefab LB like those from F5, or roll your own on your own hardware. If you've never done it before then of course the prefab devices are going to be both faster and easier to set up, whereas the roll-your-own solution can be done on ridiculously cheap hardware. Seriously - you can pick up used Dell 1950s for $30 off eBay or Craigslist and turn them into LBs easily capable of handling several megabits of traffic each.
    "I've seen spam you people wouldn't believe. Routers on fire off the OCs of AGIS. I watched MXes burning in the dark near the Cyberpromo Gateway. All those moments will be lost in time, like tears in rain. TTL=0."

  7. #7
    Quote Originally Posted by HostColor View Post
    We have found it easier to use VMware's Automated Fault Tolerance as it is a part of an IT ecosystem and works in virtualized environments, instead of going for any stand-alone solutions... and we're happy about it.
    You are confusing High Availability with Load Balancing. The two are often paired into the same cluster, but they are completely different components.

    The fault tolerance will bring up a spare server if the primary goes down, but a load balancer evenly distributes resource requests across multiple backend servers.

    I.e., in your VMware example you have one VM running and serving requests - and a "spinning reserve" VM ready to take over immediately if the primary VM crashes.

    In a load-balanced solution you have (as a fer instance) 4 VMs running, on 4 different host nodes, all four of which are serving requests simultaneously. There may also be spinning reserves for those 4 host nodes to provide high availability on the backend, but we're moving into really advanced cluster concepts there.

    A load balancer sits in front of those 4 machines and evenly distributes requests to them so that the load on that cluster is... wait for it... balanced. (I crack myself up, I really do.) If you have 100 visitors to your web site all at the same time, the LB parcels out 25 requests to each of the 4 backend VMs, so that those 100 visitors don't all land on that one VM you've got and crush it into a quivering pile of unresponsive goo.
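    In config terms that whole 4-VM pool is only a few lines - here's a minimal HAProxy sketch purely as an illustration (the IPs are made up, and the global/defaults sections are omitted):

        frontend www
            bind 192.0.2.100:80       # the address everyone actually hits
            default_backend web_pool

        backend web_pool
            balance roundrobin        # each backend sees every 4th request
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check
            server web3 10.0.0.13:80 check
            server web4 10.0.0.14:80 check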
    "I've seen spam you people wouldn't believe. Routers on fire off the OCs of AGIS. I watched MXes burning in the dark near the Cyberpromo Gateway. All those moments will be lost in time, like tears in rain. TTL=0."

  8. #8
    Quote Originally Posted by HostColor View Post
    Hi @Alec, thanks for your remarks. The topic is load balancing. Should a load-balancing feature include any scale-out functionality? What is the benefit of adding an instance to, or removing one from, a system when it comes to balancing the load for the purpose of achieving continuous availability?
    This guy below sums it up. Your version does not help with scale, which is a major use case for load balancers. A properly configured stack could be designed so that the load balancer invokes an API to scale out additional VMs PRN (as needed) to handle increases in traffic, and then destroys VMs as traffic goes down (helpful for utility-billed models) - for example, if a client's site gets to the top of Reddit and sees a huge influx of traffic that requires additional application servers.

    Quote Originally Posted by SneakySysadmin View Post
    You are confusing High Availability with Load Balancing. The two are often paired into the same cluster, but they are completely different components. [...]

  9. #9
    It's very common to see both hardware and software load balancers in a deployment. The hardware boxes are great in that they often have dedicated hardware for things like SSL offload, switching, generating SYN cookies, etc. If you truly want to scale up your infrastructure to handle huge amounts of traffic, hardware LBs are generally the best route.

    Software "load balancers" are also very common in today's setups even for basic things like running a webserver. For example, let's say you run a PHP app. You install nginx alongside PHP FPM. Nginx routes the PHP traffic to PHP FPM using FastCGI and serves the static content off disk directly. Is it a webserver? Is it a load balancer? Here, nginx gets to claim both. This setup is very popular so it's easy for the "software LB's are super popular" crowd to state.

    So you'll see how it's very common for hardware LBs to hand off to a pool of software LBs in even the most basic application stack. Facebook does this as well. They run (their own) hardware LBs that hand off to a pool of Linux servers running IPVS, which then send requests via IP tunnels to proxygen webservers (which use DSR for return traffic).

    SneakySysadmin is right in that you could buy some cheap PE1950's (but god why? -- power hungry, ancient PCIe, slow memory, slow CPU) to build a freeware-based LB to handle a few megabits of traffic. You won't get more than 150k PPS through it, but it would probably suffice for very basic needs. You can also buy used LBs from F5 and A10 on eBay for a few hundred bucks that do the same thing and are a bit easier to configure.

    Personally, I wouldn't run a small site on only software LB's. They're too easy to DDoS with a simple 5M PPS SYN flood.

  10. #10
    Quote Originally Posted by scurvy View Post
    SneakySysadmin is right in that you could buy some cheap PE1950's (but god why? -- power hungry, ancient PCIe, slow memory, slow CPU) to build a freeware-based LB to handle a few megabits of traffic.
    'Cuz you can literally pick 'em up for $30 a pop... and power hungry? Hmm, gonna beg to differ there, and the systems management tools Dell provides let you set power consumption caps anyway.

    Quote Originally Posted by scurvy
    You won't get more than 150k PPS through it
    Uhm, wat?

    I'm watching a load balancer right now that is handling a sustained 4Mbps and 879 active connections (at the time I looked). This is a Quad Core Opteron circa 2012 with 512MB of RAM.

    You are vastly overestimating the hardware requirements needed by any load balancer.

    Quote Originally Posted by scurvy
    Personally, I wouldn't run a small site on only software LB's. They're too easy to DDoS with a simple 5M PPS SYN flood.
    5 million packets per second is a "simple" DDoS for hardware LBs to handle? Uhm...

    OK, but then again most small site operators can't afford the $20,000 to $50,000 in hardware costs it's going to take to mitigate such a "simple" attack as that.
    "I've seen spam you people wouldn't believe. Routers on fire off the OCs of AGIS. I watched MXes burning in the dark near the Cyberpromo Gateway. All those moments will be lost in time, like tears in rain. TTL=0."

  11. #11
    I've only ever used software load balancers, mainly the free edition of HAProxy (http://www.haproxy.org/) or Nginx for load balancing. The free HAProxy load balancer can easily handle 500,000 unique-IP visitors/day on an old Xeon X34xx without much load at all. Though I'm leaning more toward Nginx for load balancing now that more advanced HTTPS/SSL options are in the Nginx 1.11 branch, i.e. HTTP/2 and dual ECDSA + RSA SSL certificate support, etc.
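    Roughly, an Nginx load-balancing vhost along those lines looks like the sketch below - the certificate paths and backend IPs are placeholders, not a real config:

        upstream app_pool {
            server 10.0.0.21:8080;
            server 10.0.0.22:8080;
            keepalive 32;                       # reuse upstream connections
        }

        server {
            listen 443 ssl http2;
            server_name example.com;

            # nginx 1.11.0+ can load an RSA and an ECDSA certificate side by side
            ssl_certificate     /etc/ssl/example-rsa.crt;
            ssl_certificate_key /etc/ssl/example-rsa.key;
            ssl_certificate     /etc/ssl/example-ecdsa.crt;
            ssl_certificate_key /etc/ssl/example-ecdsa.key;

            location / {
                proxy_pass http://app_pool;
                proxy_http_version 1.1;         # needed for upstream keepalive
                proxy_set_header Connection "";
                proxy_set_header Host $host;
            }
        }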

    HAProxy also has a paid enterprise version https://www.haproxy.com/products/hap...prise-edition/ and an HAProxy ALOHA appliance version https://www.haproxy.com/products/mai...oad-balancers/. I haven't used either of those, though.

  12. #12
    FYI, the HAProxy sizing documentation guide is at http://cbonte.github.io/haproxy-dcon...o-1.6.html#3.5

  13. #13
    Quote Originally Posted by SneakySysadmin View Post
    'Cuz you can literally pick 'em up for $30 a pop... and power hungry? Hmm, gonna beg to differ there, and the systems management tools Dell provides let you set power consumption caps anyway.
    We'll just agree to disagree. The PE1950 is far from power-efficient, and it doesn't have much support for any kind of newer acceleration, offload, etc. It's an old, slow machine. You'll pay way more to run the thing than what you paid to acquire it.

    Quote Originally Posted by SneakySysadmin View Post
    I'm watching a load balancer right now that is handling a sustained 4Mbps and 879 active connections (at the time I looked). This is a Quad Core Opteron circa 2012 with 512MB of RAM.
    That's great. I said 150k PPS. There's a difference between PPS and BPS.

    Quote Originally Posted by SneakySysadmin View Post
    You are vastly overestimating the hardware requirements needed by any load balancer.

    5 million packets per second is a "simple" DDoS for hardware LBs to handle? Uhm...
    No, I'm not. 5M PPS is braindead simple for any hardware LB made in the past decade. They all use basic FPGAs to handle the SYN cookie generation. It doesn't impact the rest of the forwarding performance at all. I didn't pull 5M PPS out of the air either; it's the median-sized SYN flood I saw last year on my network. Any host with a 10gig NIC can put 5M SYNs/sec on the wire without breaking a sweat. It's not a monumentally large number.

    Quote Originally Posted by SneakySysadmin View Post
    OK, but then again most small site operators can't afford the $20,000 to $50,000 in hardware costs it's going to take to mitigate such a "simple" attack as that.
    As I mentioned, eBay is flooded with this stuff. Super cheap when going the used route.

  14. #14
    I like nginx, but there are gotchas with it. It does some real eye-opening things that can cause a ton of pain if you're not aware of them. Here's an example of one: https://news.ycombinator.com/item?id=11217477

    Most of the really good load balancing features are only available in Nginx Plus, which is a paid product. The free product's offerings are very basic (in the proxy module).

  15. #15
    Quote Originally Posted by scurvy View Post
    I like nginx, but there are gotchas with it. It does some real eye-opening things that can cause a ton of pain if you're not aware of them. Here's an example of one: https://news.ycombinator.com/item?id=11217477

    Most of the really good load balancing features are only available in Nginx Plus, which is a paid product. The free product's offerings are very basic (in the proxy module).
    I believe that was fixed in the open source version as of Nginx 1.9.13: https://trac.nginx.org/nginx/ticket/488#comment:8

    The 91c8d990fb45 commit changes the proxy_next_upstream logic so that non-idempotent requests are not retried by default if the request has already been sent to a backend, and introduces an additional proxy_next_upstream non_idempotent parameter to restore the old behaviour if needed.
    http://nginx.org/en/docs/http/ngx_ht..._next_upstream
    non_idempotent
    normally, requests with a non-idempotent method (POST, LOCK, PATCH) are not passed to the next server if a request has been sent to an upstream server (1.9.13); enabling this option explicitly allows retrying such requests;
    and the same applies to fastcgi_next_upstream too: http://nginx.org/en/docs/http/ngx_ht..._next_upstream

    And there are also Lua-based health checks for upstreams: https://github.com/openresty/lua-res...am-healthcheck
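    To make the non_idempotent part concrete, here's a minimal sketch (the upstream name is just an example):

        location / {
            proxy_pass http://app_pool;
            # since 1.9.13 POST/LOCK/PATCH are not retried on the next upstream by
            # default; add non_idempotent only if a replayed request is safe for your app
            proxy_next_upstream error timeout non_idempotent;
        }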
