  1. #76
    Join Date
    Apr 2009
    Location
    UK
    Posts
    824
    I would love some info too!!

    Merry Christmas guys/girls

  2. #77
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,007
    Quote Originally Posted by -Sandro-
    Sorry to revive this old thread, but I'm really interested in this method, i.e. using the webserver and haproxy on the same server. I got really confused about how this works. Can anyone, or mugo, explain this better?
    Sure, it's not complicated at all, really. I won't go into haproxy config or apache config, other than just the ports.

    Basically, all you are doing is running haproxy on port 80, and your web server on port 81 (or whatever port you choose), looping back to your server.

    Make the entry in haproxy.cfg and define your listening port as 80 -

    listen your-webfarm 0.0.0.0:80

    Under this entry, you define your "real" backend www server like so; remember to put in the port your www server is running on (port 81 at 192.168.1.5 for this example) -

    server identifier 192.168.1.5:81 cookie cookiename check inter 2000 fall 3

    So, this makes haproxy listen on port 80, and talk to 192.168.1.5 port 81 for the real backend server.

    From there, you only need to change the default port of your web server from 80 to 81... for Apache, depending on your config, it's either the "Listen" directive or the port in your <VirtualHost> directive. Most often it's just changing

    Listen 80
    to
    Listen 81

    Restart your www server, start or restart haproxy. The server is then simply looping back from port 80 to 81.

    My "usual" use for this is as a last-ditch fallback, where the real www servers are "elsewhere", and if they ALL fail, then use "myself" as a backup. You can mix ports on your real backend servers; just name the ports in your haproxy "server" lines when setting them up.

    To set the server to only work in case all others fail, you can simply add "backup" to the end of your "server" line in haproxy, like:

    server identifier 192.168.1.5:81 cookie cookiename check inter 2000 fall 3 backup

    Then, haproxy only brings this www server up if all your other non-backup (without the backup directive at the end) backend servers are down.
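
    Putting those fragments together, a minimal 1.4-era haproxy.cfg might look something like this. This is just a sketch; the farm name, cookie name ("SERVERID"), and server names are placeholders, and the local fallback is assumed to be the same box's Apache on 127.0.0.1:81 -

        global
            daemon
            maxconn 4096

        defaults
            mode http
            timeout connect 5s
            timeout client 50s
            timeout server 50s

        listen your-webfarm 0.0.0.0:80
            cookie SERVERID insert indirect
            # the "real" backend www server on port 81
            server identifier 192.168.1.5:81 cookie cookiename check inter 2000 fall 3
            # the local web server, used only when everything else is down
            server self 127.0.0.1:81 cookie self check inter 2000 fall 3 backup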

    Hope that helps!

  3. #78
    Join Date
    Oct 2007
    Posts
    237
    Thank you! It's exactly what you said last that confused me, probably because I still didn't get it. If this haproxy server is down, how would it fail over to the others or do anything? I mean, if you're basing your safety on DNS forwarding, what is the point of having haproxy on every server to control traffic if all of them can become points of failure? Can you just put the web servers on them?

    I'm confused about the usage of multiple servers as a whole, not the single-server configuration.
    Last edited by -Sandro-; 12-27-2012 at 02:23 PM.

  4. #79
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,007
    This config assumes your haproxy is up, but the other backend nodes are down. If all other "non-self" www servers go down, you fall back to "self".

    I use more than one haproxy server and DNS failover to take care of the "what if haproxy itself is down" issue.

    In most cases, if your primary haproxy goes down, it's not going to be an "all dead" issue; usually your actual backend servers are still fine, especially if you have done "good" and set your actual web servers up via some other provider / network. Never put all your eggs in one basket.

    For true HA, you want at least two HAProxy servers looking at your backend nodes, which, themselves, should be spread around strategically, geographically AND via different vendors. Then, set up DNS failover via someone like DNSMADEEASY with low TTLs to detect, switch, and notify when an haproxy node goes down. I use a TTL of 90 seconds, and have never had any caching issues in 8 or so years. It just plain works.
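
    For a rough picture of what that low-TTL failover record amounts to: DNSMADEEASY manages it for you, but a BIND-style zone equivalent (placeholder name and IPs) would just be -

        ; 90-second TTL so a failover switch propagates quickly
        www   90   IN   A   203.0.113.10   ; primary haproxy node
        ; when the monitor sees the primary down, the service swaps
        ; this record to the next node, e.g. 203.0.113.20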

  5. #80
    Join Date
    Oct 2007
    Posts
    237
    Quote Originally Posted by mugo
    This config assumes your haproxy is up, but the other backend nodes are down. If all other "non-self" www servers go down, you fall back to "self".

    I use more than one haproxy server and DNS failover to take care of the "what if haproxy itself is down" issue.

    In most cases, if your primary haproxy goes down, it's not going to be an "all dead" issue; usually your actual backend servers are still fine, especially if you have done "good" and set your actual web servers up via some other provider / network. Never put all your eggs in one basket.
    So you'll always have a primary haproxy that handles all the incoming traffic and "redirects" it accordingly to the various backends (thus also applying load balancing), but if this one happens to be down, the DNS will save you by pointing users to one of the other backends. Did I get it right? And one of those backends will temporarily become the primary haproxy, load balancing between the remaining online servers.

    Quote Originally Posted by mugo
    For true HA, you want at least two HAProxy servers looking at your backend nodes, which, themselves, should be spread around strategically, geographically AND via different vendors. Then, set up DNS failover via someone like DNSMADEEASY with low TTLs to detect, switch, and notify when an haproxy node goes down. I use a TTL of 90 seconds, and have never had any caching issues in 8 or so years. It just plain works.
    Is it really a good idea, in terms of performance, to have haproxy and the backends in different locations? The traffic would need to go back through haproxy via the Internet, so you could encounter slowdowns and a certain added latency.

  6. #81
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,007
    To the first point -
    Absolutely correct. Using my selected DNS failover service, I can go up to 5 IPs deep on failover from the DNS standpoint. If you have your servers geographically and provider diverse, it takes so many failures to cause a true outage that, if it actually happened, you would probably not be worrying about your clients / www data at that point.

    To the second point -
    Put it this way: on a simple budget VPS setup, using 4 HAProxy and 6-8 backend VPS Apache servers, I had a client that took over 2 million hits per hour during the Japan earthquake aftermath... the client sells radiation survival supplies. The other three HAProxy servers were there for redundancy, just in case, but the primary held it all. I had to manually switch them up only because I started hitting the individual data transfer limits of the haproxy servers. I keep a couple of high-transfer haproxy servers hanging out in case there is a repeat. As for performance, as long as you have good backend servers, you should have no problems. HAProxy seriously impressed me with its ability through that fiasco. I hadn't been able to make it fail in intentional stress tests, and knew it was very robust, but it also didn't fail on one of the largest spikes I've witnessed. Not bad for a freebie!

    There have been other big stress-test events, but that was by far the largest. It worked so well the client offered up a hefty bonus.

  7. #82
    Join Date
    Oct 2007
    Posts
    237
    Quote Originally Posted by mugo
    To the first point -
    Absolutely correct. Using my selected DNS failover service, I can go up to 5 IPs deep on failover from the DNS standpoint. If you have your servers geographically and provider diverse, it takes so many failures to cause a true outage that, if it actually happened, you would probably not be worrying about your clients / www data at that point.
    Cool, but that makes me think again. Instead of having haproxy balance the load, why not just use a DNS service that does both RR and failover, if that exists? Meaning it would randomly pick servers to serve traffic (not really load balancing, but it would spread the load somehow) and "block" a dead IP if that server goes down while still RRing. Wouldn't this eliminate the need for haproxy?

    Quote Originally Posted by mugo
    To the second point -
    Put it this way: on a simple budget VPS setup, using 4 HAProxy and 6-8 backend VPS Apache servers, I had a client that took over 2 million hits per hour during the Japan earthquake aftermath... the client sells radiation survival supplies. The other three HAProxy servers were there for redundancy, just in case, but the primary held it all. I had to manually switch them up only because I started hitting the individual data transfer limits of the haproxy servers. I keep a couple of high-transfer haproxy servers hanging out in case there is a repeat. As for performance, as long as you have good backend servers, you should have no problems. HAProxy seriously impressed me with its ability through that fiasco. I hadn't been able to make it fail in intentional stress tests, and knew it was very robust, but it also didn't fail on one of the largest spikes I've witnessed. Not bad for a freebie!
    So in this configuration you didn't have www and haproxy on the same VPS? WOW, 2M hits/hour... that must have been fun (not really for the people affected by the earthquake) and very rewarding! What provider do you use? Did you ask the host to combine the monthly BW allowance of the backends with the haproxy's, considering everything has to travel through it?
    What do you mean by switching them? Did you change the configuration on the primary haproxy to allow the others to be used? I suppose not, because you said you were using all the BW, so you had to redirect clients directly to the other haproxy servers. So you changed the DNS settings?

    Out of curiosity, do you also build the websites you host? How do you handle synchronization between all those backends? Where do you store static files? Do you have multiple databases all synced?

    I admire you

  8. #83
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,007
    Quote Originally Posted by -Sandro-
    Cool, but that makes me think again. Instead of having haproxy balance the load, why not just use a DNS service that does both RR and failover, if that exists? Meaning it would randomly pick servers to serve traffic (not really load balancing, but it would spread the load somehow) and "block" a dead IP if that server goes down while still RRing. Wouldn't this eliminate the need for haproxy?



    So in this configuration you didn't have www and haproxy on the same VPS? WOW, 2M hits/hour... that must have been fun (not really for the people affected by the earthquake) and very rewarding! What provider do you use? Did you ask the host to combine the monthly BW allowance of the backends with the haproxy's, considering everything has to travel through it?
    What do you mean by switching them? Did you change the configuration on the primary haproxy to allow the others to be used? I suppose not, because you said you were using all the BW, so you had to redirect clients directly to the other haproxy servers. So you changed the DNS settings?

    Out of curiosity, do you also build the websites you host? How do you handle synchronization between all those backends? Where do you store static files? Do you have multiple databases all synced?

    I admire you
    RR is not a good failover method. I won't go into the DNS client issues, but contrary to popular thinking, a downed node does NOT automatically redirect; you just get a timeout. Some browsers are starting to take this into account, but it's not reliable by any stretch. It's not even good in an AD-integrated DNS domain on a corporate network, so getting various uncontrolled clients to comply is a show-stopper.
    HAProxy was designed to handle the load, and handle it elegantly.
    I've tried most other LBs, and even use some in a corporate environment, but so far I've seen nothing that handles quite as nicely as HAProxy.
    It's better than a single server with DNS failover because it measures your least active node and sends the next request to it, rather than just some random RR chain. If you have really nice, hefty servers, it will send most requests to them until they become slower at responding than your "other" servers that may be in your farm. By design, it takes the load off where the load needs to be taken off.
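
    In haproxy.cfg terms, that behavior comes down to the balance algorithm; a sketch with placeholder names, using "leastconn", which favors the node with the fewest active connections -

        listen your-webfarm 0.0.0.0:80
            # send each new request to whichever server is least busy
            balance leastconn
            server hefty1 192.168.1.5:81 check inter 2000 fall 3
            server hefty2 192.168.1.6:81 check inter 2000 fall 3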

    No, I keep the "normal" operation as haproxy > separate web servers; only if all backends are down may it serve from itself, depending on the client, configuration, and other factors.
    Since the backend www servers are classically on other hosts, I don't ask them to aggregate, but they usually don't do this anyway. What you buy is what you get. Some dedicated / colo services may provide this, though.
    To switch the "main" haproxy server, yes, it's a manual switch done one of two ways: either change the main IP (the world catches up in 90 seconds), or just turn off your main haproxy service, which causes DNS to move to the next good server. I prefer to manually switch IPs when I can. With my use, it's very rare you ever have to touch anything once it's set up, unless you have some global emergency or incident that drives unprecedented traffic to the sites.
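
    The second option is literally just stopping the service and letting the DNS failover monitor do the rest (init-script style command; your distro's may differ) -

        # take the primary out of rotation; the DNS FO monitor sees the
        # health check fail and moves the record to the next good node
        service haproxy stop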

    On designing sites, I'm no designer by any stretch, but I do "design" some WP and other CMS sites, if you want to call it that. Most clients bring their own site, I just provide the level of hosting they require.

    Thanks for the admiration, but it's really just learned behavior from other "solutions" failing miserably.

  9. #84
    Join Date
    Oct 2007
    Posts
    237
    Quote Originally Posted by mugo
    By design, it takes the load off where the load needs to be taken off.
    You are right, I didn't think about that; RR could be pretty useless if the destination is random. But I was talking more about the failover provided by DNS vs. failover done via haproxy.

    Quote Originally Posted by mugo
    Since the backend www servers are classically on other hosts, I don't ask them to aggregate, but they usually don't do this anyway. What you buy is what you get. Some dedicated / colo services may provide this, though.
    I still can't get over the fact that, this way, you would need A LOT of traffic allowance on the haproxy servers, since they will receive the cumulative traffic of the backends.
    It would be great if you could use haproxy without the proxy function itself, meaning the client would receive data directly from the backend... imagine how amazing it would be if you had servers all around the world and could choose the closest location.
    I guess by "geographically" you meant just on different hosts/networks, because with a proxy there's no proximity advantage!

    What about the synchronization question?

  10. #85
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,007
    Actually, you get the failover you rely on mostly from HAProxy, but DNS comes in when the actual HAProxy node is affected. Haproxy watches the web servers, but who's watching HAProxy? DNS FO, that's who.
    Depending on the DNS FO service you use, there may also be geo-DNS options; I know DNSME has this. So, in essence, you could set up multiple haproxy machines, geographically isolated, and send traffic to them based on proximity. That's very doable, although I haven't found the need for it personally (yet...).

    Really, the aggregated traffic on your haproxy node depends on the content you are serving up; if it's straight HTML and not a lot of heavy graphics, etc., then it really doesn't add up to much. Most VPS / dedi servers give 1-2TB of transfer per service (if not, use one that does...), and it still takes quite a while to hit this limit. Even at the 2M hits/hr rate, it took a few days to hit BW limits. My own config takes about 15 min to set up a new host, so you have plenty of time. If you keep a high-BW node in your back pocket, you can switch to it at a moment's notice.
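
    As a back-of-the-envelope check (assuming, say, ~15 KB transferred per hit, on the order of a light HTML page): 2,000,000 hits/hr x 15 KB is roughly 30 GB/hr, or about 720 GB/day, so a 2TB allowance lasts roughly three days at that rate, which squares with "a few days".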
    I've never hit the BW limit in almost a decade of doing this type of service, other than during that one global radiation incident, and I believe it was hit then because this client is "well known" in the industry... he's been on CNN, Storage Wars, etc., so his notoriety is above normal, and the site's been around for freekin' ever.

    There is a similar, more CLI-oriented tool called "balance" that I use on some occasions, which does exactly what you asked about... it simply and transparently redirects ports / IPs, and doesn't transfer the entire payload through itself. It has grouping and failover also, so it can fit very well in certain circumstances. I use it more as a port redirector with failover, but it has yet to fail me either, and I've had one instance running on our internal corporate network for about 6 years now. It just works.
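
    From memory, a balance invocation with a failover group looks something like this (placeholder IPs; the host after the "!" is only used when the first group is all down) -

        # round-robin port 80 between two backends; fail over to the third
        balance 80 192.168.1.5:81 192.168.1.6:81 ! 192.168.1.7:81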

    Synchronization... that is handled per site. The factors are whether the site is DB-driven or straight HTML / PHP files, etc., and of course, how often the client updates.

    For files, I generally do RSA-key-based rsync from a deemed "master" file server on a regular interval, usually 15 min or so.
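
    A sketch of that interval sync, assuming the ssh keys are already exchanged and the hostname and paths are placeholders — a cron entry on each backend, pulling from the master -

        # every 15 minutes, mirror the docroot from the master over ssh
        */15 * * * * rsync -az --delete master.example.com:/var/www/ /var/www/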

    For databases, there are a few options, and it depends on the particular site, how many servers, etc... but I use either master-slave, master-master, straight mysqldumps across the wire, or even scripted dumps / imports if the DB does not change very often. All of that usually gets decided around what the client uses for their site.
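
    The "straight mysqldump across the wire" option can be a one-liner (hypothetical host, database, and credentials) -

        # dump on the master and load into a backend in one pipe
        mysqldump --single-transaction -u syncuser -pSECRET sitedb | ssh backend1.example.com "mysql -u syncuser -pSECRET sitedb"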

  11. #86
    Join Date
    Oct 2007
    Posts
    237
    Thank you for your help mugo
    I learned a lot from you in just a few hours

  12. #87
    Join Date
    Mar 2009
    Location
    Austin Tx
    Posts
    2,007
    No prob Sandro, hope it comes in useful!

  13. #88
    Join Date
    Oct 2007
    Posts
    237
    I sent you a PM


