  1. #1

    How to aggregate bandwidth from multiple datacenter feeds?


    I am trying to figure out how I can send and receive more than 1 Gbit/s of traffic to and from my cluster of load-balanced, colocated webservers.

    If I get a second gigabit feed from the datacenter, how do I get the internet to route and load-balance traffic to the same public IP address over both gigabit feeds? Fault tolerance would be nice, too.

    The obvious hack is to use DNS to round-robin load balance traffic over each gigabit feed. However, that has issues (caching, fault tolerance, etc...). I'm guessing there are more elegant ways of handling this.

    The way I have my cluster set up now is:
    1) Two LVS directors (primary and failover) with a shared public IP address.
    2) The active LVS director redirects incoming port-80/port-443 traffic to my cluster of backend webservers via "direct server return" (rough sketch after this list).
    3) Return traffic from each webserver goes directly back to the client, bypassing the LVS director.
    4) I'm using the default gateway provided by the datacenter. (What would happen if I got a feed directly from a carrier? Would they provide a default gateway?)
    5) My web sessions are stateless, so I don't need connection tracking (shared-secret MD5 hash for cookie authorization).
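
    For reference, the director side is just IPVS in direct-routing mode -- it amounts to something like this (addresses are placeholders, not my real ones):

        # add the virtual HTTP/HTTPS services on the shared public IP
        ipvsadm -A -t 203.0.113.10:80 -s wlc
        ipvsadm -A -t 203.0.113.10:443 -s wlc
        # add each backend webserver; -g = gatewaying, i.e. direct server return
        ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11:80 -g
        ipvsadm -a -t 203.0.113.10:443 -r 10.0.0.11:443 -g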

    I'm using keepalived on the LVS directors to detect dead webservers.
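
    The health-check side of keepalived.conf looks roughly like this (placeholder addresses again; one real_server block per webserver):

        virtual_server 203.0.113.10 80 {
            delay_loop 6
            lb_algo wlc
            lb_kind DR          # direct server return
            protocol TCP
            real_server 10.0.0.11 80 {
                weight 1
                TCP_CHECK {
                    connect_timeout 3
                }
            }
        }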

    This setup allows me to fully saturate the upstream bandwidth, route around dead webservers, and automatically handle a dead LVS director.

    Now, without using DNS load balancing, how do I handle more than 1 Gbit/s of inbound/outbound traffic with network connection redundancy?

    I've thought about using multiple egress gateways (each a separate gigabit uplink from the ISP) to load-balance outbound traffic. On each webserver:
    1) Enable CONFIG_IP_ROUTE_MULTIPATH in the kernel (2.6).
    2) Install a multipath default route:
       ip route add default scope global \
           nexthop via <GIGABIT GATEWAY IP ADDR #1> dev bond0 weight 1 \
           nexthop via <GIGABIT GATEWAY IP ADDR #2> dev bond0 weight 1

    Doing this on every webserver would balance outbound traffic (via direct server return) over multiple gigabit links and provide fault tolerance (new TCP connections would go out over the good uplinks; existing connections on a bad link would fail). However, that still doesn't handle inbound traffic (to the LVS directors).

    I'm running all of my servers (webservers, LVS directors, email servers, admin server) on a stackable Nortel 5510 gigabit switch (3 of the 8 switches in the stack are populated). The gigabit feed from the datacenter is connected to the same switch. The LVS directors are running an iptables firewall. Since the webservers don't have publicly routable IP addresses (I'm hiding ARP responses for the shared IP on each webserver for "direct server return"), they are not running firewalls.
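
    (For reference, hiding ARP for the shared IP on a 2.6 kernel is typically just the arp_ignore/arp_announce knobs plus the VIP on loopback -- placeholder VIP below:)

        # bind the shared public IP (VIP) to loopback so the webserver accepts traffic for it
        ip addr add 203.0.113.10/32 dev lo
        # don't answer ARP for the VIP, and never use it as the ARP source address
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2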

    Any suggestions on how to increase aggregate inbound bandwidth above 1 Gbit/s without using DNS load balancing?

    Thanks in advance!

    I'm using a flat network topology (is this a bad design? -- I don't need a backend database):
    I wanted to avoid cramming all of my bidirectional traffic through a single firewall or load balancer (hence the use of direct server return and the decision to set up separate firewalls on each server that has a public IP address).

    (Gigabit datacenter feed)
        |
    (Nortel BayStack 5510 gigabit L3 switch stack -- 144 ports now, supports up to 384)
        |
        +-- 2x LVS directors with a shared public IP address and iptables firewalls
        +-- 32x webservers with RFC 1918 private IP addresses and no firewall; dual-bonded gigabit ports
        +-- 1x SSH login/admin server with a public IP address, iptables firewall, and port knocking
        +-- 2x SMTP servers with public IP addresses and iptables firewalls; DNS load balanced
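
    (The "dual-bonded gigabit ports" on the webservers are just standard Linux bonding; on a 2.6 kernel that's roughly the following -- the mode and interface names are examples, matched to whatever the 5510 stack is configured for:)

        # /etc/modprobe.conf
        alias bond0 bonding
        options bonding mode=802.3ad miimon=100

        # bring up the bond and enslave both gigabit ports (placeholder address)
        modprobe bonding
        ifconfig bond0 10.0.0.11 netmask 255.255.255.0 up
        ifenslave bond0 eth0 eth1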

    My two DNS servers are leased servers running at two different datacenters.

  2. #2
    I'm really sorry about the double posting. I thought my last post didn't go through -- forums are running a tad slow today.

    If you're an admin, please delete the older posting.

    Sorry again.

  3. #3
    Get a good layer 3 switch that supports EtherChannel, such as a Cisco 4908-L3.
    Do two fiber GigE uplinks to the carrier, bonded using EtherChannel (rough sketch below).
    Come out of that into fiber GigE cards for your large-bandwidth servers, and into your 5510 for everything else. Install fiber GigE cards in the 'return' webservers.
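
    On most IOS-based Catalysts the channel is just a couple of lines per uplink port -- something like this (interface names are examples; check the 4908-L3 docs for the exact syntax on that box):

        ! bundle the two carrier uplinks into one logical 2 Gbit/s port-channel
        interface GigabitEthernet1
         channel-group 1 mode on
        interface GigabitEthernet2
         channel-group 1 mode on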

    Check out the Intel PRO/1000 MF server adapters (64-bit) -- we do 800+ Mbps per server with them, and they are fairly cheap.
