Other than a faster CPU, which helps BGP convergence time (not an issue anyway if it's used just for core switching), there aren't many compelling benefits aside from the longer support life. Compared with a 3BXL you have the same route-table limit and the same port density, and NetFlow is still very broken. I see no compelling reason to upgrade from a 3BXL... and if you're starting fresh, you might as well look at other, more capable platforms with proper NetFlow/sFlow support.
Fast Serv Networks, LLC | AS29889 | Fully Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U Since 2003 - Ashburn VA + San Diego CA Datacenters
The SUP720 can do VSS as well, although I still don't understand the appeal of taking 2 operationally independent switches and merging them into a common failure domain.
Only the 720-3C and 3CXL have VSS. It's like Virtual Chassis or stacking, you have one config across 2 routers. If one fails, the other keeps running and no one knows the difference as long as you have connections to both from every other device. It's really helpful if you run your VLANs from the core; you don't need HSRP or similar schemes that waste IPs for Layer 3 redundancy. From what I understand, it works basically the same as having 2 sups, but in different chassis, so now you are also protected against backplane failure.
Plus you essentially double your backplane bandwidth, because you have 2 separate backplanes. Not that anyone routing over 1 Tbps is using a 6500.
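For anyone curious what the conversion described above actually involves, here is a minimal sketch of a 6500 VSS setup. The domain ID, port-channel numbers, and interface are illustrative, not from this thread:

```
! On switch 1: define the virtual switch domain and this chassis' role
switch virtual domain 100
 switch 1
!
! Dedicate a port-channel as the virtual switch link (VSL)
interface port-channel 10
 switch virtual link 1
interface tengigabitethernet 5/4
 channel-group 10 mode on
!
! Mirror on switch 2 with "switch 2" and its own VSL port-channel,
! then run this exec command on both chassis to convert:
! switch convert mode virtual
```

After conversion the pair boots as one logical switch with a single configuration, which is what makes the single-config/active-standby behavior described above possible.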
Austin - PeakServers Support PeakServers.com | Web Hosting, VPS, and Dedicated Servers with Fully Managed Support Options
It's like virtual chassis or stacking, you have one config across 2 routers.
Exactly. Common control plane, common failure domain. You hit a software issue that results in something like CEF table corruption, and you've now hosed both switches instead of just having a glitch on one.
Originally Posted by PeakServers-Austin
If one fails the other keeps running and no one knows the difference as long as you have connections to both from every other device.
Only true in a few failure scenarios. If you take the time to test it out, there are a number of scenarios where an issue on one switch results in the complete failure of both switches until the VSS standby supervisor reloads.
Originally Posted by PeakServers-Austin
It's really helpful if you run your VLANs from the core; you don't need HSRP or similar schemes that waste IPs for Layer 3 redundancy.
Except that if you're running HSRP and something locks up the control plane on your primary gateway switch, you can most likely still log into the HSRP standby and adjust the priority to force it active. Once you lose the control plane on the active SUP of a VSS pair, there is nothing you can do remotely to get it back until someone powers down the active switch so the standby asserts control.
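The remote fix described here is just a priority bump on the standby. A hedged sketch, assuming HSRP group 1 on the standby's VLAN interface; the interface, group number, and priority value are illustrative:

```
! On the still-reachable HSRP standby: raise priority above the
! (hung) active router and allow preemption to take over the VIP
interface vlan 100
 standby 1 priority 150
 standby 1 preempt
```

With preemption enabled, the standby claims the virtual IP as soon as its priority exceeds the peer's, restoring the gateway without anyone touching the broken box.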
VSS/vPC/MLAG/etc all carry the common downside that they need to synchronize state across a number of processes in order to function correctly. Spanning-tree topology, adjacency tables, CEF/FIB tables, LACP/PAGP/Channel state, and traffic hashing algorithms for things like port channel load balancing or equal-cost multipath routing (including hash seed synchronization) all need to sync perfectly across the switches or you risk running into an Ethernet flooding scenario and subsequent network meltdown.
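To make the hashing point concrete, here is a small illustration (plain Python, not vendor code) of why hash parameters must stay synchronized across an MLAG/VSS pair. Each peer independently hashes a flow's 5-tuple to pick a port-channel member; if the seeds diverge, the two peers can map the same flow to different members, breaking their shared view of where traffic egresses:

```python
import hashlib

def egress_member(flow, seed, n_members):
    """Pick a port-channel member index for a flow tuple
    (src_ip, dst_ip, sport, dport, proto) using a seeded hash."""
    key = f"{seed}:{flow}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_members

flow = ("10.0.0.1", "10.0.0.2", 49152, 443, "tcp")

# Seeds synchronized: both peers agree on the member, so forwarding
# state stays consistent across the pair.
assert egress_member(flow, seed=7, n_members=4) == egress_member(flow, seed=7, n_members=4)

# Seeds out of sync: for some flows the peers will disagree, which is
# the kind of inconsistency that leads to flooding.
a = egress_member(flow, seed=7, n_members=4)
b = egress_member(flow, seed=8, n_members=4)
print("synced" if a == b else "diverged")
```

The same argument applies to every synchronized table in the list above: the failure mode isn't the hash itself but two boxes silently computing different answers to the same question.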
All of these solutions create configurations that look awesome on paper (Hey! My end devices are connected to 2 upstream switches!), but in the process you create something that is more fragile and delivers less uptime.
Interesting to learn that a service like GitHub runs its own datacenter setup; I thought they were simply hosted in some 3rd-party datacenter.
I guess they've grown exponentially.
Going back to VSS: I've always thought of virtual switching as a scary thing to put into production, and more than scary, I think troubleshooting when things go south can be a nightmare. I just read the complete explanation from GitHub, and well... even with the vendor on the phone they still battled with that situation for hours.
We've seen no reason to upgrade from SUP720's for use as distribution switches. If you're using the 6500's to handle a good amount of BGP, it probably makes sense, as the SUP720's are pretty horrid for that. Yes, the Sup2T does 80 Gbit/sec per slot, but on the 8-port 10 GigE cards we don't have ANY ports over 5 Gbit/sec anyway, and I don't know many people who need to push that much over one line card. Even then, it is cheaper to add another line card than to get Sup2T's and new line cards and/or DFCs.
Last edited by KarlZimmer; 02-06-2014 at 09:09 PM.