  1. #1
    Join Date
    May 2009
    Location
    Indonesia
    Posts
    216

    Add more Core 6509 or Nexus 7010

    I need to replace my current 4900 with a new one, and I'm thinking of buying another 6509 or a Nexus 7010 (switching from 1Gbps to 10G).

    Any suggestions would be appreciated.

  2. #2
    Join Date
    May 2006
    Location
    NJ, USA
    Posts
    6,456
    The Nexus models are really catching fire as of late; I keep hearing more and more about them.
    simplywww: directadmin and cpanel hosting that will rock your socks
    Need some work done in a datacenter in the NYC area? NYC Remote Hands can do it.

    Follow my "deals" Twitter for hardware specials.. @dougysdeals

  3. #3
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    We're deploying all 7018s these days.

    The 7010 has vertical slots and front-to-back airflow in a configuration that makes it 21RU.

    The 7018 has horizontal slots and is a side breather like the 6500s, but for the extra 8 slots it's only 25RU.

    The hardware seems to perform as advertised, but NX-OS is still a bit of a nightmare for stability and feature parity with IOS.
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  4. #4
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Or Juniper: the EX8200 series. Juniper has a Cisco replacement program, with a free 8208 chassis, sup, power, etc. if you buy 3 discounted line cards.

    Avoid the Cisco nightmare of going from CatOS to IOS to NX-OS and beyond. Cisco needs to pull its act together, take a lesson from Juniper, and stop changing its CLI around.

    The 6500 is a solid platform, but it's an aging dinosaur that the Nexus is replacing. You really should consider Juniper.

  5. #5
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by Spudstr View Post
    The 6500 is a solid platform, but it's an aging dinosaur that the Nexus is replacing
    For high density 10gig/40gig/100gig, absolutely Nexus is the Cisco chassis of choice.

    The 6500 still has some life left. The SUP-2T will bump all 65xx-E chassis to 80gigs per slot, and the platform will see 40gig optics and further line card development.
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  6. #6
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by spaethco View Post
    For high density 10gig/40gig/100gig, absolutely Nexus is the Cisco chassis of choice.

    The 6500 still has some life left. The SUP-2T will bump all 65xx-E chassis to 80gigs per slot, and the platform will see 40gig optics and further line card development.
    Yes, Cisco just can't seem to decide what they want to do with themselves. Why bother continuing the 6500 after all these years, other than that they have so many deployed and the nightmare of a forklift upgrade would tempt people into other vendors, which they see as a big risk in their assessment?

  7. #7
    Nexus all the way! It is a great platform!

  8. #8
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by Spudstr View Post
    Yes, Cisco just can't seem to decide what they want to do with themselves. Why bother continuing the 6500 after all these years, other than that they have so many deployed and the nightmare of a forklift upgrade would tempt people into other vendors, which they see as a big risk in their assessment?
    For as much as Cisco is a single company, the various product lines all come out of different business units within Cisco. The 6500 BU is actually in competition with the Nexus 7k BU, and the sales teams work to sell products from the various BUs to customers. Interestingly enough, the Nexus 5k and Nexus 7k are separate competing BUs now -- the 5k was supposed to be L2/FCoE, and the 7k was supposed to be the L3 data center platform. Now the 5548 has the Layer3 add-on module, and the 7k is starting to get some FCoE line cards in the works.

    In general, Cisco is targeting the 6500 to be a wiring closet switch for enterprise customers because of its ability to do high-density high-wattage PoE supporting things like mass deployments of video phones.
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  9. #9

    Thumbs down 6500 vs. Nexus

    I have two Nexus and man, what a pain. The vPC technology is buggy and has shut me down twice. Cisco says spend more money; Gold Partners tell me to gut them and go back to 6500s. If you have a lot of money to spend and like the pain of an immature core switch, the Nexus may suit you. The 6500s are stable and proven.

  10. #10
    Join Date
    Dec 2005
    Location
    NYC
    Posts
    428
    Nexus is still somewhat buggy from some firsthand experiences. I'd still go with the 6500s. The multiple BUs are supposedly being cut down and consolidated in the very near future, i.e. the first quarter of next fiscal year (next month). What sort of impact that'll have is questionable. The Nexus is a great platform, but I don't particularly trust it; it feels like it's still somewhat in beta.
    Edge 1, LLC
    http://www.edge1.net | 800.392.2349
    Cisco SMARTnet & Licensing Specialists | Datacenter/Network Design & Management Consulting | Cisco New & Certified Refurb Equipment Sales

  11. #11
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    It's important to distinguish that most of the bugs in the Nexus 7k are software bugs in NX-OS. Even though code quality leaves much to be desired, the data path runs completely in hardware, with the OS only maintaining things like TCAM for the FIB / adjacency tables. In a full production setting, we haven't encountered a data-path-affecting bug in the 24+ months we've been using the platform.

    That said, we specifically avoid the features like vPC and FabricPath which remain unproven in a production setting, and have failure conditions which deliver an unacceptable risk for our network. If L3 fails to converge, usually the worst you run into is some subnets become unreachable. If your L2 link / path protocols fail to converge, it's game over.

    The Nexus 7k has a long development history within Cisco; it was previously called the DC-3 platform, which was always slated to be the evolution of the 6500 platform.

    When it comes down to the Nexus product line and things like the 5k/2k solutions, that's something that's never left the lab for us. Neat concept, ridiculously cheap for Cisco hardware, but most of the simple failure scenarios result in critical outages. For example, if you try the Cisco deployment model of linking a 2k to a pair of upstream 5ks and you shut off the links to one of the 5ks, at least 50-60% of the time the 2k will drop offline and won't come back unless you dump the configs on both upstream 5ks and rebuild it from scratch. Scary stuff.
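    For reference, the 2k has no configuration of its own; it's just fabric ports defined on the parent 5k, along the lines of the sketch below (the FEX number and interfaces are made up). In the dual-homed design that fabric port-channel additionally carries a vPC shared by both 5ks, which is exactly the state that gets wedged in the failure scenario above.

        feature fex

        fex 100
          pinning max-links 1

        interface Ethernet1/1-2
          switchport mode fex-fabric
          fex associate 100
          channel-group 100

        interface port-channel100
          switchport mode fex-fabric
          fex associate 100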
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  12. #12
    Join Date
    Apr 2002
    Location
    Seattle, WA
    Posts
    955
    Juniper or Force10, IMO.

    Nothing wrong with Cisco, just that these days you get a lot more bang for your buck with Juniper or Force10, especially if the rep knows they are competing against Cisco. I've seen Juniper give away QFX against a Nexus deal. Even if by some miracle Cisco gives you better than 60% off list, Juniper will still beat it with a nicer switch.
    I <3 Linux Clusters

  13. #13
    Join Date
    Dec 2001
    Location
    Atlanta
    Posts
    4,419
    6500 for now. Like Eric said, the 7k is still a new platform. It will be the way to go, but I would give it till 2012 and take a look then. Just keep checking until you hear good stuff. By then there should be plenty of used gear out there as well.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000

  14. #14
    Join Date
    Nov 2010
    Posts
    190
    How many ports do you need?

    E.g. take two HP A5820-24XG-SFP+ switches (JC102A), stack them with direct attach cables, put in 4 power supplies, and add 5 years of full service (UV902E) for ~28,000 € incl. VAT.

    specification sheet: http://h18000.www1.hp.com/products/q.../13791_na.HTML

  15. #15
    Join Date
    Nov 2009
    Location
    Cincinnati
    Posts
    1,583
    Can't go wrong staying with the Cisco 6500, especially with the sup2t out now. We just replaced our 6506/sup720-3bxl combo with 6506-E/sup2t!
    'Ripcord'ing is the only way!

  16. #16
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by Visbits View Post
    We just replaced our 6506/sup720-3bxl combo with 6506-E/sup2t!
    That's a neat trick considering all orders are currently in New Product Hold (NPH) for the SUP-2T until first customer ship in October.

    The only SUP-2T hardware right now is early field trial and demo gear.
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  17. #17
    Join Date
    May 2006
    Location
    NJ, USA
    Posts
    6,456
    Quote Originally Posted by spaethco View Post
    That's a neat trick considering all orders are currently in New Product Hold (NPH) for the SUP-2T until first customer ship in October.

    The only SUP-2T hardware right now is early field trial and demo gear.
    Ouch..
    simplywww: directadmin and cpanel hosting that will rock your socks
    Need some work done in a datacenter in the NYC area? NYC Remote Hands can do it.

    Follow my "deals" Twitter for hardware specials.. @dougysdeals

  18. #18
    Join Date
    Nov 2009
    Location
    Cincinnati
    Posts
    1,583
    Quote Originally Posted by spaethco View Post
    That's a neat trick considering all orders are currently in New Product Hold (NPH) for the SUP-2T until first customer ship in October.

    The only SUP-2T hardware right now is early field trial and demo gear.
    Didn't say we had it installed; it's all on order :-)

    <<snipped>>
    Last edited by bear; 08-03-2011 at 02:11 PM.
    'Ripcord'ing is the only way!

  19. #19
    Join Date
    Apr 2002
    Location
    Seattle, WA
    Posts
    955
    Quote Originally Posted by Visbits View Post
    Didn't say we had it installed; it's all on order :-)

    <<snipped>>
    You sort of did say you installed it. Generally speaking, past tense means it's past, not future.

    i.e. "replaced" means it's all done; "replacing" means it's on order.
    Last edited by bear; 08-03-2011 at 02:11 PM.
    I <3 Linux Clusters

  20. #20
    Join Date
    May 2006
    Location
    NJ, USA
    Posts
    6,456
    Quote Originally Posted by Visbits View Post
    Didn't say we had it installed; it's all on order :-)

    <<snipped>>
    I read it as you installed it as well. <<snipped>>
    Last edited by bear; 08-03-2011 at 02:12 PM.
    simplywww: directadmin and cpanel hosting that will rock your socks
    Need some work done in a datacenter in the NYC area? NYC Remote Hands can do it.

    Follow my "deals" Twitter for hardware specials.. @dougysdeals

  21. #21
    Join Date
    Nov 2009
    Location
    Cincinnati
    Posts
    1,583
    When you install a lot of routers and networking equipment, you consider the combination "replaced" in your mindset/order template.

    "Replace" would be used in the *consideration* process. You guys must not deal with enough hardware.

    1. I think we will replace our 6506 with -E and Sup2Ts.

    2. We have replaced our 6506 with -E and Sup2T.

    3. We are in the process of replacing our 6506.

    I think option #3 is clearer than the other 2 in this situation.
    'Ripcord'ing is the only way!

  22. #22
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    I think it's disingenuous to rave about hardware that you haven't tested or deployed (while implying with your wording that you have them in operation) -- that's all. This is going to require a completely new code train for the 6500 in 12.2(50) while everything existing currently is at 12.2(33).

    One of the key downsides to the SUP-2T is that it still doesn't support In Service Software Upgrades (ISSU) the way the Nexus platform does. For the N7k and the new 5548/5596 switches, anything that doesn't require an EPLD update can take a software upgrade with zero downtime, because unlike the 6k architecture the Nexus allows the control plane to be frozen and resumed.
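    For anyone who hasn't run one, the whole ISSU is a single "install all" pointing at the new kickstart and system images; a rough sketch (the image filenames here are just placeholders):

        ! preview what the upgrade will touch (flags anything that needs an EPLD update or a reload)
        n7k# show install all impact kickstart bootflash:kickstart-image.bin system bootflash:system-image.bin

        ! then run the actual in-service upgrade
        n7k# install all kickstart bootflash:kickstart-image.bin system bootflash:system-image.bin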
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  23. #23
    Join Date
    Nov 2009
    Location
    Cincinnati
    Posts
    1,583
    Quote Originally Posted by spaethco View Post
    I think it's disingenuous to rave about hardware that you haven't tested or deployed (while implying with your wording that you have them in operation) -- that's all. This is going to require a completely new code train for the 6500 in 12.2(50) while everything existing currently is at 12.2(33).

    One of the key downsides to the SUP-2T is that it still doesn't support In Service Software Upgrades (ISSU) the way the Nexus platform does. For the N7k and the new 5548/5596 switches, anything that doesn't require an EPLD update can take a software upgrade with zero downtime, because unlike the 6k architecture the Nexus allows the control plane to be frozen and resumed.
    Unlike your data center, everything in our environment is designed around redundancy, and we can take a core router offline for firmware upgrades without a care in the world.

    Dual Sup Dual Routers, do it once do it right!
    'Ripcord'ing is the only way!

  24. #24
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by sailor View Post
    6500 for now. Like Eric said, the 7k is still a new platform. It will be the way to go, but I would give it till 2012 and take a look then. Just keep checking until you hear good stuff. By then there should be plenty of used gear out there as well.
    I agree completely; we've just seen too many issues with the Nexus, while the 6500s are great, solid boxes, though due to their limitations we've been decreasing their workload and only using them for distribution instead of core. We just haven't been impressed with the Nexus all that much and have been looking to Brocade and Juniper to fill the gaps.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  25. #25
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by Visbits View Post
    we can take a core router offline for firmware upgrades without a care in the world.
    It's a big deal when your 6500s are access switches, and the attached systems are the back-end data warehouses for EMR. Even with geographically distributed resources, we're still understandably touchy about taking certain things offline.

    Quote Originally Posted by KarlZimmer View Post
    we've just seen too many issues with the Nexus
    On which Nexus platform? With which features? We see a lot of issues with the new feature crap (vPC, FeX/Nexus2k, FCoE) and with some management functions (SNMP issues mostly).

    We haven't encountered any data forwarding issues with standard configuration and routing on the n7k (ie, treat it like a big 6500, don't turn on any special features) -- I'm just curious if others have.
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  26. #26
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by spaethco View Post
    On which Nexus platform? With which features? We see a lot of issues with the new feature crap (vPC, FeX/Nexus2k, FCoE) and with some management functions (SNMP issues mostly).

    We haven't encountered any data forwarding issues with standard configuration and routing on the n7k (ie, treat it like a big 6500, don't turn on any special features) -- I'm just curious if others have.
    It was about a year ago that we tested them. Had both 5k and 7k in testing. Basically, it seemed most everything that made them worthwhile to switch to had odd bugs/issues, so then there was no reason to switch. It also meant learning a completely new CLI, and when we already have people familiar with IOS, Brocade, and Juniper, that is a hard addition/change to make as well.

    If the idea is to not turn on any of the added and/or special features then what is the point? If you're using them as just access switches, what is the point? I guess there are some select cases where they make sense, but I don't see how it makes sense in most cases.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  27. #27
    Join Date
    Nov 2009
    Location
    Cincinnati
    Posts
    1,583
    Quote Originally Posted by spaethco View Post
    It's a big deal when your 6500s are access switches, and the attached systems are the back-end data warehouses for EMR. Even with geographically distributed resources, we're still understandably touchy about taking certain things offline.
    You should run dual access switches, and each distribution switch should have a link to each access switch. We run 2x 1G in a PortChannel to each access switch and use OSPF+iBGP to balance the traffic out of the network.

    On 90% of our network we even utilize bonding to the server, with 2 switches per cab for redundancy. I guess our workload/environment isn't typical, but dual access switches still make sense to me.
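    Nothing exotic on the config side, either; per access switch it's basically a 2x1G LACP bundle as a routed uplink with OSPF across it, roughly like this (interface numbers and addressing are made up, and the iBGP piece is left out):

        interface range GigabitEthernet1/47 - 48
         description uplink to dist-1
         no switchport
         channel-group 10 mode active
        !
        interface Port-channel10
         description routed uplink to dist-1
         no switchport
         ip address 10.0.10.2 255.255.255.252
        !
        router ospf 1
         passive-interface default
         no passive-interface Port-channel10
         network 10.0.10.0 0.0.0.3 area 0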
    'Ripcord'ing is the only way!

  28. #28
    Join Date
    May 2006
    Location
    NJ, USA
    Posts
    6,456
    Quote Originally Posted by Visbits View Post
    You should run dual access switches, and each distribution switch should have a link to each access switch. We run 2x 1G in a PortChannel to each access switch and use OSPF+iBGP to balance the traffic out of the network.

    On 90% of our network we even utilize bonding to the server, with 2 switches per cab for redundancy. I guess our workload/environment isn't typical, but dual access switches still make sense to me.
    SpaethCo's setup (from what I have heard) is far more complex and detailed than yours will ever be.


    Quote Originally Posted by Visbits View Post
    Unlike your data center, everything in our environment is designed around redundancy, and we can take a core router offline for firmware upgrades without a care in the world.

    Dual Sup Dual Routers, do it once do it right!
    still loling at your advice
    simplywww: directadmin and cpanel hosting that will rock your socks
    Need some work done in a datacenter in the NYC area? NYC Remote Hands can do it.

    Follow my "deals" Twitter for hardware specials.. @dougysdeals

  29. #29
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by KarlZimmer View Post
    If the idea is to not turn on any of the added and/or special features then what is the point?
    The 6500 platform will never have 100gig interfaces, will never support converged IO because it doesn't have a central arbiter to support DCE featuresets, and will never support ISSU.

    Today you can get a 32 port 10gig blade for the N7k that has 230gbps of backplane attachment. The backplane on the N7k is fully upgradable with hot-swap fabric modules, as well.

    Those are generally reasons enough for us to invest in the platform -- it has a future for higher traffic demands.

    Quote Originally Posted by Visbits View Post
    You should run dual access switches, and each distribution switch should have a link to each access switch.
    That's a great idea. Glad we did that, except we're using 4 x 10Gig L3 uplinks from our access switches. In healthcare, we work on a slightly bigger scale than the web hosting market.

    Still, server-side NIC teaming / redundancy doesn't have a track record of being 100% reliable in our environment. It's much easier to schedule maintenance activities when your anticipated impact to production traffic is nil.
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  30. #30
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by spaethco View Post
    The 6500 platform will never have 100gig interfaces, will never support converged IO because it doesn't have a central arbiter to support DCE featuresets, and will never support ISSU.

    Today you can get a 32 port 10gig blade for the N7k that has 230gbps of backplane attachment. The backplane on the N7k is fully upgradable with hot-swap fabric modules, as well.
    I didn't think the Nexus had 100 Gbit/sec cards yet either. If you care about 100 GigE why not go for a platform that is actually shipping 100 GigE cards? The Nexus 7k has converged IO? I only see Ethernet cards. I generally don't buy things counting on features that don't yet exist; it might just be me, but I like seeing that things work as promised without issue before counting on them as an upgrade.

    If ISSU is that big of a deal for you, sure, I guess. We've generally found ways around that being an issue and wouldn't really see it as a reason for an entirely new platform.

    I agree the platform has some promise, but I have yet to see it deliver on most of it, while a lot of the other players seem primed to run laps around Cisco. I'm also not saying that is certainly the case, but the things coming out from Cisco have been a lot less interesting than the things coming out from basically everyone else.

    If you're looking at an entirely new platform, and a new OS, etc. why go with the Nexus over Brocade or Juniper gear if your only concern is long term growth with converged IO and 100 GigE?
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  31. #31
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by Visbits View Post
    You should run dual access switches, and each distribution switch should have a link to each access switch. We run 2x 1G in a PortChannel to each access switch and use OSPF+iBGP to balance the traffic out of the network.

    On 90% of our network we even utilize bonding to the server, with 2 switches per cab for redundancy. I guess our workload/environment isn't typical, but dual access switches still make sense to me.
    That is similar to what we're doing for our cloud and HA setups. That setup works great and we haven't had any issues, even with some odd failures on the switch side. We can also manually move traffic to reboot devices at the distribution or core levels with no noticeable downtime, thus no need for ISSU. On the access switch level the NIC will detect the link down and resume without a hitch; we've not seen a case where that hasn't worked, though we're certainly not covering every case.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  32. #32
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by KarlZimmer View Post
    I didn't think the Nexus had 100 Gbit/sec cards yet either.
    Nobody has a shipping standards-based card yet, but I've seen the 40/100 cards for the N7k in San Jose.

    Quote Originally Posted by KarlZimmer View Post
    If you care about 100 GigE why not go for a platform that is actually shipping 100 GigE cards?
    The bigger issue is I'm still waiting for Tellabs to certify 100GigE interfaces in our DWDM gear. Hashing 50+gbps of traffic sucks, so being able to scale to, say, 2x100G instead of dozens of 10Gig regional links is high on our priority list.

    Quote Originally Posted by KarlZimmer View Post
    The Nexus 7k has converged IO? I only see Ethernet cards.
    Only Ethernet cards are orderable today, true. Still DCE is the key enabling featureset for converged IO -- if you don't have port-to-port bandwidth reservation, it's a non-starter. This is also critical in low latency transaction environments even for IP traffic alone.

    Quote Originally Posted by KarlZimmer View Post
    I agree the platform has some promise, but I have yet to see it deliver on most of it, while a lot of the other players seem primed to run laps around Cisco.
    No doubt there -- the Juniper Qfabric stuff is incredibly compelling, in particular. We tend to be focused more on Cisco hardware because of some key partnerships we have with them.

    Quote Originally Posted by KarlZimmer View Post
    If you're looking at an entirely new platform, and a new OS, etc. why go with the Nexus over Brocade or Juniper gear if your only concern is long term growth with converged IO and 100 GigE?
    I guess it depends on how deeply you're committed to certain proprietary Cisco technologies. Features like Bi-directional Forwarding Detection (BFD) are critical for sub-second routing failover within our environment. Could we engineer around it? Sure. Would it be a pain in the ass? Most likely. In our business, things like TrustSec are also becoming increasingly important to augment our existing 802.1x deployment.
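    (For reference, BFD itself is only a couple of lines on top of the IGP on the IOS side: interface timers plus enabling it under OSPF, roughly like the sketch below, with made-up interface names and timer values.)

        interface TenGigabitEthernet1/1
         bfd interval 250 min_rx 250 multiplier 3
        !
        router ospf 1
         bfd all-interfaces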

    Still, the key reason to go Nexus over 65xx/SUP-2T is investment protection; you're going to have higher densities of big interfaces available to you on the N7k than you will on the 6500. The SUP-2T is great if you're already heavily invested in the 6500 platform. There is still a dark side in that all DFC3s need to be replaced with DFC4s, and early adopters are going to have to go through the minefield of new code bugs on the 6k all over again. We got units from the first shipping batch of VS-SUP720-10G-3C, which meant running 12.2(33)SXH code, and we have a fair number of unique CSC bug IDs that we logged with our initial deployment. I think we're going to let some other folks take the hit on finding all the new service outage opportunities the Cisco folks will introduce in 12.2(50) on the SUP-2T.

    NX-OS was complete crap 2 years ago -- you couldn't do basic things like use access lists to limit connections to vty interfaces for management. SNMP support was sparse, with Cisco attempting to push the industry to monitoring through XML-based data exchange instead. It's still far from perfect, but at least it's mature enough to be used in a production environment now for most configurations. (Again, assuming you don't drink the Cisco marketing Kool-Aid and try to turn on every feature they're selling.)
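    To give one concrete example of how far it's come: restricting vty access, which you simply couldn't do early on, is now the usual ACL-plus-access-class arrangement, roughly like this (addresses are made up):

        ip access-list MGMT-ONLY
          10 permit tcp 10.20.0.0/24 any eq 22
          20 deny ip any any

        line vty
          access-class MGMT-ONLY in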
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  33. #33
    Join Date
    Apr 2002
    Location
    Seattle, WA
    Posts
    955
    I'm going to a customer site on Wednesday to do a VMware install on Juniper QFX. I'm pretty pumped. Working with a VAR for so long, I've drunk a lot of Juniper/Arista/Force10 Kool-Aid and we don't really do anything with Cisco any more. Even at 65% off list, Juniper is crushing Cisco on pricing. Arista switches are just plain badass.

    One of our customers noticed immediate copy speed gains going from Nexus to Arista and is replacing their Nexus core with Arista now.
    I <3 Linux Clusters

  34. #34
    Join Date
    Apr 2002
    Location
    Seattle, WA
    Posts
    955
    Quote Originally Posted by Visbits View Post
    When you install a lot of routers and networking equipment, you consider the combination "replaced" in your mindset/order template.

    "Replace" would be used in the *consideration* process. You guys must not deal with enough hardware.

    1. I think we will replace our 6506 with -E and Sup2Ts.

    2. We have replaced our 6506 with -E and Sup2T.

    3. We are in the process of replacing our 6506.

    I think option #3 is clearer than the other 2 in this situation.
    Wow, you aren't very good at backpedaling. In post 15 you say it's out and you replaced it. I'm pretty sure we play with far more hardware than you.

    Either way, this isn't anything more than simple English, past tense means past. Oh, and get over yourself, your ego is insane in this thread.
    I <3 Linux Clusters

  35. #35
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Quote Originally Posted by spaethco View Post
    Nobody has a shipping standards-based card yet, but I've seen the 40/100 cards for the N7k in San Jose.
    Brocade's cards aren't standards-based? I know they're shipping.

    I guess I agree that there are uses for the Nexus over the 6500s, just not in how we use them. We're no longer using Cisco at the higher levels where the Nexus may be useful for us, since other vendors have been much more impressive lately.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  36. #36
    Join Date
    Nov 2009
    Location
    Cincinnati
    Posts
    1,583
    Quote Originally Posted by x86brandon View Post
    Wow, you aren't very good at backpedaling. In post 15 you say it's out and you replaced it. I'm pretty sure we play with far more hardware than you.

    Either way, this isn't anything more than simple English, past tense means past. Oh, and get over yourself, your ego is insane in this thread.


    Thanks for your comments, they've been extremely beneficial to my life!

    I don't have time to PLAY with hardware; too busy getting things done and making money. Also too busy to practice my "backpedaling" skills, so I'm happy to see you've noticed this!
    'Ripcord'ing is the only way!

  37. #37
    Join Date
    Apr 2011
    Posts
    44
    Stay with the 6500 for now. The NX-OS still has a lot of bugs; a lot of people complain about the bugs and errors.

    One of the major IX exchanges in North America switched to them a few months back and is continuously having OS issues.

    That being said, Cisco works hard to fix the issues. But if you want to be safe, and your config will work in the 6500 chassis, stay with it!

  38. #38
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Quote Originally Posted by Susan Doran View Post
    The NX-OS still has a lot of bugs; a lot of people complain about the bugs and errors.
    We do regular bug scrubs and testing with the gear -- most of the issues relate to things like vPC. Incidentally enough, you run into the same issues on the 6500 if you run VSS.

    Cross-chassis port-channels are a road to hell paved with good intentions. Architecturally, it makes all kinds of sense to implement an L2 port channel between a pair of upstream switches to have connectivity survivability in the case of an upstream switch failure. The reality is when you're trying to glue this crap together in software like they do in VSS and vPC, you're setting yourself up for a really bad day in your future. The best case scenario with either of those options is if the neighbor switch just flat out loses power and goes completely dead; the protocols are designed to deal with a clean failure like that. The problem, of course, is that this stuff never fails "clean" - you hit a bug, which triggers some failure state that the process doesn't know how to deal with, you hit race conditions fighting over which chassis should be master, and the whole thing melts down. In the interest of setting up an architecture that is "high availability" you actually create something that is more prone to failure.
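    To put that in concrete terms, even a bare-bones vPC setup means both chassis have to stay in sync on a peer-keepalive, a peer-link, and a vPC ID for every downstream port-channel; a minimal sketch (domain number, addresses, and port-channel numbers are made up) looks roughly like:

        feature vpc

        vpc domain 10
          peer-keepalive destination 192.0.2.2 source 192.0.2.1

        interface port-channel1
          switchport mode trunk
          vpc peer-link

        interface port-channel20
          switchport mode trunk
          vpc 20

    The downstream switch or server just sees one LACP bundle; it's the two parents that have to keep all of that state agreed in software, which is where the ugly failure modes live.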

    The other issue we've seen is high CPU utilization with NetFlow export. No surprise there - you see the same thing on the 6500s depending on what level of detail you're looking to export.

    The hot NX-OS specific bug we're tracking now is a memory leak in 5.1(3) that triggers a sup reload after several weeks. This is a non-impacting event with redundant sups, but it's still not good.

    Quote Originally Posted by Susan Doran View Post
    One of the major IX exchanges in North America switched to them a few months back and is continuously having OS issues.
    It would be really great if specific bugs (or at least a general description of the *type* of issue) could be stated so community members could do the appropriate research.

    Selfishly, we run 7018s in production so we have a vested interest in making sure these are stable switches. We test for everything we can in the lab, but obviously we're not going to hit every failure scenario that might come up in the wild. If we know what to look for, we can try and replicate the failure in our lab and use our leverage as a large customer to try to push Cisco to get things fixed.

    I don't want to knock the 6500 -- it's been a great chassis for us over the years, and it's still the bulk of our environment. (we're managing a bit over 50,000 ports on a couple hundred 6509s) In our case, we get better density on the Nexus because we're able to replace (4) access 6509s with a pair of 7018s in DC IDFs. We also get the flexibility of additional options for 10gig access capacity as we get more serious about building higher density virtualization environments.

    If you're limiting yourself to Cisco as a vendor, either option will work fine. I still think the N7k is much more production ready than people are implying here though. Now, if you want to bring the 5k/2k stuff into the mix, I could write for days about why I won't deploy that solution.
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  39. #39
    We had to make the same choice recently and went with the Cisco 6509-E with 10G (VSS setup) for pricing reasons. But we do not have such high demands on future upgradeability.

  40. #40
    Join Date
    Apr 2011
    Posts
    44
    Everyone has preferences. If you have the money and time, check out all vendors, as they all make excellent products.

    You also need to understand a roadmap 3 to 5 years out on where you need to be with your network demands.

    And your availability of support and spares.

    If this is mission critical, have spares and design a redundant and resilient network.

    That's the best advice. There's always great gear out there, but most of the time budget comes into play.

