Page 1 of 2
Results 1 to 40 of 54
  1. #1
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889

    Cisco vs. Foundry Networks

    Hello there,
    I'm working on my network plans, and my network engineer is set on getting a Cisco 6509 to put behind our Juniper M20. I was just wondering what everyone's opinion here is of the Cisco 6509 vs., say, the Foundry BigIron II+ or similar Foundry products. The Foundry products seem to be cheaper, and a bunch of people seem to use them and really like their performance. The switch would be used to connect colocated clients and to feed the other switches for our shared hosting servers and dedicated servers. What are the advantages and disadvantages of these products? Do you have any suggestions, comments, or things I should look out for with either?

    What management module should I get with the Foundry? We were planning on the SUP2 with the 6509.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  2. #2
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083

    Re: Cisco vs. Foundry Networks

    Originally posted by KarlZimmer
    What management module should I get with the Foundry? We were planning on the SUP2 with the 6509.
    You are not planning to run SUP720? What MSFC will you run? Obviously this is being used for layer 3 customer aggregation, or you'd be using FastIron or BigIron. Are you planning to connect your customers directly to 6509 ports? That's expensive.
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  3. #3
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    Was planning on the WS-X6K-S2-MSFC2; that's what the network engineer said. He was saying we didn't need the SUP720, though I'm not sure why. I normally just go by what he tells me.

    The plan is to only connect larger clients to it, like GigE/FastE feeds for full cabinets (probably half cabinets as well), feeds to the 48 port switches we'll be using for dedicated servers and shared hosting servers, etc. Do you have a suggestion that may work better for this?
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  4. #4
    Join Date
    Nov 2002
    Posts
    2,780
    The SUP720 is insanely expensive since they're not available on the used market. It has to be bought new from Cisco; a fully configured one is in the range of $100K...

    It probably makes more sense to go with the MSFC for now

  5. #5
    Join Date
    Nov 2001
    Location
    New York / New Jersey
    Posts
    753
    Considering it will be almost impossible to get your hands on a SUP720 on the 2nd-hand market, like Jay said, I would definitely lean toward a Foundry box. Any Foundry gear would kick the SUP2's a** any day. The SUP720 is nice, but it will cost you a pretty penny, plus a long wait time. I know some that have been waiting 120+ days for the ones they ordered.

    You would be pretty safe with a BigIron 8K IronCore to push up to 2-3 gigabit without any trouble.


    Keep in mind Cisco is good, but when you push it hard it craps out. The Sup720 can't handle small packets very well.

  6. #6
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    I agree, Foundry can push some traffic. I would suggest you look at the JetCore models as they are prefix-based and not flow-based.
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  7. #7
    Join Date
    Nov 2002
    Posts
    2,780

  8. #8
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    Why not Layer 3?
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  9. #9
    I'd go with the 6509 (6513 if I had space for it). Sup 720 is nice, but if you're not putting it on the edge and running uRPF a Sup II is OK. One nice new blade is the 6748 - it's Sup720 only, but non-oversubscribed 10/100/1000 48 port. How many gigabits are you expecting to be pushing?

    Don't forget Cisco's leasing can be more cost-effective than eBay if you have some credit history.
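For anyone unfamiliar with the uRPF reference above: strict-mode uRPF looks up a packet's source address in the FIB and drops the packet unless the best route points back out the interface it arrived on. A toy sketch of just that check (illustrative Python with made-up interface names, nothing like the actual hardware path):

```python
import ipaddress

# Toy FIB: prefix -> egress interface of the best route (made-up names).
FIB = {
    ipaddress.ip_network("10.1.0.0/16"): "gi1/1",
    ipaddress.ip_network("10.2.0.0/16"): "gi1/2",
    ipaddress.ip_network("0.0.0.0/0"): "gi2/1",   # default, toward upstream
}

def lookup(addr):
    """Longest-prefix match against the toy FIB."""
    ip = ipaddress.ip_address(addr)
    best = max((p for p in FIB if ip in p), key=lambda p: p.prefixlen)
    return FIB[best]

def urpf_strict_permit(src, in_iface):
    """Strict uRPF: permit only if the source routes back out the
    interface the packet arrived on."""
    return lookup(src) == in_iface

# 10.1.5.5 arriving on gi1/1 passes; the same source arriving on
# gi1/2 (spoofed, or just asymmetric routing) gets dropped.
```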

  10. #10
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083
    Originally posted by RackMy.com
    I agree, Foundry can push some traffic. I would suggest you look at the Jetcore models as they are prefix based and not flow based.
    No, that's wrong, and if you'd taken the time to read the architectural notes they published when the JetCore modules were released, you would know that. The JetCore modules use the same basic technology as the IronCore. They still have the same 3-level IPv4 TCAM, though the JetCore CAM is twice as large. Some of the TCAM table entries are now twice as large, too; the layer 4 service load-balancing flow entries, for example.
    The reason JetCore is less vulnerable to random source/destination DDoS is that, upon a TCAM lookup miss on the ingress module, the IronCore modules forward the entire frame to the management module, and the management module then updates the ingress module's TCAM. JetCore forwards only the first 64 bytes of the lookup-miss packet to the management blade to do the same job. I posted on this in great detail a couple months ago. Search my posts for Foundry and you should find some good information.
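Rough numbers make the 64-byte punt concrete (the frame size and miss rate below are my own assumptions, purely illustrative):

```python
# Hypothetical figures: management-CPU load from TCAM-miss punts
# under a flood of brand-new flows.
PKT_BYTES = 1500           # assumed full-size frame
PUNT_BYTES_JETCORE = 64    # JetCore copies only the first 64 bytes
misses_per_sec = 100_000   # assumed new-flow arrival rate

ironcore_bps = misses_per_sec * PKT_BYTES * 8          # whole frame punted
jetcore_bps = misses_per_sec * PUNT_BYTES_JETCORE * 8  # 64-byte head only

print(f"IronCore punt load: {ironcore_bps / 1e6:.0f} Mb/s")
print(f"JetCore  punt load: {jetcore_bps / 1e6:.0f} Mb/s")
# With these assumptions JetCore copies ~23x less data to the CPU,
# though the per-miss lookup and TCAM-update work still remains.
```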
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  11. #11
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    I think you may be looking at the old code; the new code allows ACLs and such to be prefix-based (from what I have read).

    Disclaimer: I am going by what Foundry has told me. If this is not true, then the statement should be "the JetCore blades perform like prefix-based switches".
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  12. #12
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083
    Originally posted by RackMy.com
    I think you may be looking at the old code, the new code allows ACLs and such to be prefix based (from what I have read).
    Foundry states that the JetCore modules perform like prefix-based switches, but if you actually take the time to do some research, as I have, you'll learn that JetCore is essentially the same TCAM as IronCore with some tweaks that they learned in the school of hard knocks. All forwarding in JetCore boxes is done on a flow basis, and the TCAM on ingress line cards is updated, based on the CPU's FIB, configured ACLs, and so on, as each new flow is constructed at the ingress module.
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  13. #13
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    Jeff, thanks for the information. I could probably read the whole white paper and still not fully understand it all. BTW, which white paper are you reading?
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  14. #14
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083
    Originally posted by RackMy.com
    Jeff, thanks for the information. I could probably read the whole white paper and still not fully understand it all BTW, which white paper are you reading?
    To understand the issues discussed in this thread, you really need to first read up on IronCore, and then learn about the changes introduced by JetCore.

    http://www.foundrynet.com/products/l...IronBrief.html
    http://www.foundrynet.com/solutions/...s/JetCore.html
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  15. #15
    based on this, it doesn't look like the optimization removes the problem with the routing algorithm being flow-based; it simply alleviates certain consequences of it under certain conditions. the O-notation efficiency of the algorithm remains approximately the same, and i would venture a guess that in edge cases, such as high pps with one packet per src/dest tuple, the thing would croak anyway. am i off base?

    paul
    * Rusko Enterprises LLC - Upgrade to 100% uptime today!
    * Premium NYC collocation and custom dedicated servers
    call 1-877-MY-RUSKO or paul [at] rusko.us

    dedicated servers, collocation, load balanced and high availability clusters

  16. #16
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083
    Originally posted by rusko
    based on this, it doesnt look like the optimization removes the problem with the routing algorithm being flow-based but simply alleviates certain consequences of it under certain conditions. the O-notation efficiency of the algorithm remains approximately the same and i would venture a guess that in edge cases, such as high pps with packet per src/dest tuple, the thing would croak anyway. am i offbase?
    You are absolutely correct. I discussed this in my post about a month ago (search my posts for Foundry or JetCore or FID). The main reason the JetCore platform survives longer, though, is that the IronCore modules send the ENTIRE PACKET to the management CPU when they encounter a TCAM-miss packet! The JetCore modules send only the first 64 bytes of the packet to the CPU, which means it can perform lookups and TCAM updates far more efficiently.

    When the BigIron layer 3 IronCore boxes first came out, you may remember Foundry sales and engineering folks assuring potential customers that yes, they would perform as well as boxes which use a full FIB to forward every packet. They only learned that wasn't true when random source/dest DDoS events were encountered in the real world. The same is basically true for JetCore, but the bar has been raised substantially.
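The failure mode described here is easy to model: a toy flow switch where a cache hit is the "hardware" path and a miss punts to the CPU. With made-up numbers (20 real sources vs. 10,000 spoofed ones), every spoofed packet is a brand-new flow and every one punts:

```python
import random

class FlowSwitch:
    """Toy flow-based forwarder: TCAM hit = hardware path, miss = CPU punt."""
    def __init__(self):
        self.cam = set()   # installed flow entries, keyed by (src, dst)
        self.punts = 0     # packets escalated to the management CPU

    def forward(self, src, dst):
        if (src, dst) not in self.cam:
            self.punts += 1           # miss: CPU does the lookup...
            self.cam.add((src, dst))  # ...and installs a flow entry

rng = random.Random(1)

# Normal traffic: 10,000 packets from 20 long-lived sources -> 20 punts total.
normal = FlowSwitch()
for _ in range(10_000):
    normal.forward(rng.randrange(20), "server")

# Random-source flood: every packet is a brand-new flow -> every packet punts.
attack = FlowSwitch()
for i in range(10_000):
    attack.forward(("spoofed", i), "victim")

print(normal.punts, attack.punts)
```

Whether the punt carries the whole frame (IronCore) or 64 bytes (JetCore) changes the cost per miss, not the number of misses.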
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  17. #17
    i did read that post, at least the post i think you meant =] cutting down on the size of the data forwarded per flow established seems to alleviate linecard-to-mgmt bandwidth contention and maybe some memory-copying overhead on the mgmt card, which wouldn't even be in my top 10 of problems with flow-based routing algos. so, unless i am off base, all they have done is fix their previously broken flow-based routing implementation, as opposed to somehow enhancing it so it wouldn't suck so much.

    paul
    * Rusko Enterprises LLC - Upgrade to 100% uptime today!
    * Premium NYC collocation and custom dedicated servers
    call 1-877-MY-RUSKO or paul [at] rusko.us

    dedicated servers, collocation, load balanced and high availability clusters

  18. #18
    Join Date
    Aug 2002
    Location
    Trouble will find me!
    Posts
    1,470
    Foundry seems to be used widely, especially among a lot of hosts in Europe and .edu institutions across the world.

    EV1 uses them across both datacenters, and the pictures they put up on their website during the construction of the second facility (which were later removed) showed 2x Foundry BigIron 8000 with redundant MGT3 cards, with the rest of the slots being GigE port blades. (I believe they form the core distribution of the EV1 network.)

    EV1 seems to have a lot of success with Foundry equipment, as an example of a host pushing a lot of traffic.
    Last edited by s.h.a.zz.y; 05-02-2004 at 09:32 PM.
    ^^ IM WITH STUPID!! ^^

    "The only way to overcome fear, is to challenge it head on"
    "The quickest way to get over a woman, is to get under another"

  19. #19
    Join Date
    Aug 2002
    Location
    Trouble will find me!
    Posts
    1,470
    Originally posted by jsw6
    You are absolutely correct. I discussed this in my post about a month ago (search my posts for Foundry or JetCore or FID). The main reason the JetCore platform survives longer, though; is that the IronCore modules send the ENTIRE PACKET to the management CPU when they encounter a TCAM miss packet! The JetCore modules send only the first 64 bytes of the packet to the CPU, which means it can perform lookups and TCAM updates far more efficiently.

    When the BigIron layer 3 IronCore boxes first came out, you may remember Foundry sales and engineering folks assuring potential customers that yes, they would perform as well as boxes which use a full FIB to forward every packet. They only learned that wasn't true when random source/dest DDoS events were encountered in the real world. The same is basically true for JetCore, but the bar has been raised substantially.
    Jeff, would this be a reason not to go with Foundry or can we assume that most layer3 switching equipment in the same range (cisco 6500 and extreme blackdiamond 6800) would falter under the same load?
    ^^ IM WITH STUPID!! ^^

    "The only way to overcome fear, is to challenge it head on"
    "The quickest way to get over a woman, is to get under another"

  20. #20
    While I don't have the tech specifics to compare Foundry to the Extreme right in front of me, I do know that a typical 6800 BD behaves very much the same as the Foundry under heavy DDoS, where the IPFDB starts to turn over very quickly and the CPU utilization skyrockets.

    However, when equipped with either the ARM or MPLS module, the BD has the ability to route based on LPM. Both of these modules act as routers that are one-armed off the switch fabric and can greatly increase the L3 traffic capability of the switch. Extreme markets the ARM with the capacity to route 6 Gb/s of traffic per ARM module, and I believe you can have up to 3 in a chassis.

    Like I said, this is in theory; I have never actually tried this so I can't really speak first hand, but it might be something worth investigating.
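For anyone unfamiliar with LPM: a prefix-based path forwards on the longest matching route rather than installing per-flow entries, so there is no flow state to thrash. A minimal bit-trie sketch of that lookup (toy Python, not how Extreme's ARM module actually works):

```python
import ipaddress

class Node:
    def __init__(self):
        self.children = [None, None]
        self.nexthop = None

class LpmTable:
    """Toy longest-prefix-match table as a bit trie: one lookup per
    packet, no per-flow state to install or age out."""
    def __init__(self):
        self.root = Node()

    def add(self, prefix, nexthop):
        net = ipaddress.ip_network(prefix)
        bits = int(net.network_address)
        node = self.root
        for i in range(net.prefixlen):
            b = (bits >> (31 - i)) & 1
            if node.children[b] is None:
                node.children[b] = Node()
            node = node.children[b]
        node.nexthop = nexthop

    def lookup(self, addr):
        bits = int(ipaddress.ip_address(addr))
        node, best = self.root, None
        for i in range(32):
            if node.nexthop is not None:
                best = node.nexthop   # deepest (longest) match seen so far
            node = node.children[(bits >> (31 - i)) & 1]
            if node is None:
                return best
        return node.nexthop or best

t = LpmTable()
t.add("0.0.0.0/0", "upstream")
t.add("10.0.0.0/8", "core")
t.add("10.9.0.0/16", "customer-9")
print(t.lookup("10.9.1.1"))  # customer-9
```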
    Last edited by DoubleD; 05-02-2004 at 10:58 PM.

  21. #21
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083
    Originally posted by s.h.a.zz.y
    EV1 uses them across both datacenters, and from the pictures they put up on there website during the construction of the second facility (which were later removed) showed 2* Foundry BigIron 8000 with redundant MGT3 cards and the rest were GIGe port blades. (I believe they form the core distibution of ev1 network)

    EV1 seems to have alot of sucess with Foundry equipment as an example of a host pushing alot of traffic.
    Would this be the same EV1 who has six Juniper m20s in their Texas facility? Hrm, I think so.

    Originally posted by DoubleD
    However, when equipped with either the ARM or MPLS module, the BD has the ability to route based on LPM. Both of these modules act as routers that are one-armed off the switch fabric and have the ability to greatly increase the L3 traffic capability of the switch.
    MPLS LSPs are single flows, even though many thousands of what one would call Layer 3 IPv4 flows are encapsulated within the LSP traffic. This works the same as a layer 2 switch (Foundry, for example) forwarding a large number of layer 3 flows as a single layer 2 flow between two MAC addresses which transit the box.

    Originally posted by rusko
    i did read that post, at least the post i think you meant =] cutting down on the size of the data forwarded per flow established seems to be alleviating linecard-to-mgmt banwidth contention and maybe some memory copying overhead on the mgmt card, which wouldnt even be in my top 10 of problems with flow-based routing algos. so, unless i am offbase, all they have done is they have fixed their previously broken flow-based routing implementation as opposed to somehow enhancing it so it wouldnt suck so much.
    Foundry boxes have an 8 Gb/s cross-bar fabric interconnecting all the modules, including the management modules. The problem with IronCore is that entire TCAM-miss packets were copied to the management module's memory and processed by the CPU, as opposed to just the first 64 bytes in JetCore implementations.
    What they have done in JetCore is pretty much what you have described -- they have fixed some very broken aspects of the IronCore architecture, but it's still the same flow-based architecture with few changes from IronCore to JetCore. Some big symptoms have been relieved.

    Flow-based boxes aren't intrinsically bad. The problem is that vendors of flow-based architectures have marketed their boxes to carriers and large hosting companies who do have both DDoS events and large enough amounts of traffic that CAM thrashing is a problem even under normal traffic conditions in the core.
    Procket, who you will begin hearing more and more about over the next couple of years, makes a flow-based box that is capable of operating in these environments for one reason alone: their box can update the ingress modules' CAM with new flow entries as quickly as new, single-packet flows can arrive at wire rate on their biggest (OC-768c) interfaces. Why are they flow-based, then? Who knows. But they are going to make it work, and I suspect they are already making a lot of other vendors of flow-based routing platforms feel out-paced.
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  22. #22
    Originally posted by jsw6
    Would this be the same EV1 who has six Juniper m20s in their Texas facility? Hrm, I think so.
    for the sake of completeness, i believe what you meant to convey is that ev1 uses foundries for layer 2, where they excel. we are talking layer 3 here.

    i would be interested in a good real-life comparison of the 65xx and the ironcore and jetcore bigirons in a core-switch layer 3 capacity: both throughput under normal conditions and handling of reasonably large ddos events. my guess is that a combination of a bigiron and an m5 would work best, but it's a hunch.

    paul
    * Rusko Enterprises LLC - Upgrade to 100% uptime today!
    * Premium NYC collocation and custom dedicated servers
    call 1-877-MY-RUSKO or paul [at] rusko.us

    dedicated servers, collocation, load balanced and high availability clusters

  23. #23
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    Interesting stuff. Yes, EV1 does use BigIrons, but I know they had a problem with one of the newer cards. I am not sure if it's the BI MG8 or the BI JetCore (I think it was the MG8). Yes, they also use Juniper.

    I was looking at the release notes of the latest OS, and they have also implemented CAM/CPU protection in the event of over-utilization. From what I have read, you can set a threshold on CAM/CPU utilization to perform an action upon meeting that threshold. The actions are changing the age limit of the CAM entries to a shorter time, drop/flood unknown unicast, and drop/flood unknown multicast.

    Jeff, I am interested in your thoughts. The decrease in the CAM entries' age could help, but I would think it may do more harm than good. (Again, I am not a network gear head, just trying to learn.)
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  24. #24
    decreasing the age is supposed to purge the entries associated with the bogus (spoofed) flows faster. however, this will only help if they can be purged faster than new flows come in. to me, at least in certain situations, this sounds like a recipe for more thrashing.
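A quick toy model of that purge-vs-arrival race (table size and rates are made-up numbers, purely illustrative):

```python
def steady_state_cam(new_flows_per_sec, age_sec, cam_size):
    """Crude steady-state model: each flow entry lives `age_sec` seconds,
    so occupancy is roughly arrival_rate * age, capped at the table size."""
    return min(new_flows_per_sec * age_sec, cam_size)

CAM_ENTRIES = 256_000  # assumed table size, illustrative only

# Normal load sits comfortably inside the table...
normal = steady_state_cam(new_flows_per_sec=2_000, age_sec=60,
                          cam_size=CAM_ENTRIES)

# ...but under a spoofed flood, shortening the age barely moves the needle:
flood_slow_age = steady_state_cam(500_000, 60, CAM_ENTRIES)
flood_fast_age = steady_state_cam(500_000, 5, CAM_ENTRIES)

print(normal, flood_slow_age, flood_fast_age)
# Once arrivals * age exceeds the table size, entries are evicted as fast
# as they are installed -- thrashing, regardless of the aging knob.
```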

    i know you asked for jeff, but what the heck =]

    paul
    * Rusko Enterprises LLC - Upgrade to 100% uptime today!
    * Premium NYC collocation and custom dedicated servers
    call 1-877-MY-RUSKO or paul [at] rusko.us

    dedicated servers, collocation, load balanced and high availability clusters

  25. #25
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083
    As is often the case, I agree with rusko. A lower CAM entry age really isn't helpful when new flows appear quickly enough to take the CPU to 100%. It's unable to populate the ingress module TCAM with new flow entries to keep up.
    Dropping "unknown unicast," as in uRPF filtering, still involves setting up flow entries on the ingress modules; the same applies to multicast flows.
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  26. #26
    Join Date
    Aug 2002
    Location
    Trouble will find me!
    Posts
    1,470
    Originally posted by jsw6
    Would this be the same EV1 who has six Juniper m20s in their Texas facility? Hrm, I think so.
    The original poster did mention he has a Juniper M20 which would sit in front of the switch, hence the reason I mentioned it.

    It was just an FYI; I was surprised to see them still using the MGT3 in their new facility.
    Last edited by s.h.a.zz.y; 05-03-2004 at 08:05 AM.
    ^^ IM WITH STUPID!! ^^

    "The only way to overcome fear, is to challenge it head on"
    "The quickest way to get over a woman, is to get under another"

  27. #27
    Join Date
    Dec 2000
    Location
    Indianapolis, IN
    Posts
    1,748
    If you want a good core switch that does layer 3, you might want to look at Extreme as well: http://extremenetworks.com/libraries...roducts/bd.asp

    We are putting a fully loaded 6808 in tomorrow and another one in our other DC. They are very nice switches and can even act as a router if needed.

  28. #28
    Join Date
    Aug 2002
    Location
    Trouble will find me!
    Posts
    1,470
    Originally posted by Vortech
    If you want a good core switch that does layer 3 you might want to look at Extreme as well http://extremenetworks.com/libraries...roducts/bd.asp

    We are putting a fully loaded 6808 in tomorrow and anther one in our other DC. They are very nice switchs and can even act as a router if needed..
    The BD has the same issues as Foundry on layer 3; it does not perform as well. At L2 it is a very good switch.

    Also remember the BigIron supports bi-directional rate-shaping on a single port; if you want to do the same on the BD you'll need an extra (empty) loopback port.

    The BigIron has a better suite of features than the BD, last time I checked.
    ^^ IM WITH STUPID!! ^^

    "The only way to overcome fear, is to challenge it head on"
    "The quickest way to get over a woman, is to get under another"

  29. #29
    Join Date
    Dec 2000
    Location
    Indianapolis, IN
    Posts
    1,748
    I thought they both did layer 3 and BGP pretty well. We plan to put our BD in tomorrow... I guess we will be able to tell then.

    I hope it works in our case... lol. It will suck pulling out all of our Cisco stuff to upgrade to the BD and then have it not be able to do the L3 as well. Only one way to find out, I guess.

  30. #30
    Join Date
    Aug 2002
    Location
    Trouble will find me!
    Posts
    1,470
    Originally posted by Vortech
    I thought they both did Layer3 and BGP better good. We plan to put our BD in tomorrow.. I guess we will be able to tell then..

    I hope it works in our case... LoL It will suck pulling out all of our cisco stuff to upgrade to the BD and then it not be able to do the L3 as well.. Only one way to find out I guess..
    They do both do layer 3, and pretty well, but once you begin to add QoS and enable features, from what I have heard the performance drops a fair bit. As the previous posts suggest, under a heavy load of packets/DDoS the CPU will overload due to the way they are engineered.

    Layer 2 you will have no problems with; they are very, very good at that. Someone around here who uses them may be able to give more insight.
    ^^ IM WITH STUPID!! ^^

    "The only way to overcome fear, is to challenge it head on"
    "The quickest way to get over a woman, is to get under another"

  31. #31
    both extreme's and foundry's bgp implementations leave a lot to be desired. in a few words, don't do it.

    as far as using it for layer 3, pretty much the same concerns as with foundry apply - it is flow-based, and with high pps + a low packet per src/dest tuple ratio it will die a rather spectacular death. whether this is something that matters to you is a different question.

    paul
    * Rusko Enterprises LLC - Upgrade to 100% uptime today!
    * Premium NYC collocation and custom dedicated servers
    call 1-877-MY-RUSKO or paul [at] rusko.us

    dedicated servers, collocation, load balanced and high availability clusters

  32. #32
    Join Date
    Jan 2003
    Location
    Chicago, IL
    Posts
    6,889
    I'm glad I find ways to start such interesting discussions. :-)

    Also, I find all this to be quite informative as I am far from being a network engineer, but this helps me understand some of the products and issues.
    Karl Zimmerman - Steadfast: Managed Dedicated Servers and Premium Colocation
    karl @ steadfast.net - Sales/Support: 312-602-2689
    Cloud Hosting, Managed Dedicated Servers, Chicago Colocation, and New Jersey Colocation
    Now Open in New Jersey! - Contact us for New Jersey colocation or dedicated servers

  33. #33
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    Wouldn't ACLs in place on the ingress port help stop some of the internal-to-external type of random-sourced attacks? JetCore puts these in hardware, so they do not need to touch the CPU.
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  34. #34
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083
    Originally posted by RackMy.com
    Wouldn't ACLs in place on the ingress port help stop some of the internal - external type of random sourced attacks? The Jetcore put these in Hardware so they do not need to touch the CPU.
    No, layer 3 flows have to be created in the TCAM of the ingress module(s) where the traffic is arriving. For this reason, even packets that will be discarded by your configured ACLs will eventually overwhelm the IronCore boxes, and JetCore too at a high enough new flow rate.

    Originally posted by rusko
    both extreme's and foundry's bgp implementations lease a lot to be desired. in a few words, dont do it.
    This is another good point. Not only does Foundry have problems with being a flow-based box unable to create new flows quickly enough to keep up with arriving packets, but they also have some real problems with their BGP speaker. For example, Foundry ignores MED unless you use always-compare-med. Anyone with BGP expertise will recognize that as a big problem which totally negates the usefulness of MED. Some people rewrite MED when they learn routes and use it as another local-preference anyway, so in that configuration it is not a problem; but if you are using MED as a second local-preference, you remove your ability to direct traffic to an upstream provider (or peer, or customer) based on their IGP metric values. That's a big engineering decision, and some folks like the results they get with it, but eventually it bites everyone who does it as they grow.
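For readers who don't live in BGP every day: standard best-path selection compares MED only between routes learned from the same neighboring AS, and always-compare-med forces the comparison across ASes. A toy slice of the decision (route attributes and names are made up; real best-path has many more tie-breakers):

```python
def better(a, b, always_compare_med=False):
    """Toy slice of BGP best-path: local-pref first, then MED.
    Standard behavior compares MED only between routes learned from the
    same neighboring AS; always_compare_med compares it regardless."""
    if a["local_pref"] != b["local_pref"]:
        return a if a["local_pref"] > b["local_pref"] else b
    if always_compare_med or a["neighbor_as"] == b["neighbor_as"]:
        if a["med"] != b["med"]:
            return a if a["med"] < b["med"] else b  # lower MED wins
    return a  # sketch: keep the existing route on a full tie

r1 = {"name": "via-AS100", "local_pref": 100, "neighbor_as": 100, "med": 50}
r2 = {"name": "via-AS200", "local_pref": 100, "neighbor_as": 200, "med": 10}

# Different neighbor ASes: MED is normally not compared at all...
print(better(r1, r2)["name"])                           # via-AS100
# ...but always-compare-med lets r2's lower MED decide:
print(better(r1, r2, always_compare_med=True)["name"])  # via-AS200
```

A box that skips even the same-AS MED comparison (as described above) never lets a neighbor's MEDs influence anything unless you force always-compare-med, which is exactly the complaint.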
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  35. #35
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    Originally posted by jsw6
    No, layer 3 flows have to be created in the TCAM of the ingress module(s) where the traffic is arriving. For this reason, even packets that will be discarded by your configured ACLs will eventually overwhelm the IronCore boxes, and JetCore too at a high enough new flow rate.
    But because it does not have to hit the CPU, it can do it much quicker and handle more pps. IronCore has to forward all packets to the CPU; JetCore does not.
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  36. #36
    Join Date
    Aug 2002
    Location
    Seattle
    Posts
    5,512
    I've been wondering many of the same things myself. My staff and consultants have been recommending a 6509 over a BigIron 4000 for BGP, but the reasons why are somewhat mixed. I've been told anything from problems with BGP on the BigIron to unacceptable performance with our level of traffic (anywhere from 100 to 800 megabit, depending on time of day and the level of DDoS attacks being absorbed).

    Any thoughts?

  37. #37
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    What type of BGP are you looking to do? I have not heard of too many issues with iBGP set-ups; I think it depends on your configuration.

    BTW, the flow model is not that bad; it really depends on your set-up (as with anything).
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  38. #38
    Join Date
    Feb 2004
    Location
    Louisville, Kentucky
    Posts
    1,083
    I'm going to start sending WHT invoices for posting about Foundry, I swear. ;-)
    Your consultants are recommending against using BigIron for BGP because it's not that great in any layer 3 environment. The JetCore modules make some improvements, but yes, all TCAM-miss packets do consume CPU resources on the management module, even if they are filtered by ACLs. Contrary to RackMy.com's statement, the first 64 bytes of every such packet are copied to the CPU, and a flow entry is created in the ingress module's TCAM to discard the packet and any others associated with that flow until it expires.
    Then there's the issue rusko brings up: Foundry's BGP implementation isn't great. It ignores MED unless you use always-compare-med. Tell this to your consultant, and if they don't immediately realize why that is a serious problem, fire them and get a smarter consultant.
    Foundry boxes are great for layer 2 aggregation. They are inexpensive, and the problems of their flow-based architecture don't manifest themselves in that environment. Beyond that, I wouldn't recommend them for any layer 3 application where DDoS events may impact service.
    Jeff at Innovative Network Concepts / 212-981-0607 x8579 / AIM: jeffsw6
    Expert IP network consultation and operation at affordable rates
    95th Percentile Explained Rate-Limiting on Cisco IOS switches

  39. #39
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    Contrary to RackMy.Com's statement, the first 64 bytes of every packet are copied to the CPU, and a flow entry is created on the ingress module's TCAM to discard the packet, and any others associated with that flow until it expires
    Interesting, Foundry has been telling people that not all packets make it to the CPU.

    Wire Speed Policy Based Routing & ACLs
    The new architecture provides line rate policy based routing and filtering in hardware, without requiring that the first packet of the flow goes to the management module. This makes the architecture suitable for environments requiring routing and access policies, yet have very bursty applications.

    Jeff, send me your bill. It's been worth it.
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"

  40. #40
    Join Date
    Apr 2001
    Location
    St. Louis, MO
    Posts
    2,508
    I just talked with an SE at Foundry and he said if you put in an antispoof ACL on the ingress port, the CPU will only see flows that make it past the ACL.
    Mike @ Xiolink.com
    http://www.xiolink.com 1-877-4-XIOLINK
    Advanced Managed Microsoft Hosting
    "Your data... always within reach"
