  1. #1
    Join Date: Jul 2006 | Location: Detroit, MI | Posts: 1,955

    iSCSI - multipath vs. bonded/teamed NICs

    Does anyone have experience comparing the two? Any comments on why one would be better than the other?



    Kind Regards,

  2. #2
    Join Date: Jul 2006 | Location: Detroit, MI | Posts: 1,955
    Oh C'mon people. I know there are SOME people on this board with more advanced experience.

    No Comments?

    No Suggestions?

  3. #3
    Join Date: Jul 2004 | Location: Island of Oahu, Hawaii | Posts: 671
    The two configurations you want to compare are very different, so it is hard to compare them directly. Multipath implies extra redundancy and a higher cost, whereas the bonded/teamed NIC config does not have multiple paths, just multiple NICs. Also, with the bond config you can configure virtual IPs, so if failover occurs the same address is still used.

    The two configs you want to compare can also be used in combination, giving you bonded NICs and multiple paths, so comparing them is tough since they don't really accomplish the same goal. My feeling is that if you have a failover server configuration you will want to use bonded NICs. But again, if you have the budget and you need the uptime, then redundancy is a must.
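    As a rough illustration of the virtual-IP point, here's a toy model (Python, with made-up interface names and address; a real active-backup bond does this inside the driver):

        # Toy failover model for a bonded/teamed pair behind one virtual IP.
        class Bond:
            def __init__(self, vip, slaves):
                self.vip = vip                            # the address clients keep using
                self.slaves = {s: True for s in slaves}   # slave NIC -> link up?
                self.active = slaves[0]

            def link_down(self, slave):
                """When the active slave fails, traffic moves to a surviving slave,
                but the virtual IP never changes, so initiators don't reconnect."""
                self.slaves[slave] = False
                if slave == self.active:
                    self.active = next(s for s, up in self.slaves.items() if up)

        bond = Bond("192.168.10.50", ["eth0", "eth1"])
        print(bond.vip, "via", bond.active)    # 192.168.10.50 via eth0
        bond.link_down("eth0")
        print(bond.vip, "via", bond.active)    # 192.168.10.50 via eth1 -> same address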

  4. #4
    Join Date: Jul 2006 | Location: Detroit, MI | Posts: 1,955
    How about this setup: comparing multipath and teaming on an iSCSI initiator with three NICs. If you use just multipath, it will/can route across each of the individual NICs, whereas with a teaming solution you have a VIP that you can route the traffic to locally. In this setup they are essentially the same architecture, just different technologies.
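    To make that concrete, here's a rough throughput sketch (Python, purely illustrative: the 1 Gb/s links and a hash-style team are assumptions, and real results depend on the bond mode, hash policy, and whether the target allows multiple sessions per initiator):

        NIC_SPEED_GBPS = 1.0
        NUM_NICS = 3

        def multipath_throughput(num_nics):
            """Multipath: one iSCSI session per NIC, each bound to that NIC's own IP;
            the multipath layer spreads I/O across all active paths, so one host
            can approach the sum of the links."""
            return sum(NIC_SPEED_GBPS for _ in range(num_nics))

        def teamed_throughput(num_nics, concurrent_flows):
            """Teaming/bonding: one virtual IP, and each TCP flow is hashed onto a
            single slave link; one iSCSI session is one flow, so a lone session
            never exceeds one NIC, and extra links only help with more flows."""
            return min(concurrent_flows, num_nics) * NIC_SPEED_GBPS

        print("multipath, one initiator     :", multipath_throughput(NUM_NICS), "Gb/s ceiling")
        print("team, one iSCSI session      :", teamed_throughput(NUM_NICS, 1), "Gb/s ceiling")
        print("team, three concurrent flows :", teamed_throughput(NUM_NICS, 3), "Gb/s ceiling")

    The toy model only says that multipath can spread one host's I/O over every NIC, while a team only helps once there are several distinct flows.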

    Thoughts?

  5. #5
    Join Date: Jul 2004 | Location: Island of Oahu, Hawaii | Posts: 671
    Personally, I prefer the multipath scenario. In the Fibre Channel SAN world this is the preferred method, and with iSCSI it can be done at a much lower cost than Fibre Channel. Everything is pretty inexpensive these days, including the iSCSI initiators.

    I think it might help me understand what you are trying to accomplish if you describe what you want/need from this setup. Are you looking for failover? If so, are you considering true multipath with redundant networks/hardware?

  6. #6
    Join Date: Jul 2006 | Location: Detroit, MI | Posts: 1,955
    We're after throughput. Until 10GigE becomes cost-effective, we are stuck with bonded NICs. I'd like to start exploring alternate configurations for our iSCSI SAN setup that could increase our throughput.



    Kind Regards,

  7. #7
    Join Date: Jul 2004 | Location: Island of Oahu, Hawaii | Posts: 671
    If you are after throughput, then your network is going to have to support speeds higher than 1 Gb/sec. For instance, the links between switches and routers can be a bottleneck: if you have one trunk link between switches, you will be limited to that speed no matter how many NICs you bond together. Also, if you are going to push continuous high iSCSI throughput, it can affect the throughput of the rest of your traffic. Bottom line: if your network is set up to handle more than 1 Gb of throughput, then your bonded NIC scenario is probably the better solution.
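    A quick back-of-the-envelope check makes the point (Python, with made-up link counts and speeds):

        def path_ceiling_gbps(bonded_nics, nic_gbps, trunk_links, trunk_gbps):
            """Effective ceiling for traffic that has to cross an inter-switch trunk:
            the slowest hop wins, no matter how wide the bond is."""
            return min(bonded_nics * nic_gbps, trunk_links * trunk_gbps)

        # 3 x 1 Gb bonded NICs behind a single 1 Gb trunk between switches:
        print(path_ceiling_gbps(3, 1.0, 1, 1.0))   # 1.0 -> the trunk is the bottleneck
        # Same bond with a 4 x 1 Gb trunk (or everything on one switch):
        print(path_ceiling_gbps(3, 1.0, 4, 1.0))   # 3.0 -> the bonded NICs can be used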

  8. #8

    Multiple NICs

    I have two Intel SSR212MA iSCSI SANs with four Gb NICs each, connected to a single, dedicated switch. I'm configuring the SANs as a local cluster (LeftHand SAN IQ), using 2-way replication so if one SAN dies, the volumes remain available. I've been trying to figure out for days which config is the fastest, but I can find no authoritative info, only opinions. I'm finally opting for bonding each SAN's four NICs into a 4 Gb trunk using 802.3ad link aggregation on the SANs and the switch, since my best guess is this will produce the highest performance.

    An interesting thing that many folks don't know is that a single server talking to the SAN has a max speed of 1 Gb even if the server also has multiple teamed NICs. Greater-than-1 Gb throughput only occurs when multiple servers access the SAN simultaneously. I didn't know this before going down this path and doing testing followed by lots of research.
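    For anyone curious why, here is a rough sketch of the hashing behind that limit (simplified from what the Linux bonding driver's 'layer2' transmit hash does; the MAC addresses below are made up):

        def layer2_slave(src_mac, dst_mac, num_slaves):
            """Pick a slave link from the last byte of each MAC address
            (roughly the 'layer2' transmit hash used by 802.3ad bonds)."""
            src = int(src_mac.replace(":", ""), 16) & 0xFF
            dst = int(dst_mac.replace(":", ""), 16) & 0xFF
            return (src ^ dst) % num_slaves

        server_mac = "00:1b:21:aa:bb:01"   # made-up addresses
        san_mac    = "00:1b:21:cc:dd:02"

        # Every frame between this server and this SAN hashes to the same slave,
        # so one host-to-SAN conversation can never exceed a single 1 Gb link.
        print(layer2_slave(server_mac, san_mac, 4))   # same index every time
        print(layer2_slave(server_mac, san_mac, 4))

        # A different server hashes independently, which is why the aggregate only
        # climbs above 1 Gb when several hosts hit the SAN at once.
        print(layer2_slave("00:1b:21:ee:ff:03", san_mac, 4))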
    Thanks,
    Ted Cole
