  1. #1

    OnApp Bonding issue

    We are trying to get our install completed with OnApp + Integrated Storage. However, we're running into an issue when we try to bond the two ports. After selecting the two NICs in the hypervisor settings (OnApp) and setting up LACP on the switch, it doesn't work.

    Can't figure out if this is an OnApp issue or a switch configuration issue. Unfortunately, OnApp won't be available to assist until 3 AM (different time zones).

    Anyone know if these switch settings are wrong?


    xe-0/0/18 {
        ether-options {
            802.3ad ae0;
        }
    }
    xe-0/0/19 {
        ether-options {
            802.3ad ae0;
        }
    }

    ae0 {
        mtu 9216;
        aggregated-ether-options {
            lacp {
                active;
            }
        }
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members VLAN_SAN;
                }
            }
        }
    }

  2. #2
    Join Date
    Nov 2012
    Posts
    428
    A few threads about this are kicking around...

    When you select your interfaces to bond, OnApp creates a bond0 (round robin), so you do NOT set up any port grouping or LACP on the switch. You will need to create a VLAN for each link in the bond, for example:

    Server 1, Link A - VLAN100, Link B - VLAN101
    Server 2, Link A - VLAN100, Link B - VLAN101
    Server 3, Link A - VLAN100, Link B - VLAN101
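
    On a Juniper EX like the one in the first post, that roughly translates to removing the ae0/LACP configuration and putting each member port into its own access VLAN. This is a sketch only; the VLAN names and IDs are made up to match the Server 1-3 example above:

    delete interfaces xe-0/0/18 ether-options 802.3ad
    delete interfaces xe-0/0/19 ether-options 802.3ad
    delete interfaces ae0
    set vlans VLAN_SAN_A vlan-id 100
    set vlans VLAN_SAN_B vlan-id 101
    set interfaces xe-0/0/18 mtu 9216
    set interfaces xe-0/0/18 unit 0 family ethernet-switching port-mode access
    set interfaces xe-0/0/18 unit 0 family ethernet-switching vlan members VLAN_SAN_A
    set interfaces xe-0/0/19 mtu 9216
    set interfaces xe-0/0/19 unit 0 family ethernet-switching port-mode access
    set interfaces xe-0/0/19 unit 0 family ethernet-switching vlan members VLAN_SAN_B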

  3. #3
    Weird, all I did was change the LACP mode on the switch to none instead of access and it started working....
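
    (On Junos, "LACP mode none" would correspond to removing the lacp stanza from ae0 so it runs as a static LAG, roughly:

    delete interfaces ae0 aggregated-ether-options lacp
    commit

    That mapping is an assumption; the exact knob depends on how the switch is being managed.)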

  4. #4
    Join Date
    Nov 2012
    Posts
    428
    It really depends on the switch. On most, you will get MAC flapping due to the way bonding works; both interfaces use the same MAC address. That's why the VLANs are sometimes required.

    Run an iperf test to make sure you are getting the correct speed.
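
    Something along these lines works (IPs made up), with a few parallel streams so a single TCP session isn't the limit:

    # on one storage node
    iperf -s

    # on another node: 4 parallel streams for 30 seconds
    iperf -c 10.200.0.11 -P 4 -t 30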

  5. #5
    Join Date
    Jun 2013
    Posts
    45
    If you are on a paid license, they provide 24/7 support, and from the reviews I have read, their tech support is top notch.

  6. #6
    Please tell me that you have paid support

  7. #7
    Quote Originally Posted by chmuri View Post
    Please tell me that you have paid support
    Comes with the license, I'm not aware of any other support license/options.

    We were able to get all the bonding to work after some tweaking in the custom configs for the ********t. The OnApp support team has been pretty helpful while we test/benchmark the service.

  8. #8
    Join Date
    Nov 2012
    Posts
    428
    Quote Originally Posted by CloudVZ View Post
    Comes with the license, I'm not aware of any other support license/options.

    We were able to get all the bonding to work after some tweaking in the custom configs for the ********t. The OnApp support team has been pretty helpful while we test/benchmark the service.
    Just wondering, what type of tweaks did you make?

  9. #9
    Quote Originally Posted by awataszko View Post
    Just wondering, what type of tweaks did you make?
    Our bonding seems to be fine. All we did was change the custom config from the default bonding that OnApp uses (round robin) to something else (LACP for the storage servers, and mode 6 for the HVs).
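
    For anyone wanting to do the same, the underlying Linux bonding options look roughly like this; where exactly they go depends on how OnApp generates the custom config, so treat it as a sketch:

    # storage servers: 802.3ad / LACP (needs a matching LAG on the switch)
    BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"

    # hypervisors: mode 6 (balance-alb), no switch-side configuration required
    BONDING_OPTS="mode=6 miimon=100"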

    The issue now appears not to be the network. We have 20 Gbps going to each server for the storage network. We then created a datastore with 8 SSDs using the 2 copies, 4 stripes settings. That configuration is slower than a single SSD when benchmarked on a test virtual machine. We tested the servers' storage network with iperf, and there is plenty of throughput. Creating 2 copies and 4 stripes with OnApp Storage kills performance. I think we may be better off using our 9271-8iCC RAID controllers and doing the striping there, not with OnApp, then configuring a new datastore in OnApp with just 2 copies and no stripes.
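
    (Rough math on why that might hurt: 2 copies x 4 stripes means every virtual disk spans 8 member drives, so each write has to hit up to 8 drives, most of them on remote nodes, and a sequential read gets split into stripe-sized chunks served by several nodes; per-request network latency rather than raw throughput becomes the limit. That is our working assumption, not something OnApp has confirmed.)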

    Here is 1 SSD being benchmarked over the network:
    running IO "sequential read" test...
    result is 389.24MB per second

    running IO "sequential write" test...
    result is 327.31MB per second

    running IO "seq read/seq write" test...
    result is 162.65MB/152.18MB per second

    running IO "random read" test...
    result is 101.65MB per second
    equals 26022.5 IOs per second

    running IO "random write" test...
    result is 139.70MB per second
    equals 35764.2 IOs per second

    running IO "rand read/rand write" test...
    result is 49.46MB/49.69MB per second
    equals 12662.0/12721.8 IOs per second

    Here is a virtual machine running the benchmarks with an OnApp datastore configuration of 8 SSDs (2 copies, 4 stripes):

    running IO "sequential read" test...
    result is 99.57MB per second

    running IO "sequential write" test...
    result is 199.24MB per second

    running IO "seq read/seq write" test...
    result is 72.53MB/76.92MB per second

    running IO "random read" test...
    result is 39.17MB per second
    equals 10027.8 IOs per second

    running IO "random write" test...
    result is 31.56MB per second
    equals 8079.8 IOs per second

    running IO "rand read/rand write" test...
    result is 24.27MB/24.35MB per second
    equals 6212.2/6232.8 IOs per second

  10. #10
    Join Date
    Aug 2011
    Location
    Dub,Lon,Dal,Chi,NY,LA
    Posts
    1,839
    That's a surprisingly low seq read.

    I would have expected 4-5x that for that config.

  11. #11
    Quote Originally Posted by dediserve View Post
    That's a surprisingly low seq read.

    I would have expected 4-5x that for that config.
    Yep, us too. We won't be able to do 2 copies, 4 stripes yet. It just doesn't seem right. The only thing that is somewhat decent is the IOPS. Didn't expect more SSDs to degrade performance this much.

  12. #12
    Join Date
    Nov 2012
    Posts
    428
    Are you running your disk tests at the hypervisor level or within the VM?

    I'd bug the heck outta support until they give you a decent response. Are you speaking with Julian or John? Those are the real storage experts.

    Did you take a look at this and report back your finding using their metrics?
    https://onappdev.atlassian.net/wiki/...mance+Analysis
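
    If it helps, a quick way to compare the two levels is a direct-I/O sequential read against the same device at the hypervisor, then inside the VM (the device path below is a placeholder; substitute the real one):

    # hypervisor-level sequential read, bypassing the page cache
    dd if=/dev/mapper/ONAPP_DISK of=/dev/null bs=1M count=4096 iflag=direct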

  13. #13
    Quote Originally Posted by awataszko View Post
    Are you running your disk tests at the hypervisor level or within the VM?

    I'd bug the heck outta support until they give you a decent response. Are you speaking with Julian or John? Those are the real storage experts.

    Did you take a look at this and report back your finding using their metrics?
    https://onappdev.atlassian.net/wiki/...mance+Analysis

    Didn't see that wiki page until now. However, we have pretty much done every test found on that page. We have done multiple single-drive datastore tests from different HVs, and multiple tests on multi-drive datastores from different HVs.

    The recurring issue appears to be when the data is striped across different servers. If the datastore sits on one machine, it goes pretty quickly. When it has to read/write from two machines, it bogs down a LOT.

  14. #14
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,409
    Did you ping the guys from the storage team at OnApp? As Andrew said, John or Julian would love to dig into this and sort it out.
    Ditlev Bredahl. CEO,
    OnApp.com + Cloud.net & CDN.net

  15. #15
    Quote Originally Posted by eming View Post
    Did you ping the guys from the storage team at OnApp? As Andrew said, John or Julian would love to dig into this and sort it out.
    Yes, a colleague sent Julian an email a couple of days ago with some benchmark results and how to reproduce them. Perhaps he's busy with 3.1 at the moment.

  16. #16
    Join Date
    Nov 2012
    Posts
    428
    He's down here at HostingCon so he may not be quick to action. Open up a ticket with support and they will ensure someone takes a look at it.

  17. #17
    Quote Originally Posted by awataszko View Post
    He's down here at HostingCon so he may not be quick to action. Open up a ticket with support and they will ensure someone takes a look at it.

    Ah, that makes sense.

  18. #18
    We were able to make a couple of adjustments on the KVM nodes to increase the disk performance. However, we just can't seem to budge the sequential read, even though everything else is amazing after some tweaks. I guess we can't find the right config to increase 'sequential read'.


    running IO "sequential read" test...
    result is 74.47MB per second

    running IO "sequential write" test...
    result is 794.46MB per second

    running IO "seq read/seq write" test...
    result is 95.31MB/93.56MB per second

    running IO "random read" test...
    result is 177.17MB per second
    equals 45355.2 IOs per second

    running IO "random write" test...
    result is 82.82MB per second
    equals 21202.5 IOs per second

    running IO "rand read/rand write" test...
    result is 43.43MB/43.32MB per second
    equals 11119.2/11089.0 IOs per second

    (Performed on a test CentOS VM with 512 MB RAM, 1 core, and a 20 GB disk)

    The IOPS are pretty impressive, which is generally the most important thing.
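
    One candidate we still need to rule out for the sequential read number is the read-ahead on the virtual disk; no promises it is the right knob, and the device name below is assumed (virtio):

    # inside the guest: check, then raise, read-ahead on the virtual disk
    blockdev --getra /dev/vda
    blockdev --setra 4096 /dev/vda
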
    Last edited by CloudVZ; 06-28-2013 at 07:36 PM.

  19. #19
    Join Date
    Aug 2011
    Location
    Dub,Lon,Dal,Chi,NY,LA
    Posts
    1,839
    Quote Originally Posted by chmuri View Post
    Please tell me that you have paid support
    As far as I know, you don't get onapp storage on the free license...
    Last edited by foobic; 07-01-2013 at 05:43 AM. Reason: fixed quote

  20. #20
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,409
    Quote Originally Posted by dediserve View Post
    As far as I know, you don't get onapp storage on the free license...
    OnApp Storage (SANity) is actually included in the free version, but with a 500 GB FUP*
    Ditlev Bredahl. CEO,
    OnApp.com + Cloud.net & CDN.net

  21. #21
    Join Date
    Aug 2011
    Location
    Dub,Lon,Dal,Chi,NY,LA
    Posts
    1,839
    Very generous

  22. #22
    Has anyone deployed OnApp + Storage using Juniper switches yet, preferably the Juniper EX4550 (or EX4500) series? If anyone has, are there any problems with the remote path performance of the SAN network? We are trying to troubleshoot it with OnApp, but we also want to do our part to make sure it's not the switch causing problems for the SAN network. I'd hate to see them spend so much time troubleshooting the SAN remote path performance if there is a switch configuration issue. Right now, everything seems to be fine with our tests of network performance.
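
    One switch-independent check on the SAN path is confirming that jumbo frames make it end to end from every node; 8972 assumes a 9000-byte MTU on the hosts, and the peer IP is made up:

    ping -M do -s 8972 10.200.0.11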

  23. #23
    Join Date
    Aug 2013
    Location
    Virginia
    Posts
    12
    We are thinking about testing it out. If we run into anything, I can let you know. Of course, by the time we actually start testing, you may have figured it out, but we'll see...

  24. #24
    Join Date
    Jul 2011
    Location
    ATL,DFW,PHX,LAX,CHI,NJ
    Posts
    700
    Quote Originally Posted by CloudVZ View Post
    Has anyone deployed OnApp + Storage using Juniper switches yet, preferably the Juniper EX4550 (or EX4500) series? If anyone has, are there any problems with the remote path performance of the SAN network? We are trying to troubleshoot it with OnApp, but we also want to do our part to make sure it's not the switch causing problems for the SAN network. I'd hate to see them spend so much time troubleshooting the SAN remote path performance if there is a switch configuration issue. Right now, everything seems to be fine with our tests of network performance.

    Should be no problem with an EX4550; it has 960 Gbps of switching capacity, full line rate on its 10 Gbps ports.

    Look at your NICs; use Intel for the SAN only! Make sure offloading is enabled and your RX and TX ring buffers are set to the maximum if possible.
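
    Roughly, with the interface name assumed:

    # check current offload settings and the hardware ring limits
    ethtool -k eth2
    ethtool -g eth2

    # raise RX/TX rings toward the reported maximum (4096 is typical for Intel 10GbE)
    ethtool -G eth2 rx 4096 tx 4096
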
    █ Total Server Solutions
    OnApp Cloud Solutions, CDN, DNS, Load Balancers, and Hybrid Dedicated Servers
    █ Colocation with Colo@
    Visit us at http://www.totalserversolutions.com/

  25. #25
    Quote Originally Posted by FRCorey View Post
    Should be no problem with an EX4550; it has 960 Gbps of switching capacity, full line rate on its 10 Gbps ports.

    Look at your NICs; use Intel for the SAN only! Make sure offloading is enabled and your RX and TX ring buffers are set to the maximum if possible.
    It would be nice if the buffers made a difference, but they don't. We even increased the OS buffers (e.g. rmem, wmem, etc.) and it didn't really make a difference. We use Intel 10 GbE NICs. Each server has 20 Gbps for the SAN network, and we have tried every bonding mode possible as well.
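
    For reference, the sort of thing we bumped (values illustrative, not a recommendation):

    # /etc/sysctl.conf
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
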
    Cloud IaaS Solutions Provider - www.CloudVZ.com
    SSD SANs | High IOPs | Public & Private Cloud
    Solutions | Content Delivery Network
    Create your virtual datacenter in seconds!
