Thread: OnApp Bonding issue
-
06-04-2013, 04:09 PM #1WHT Addict
- Join Date
- May 2013
- Posts
- 112
OnApp Bonding issue
We are trying to get our install completed with OnApp + Integrated Storage. However, we're running into an issue when we try to bond the two ports. After selecting the two NICs in the hypervisor settings (OnApp) and setting up LACP on the switch, it doesn't work.
Can't figure out if this is an OnApp issue or a switch configuration issue. Unfortunately, OnApp won't be available to assist until 3AM (different time zones).
Anyone know if these switch settings are wrong?
xe-0/0/18 {
    ether-options {
        802.3ad ae0;
    }
}
xe-0/0/19 {
    ether-options {
        802.3ad ae0;
    }
}
ae0 {
    mtu 9216;
    aggregated-ether-options {
        lacp {
            active;
        }
    }
    unit 0 {
        family ethernet-switching {
            port-mode access;
            vlan {
                members VLAN_SAN;
            }
        }
    }
}
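On the Junos side, whether the aggregate ever actually forms can be checked with the standard LACP show commands (a quick sketch using the interface names above):
    show lacp interfaces ae0      # member links should reach "Collecting distributing"
    show interfaces ae0 terse     # ae0 and both xe- members should show up/up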
-
06-04-2013, 04:34 PM #2Aspiring Evangelist
- Join Date
- Nov 2012
- Posts
- 428
There are a few threads about this kicking around...
When you select your interfaces to bond, it creates a bond0 (round robin). So you do NOT set up any port grouping or LACP. You will need to create a VLAN for each link in the bond (rough switch-side sketch below).
Server 1, Link A - VLAN100, Link B - VLAN101
Server 2, Link A - VLAN100, Link B - VLAN101
Server 3, Link A - VLAN100, Link B - VLAN101
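In Junos terms that just means two plain access ports per server and no ae0/LACP at all; a rough sketch in the same style as the config above, using the VLAN names from the example (interface names are only placeholders):
    xe-0/0/18 {
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members VLAN100;
                }
            }
        }
    }
    xe-0/0/19 {
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members VLAN101;
                }
            }
        }
    }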
-
06-04-2013, 04:51 PM #3WHT Addict
- Join Date
- May 2013
- Posts
- 112
Weird, all I did was change the LACP mode on the switch to none instead of access and it started working....
-
06-04-2013, 04:55 PM #4Aspiring Evangelist
- Join Date
- Nov 2012
- Posts
- 428
It really depends on the switch. On most, you will get MAC flapping due to the way bonding works; both interfaces use the same MAC address. That's why the VLANs are sometimes required.
Run an iperf test to make sure you are getting the correct speed.
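Something like this between two nodes on the storage network is enough (the IP is a placeholder; multiple streams help show whether more than one link is actually being used):
    # on one node
    iperf -s
    # on the other node, 4 parallel streams for 30 seconds
    iperf -c 10.0.0.2 -P 4 -t 30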
-
06-12-2013, 02:08 PM #5Junior Guru Wannabe
- Join Date
- Jun 2013
- Posts
- 45
If you are on a paid license, they provide 24/7 support, and from the reviews I have read, their tech support is top notch.
-
06-12-2013, 02:10 PM #6Newbie
- Join Date
- May 2010
- Posts
- 25
Please tell me that you have paid support.
-
06-17-2013, 03:29 AM #7WHT Addict
- Join Date
- May 2013
- Posts
- 112
-
06-17-2013, 10:32 AM #8Aspiring Evangelist
- Join Date
- Nov 2012
- Posts
- 428
-
06-17-2013, 01:51 PM #9WHT Addict
- Join Date
- May 2013
- Posts
- 112
Our bonding seems to be fine. All we did was change the custom config from the default bonding that OnApp sets up (round robin) to something else (LACP for the storage servers, and mode 6 for the HVs).
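On the Linux side those two modes boil down to the bonding driver options below; this is a hedged sketch of plain ifcfg files with example values, and how OnApp's custom config actually injects them may differ:
    # storage servers - LACP / 802.3ad (needs a matching LAG on the switch)
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
    # hypervisors - mode 6 / adaptive load balancing (no switch config needed)
    BONDING_OPTS="mode=balance-alb miimon=100"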
The issue now appears not to be the network. We have 20Gbps going to each server for the storage network. We then created a datastore from 8 SSDs with a 2-copy, 4-stripe setting. That configuration is slower than a single SSD when benchmarked on a test virtual machine. We tested the servers' storage network using iperf, and there is plenty of throughput. Creating 2 copies and 4 stripes with OnApp Storage kills performance. I think we may be better off using our 9271-8iCC RAID controllers and doing the striping there, not in OnApp, then configuring a new datastore in OnApp with just 2 copies and no stripes.
Here is 1 SSD being benchmarked over the network:
running IO "sequential read" test...
result is 389.24MB per second
running IO "sequential write" test...
result is 327.31MB per second
running IO "seq read/seq write" test...
result is 162.65MB/152.18MB per second
running IO "random read" test...
result is 101.65MB per second
equals 26022.5 IOs per second
running IO "random write" test...
result is 139.70MB per second
equals 35764.2 IOs per second
running IO "rand read/rand write" test...
result is 49.46MB/49.69MB per second
equals 12662.0/12721.8 IOs per second
running IO "sequential read" test...
result is 99.57MB per second
running IO "sequential write" test...
result is 199.24MB per second
running IO "seq read/seq write" test...
result is 72.53MB/76.92MB per second
running IO "random read" test...
result is 39.17MB per second
equals 10027.8 IOs per second
running IO "random write" test...
result is 31.56MB per second
equals 8079.8 IOs per second
running IO "rand read/rand write" test...
result is 24.27MB/24.35MB per second
equals 6212.2/6232.8 IOs per second
-
06-17-2013, 05:02 PM #10Disabled
- Join Date
- Aug 2011
- Location
- Dub,Lon,Dal,Chi,NY,LA
- Posts
- 1,839
That's a surprisingly low seq read.
I would have expected 4-5x that for that config.
-
06-17-2013, 10:00 PM #11WHT Addict
- Join Date
- May 2013
- Posts
- 112
-
06-18-2013, 11:40 AM #12Aspiring Evangelist
- Join Date
- Nov 2012
- Posts
- 428
Are you running your disk tests at the hypervisor level or within the VM?
I'd bug the heck outta support until they give you a decent response. Are you speaking with Julian or John? Those are the real storage experts.
Did you take a look at this and report back your findings using their metrics?
https://onappdev.atlassian.net/wiki/...mance+Analysis
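If it helps, a quick way to compare the two levels is a direct-I/O dd run on the HV against the datastore and then again inside the VM (file name and sizes below are just examples):
    # write then read 1GB with the page cache bypassed
    dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct
    dd if=testfile of=/dev/null bs=1M iflag=direct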
-
06-19-2013, 06:01 AM #13WHT Addict
- Join Date
- May 2013
- Posts
- 112
Didn't see that wiki page until now. However, we have pretty much done every test found on that page. We have done multiple single-drive datastore tests from different HVs, and multiple tests on multi-drive datastores from different HVs.
The trend appears to be that the problem shows up when the data is striped across different servers. If the datastore sits on one machine, it goes pretty quickly. When it has to read/write from two machines, it bogs down a LOT.
-
06-19-2013, 09:17 AM #14Web Hosting Master
- Join Date
- Sep 2005
- Location
- London
- Posts
- 2,409
Did you ping the guys from the storage team at OnApp? As Andrew said, John or Julian would love to dig into this and sort it out.
Ditlev Bredahl. CEO,
OnApp.com + Cloud.net & CDN.net
-
06-19-2013, 01:41 PM #15WHT Addict
- Join Date
- May 2013
- Posts
- 112
-
06-19-2013, 07:56 PM #16Aspiring Evangelist
- Join Date
- Nov 2012
- Posts
- 428
He's down here at HostingCon so he may not be quick to action. Open up a ticket with support and they will ensure someone takes a look at it.
-
06-19-2013, 10:45 PM #17WHT Addict
- Join Date
- May 2013
- Posts
- 112
-
06-28-2013, 07:31 PM #18WHT Addict
- Join Date
- May 2013
- Posts
- 112
We were able to make a couple of adjustments on the KVM nodes to increase the disk performance. However, we just can't seem to budge the sequential read, even though everything else is amazing after some tweaks. I guess we can't find the right config to increase 'sequential read'.
running IO "sequential read" test...
result is 74.47MB per second
running IO "sequential write" test...
result is 794.46MB per second
running IO "seq read/seq write" test...
result is 95.31MB/93.56MB per second
running IO "random read" test...
result is 177.17MB per second
equals 45355.2 IOs per second
running IO "random write" test...
result is 82.82MB per second
equals 21202.5 IOs per second
running IO "rand read/rand write" test...
result is 43.43MB/43.32MB per second
equals 11119.2/11089.0 IOs per second
(Performed on a test CentOS VM with: 512MB Ram, 1 Core, 20GB Disk)
The IOs per second is pretty impressive, which is generally the most important thing.
Last edited by CloudVZ; 06-28-2013 at 07:36 PM.
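One knob that often moves sequential read specifically (not necessarily what was changed here) is the read-ahead on the VM's disk device; the device name below is just an example:
    blockdev --getra /dev/vda        # current read-ahead in 512-byte sectors
    blockdev --setra 4096 /dev/vda   # raise it and re-run the sequential read test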
-
07-01-2013, 05:37 AM #19Disabled
- Join Date
- Aug 2011
- Location
- Dub,Lon,Dal,Chi,NY,LA
- Posts
- 1,839
-
07-01-2013, 05:48 AM #20Web Hosting Master
- Join Date
- Sep 2005
- Location
- London
- Posts
- 2,409
-
07-01-2013, 05:52 AM #21Disabled
- Join Date
- Aug 2011
- Location
- Dub,Lon,Dal,Chi,NY,LA
- Posts
- 1,839
Very generous
-
07-08-2013, 04:25 PM #22WHT Addict
- Join Date
- May 2013
- Posts
- 112
Has anyone deployed OnApp + Storage using Juniper switches yet, preferably the Juniper EX4550 (or EX4500) series? If anyone has deployed OnApp with the EX4500 series, are there any problems with the remote path performance of the SAN network? We are trying to troubleshoot it with OnApp, but we're trying to do our part to ensure it's not the switch causing problems for the SAN network. I'd hate to see them spend so much time troubleshooting the SAN remote path performance if it's really a switch configuration issue. Right now, everything seems to be fine with our tests of network performance.
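For what it's worth, the obvious things to check on the EX side are error/drop counters and pause frames on the SAN-facing ports, e.g. (interface name is just an example):
    show interfaces xe-0/0/18 extensive    # errors, drops, MAC pause frames
    show interfaces queue xe-0/0/18        # per-queue tail drops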
-
08-27-2013, 09:31 AM #23Newbie
- Join Date
- Aug 2013
- Location
- Virginia
- Posts
- 12
We are thinking about testing it out. If we run into anything, I can let you know. Of course, by the time we actually start testing, you may have figured it out, but we'll see...
-
08-29-2013, 06:54 AM #24Cloud Engineer
- Join Date
- Jul 2011
- Location
- ATL,DFW,PHX,LAX,CHI,NJ
- Posts
- 700
█ Total Server Solutions
█ OnApp Cloud Solutions, CDN, DNS, Load Balancers, and Hybrid Dedicated Servers
█ Colocation with Colo@
█ Visit us at http://www.totalserversolutions.com/
-
08-29-2013, 04:43 PM #25WHT Addict
- Join Date
- May 2013
- Posts
- 112
█ Cloud IaaS Solutions Provider - www.CloudVZ.com
█ SSD SANs | High IOPs | Public & Private Cloud Solutions | Content Delivery Network
█ Create your virtual datacenter in seconds!