  1. #1
    Join Date
    Oct 2013
    Posts
    42

    Anyone using OnApp Integrated Storage?

    I'm wondering if anyone is successfully using this with high availability in a production environment.

    We put a new cloud up recently and it's been nothing but one problem after another. We can't even stabilize it enough to see where we're at and make any changes.

Also, a large, reputable hosting company we've talked to indicated that ALL cloud vendors are having trouble delivering on the promise of integrated storage, including VMware and the open-source solutions.

    Would love to hear other people's experiences so we know if we're pushing a boulder uphill or just doing something wrong.

  2. #2
    Join Date
    Aug 2005
    Location
    PA
    Posts
    324
    What part or layer, exactly, does not work? I think the more detail you can give, the better advice you will receive.
    reliable colocation ... Dedicated Servers | Dedicated Server VMs | FAST links to Vitelity.com and Conexiant.net
    patrick@zill.net Cell +1.717.201.3366

  3. #3
    Join Date
    May 2003
    Location
    San Francisco, CA
    Posts
    1,506
    I would suggest avoiding integrated storage, for now, while the rest of the kinks are ironed out. It can be successfully configured and it does work, but there are issues that prevent it from being used to its full potential in a production environment.

Do some searching around WHT; there are a couple of other threads that will offer you further insight into OnApp integrated storage specifically.

As my first-hand experience only involves OnApp, I can't comment on any of the other vendors.
    * GeekStorage.com - Offering awesome website hosting for over 13 years!
    * Shared Hosting * Reseller Hosting * Virtual Private Servers * Dedicated Servers
    * Have questions? Send us an e-mail, we'd love to hear from you!

  4. #4
    Join Date
    Oct 2013
    Posts
    42
Honestly, basically ALL of it. I can't even begin to list the problems; if it isn't one thing, it's another.

I'm not looking so much for solutions to our specific problems as to hear from other people who have tried it, and whether they have been successful. I'd also like to hear what the path to success was, and maybe something about their configs.

Specifically re: OnApp, but more generally anyone running an integrated SAN storage solution under moderate or high load.

  5. #5
    Join Date
    Jul 2010
    Posts
    69
    Quote Originally Posted by xBenx View Post
Honestly, basically ALL of it. I can't even begin to list the problems; if it isn't one thing, it's another.

I'm not looking so much for solutions to our specific problems as to hear from other people who have tried it, and whether they have been successful. I'd also like to hear what the path to success was, and maybe something about their configs.

Specifically re: OnApp, but more generally anyone running an integrated SAN storage solution under moderate or high load.
    Hi there, what version are you using?
    Also, have you spoken with our Support team regarding your experiences?

    Happy to help: caroline@onapp.com
    Caroline Paine
    Commercial Operations Manager @ www.onapp.com
    Inquisitive Foodie @ www.inquisitivefoodie.com

  6. #6
    Join Date
    Jan 2004
    Location
    Pennsylvania
    Posts
    942
Care to list any of the problems? It could be something pretty simple..

We are not running production, but we have a testbed set up for OnApp IS, and while we encountered some issues, there was nothing that wasn't easily overcome. We'll wait for 3.1 and some bug fixes before going production, but I definitely didn't have "too many to list" issues.
    Matt Ayres - togglebox.com
    Linux and Windows Cloud Virtual Datacenters powered by Onapp / Xen
    Instant Setup, Instant Scalability, Full Lifecycle Hosting Solutions

    www.togglebox.com

  7. #7
    Join Date
    Nov 2006
    Location
    Pune, India
    Posts
    1,428
    Quote Originally Posted by TheWiseOne View Post
Care to list any of the problems? It could be something pretty simple..

We are not running production, but we have a testbed set up for OnApp IS, and while we encountered some issues, there was nothing that wasn't easily overcome. We'll wait for 3.1 and some bug fixes before going production, but I definitely didn't have "too many to list" issues.
I am also waiting for OnApp 3.1. Do they have a release date?
    LeapSwitch Networks Pvt. Ltd. - Managed VPS / Dedicated Servers India
    CloudJiffy PaaS - Wordpress Cluster Hosting
    █ Shared, Reseller, VPS, Dedicated Servers, Colocation
    AS132335 - India - USA - Germany - Spain - Portugal - Ukraine

  8. #8
The biggest issue with OnApp IS is the TCP incast problem. That, and a couple of random issues where the IS service would stop on the HVs and the entire cloud could freeze up. You can't boot any of the VMs, even if they're moved to a different HV. The vdisk gets locked on the HV and won't let you migrate the VM to a different HV to boot it. This requires a scheduled reboot of all the hypervisors, since OnApp Level 1-2 support isn't familiar with the issue.

OnApp 3.1 won't fix these issues; they are still working on the bugs and said they will be fixed shortly after 3.1. That covers only the TCP incast; why it randomly locks up VMs is another mystery.
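For anyone unfamiliar with it: TCP incast is the throughput collapse you get when many storage nodes answer one hypervisor at once and overrun the switch buffers, so bursts of synchronized replies turn into packet loss and retransmit stalls. As a rough sketch only (these are stock Linux sysctls, not an OnApp fix; the values are assumptions to benchmark on your own SAN network):
Code:
# Illustrative incast mitigations on the SAN-facing hosts -- values are
# guesses, test before relying on them.
sysctl -w net.ipv4.tcp_ecn=1                        # let switches mark congestion instead of dropping
sysctl -w net.core.rmem_max=16777216                # raise the receive buffer ceiling
sysctl -w net.ipv4.tcp_rmem="4096 262144 16777216"  # min/default/max TCP receive buffer
The real fix has to come from the storage controller pacing its reads, which is what the post-3.1 work mentioned later in this thread is about.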

  9. #9
    Join Date
    Jun 2002
    Location
    Waco, TX
    Posts
    5,623
I have a question prompted by recent problems with large hard drives.

I've been seeing more and more bad sectors across all ranges and brands of drives. Sure, we all know about bad sectors, but before the last year and a half or so I'd never seen them be such a major issue.

I know how IS and other cloud storage on commodity hardware works, but suppose some drives develop bad sectors, hundreds of them, and other drives develop them as well. In RAID this can cause problems if those zones of bad sectors overlap. How does this get handled in distributed cloud and cloud object storage systems?

I only ask because across enterprise drives, laptop drives, and home desktop drives, I'm seeing such an increase in bad-sector issues that it has to be accounted for: affected drives worked around, swapped out, etc.
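Not OnApp-specific, but this is the kind of thing SMART monitoring catches before overlapping bad-sector zones become a rebuild problem. A minimal sketch, assuming smartmontools is installed and /dev/sda stands in for each member disk:
Code:
# Check the attributes that track surface damage on each disk.
smartctl -A /dev/sda | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector'

# Run a long surface self-test so weak sectors turn up before a rebuild hits them.
smartctl -t long /dev/sda
Distributed stores generally handle this a level above RAID: each vdisk has whole replicas on other nodes, so an unreadable sector is repaired from a healthy replica rather than reconstructed from parity.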

  10. #10
    Join Date
    Apr 2011
    Posts
    54
    Quote Originally Posted by CloudVZ View Post
The biggest issue with OnApp IS is the TCP incast problem. That, and a couple of random issues where the IS service would stop on the HVs and the entire cloud could freeze up. You can't boot any of the VMs, even if they're moved to a different HV. The vdisk gets locked on the HV and won't let you migrate the VM to a different HV to boot it. This requires a scheduled reboot of all the hypervisors, since OnApp Level 1-2 support isn't familiar with the issue.

OnApp 3.1 won't fix these issues; they are still working on the bugs and said they will be fixed shortly after 3.1. That covers only the TCP incast; why it randomly locks up VMs is another mystery.
We are currently using IS alongside a regular SAN. We had a few problems, but it is running OK at the moment. I don't think the local read path is working correctly. We had problems with disks going into a degraded state, but I think that is fixed for now.

I'm a bit scared when you say the entire cloud can freeze up. Do you have one or more data stores configured? Have you had different data stores on different HVs freezing up at the same time?

It is a problem that Level 1-2 support doesn't know much about IS, and when you have a problem, IS support might not be available.

  11. #11
Whatever vdisk is on an HV that gets locked up (it doesn't matter which data store), you can't move that VM to any other server or boot it anywhere else. Level 1-2 support's only suggestion was to reboot the HV, which caused a lot of downtime since the VMs couldn't power down; they all had to be timeout-shutdown from OnApp, which isn't a clean shutdown.

So if there were 30 VMs, it took about 2-3 minutes for each machine to time out, an hour to an hour and a half for the whole HV. Even though they were cold-migrated to another node, they couldn't be started until that source HV was rebooted, and we couldn't reboot it until all the VMs had timed out. It was a very stressful situation, and support didn't check why it happened, nor was it reported to the IS developers.

We are setting up a new SSD data store for the IS devs to beta test a new storage controller on. They have been working on a mechanism to eliminate the TCP incast issues.

  12. #12
    Join Date
    Jan 2004
    Location
    Pennsylvania
    Posts
    942
    Quote Originally Posted by CloudVZ View Post
So if there were 30 VMs, it took about 2-3 minutes for each machine to time out. Even though they were cold-migrated to another node, they couldn't be started until that source HV was rebooted, and we couldn't reboot it until all the VMs had timed out.
    Code:
echo s > /proc/sysrq-trigger   # sync all dirty filesystem buffers to disk
echo s > /proc/sysrq-trigger   # sync again to be safe
echo b > /proc/sysrq-trigger   # reboot immediately, without unmounting
If the VMs are not going to be shut down gracefully anyway, just do that.

As an addendum, are you using Xen? We've experienced the Xen netback bug on regular OnApp with a SAN. It's not related to OnApp specifically; it's due to Xen and large bursts of traffic. It would show "#### netback grant fails" in dmesg/logs if that bug was hit.
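A quick way to check for it from dom0 (standard tools; the exact message text varies by Xen version, so treat the grep pattern as approximate):
Code:
# Look for netback grant failures in the kernel ring buffer.
dmesg | grep -i 'netback grant fail'

# Or watch for new occurrences while reproducing the traffic burst.
tail -f /var/log/messages | grep -i netback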
    Matt Ayres - togglebox.com
    Linux and Windows Cloud Virtual Datacenters powered by Onapp / Xen
    Instant Setup, Instant Scalability, Full Lifecycle Hosting Solutions

    www.togglebox.com

  13. #13
    Join Date
    Apr 2004
    Location
    New Hampshire
    Posts
    773
I'm definitely interested in hearing from anyone currently using the system as well. I've spent a couple of hours reading past threads, but anything that really talks about it in depth is from earlier this year, and with how fast things advance, information a few months old is out of date.

I did the demo with OnApp and am still talking to them about possible setups while deciding what we're going to do. The next version (3.1?) is supposed to come out mid-November from what I've been told.

My fear is spending a bunch of time, never mind money, trying to get things set up and working, just to have it not work the way we want. I read some negative things from earlier in the year / last year, but the occurrence of negative reports has died down quite a bit.

A free round of beers for anyone who can give a current review of OnApp storage.
    Corey Arbogast | CEO
    █ 888-X10-9668, x703 - corey[@]x10hosting.com

  14. #14
    Join Date
    May 2003
    Location
    San Francisco, CA
    Posts
    1,506
    Quote Originally Posted by Jay H View Post
    I would suggest avoiding integrated storage, for now, while the rest of the kinks are ironed out.
    The above sums it up. There are kinks that need to be ironed out. The TCP incast issue being a major one. The product works, but not at its full potential.

    Quote Originally Posted by [x10]Corey View Post
A free round of beers for anyone who can give a current review of OnApp storage.
    Sounds good, I'll take a Blue Moon.
    * GeekStorage.com - Offering awesome website hosting for over 13 years!
    * Shared Hosting * Reseller Hosting * Virtual Private Servers * Dedicated Servers
    * Have questions? Send us an e-mail, we'd love to hear from you!

  15. #15
OnApp Storage has a lot of potential once some of these bugs are fixed. Definitely don't count it out just yet. It may be best to wait until the next IS iteration is released; they should have the new features and fixes shortly.

We have given them a platform and SSDs to play with and to test the new mechanism on a 20Gbps SAN network (powered by Juniper). Right now a single vDisk can't really go over 300-500Mbps; the new mechanism should help eliminate that TCP incast ceiling. We'll try to keep people posted on the beta test of the new mechanism.

  16. #16
    Join Date
    Aug 2005
    Location
    PA
    Posts
    324
Can't you just do the following:

1. Order two 2U servers with 12 bays, install Linux or Solaris with ZFS, and add 2x SSDs for use as cache and log devices; then add the storage you need.

Cluster the two with e.g. zrep or another ZFS replication setup (this allows for backups and failover to the second system, either manually or automatically, depending on how much you trust the software).

Then bond two Gbit interfaces for speeds of up to 225MB/s or more (two bonded 1Gbit links give 2Gbit/s, roughly 250MB/s before protocol overhead).

2. Mount everything over NFS: instant migrations and workload balancing, since keeping VM storage remote lets you easily live-migrate VMs around.
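A minimal sketch of that layout, assuming Linux with ZFS installed; the pool name, device names, paths, and hostname are all placeholders:
Code:
# Mirrored data vdevs, one SSD as log device (sync writes), one as cache (reads).
zpool create tank mirror sda sdb mirror sdc sdd log sde cache sdf

# Dataset for VM images, exported over NFS so any hypervisor can mount it.
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# On each hypervisor (storage1 is an assumed hostname):
mount -t nfs storage1:/tank/vmstore /var/lib/vmstore
The tradeoff versus integrated storage is that this pair is a conventional SAN: simpler and well understood, but failover rests on the replication tooling rather than on per-vdisk replicas spread across hypervisors.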
    reliable colocation ... Dedicated Servers | Dedicated Server VMs | FAST links to Vitelity.com and Conexiant.net
    patrick@zill.net Cell +1.717.201.3366

  17. #17
    Join Date
    Apr 2004
    Location
    New Hampshire
    Posts
    773
    Quote Originally Posted by Jay H View Post
    The above sums it up. There are kinks that need to be ironed out. The TCP incast issue being a major one. The product works, but not at its full potential.



    Sounds good, I'll take a Blue Moon.
Thanks. Do you know if the kinks are supposed to be resolved in the next version that's due out in two weeks? We'd obviously wait for the next version before any implementation since it's so close. Just trying to get a feel for what we may be in for.

I'll remember the Blue Moon if you're at HostingCon this year!

    Quote Originally Posted by CloudVZ View Post
OnApp Storage has a lot of potential once some of these bugs are fixed. Definitely don't count it out just yet. It may be best to wait until the next IS iteration is released; they should have the new features and fixes shortly.

We have given them a platform and SSDs to play with and to test the new mechanism on a 20Gbps SAN network (powered by Juniper). Right now a single vDisk can't really go over 300-500Mbps; the new mechanism should help eliminate that TCP incast ceiling. We'll try to keep people posted on the beta test of the new mechanism.
From what I've been told the next version is due out mid-November; I guess I'll check back toward the end of the month and see how things are going.
    Corey Arbogast | CEO
    █ 888-X10-9668, x703 - corey[@]x10hosting.com

  18. #18
This sounds a bit like our OnApp deployment. We've been working on it for the past couple of months, only to find out that the initial setup of a Windows instance takes about an hour. They say it has to do with a bug involving ntfsclone, and it won't be fixed until 3.2.

As far as the integrated storage goes, we haven't had any issues with reliability. We currently have about 100 beta users across 4 test nodes.

What we're interested in hearing from anyone out there using OnApp is whether they experience the same slow setup speeds with a Windows instance.

  19. #19
I'm watching this too. I'm really interested in OnApp IS.

Corey, if I come across any info, I'll forward it to you.

I really wish the free version supported Windows; I'd really like to see Windows in action before committing to the paid version.
    Bobby - PreciselyManaged.com - Precision Hosting Solutions
    █ Enterprise Shared, Reseller, VPS, Hybrid, and Dedicated Hosting
    █ SpamExperts | CloudLinux | cPanel | Bacula + R1soft | and more!
    █ Full proactively managed, and we specialize in hosting small web hosts

  20. #20
    Join Date
    Oct 2001
    Location
    Ohio
    Posts
    8,535
    Quote Originally Posted by UH-Bobby View Post
I really wish the free version supported Windows; I'd really like to see Windows in action before committing to the paid version.
    Have you considered using another OnApp host to test it?

    You're also welcome to post here or on our own forum asking for feedback about Windows on OnApp.

  21. #21
    Join Date
    Aug 2011
    Location
    Dub,Lon,Dal,Chi,NY,LA
    Posts
    1,839

    Quote Originally Posted by directspace View Post

What we're interested in hearing from anyone out there using OnApp is whether they experience the same slow setup speeds with a Windows instance.
    Odd. Do you mean the provisioning phase? Takes about 3 minutes from click to booting on our platforms - what are you seeing?

  22. #22
    Join Date
    Oct 2013
    Posts
    42
So here's one problem we're having that we can't seem to figure out, and we haven't been able to get any good response from OnApp about it.

We migrate all VMs from hyper1 to hyper2 (using IS). Now nothing is running on hyper1, and everything seems fine and happy on hyper2.

We shut down hyper1 for some kind of hardware maintenance. One or more VMs on hyper2 suddenly kernel panic.

  23. #23
    Quote Originally Posted by dediserve View Post
    Odd. Do you mean the provisioning phase? Takes about 3 minutes from click to booting on our platforms - what are you seeing?
Is this on an Integrated Storage setup? On a Windows provision?

A Linux VM takes about 3-4 minutes, but a Windows VM takes a good hour at best. Even deleting a VM takes a good 10 minutes. What's odd is that our complete backend is 10Gb with all-SSD nodes, and a provision is still slug-slow. They did say it's a bug with ntfsclone, but a VM taking so long to delete tells us otherwise.

We're working on switching over to a traditional SAN setup to see if anything changes.
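One way to check whether ntfsclone itself is the bottleneck is to time a template restore by hand, outside OnApp's provisioning path. A sketch only; the image and target paths are placeholders, and this is not an OnApp-documented procedure:
Code:
# Time restoring an NTFS template image to a scratch volume.
time ntfsclone --restore-image --overwrite /dev/mapper/testvol /templates/win-template.img
If the manual restore is fast on the same storage, the hour is being spent elsewhere in the provisioning pipeline, which would fit the slow deletes as well.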
    DirectSpace Networks, LLC
    Virtual Private Servers [OpenVZ & KVM], Colocation, and Dedicated Servers
    Offering premium & scalable datacenter services for over 11 years

  24. #24
    Quote Originally Posted by xBenx View Post
    So here's one problem we're having that we can't seem to figure out, and haven't been able to get any good response from OnApp regarding.

    Migrate all VMs from hyper1 to hyper2 (using IS). Now nothing running on hyper1 and everything seems fine and happy on hyper2.

    Shut down hyper1 for some kind of hardware maintenance. One or more VMs on hyper2 suddenly kernel panic.
It sounds like perhaps your disk replicas are not balanced correctly across the available HVs. I'd suggest ensuring that you have at least 2 replicas in place and that they are in sync across more than one hypervisor, then test again.

If it continues to happen, make sure you are running the latest onapp-store packages and open a new ticket with support for further analysis.
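For anyone wanting to verify this themselves, the IS CLI on the hypervisors can report node and vdisk state. The subcommand names below are from memory and may differ between versions, so treat them as placeholders and confirm against your build's documentation:
Code:
# Approximate commands, run on a hypervisor -- verify names against your version.
onappstore nodes    # list storage nodes and their health
onappstore vdisks   # list vdisks and where their replicas live
The thing to confirm is that each vdisk's replicas sit on different physical hypervisors, not just on different drives in the same box.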

  25. #25
    Join Date
    Jan 2004
    Location
    Pennsylvania
    Posts
    942
    Quote Originally Posted by jwithall View Post
It sounds like perhaps your disk replicas are not balanced correctly across the available HVs. I'd suggest ensuring that you have at least 2 replicas in place and that they are in sync across more than one hypervisor, then test again.
Sounds about right. In our testing I found that it would sometimes create 2 replicas on the same HV during a repair procedure if no other HVs were available. That really shouldn't be allowed... or maybe there should be a toggle to allow it or not.
    Matt Ayres - togglebox.com
    Linux and Windows Cloud Virtual Datacenters powered by Onapp / Xen
    Instant Setup, Instant Scalability, Full Lifecycle Hosting Solutions

    www.togglebox.com
