View Poll Results: hardware SAN vs SAS SANity vs SSD SANity
Voters: 34. You may not vote on this poll.
8TB hardware SAN (7200rpm SAS-II + Cachecade): 13 votes (38.24%)
9.6TB OnApp Storage/SANity (10K or 15K SAS-II): 3 votes (8.82%)
7.68TB OnApp Storage/SANity (pure 6G/s SSD): 18 votes (52.94%)
Results 76 to 100 of 213
-
03-06-2013, 01:23 AM #76 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
you know, the thing puzzling me the most is that I was simply asking a plain, simple technical question, basically about how safe a product (in this case, the SANity) is, and all of a sudden I was accused of being "hostile" by the product's manufacturer, then redirected to the "pre-sales" dept. for any safety concern, as if I were on the manufacturer's own forum.
it's like a parent standing in a car lot trying to help his 17-year-old son buy his first car. safety, of course, is the utmost concern on his mind, so he asks the senior sales manager, "how safe is this car?" instead of answering that most straightforward of questions, the manager gets impatient and redirects him to a "special" dept to receive his answer. sure, the pretty receptionist in that dept is very polite: "Welcome, Sir, SAFETY IS ALSO OUR #1 CONCERN! please have a seat; one of our safety experts, who knows inside-out how well and how safely this car is built, will be with you in a day or two." hmmm... interesting!
I asked these questions long before the manufacturer first showed up in this thread, so the questions being interpreted as "hostile" were certainly not directed at any person in particular. if the tone of the language I used was indeed "offensive" to anyone personally, or just too "cut-throat" for someone's taste, then I apologize; it was truly unintentional, and perhaps just too hastily and impolitely worded.
you know, every product has two sides. it's nice to hear the manufacturer making their case about how good and how great the product is, but we users/buyers (or reps of potential buyers, like me) who are actually going to spend a small fortune are certainly entitled to hear from existing users, and even competitors, about the pros and cons, and the bad and ugly side, of a particular product.
i've been on these forums since 2004, and after 9 years and ~3,500 posts, I guess most old-timers here know me by now. I'm a straight shooter: I speak my mind, post hardware-relevant links and info, and try to share my little bit of knowledge about server hardware, after 20+ years in the computer industry, so I can assist forum visitors in some small way.
anyway, peace on earth, and nothing can't be resolved over beers!
-
03-06-2013, 01:36 AM #77 VPS Like a Boss!
- Join Date
- Jul 2009
- Location
- New Zealand
- Posts
- 2,331
Nah Chong, I can't see any offensive words from you; rather, your curiosity about this hot new product is similar to mine.
Just keep going until we find some more useful info. Maybe, if you have some spare hardware, you can do a lab test: shut down one HV after another to see what happens to the remaining nodes' VMs, and analyze network traffic, CPU load, and disk activity. I'm sure our good friend Ditlev would be happy to give you a trial license, and you could then post your findings.
QuickWeb™ - We Host Servers Like a Boss!
New Zealand - USA - UK - Germany Virtual Servers
Worldwide hosting provider with proven 24x7 and 25-Minute Support!
www.quickweb.co.nz
-
03-06-2013, 02:28 AM #78 Web Hosting Evangelist
- Join Date
- Aug 2008
- Posts
- 536
FYI: my reply was only about the throwing-it-back part... As you may have noticed, English isn't Ditlev's native language, so some patience may help.
If you sum up your unanswered questions, I can ask OnApp to respond to them. I admit I don't have an OnApp setup up and running, but like many others here, I think it's an interesting product.
-
03-06-2013, 03:27 AM #79 Web Hosting Master
- Join Date
- Sep 2005
- Location
- London
- Posts
- 2,409
SANity has been in public beta for almost a year, and in private beta for even longer. We had plenty of bugs and issues to begin with, and even in the last beta versions we saw issues; in the RCs we saw fewer, and with the GA versions I am fairly happy with the result.
It's a pretty hard platform to install, though, which is why we tend to deal with it on our clients' behalf.
Is the platform solid?
So far, we've got ~80 or so 3.0/storage installs in production, and we've yet to see any serious issues, or stability worries.
Have we seen bugs?
YES, like in every other software platform on earth, OnApp has bugs. Though we've seen no 'blockers' and we're happy with the result.
Is the featureset complete?
NO! Personally, I was rather disappointed that we had to pull a few features from the final RC/GA versions. But the focus of this release was stability. It was key for us to get it out and into production; additional features such as cache, rate-limiting, VMware support etc. would have to come in the following release.
This is storage, serious stuff - and as you know we've been pushing the launch again and again to ensure that the end result would be production ready. We feel we're there, and we know it will only get better.
Thanks for your feedback though!
Ditlev Bredahl. CEO,
OnApp.com + Cloud.net & CDN.net
-
03-06-2013, 04:46 AM #80 Web Hosting Master
- Join Date
- Mar 2009
- Location
- NL
- Posts
- 594
Don't forget about dedup. Really hope to see it soon, especially important for everyone wanting to run a SSD-only platform (saves a lot of costs).
OnApp 3.0 installations are suspended at the moment. That might be the reason for the limited amount of real feedback here on this forum. We are currently waiting for one of our clouds to be updated to v3.
YISP - High bandwidth dedicated servers and colocation in YISP-AS (Amsterdam)!
Website: http://www.yisp.nl
Contact: info "(AT)" yisp.nl
-
03-06-2013, 02:06 PM #82 Aspiring Evangelist
- Join Date
- Nov 2012
- Posts
- 428
I think the communication around V3 and its issues has been poor. I put in a ticket to be upgraded on release day, then later decided to upgrade it myself. 5 days later, I received an update on the ticket advising me that updates had been halted and I should see a fix within a few days. It's now the 6th and I've really yet to see any official communication about these fixes. I'm being told 24-48 hours every other day, but I've yet to see any updates released. I was told to keep an eye on this page, but I don't see any updates there. https://onapp.zendesk.com/entries/23...pp-3-0-updates
I think communication is key here. If customers were more aware of what's going on with updates/fixes, we would be happy people. At this point, most of us are being left in the dark, or going back and forth with support with no real fix or promises for fixes that seem like they are never going to be released.
-
03-07-2013, 04:13 PM #87 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
any user experiences or opinions about Open-E to share with this thread?
although 2x purpose-built, turn-key commercial SANs would be nice for a robust hardware SAN solution, that's really out of the price range of most small-scale starter cloud hosts. so many of our clients are still researching "cheapo" storage OSes running on top of white-box SANs.
http://www.open-e.com/products/pricing/
the pricing seems not too bad for a storage OS; what's the catch?
EDIT: is there any other storage OS in a similar price range to Open-E?
Last edited by cwl@apaqdigital; 03-07-2013 at 04:16 PM.
-
03-07-2013, 05:53 PM #88 Web Hosting Master
- Join Date
- May 2003
- Location
- San Francisco, CA
- Posts
- 1,506
Starwind Software (http://www.starwindsoftware.com/) has a storage product that is around the same pricing as Open-E.
* GeekStorage.com - Offering awesome website hosting for over 13 years!
* Shared Hosting * Reseller Hosting * Virtual Private Servers * Dedicated Servers
* Have questions? Send us an e-mail, we'd love to hear from you!
-
03-07-2013, 05:56 PM #89 Web Hosting Master
- Join Date
- May 2003
- Location
- San Francisco, CA
- Posts
- 1,506
I haven't personally used Open-E, but from talking to various OnApp reps over the last year or two, I know they have customers successfully using it. I imagine, like anything else storage related, your mileage varies depending on hardware (both in terms of the SAN itself and networking equipment).
* GeekStorage.com - Offering awesome website hosting for over 13 years!
* Shared Hosting * Reseller Hosting * Virtual Private Servers * Dedicated Servers
* Have questions? Send us an e-mail, we'd love to hear from you!
-
03-07-2013, 06:06 PM #90 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
it would be very nice if those hosts who are successfully operating Open-E (or any other "economy" class storage OS) on top of a white-box SAN could share the specs of their SAN and network gear with us.
to be fair, the same goes for hosts using SANity with good success! specs and gear, please?!
BUT, I fully understand this could be just wishful thinking, because the successful guys don't really want to create or encourage more competitors...
Last edited by cwl@apaqdigital; 03-07-2013 at 06:12 PM.
-
03-07-2013, 06:23 PM #91 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
to anyone who has the same concern: one of my clients asked the OnApp team about using a large number of large SATA drives per hypervisor with SANity. the following is OnApp's response, verbatim.
Please clarify - you want 8-12 3TB disks ( 24-36TB ) per HV ? ( SANity is essentially a distributed RAID - the disks should be spread out among HVs )
This should be doable - if you can cram the disks into the box and configure them in the HV OS, OnApp storage should have no problem using them.
There is some performance hit replicating to another node, but this depends on how copies / striping is set - this is so the VM can continue running if an HV goes down temporarily, and so the array as a whole isn't degraded if a disk is lost ( HV goes down forever, disk corrupt, etc. )
There is option to do local reads, so if the VM is running on HV that contains its disk ( or part of.. ) this will help things a lot..
It also depends on the NIC(s) used for storage network - Gigabit and better are recommended, and multiple NICs can be used per HV as well for bandwidth.
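The replication behaviour described in the response above (copies of each vdisk spread across hypervisors, with optional local reads) can be sketched with a toy model. This is a hypothetical illustration, not OnApp's actual placement code: it shows why, with two copies on distinct HVs, losing any single HV leaves every vdisk readable, and that a "local read" is simply the case where the VM's HV holds one of the replicas.

```python
import random

def place_replicas(vdisks, hvs, copies=2):
    """Toy placement: each vdisk gets `copies` replicas on distinct HVs.
    (Illustrative only -- not OnApp's real algorithm.)"""
    return {vd: random.sample(hvs, copies) for vd in vdisks}

def readable_after_failure(placement, dead_hv):
    """A vdisk stays readable if any replica lives on a healthy HV."""
    return [vd for vd, nodes in placement.items()
            if any(hv != dead_hv for hv in nodes)]

def is_local_read(vm_hv, replica_hvs):
    """'Local read' case: the VM runs on an HV that holds a replica."""
    return vm_hv in replica_hvs

hvs = ["hv1", "hv2", "hv3", "hv4"]
placement = place_replicas([f"vd{i}" for i in range(8)], hvs, copies=2)
# Two copies on distinct HVs: any single-HV failure leaves all 8 readable.
print(len(readable_after_failure(placement, "hv1")))  # 8
```

The same model also shows the trade-off the response mentions: more copies cost more replication traffic on write, but raise the chance that any given VM's reads can be served locally.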
-
03-08-2013, 09:45 AM #92 Web Hosting Master
- Join Date
- May 2003
- Posts
- 1,708
Yeah, you are asking others to share their success stories with their competitors; that's usually something that isn't done here.
Open-E works very well with OnApp, and so does Starwind. We have used both with success. Starwind runs on Windows, so there is extra licensing there, unlike Open-E. Both do active/active and support multiple 10GbE interfaces. Starwind has what they call a 1GB RAM disk for caching, but it is volatile. Both support caching on the RAID cards, though. I wouldn't recommend anything less than 10GbE on centralized SAN networks.
-
03-08-2013, 02:06 PM #93 Web Hosting Master
- Join Date
- Dec 2004
- Posts
- 790
Well I can share some experiences here. We're just a small hosting company with a few hundred servers - we do not have millions of dollars to throw around, so I think most WHT members can relate to us perhaps.
I have a Dell EQ PS6000 or something: 16 drives, 250GB SATAs, configured in RAID 10 according to best practices from Dell. It's rock solid. It just... only offers the performance of slugs in sand. Honestly, we literally could not get more than 35 512MB VMs running on it (using OnApp). We're talking average-joe, plain-jane clients on those VMs, running WordPress etc. Yet those 35 VMs had been migrated from a standalone Xen box with 8 drives in a RAID 10 array, and that standalone box ran at no load and low I/O.
That was a huge mess for us; we trusted Dell and their little software I/O-measuring crap. never again. We should have done more testing in-house before deploying the EQ: a huge screwup on our part. In the end we had to migrate those users away from the Dell EQ OnApp cloud setup, sell the Dell EQ to a low-use client as a custom cloud (where it's been running ever since with zero issues!), and build out a new cloud for the other clients.
So for cloud 2 we spent a lot of time and money researching. We tried Open-E, but at the time CacheCade and MaxCache only did read caching, so performance was not so hot; with random workloads the SSD cache didn't seem to help much at all. Open-E didn't seem to care whether we used dual Xeons with 48 or 72GB RAM or single X3440s with 16GB RAM; performance was the same. It was frustrating that there was no easy way to increase performance: all we could do was run multiple SANs or build GIANT SANs with a ridiculous number of spindles (we're a small company here!). Many hours were spent testing bonded NICs, multipathing, TCP tuning, etc. Open-E seemed to work well, but performance was still not stellar. On the upside, it is quite affordable and their support team seems to answer emails.
So we looked around more and tested Nexenta. Expensive. Fast. But also lots of tiny bugs, and really ****** support and sales teams. But it had read and write caching, and it was awesome. We ended up building our cloud around it. It is still running; we are actually running out of capacity and are nowhere near reaching I/O limits. So I guess Ditlev has never talked to us about our SAN issues.
We tried to test Starwind, but the sales team scared the crap out of us. Plus it runs on Windows, and we have limited Windows experience (less than Solaris, even). So we never actually tested it, although that is mostly because the Starwind team could never give us firm pricing or let us do a trial without buying ("you don't need a trial or to test it, we have the best product!").
We also tested 3Tera/CA around this time. Awesome idea, crappy interface, crappy performance. An interesting original team of people who seemed really great, but maybe the buyout pushed a change in corporate philosophy down the pipes? Dunno. I think they'd have a pretty good product if they added better end-user tools and cleaned up some of the performance. They claimed to get awesome performance using a lot more drives and InfiniBand, but we never got that far in testing, because OnApp just seemed so much easier.
Anyway, we've now been testing new SAN stuff for the past 6-8 months (or perhaps longer; I can't recall, as SAN is simply a nightmare for me), as we could see that we would run out of SAN space soon, and our Nexenta setup was homebrewed with a short-sighted mistake: adding more storage required taking everything offline. And with all the problems we faced getting it running in the past, and support from Nexenta being garbage, we were concerned the upgrade wouldn't go smoothly. Could you imagine taking a "cloud" offline for maintenance? Then imagine the upgrade not working, and being unable to get usable support from Nexenta for days, as was our past experience? Or maybe it would just work. Who knows.
I personally love the performance Nexenta offers. We tried SmartOS, OpenIndiana, etc. All are great, but building true redundancy/HA into a homebrewed SAN seems out of reach for your average Linux admin; I know we've been unable to do so. So regardless of the awesome performance, this is a serious shortcoming for us. No redundancy is simply too scary when you're trusting thousands of VMs to it.
Nexenta offers HA setups, but I don't want to spend $75k on a SAN solution again only to be handcuffed in the near future as we grow...
We have been testing Open-E again; they now offer an active-active iSCSI SAN setup for an affordable price. With CacheCade 2 it sounds like it'd be awesome. Reality seems to be different. We've tried CacheCade with 4 different sets of SSDs. Performance is garbage (even over 10Gbps networking): typically worse than the underlying spinning disks, and definitely far worse than a single SSD. Open-E is very helpful; LSI is unreachable. Open-E claims they have many satisfied clients using CacheCade with Intel S3700-series SSDs, which are the only ones we have not tested. If you are satisfied with an easy-to-use, redundant system, Open-E may be for you. So far, I am not sure it is for us, because the performance just does not seem to be there.
So this brings me to the OnApp SAN. It sounds awesome. Thin provisioning, finally! Dedupe coming! Reads off storage local to the VM! Bundled for free with OnApp licenses!
Now, I wish I could say it is awesome in reality. But we've not been able to actually test it yet.
We tried to beta test it twice. Both times it wouldn't install/work, and the forums were silent. Of course the OnApp support team offered no support, as it was beta. Those two attempts were a serious waste of time; oh well. Nothing like building half a dozen servers and configuring everything for no reason. Twice. I'm sure the server gods were happy to see such fruitless work performed. They love frustration, I believe. Along with blood sacrifices.
We've recently tried to install v3 now that it has launched. We aren't upgrading the existing cloud, but trying to test on whole new infrastructure. V3 installed, but cloudbooting HVs barely works, or doesn't work at all, and we can't actually get it up and running. OnApp support is fast to reply to our tickets on this issue, but so far they have been blundering around. Ditlev, they need to read the full ticket history! I don't know why OnApp support seems to be dropping the ball here; they are usually quite good. We haven't heard a word about waiting for the 3.02 release either. Maybe today we'll get it up and running? We've been going back and forth with support for a few days, after spending a few frustrating days trying to get things up ourselves.
Personally, I hope v3 just works. If it does HALF of what it claims, I think we'll build our next cloud infrastructure around it. I'm so tired of SANs that I could rip my hair out.
Last edited by lostmind; 03-08-2013 at 02:10 PM.
-
03-08-2013, 02:14 PM #94 Web Hosting Master
- Join Date
- Dec 2004
- Posts
- 790
Now I prepare to get ripped a new one for sharing.
Actually, it's going to be a busy day, doubt I'll even get to read this till later this evening.
-
03-08-2013, 02:19 PM #95 Web Hosting Master
- Join Date
- Jan 2004
- Location
- Pennsylvania
- Posts
- 942
Your experience sounds similar to ours. Since it seems you've been to hell and back like I have: use the S3700s, they work great!
Open-E is OK... it's an interface for common Linux tools. If you want quick and easy, it's the way to go. It's rather inflexible, though; i.e., you can't make some seemingly easy changes without stopping everything, can't grow without stopping everything, etc. Basically, a lot of functions require stopping the entire HA service. I prefer to use HA as a means to work on one SAN at a time without having to stop anything.
We're also pretty excited about OnApp 3.0, but our customers have seen enough issues already, so we will let others experience the first SAN outage / corruption / whatever. It is a big shame they didn't put Windows 2012 support in OnApp 2.3. Now we're forced to upgrade earlier than we want, as customers have been asking for Windows 2012 for a while now. Ditlev, please don't do something like that again: there was no good reason to delay 2012 support for so long and then ship it only with a forced upgrade to .0 software.
Matt Ayres - togglebox.com
Linux and Windows Cloud Virtual Datacenters powered by Onapp / Xen
Instant Setup, Instant Scalability, Full Lifecycle Hosting Solutions
www.togglebox.com
-
03-08-2013, 05:15 PM #96 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
WOW, this is truly invaluable and priceless, Shane!
I think lots of visitors here wholeheartedly appreciate that you took the time to share your experiences on these cloud matters! most of them may continue to remain silent bobs, but I'm not: THANK YOU VERY MUCH!!!!
you know, posts like this not only elevate this thread, but also represent what WHT as a whole is all about: kind and generous folks sharing valuable, relevant experience with fellow members without reservation. THIS CAN SAVE A TON OF DOUGH for many new hosts trying to tread the waters of cloud hosting.
keep up your generosity, will you?
-
03-08-2013, 05:50 PM #97 Randy
- Join Date
- Aug 2006
- Location
- Ashburn VA, San Diego CA
- Posts
- 4,615
Thanks for sharing!
I'm surprised you had so much trouble with CacheCade across so many different setups. For a little experiment, I threw together a homebrew (active/passive) setup with 7x spinning disks, 1x 256GB Samsung 830 for CacheCade, LIO target, DRBD, and corosync. I was able to get 20k-30k random IOPS (QD32) read/write at the VMs with bonded gigabit networking. I tossed about 50 VMs at it and there wasn't much slowdown. I'd love to see what it could do with double or triple the disks and RAID 0 on the CacheCade volume.
But yeah, CPU power and RAM on the SAN don't matter much for most software-based setups. I waaaay overbuilt with E3s and 32G of RAM thinking it mattered... under full load there's barely any CPU usage and about 1G of RAM in use. If it weren't for the limited PCIe lanes, an Atom could probably get the job done.
Last edited by FastServ; 03-08-2013 at 06:01 PM.
Fast Serv Networks, LLC | AS29889 | DDOS Protected | Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
Since 2003 - Ashburn VA + San Diego CA Datacenters
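As a quick sanity check on the bonded-gigabit numbers in the post above: the wire rate of the storage links puts a hard ceiling on random IOPS. The arithmetic below is a rough estimate of mine, assuming 4 KiB I/Os and ~90% usable throughput after iSCSI/TCP/Ethernet overhead; both figures are assumptions, not measured values.

```python
def iops_ceiling(link_gbps, nics, io_bytes=4096, efficiency=0.9):
    """Rough wire-rate ceiling for random I/O over bonded links.
    `efficiency` approximates protocol overhead (assumed ~90%)."""
    bytes_per_sec = link_gbps * 1e9 / 8 * nics * efficiency
    return int(bytes_per_sec / io_bytes)

# Two bonded gigabit NICs, 4 KiB random I/O:
print(iops_ceiling(1, 2))  # 54931 -- so 20k-30k IOPS at the VMs fits
                           # comfortably under the network ceiling
```

The same function suggests why anything less than 10GbE gets tight on a centralized SAN: a single gigabit link caps out well below what even one decent SSD can deliver.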
-
03-08-2013, 06:04 PM #98 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
a bit of info from one of my clients to share as well.
I asked one of my clients who is prepping their HVs + SANity for production. all HVs are dual hex-core E5, 96G RAM, 8x 240G SSD for SANity. I asked him to run a "power-pull" sudden-death test (no graceful shutdown, no orderly reboot) on one HV at a time, so we could observe how SANity deals with the situation.
test 1 (4x VM on HV):
I just hard-kicked a box with 4 VMs, and it hot-migrated them all to a new box, but 1 wouldn't start for some reason; 3 came up okay. about to try one with 10 VMs
On the power-pull test on the HV with 10 VMs, all 10 moved to another HV within a few minutes, and 8 started up on their own; 2 failed to start on their own
no data loss, and all disks synced up upon boot, so I guess you can say it's 80% there.
error messages:
OnApp::Actions::Fatal Storage API Call failed: {"result"=>"FAILURE", "error"=>"onappstore onlineDisk failed for vdisk (xxxx) with error map: [] and optional error: Failed to start transaction on a subset of the nodes with error map :[('(xxxxxx)', \"'VDisk is part of another ongoing transaction. Please try again later.'\")].
success list: ['xxxxx', 'xxxxx'], failed on node: "xxxxx"} Executing Rollback... Remote Server: 10.0.0.xxx
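For anyone wanting to repeat this kind of sudden-death test, a minimal harness might look like the sketch below. `ipmitool chassis power off` is a real command (it cuts power without a graceful shutdown), but the BMC address and credentials are placeholders, and the scoring helper is my own invention; it just reproduces the arithmetic behind the "80% there" verdict above.

```python
import subprocess

def hard_power_off(bmc_ip, user, password):
    """Cut power to an HV via its BMC, no graceful shutdown.
    (bmc_ip/user/password are placeholders; ipmitool must be installed.)"""
    subprocess.run(["ipmitool", "-I", "lanplus", "-H", bmc_ip,
                    "-U", user, "-P", password,
                    "chassis", "power", "off"], check=True)

def summarize_failover(migrated, started):
    """Score a pull-the-plug run the way the post does: the percentage
    of migrated VMs that came back up on their own."""
    return round(100 * started / migrated)

# The two runs reported above: 3 of 4, then 8 of 10 VMs self-started.
print(summarize_failover(4, 3), summarize_failover(10, 8))  # 75 80
```

After each pull, you would watch the OnApp UI (or logs) for the hot-migrations, record how many VMs self-started, and power the HV back on before the next run.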
-
03-08-2013, 06:14 PM #99 Junior Guru Wannabe
- Join Date
- Apr 2011
- Posts
- 54
We used Open-E for about 1.5 years in an HA setup. Once, we had a license issue because of an expired credit card. We paid for the license, but online activation did not work; neither did manual activation, as it takes days to get a response for that. We had the SAN in a degraded state over the weekend because of this. We paid $419 for per-incident support; it took an hour before anybody even answered the phone, and when they did, it took a long time before they even admitted to having 24/7 per-incident support. They kept saying we did not have a support contract.
Not impressed. We're going to switch to the OnApp storage.
-
03-08-2013, 06:25 PM #100 VPS Like a Boss!
- Join Date
- Jul 2009
- Location
- New Zealand
- Posts
- 2,331
yeah indeed, thanks for sharing... at the least, peeps with far less dough than you should avoid these costly and frustrating mistakes at all costs. building a cloud on a SAN is a really scary proposition, especially for those with fewer resources.
QuickWeb™ -We Host Servers Like a Boss!
New Zealand - USA - UK - Germany Virtual Servers
Worldwide hosting provider with proven 24x7 and 25-Minute Support!
www.quickweb.co.nz