View Poll Results: hardware SAN v SAS SANity v SSD SANity

Voters: 34. You may not vote on this poll.
  • 8TB hardware SAN (7200rpm SAS-II + Cachecade): 13 votes (38.24%)
  • 9.6TB OnApp Storage/SANity (10K or 15K SAS-II): 3 votes (8.82%)
  • 7.68TB OnApp Storage/SANity (pure 6G/s SSD): 18 votes (52.94%)
Page 9 of 9 (results 201 to 213 of 213)
  1. #201
    Join Date
    Nov 2012
    Posts
    428
    Quote Originally Posted by eming View Post
    Thanks for your input guys...

    a) Xen support utilises the 2.6.18 kernel, which typically has poorer support for 10Gbit Ethernet NICs. KVM is much better in this respect (as you've identified), and Xen support on CentOS 6 will be available shortly. Note also, though, that Xen support on 2.6.18 is dramatically improved with jumbo frames; see next.

    b) The variable performance is typically related to the lack of jumbo frames in the initial release. 3.0.6, due out on Thursday this week, addresses this, and we anticipate that performance will be more consistent at higher throughput...


    D
    Thanks Ditlev. Looking forward to seeing 3.0.6. Hopefully it resolves my issues.
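For anyone wanting to verify the jumbo-frame change once 3.0.6 lands: a quick end-to-end check (a generic sketch, not anything OnApp-specific) is to ping across the storage network with the largest payload that fits an MTU-9000 frame without fragmenting. That payload is the MTU minus the standard 20-byte IPv4 header and 8-byte ICMP header:

```python
# Largest non-fragmenting ICMP payload for a given MTU.
# Header sizes are the standard IPv4 (20 bytes) and ICMP (8 bytes).

IP_HEADER = 20
ICMP_HEADER = 8

def max_ping_payload(mtu: int) -> int:
    """Largest ICMP payload that fits in one frame at this MTU."""
    return mtu - IP_HEADER - ICMP_HEADER

print(max_ping_payload(1500))  # 1472 (standard frames)
print(max_ping_payload(9000))  # 8972 (jumbo frames)

# On Linux you would then test the path with, e.g.:
#   ping -M do -s 8972 <storage-node-ip>
# (-M do forbids fragmentation, so a smaller path MTU fails loudly)
```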

  2. #202
    Join Date
    Jul 2002
    Posts
    49
    Some questions about OnApp storage (not interested in the cloud component as we're not a host):

    Let's say we have two ESXi hosts connected into a 10GbE storage network.

    Right now we're running HP P4000 with a HP Microserver handling quorum.

    Could we run OnApp on both physical hosts and survive a host failing with the data/storage cluster remaining available like it does with the P4000 (all volumes are using Network RAID 10)?
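For readers unfamiliar with the P4000 terminology: Network RAID 10 synchronously mirrors every block across two nodes, so either node can serve all data after the other fails. A toy sketch of that replica-placement idea (the names and helpers here are illustrative, not P4000 or OnApp APIs):

```python
# Toy model of two-way synchronous mirroring ("Network RAID 10"):
# every block is written to two distinct nodes, so the data set
# survives any single node failure.

def place_replicas(blocks, nodes):
    """Map each block to two distinct nodes, round-robin style."""
    placement = {}
    for i, block in enumerate(blocks):
        a = nodes[i % len(nodes)]
        b = nodes[(i + 1) % len(nodes)]
        placement[block] = {a, b}
    return placement

def readable_after_failure(placement, failed_node):
    """True if every block still has a surviving replica."""
    return all(replicas - {failed_node} for replicas in placement.values())

placement = place_replicas(["b0", "b1", "b2", "b3"], ["host1", "host2"])
print(readable_after_failure(placement, "host1"))  # True
print(readable_after_failure(placement, "host2"))  # True
```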

  3. #203
    Quote Originally Posted by hutchingsp View Post
    Some questions about OnApp storage (not interested in the cloud component as we're not a host):

    Let's say we have two ESXi hosts connected into a 10GbE storage network.

    Right now we're running HP P4000 with a HP Microserver handling quorum.

    Could we run OnApp on both physical hosts and survive a host failing with the data/storage cluster remaining available like it does with the P4000 (all volumes are using Network RAID 10)?
    AFAIK, integrated storage does not currently work with vmware.

  4. #204
    Join Date
    Nov 2009
    Posts
    514
    Quote Originally Posted by DigitalDaz View Post
    AFAIK, integrated storage does not currently work with vmware.
    Correct, OnApp SANity only works with Xen hosts. Out of interest, why would you be looking to ditch the P4000?

    Agreed there is a quorum requirement - in your case a Fail Over Manager in a 2-node cluster - but you really don't want to be in a split-brain situation as far as storage is concerned.

    I haven't looked that far into how OnApp handles quorum, but without a third vote of some kind I'd guess this could be a very dangerous setup.
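The split-brain concern above comes down to majority voting: a partition may keep serving writes only if it holds a strict majority of votes, which is why a 2-node storage cluster needs a third tie-breaker vote (the Fail Over Manager in P4000 terms). A minimal sketch of that rule:

```python
# Minimal majority-quorum rule, as used to avoid storage split-brain:
# a partition may continue only if it holds a strict majority of votes.

def has_quorum(votes_held: int, total_votes: int) -> bool:
    """Strict majority: more than half of all votes."""
    return votes_held > total_votes // 2

# Two storage nodes alone: a 1-1 network split means neither side
# has quorum, so neither may continue writing.
print(has_quorum(1, 2))  # False

# Add a third vote (e.g. a Fail Over Manager): the side that can
# still reach the tie-breaker holds 2 of 3 votes and keeps running.
print(has_quorum(2, 3))  # True
print(has_quorum(1, 3))  # False
```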
    www.VMhosts.co.uk - "Cloud hosting within reach"
    UK VMware Cloud IaaS, 24x7x365 Support, Evault Cloud Backup & DRaaS
    sales@vmhosts.co.uk 020 3397 1233

  5. #205
    Join Date
    Jul 2002
    Posts
    49
    Quote Originally Posted by VMhosts View Post
    Correct, OnApp Sanity only works with XEN hosts. Out of interest why would you be looking to ditch the P4000?
    Right now we have the hardware product on a lease, and the lease is expiring.

    It does nothing wrong, so buying more P4000 is an option, but the lease refresh is the time to look at alternatives.

    If I stick with P4000 we'll go the VSA route though - a couple of ML350s full of 3TB SAS in RAID 10, with 600GB or so of SSD as SmartCache hanging off the SmartArray, should perform quite nicely I'd hope.

  6. #206
    Join Date
    Nov 2009
    Posts
    514
    Quote Originally Posted by hutchingsp View Post
    Right now we have the hardware product and we lease and the lease is expiring.

    It does nothing wrong so buying more P4000 is an option, but the lease refresh is the time to look at alternatives.

    If I stick with P4000 we'll go the VSA route though - couple of ML350's full of 3TB SAS in RAID10 and 600GB or so of SSD as SmartCache hanging off the SmartArray should perform quite nicely I'd hope.
    We like the P4000 because it's solid. You really have to go out of your way to break it. No offence to OnApp - I'm sure their product is good, and I love the way it has built-in intelligence regarding the placement of VMs so reads can be performed from the local host - but it's just a bit too new for us.

    SSD auto-tiering is brilliant and works very well with P4000 VSAs. If you go with IBM or Dell hardware you can actually get "supported" LSI RAID cards which include CacheCade.

    You can put LSI cards in HP servers, but support can be a pain with HP.

    Disadvantages of the HP P4000 VSA: no jumbo frames, no network flow control, and no NIC teaming, so you really want a 10Gb network.
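Controller-level SSD caching like CacheCade or SmartCache is a black box, but the core idea is an LRU read cache in front of the slow tier: hot blocks get promoted to flash, and the coldest block is evicted when the flash fills up. A toy illustration of that idea (pure Python, not any vendor's actual algorithm):

```python
from collections import OrderedDict

# Toy LRU read cache: hot blocks are promoted to a small fast tier
# (the "SSD"), coldest blocks are evicted first. This illustrates the
# general idea behind controller SSD caching, not any vendor's design.

class ReadCache:
    def __init__(self, backing: dict, capacity: int):
        self.backing = backing              # slow tier (spinning disks)
        self.capacity = capacity            # fast-tier size in blocks
        self.cache = OrderedDict()          # fast tier, LRU order
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # mark as recently used
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        value = self.backing[block]         # slow-path read
        self.cache[block] = value           # promote to fast tier
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

disks = {f"b{i}": f"data{i}" for i in range(100)}
cache = ReadCache(disks, capacity=2)
cache.read("b1"); cache.read("b2"); cache.read("b1")  # b1 re-read: a hit
print(cache.hits, cache.misses)  # 1 2
```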

  7. #207
    Join Date
    Jul 2002
    Posts
    49
    Agreed the P4000 is solid - we have ours doing metro cluster, and I can vMotion my guests out of one location, take out that entire room, and the storage keeps running off the other site - it's mental.

    HP SmartCache is HP's equivalent of CacheCade. Not sure who OEMs the card, but it's not me buying an LSI card and rolling my own, so since it's HP end to end, support shouldn't be an issue.

    You're right about going 10GbE on the storage network, and we'll be doing that - I want the storage to be the bottleneck, not the LAN.

    I've spent a lot of time exploring storage options, and I've come to the conclusion that the best/right option is going "storage as software": to my mind I'm buying the hosts anyway, and I'm buying the drives anyway, so if I can spend money just on storage software and run it within a VM, I'm in a much better position in a year's time if some new product comes along and I want to migrate to it. Can't do that with a $100k investment in NetApp/EMC sat in a rack...

  8. #208
    Join Date
    Nov 2009
    Posts
    514
    Quote Originally Posted by hutchingsp View Post
    Agreed the P4000 is solid - we have ours doing metro cluster and I can vMotion my guests out of one location, take out that entire room and the storage keeps running off the other site - it's mental

    HP SmartCache is HP's equivalent of CacheCade, not sure who OEM's the card but it's not me buying an LSI card and rolling my own so since it's HP end to end support shouldn't be an issue.

    You're right about going 10GbE on the storage network and we'll be doing that - I want the storage to be the bottleneck not the LAN

    I've spent a lot of time exploring storage options and I've come to the conclusion that the best/right option is going "Storage as software" because to my mind I'm buying the hosts anyway, I'm buying the drives anyway, so if I can spend money just on storage software and run it within a VM I'm in a much better position in a years time if some new product comes along and I want to migrate to it - can't do that with $100k investment in Netapp/EMC sat in a rack...
    Hadn't come across HP's SmartCache - probably LSI CacheCade, I guess. Thank you for bringing that to my attention; we have a lot of HP customers who have been missing out in their local environments.

    We had someone tell us the P4000 was old the other day, but run it on top of some SSDs and it zooms along.

    HP certainly thinks it's the future:

    http://h30507.www3.hp.com/t5/Around-...ge/ba-p/135321

    A lot of other vendors are heading towards software-defined storage too. It's the way forward IMO.

  9. #209
    Join Date
    Jul 2002
    Posts
    49
    Confirmed that the storage component won't (currently) work with vSphere.

    I'd be interested to hear from anyone else who's doing synchronous replication over ethernet using something that isn't P4000 or that isn't a roll-your-own solution.

  10. #210
    Join Date
    Apr 2011
    Posts
    54
    Is anyone running OnApp storage successfully? If so, what hardware/network setup do you have?

    We have not been able to get it working.

  11. #211
    Join Date
    Nov 2012
    Posts
    428
    Quote Originally Posted by Doublepush View Post
    Anyone running Onapp storage successfully? If so, what hardware/network setup you have?

    We have not been able to get it working.
    I keep asking that same question over and over - no luck so far!

    What type of issues are you having? I've been working with support since release day to get OnApp Storage working. We are making progress, but it's been a tough battle. Feel free to post your issues here or PM me - chances are I've gone through the same issue at some point.

  12. #212
    Join Date
    May 2003
    Posts
    1,708
    I was told several hosts have it up and running as their primary storage and it is working. Not sure who they are, but they say it is. In our tests so far the machines cloud-boot, but our dual-port 10GbE NICs are not seen, and the single-port 10GbE NICs do not pass traffic on the switch, so they think it is a driver issue in their cloud-boot image.
    Last edited by kris1351; 04-24-2013 at 09:02 AM. Reason: WHT has a problem with the word cloud-boot without the hyphen
    ~~~~~~~~~~~~~~~~~~~~~
    UrNode - Virtual Solutions
    http://www.UrNode.com

  13. #213
    Join Date
    Nov 2012
    Posts
    428
    Have you tried the KVM PXE image? They have admitted to issues with the Xen cloud-boot image.


