  1. "OnApp Storage": bullet-proof software SAN?

    Quote Originally Posted by The Register
    UK-based cloud infrastructure supplier OnApp says it has created a scalable and resilient SAN for cloud service providers by taking the local storage of the provider's virtualised application servers and aggregating it with virtual smart controllers running in the same servers.
    http://www.theregister.co.uk/2012/04...cloud_storage/

    is this a real game changer for the SAN hardware as we know it?
    http://onapp.com/storage/

  2. #2
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,566
    Heard about this being in the pipe for a while... we shall see how it pans out.
    Fast Serv Networks, LLC | AS29889 | Fully Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  3. #3
    Quote Originally Posted by [email protected] View Post
    http://www.theregister.co.uk/2012/04...cloud_storage/

    is this a real game changer for the SAN hardware as we know it?
    http://onapp.com/storage/
    Absolutely. A huge portion of the cost premium of "cloud" is having to have a SAN built in. Good SANs cost $$$$ (absolutely essential to performance and reliability), and bad SANs cost $$ while also hurting reliability and performance compared to on-node storage.

    Essentially, you've got all these hypervisor servers that could each have 2-6 hard drives attached for next to nothing, but instead you leave them sitting empty and set up a separate SAN box. That gets expensive. This solution lets you take advantage of the 2-6 disks you could be putting into each server anyway, and still get the benefits of a SAN in terms of redundancy and failover. To me, that's huge.

    This could also reduce network costs: instead of one big SAN box that needs multiple 10GbE connections, if you have 10 boxes that all share the disk load, a 1GbE connection from each would be sufficient in a large number of use cases. The cost benefit of having 1 or 2 GbE connections per server for SAN instead of a 10GbE connection is huge. When you consider that most workloads are not sequential-heavy but random-I/O-heavy, the 120MB/s or so you can get over 1GbE per server is actually pretty good for 4 hard drives per server. There are cases where this could be a bottleneck, but my feeling is that it would be sufficient the majority of the time.
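    To put rough numbers on that (these are my own back-of-envelope assumptions, not anything from OnApp):

        # Random I/O demand of 4 SATA drives vs one 1GbE link. Assumed figures:
        # ~150 random IOPS per 7200rpm drive, 8KiB requests, ~120MB/s usable on 1GbE.
        DRIVES=4; IOPS=150; REQ_KIB=8; LINK_MBS=120
        DEMAND_MBS=$(( DRIVES * IOPS * REQ_KIB / 1024 ))
        echo "random I/O demand: ~${DEMAND_MBS} MB/s vs ~${LINK_MBS} MB/s of 1GbE"   # ~4 vs 120

    So for random-heavy workloads the 1GbE link has plenty of headroom; it's only big sequential jobs that would saturate it.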
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  4. #4
    Join Date
    Mar 2003
    Location
    chicago
    Posts
    1,557

  5. #5
    Join Date
    Mar 2012
    Posts
    162
    I agree it's important; it could be a huge breakthrough. In addition to reliability, this also gives you much more accessible live migration functionality, which is completely lacking in most public clouds.

  6. #6
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,706
    Hmmm... could give AppLogic a run now that they have a solution to storage performance / redundancy...

  7. #7
    Join Date
    Jan 2009
    Posts
    3,876
    Looks promising for sure. OnApp was a bit bumpy at first, but things seem to be rolling now.

  8. #8
    Join Date
    Apr 2007
    Posts
    3,513
    I am sure it will be a great product, especially when you look at other things from OnApp, and the people they have working on this.

    However, it will be interesting to see how many providers go with the solution, at least in the beginning, as you need a lot of trust in your hardware/software when there are that many 'eggs in one basket'.
    - Buying up websites, side-projects and companies - PM Me! -

  9. #9
    Join Date
    Mar 2004
    Location
    Seattle, WA
    Posts
    2,558
    Looks interesting, though their website doesn't really explain how it applies in a real-world scenario. It lists features without explaining how it works.
    ColoInSeattle - From 1U to cage space colocation in Seattle
    ServerStadium - Affordable Dedicated Servers
    Come visit our 18k sq ft. facility in Seattle!
    Managed Private Cloud | Colocation | Disaster Recovery | Dedicated Servers

  10. #10
    Join Date
    Feb 2006
    Location
    New York
    Posts
    630
    Quote Originally Posted by MikeTrike View Post
    Looks promising for sure. OnApp was a bit bumpy at first, but things seem to be rolling now.
    I have to second that comment - OnApp was basically a beta test with its 'free intro offer' to so many providers last year, and while that worked out some/most of the issues, they're not the type of product/company where I'd jump onto the next big release with production-grade clients for a while, especially this new SAN concept.

    That said, I think it's a great idea. I believe it was first pushed commercially by a company doing web-based storage (a bit popular at Pingzine and HostingCon 2 years ago, but the name escapes me). They basically had a customized Linux kernel to spread file storage over many servers across a network (similar to what OnApp is doing). The difference is that OnApp has some commercial teeth to it now, I think, and has commercialized it to leverage a SAN rather than just web-based file access.

    We are signed up and planning to trial it internally, though we're unlikely to place live data on it until at least version 3 is out.
    TurnKey Internet, Inc : phone 1.518.618.0999 and 1.877.539.4638 | Contact Us
    Cloud Servers | Dedicated Servers | Colocation | VPS | Mail Services | Reseller hosting
    New York / East Coast Green Datacenter


  11. #12
    Just got accepted to beta:

    OnApp Storage requires additional local disks in hypervisors, beyond those normally required for an OnApp Cloud. These disks must be unused – you cannot use drives that are used for existing LVM storage, or for the hypervisor’s primary OS.
    So what exactly should I be using for the hypervisor's primary OS? Really don't want to tie up a lot of storage on the OS if I need additional separate disks for the cloud storage portion.
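    For anyone else checking their hypervisors, a quick way to verify a disk really is unused before dedicating it (standard Linux tools, nothing OnApp-specific - just how I'd check):

        # Anything showing a FSTYPE or MOUNTPOINT is already in use.
        lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
        # Disks/partitions listed here back existing LVM storage - also off limits.
        pvs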
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  12. #13
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,566
    The idea is great, but the proof will be in the pudding so to speak.
    Fast Serv Networks, LLC | AS29889 | Fully Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  13. #14
    Join Date
    Jun 2011
    Location
    Miami, FL
    Posts
    824
    Quote Originally Posted by funkywizard View Post
    So what exactly should I be using for the hypervisor's primary OS? Really don't want to tie up a lot of storage on the OS if I need additional separate disks for the cloud storage portion.
    If it's just a hypervisor, then an SD card/drive or other small onboard bootable media.
    Jeff Tysco President Cingular, Inc.
    Business Class Hosting Services
    Your Total IT Solutions Provider

  14. #15
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,566
    Quote Originally Posted by funkywizard View Post
    Just got accepted to beta:



    So what exactly should I be using for the hypervisor's primary OS? Really don't want to tie up a lot of storage on the OS if I need additional separate disks for the cloud storage portion.
    With a RAID card you can create logical volumes, putting the OS on, say, a 10GB volume presented as /dev/sda and the rest as /dev/sdb. You should be able to do something similar with mdadm as well.
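    A sketch of the mdadm variant (device names are examples, and whether OnApp's storage will actually accept partitions instead of whole raw disks is an open question):

        # Carve each disk into a small OS partition and a large data partition (GPT).
        parted -s /dev/sda mklabel gpt mkpart primary 1MiB 10GiB mkpart primary 10GiB 100%
        parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 10GiB mkpart primary 10GiB 100%
        # Mirror the small partitions for the hypervisor OS.
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mkfs.ext4 /dev/md0   # OS goes here; sda2/sdb2 remain for the cloud storage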
    Fast Serv Networks, LLC | AS29889 | Fully Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  15. #16
    Join Date
    Jul 2003
    Location
    Waterloo, Ontario
    Posts
    1,116
    It's definitely needed to bring more people onto the cloud. Low upfront investment is what's needed in this industry, and I think this is a viable way of doing it. AppLogic has been doing it for years, but it will be interesting to see what OnApp is doing differently to set itself apart in the marketplace.
    Right Servers Inc. - Fully Managed Cloud Servers in Canada. Join our White Labelled WHMCS Cloud VPS Reseller Program to become your own host!
    SSD Accelerated Cloud | cPanel/WHM | Softaculous | Idera Backups | Bitcoin & Litecoin Accepted

  16. #17
    Quote Originally Posted by FastServ View Post
    With a RAID card you can create logical volumes, putting the OS on, say, a 10GB volume presented as /dev/sda and the rest as /dev/sdb.
    Using a h/w RAID card to present one disk as more than one disk to the host OS would certainly work, I expect, but it's not something I'm interested in doing.

    Quote Originally Posted by FastServ View Post
    You should be able to do something similar with mdadm as well.
    I would expect you can't do this in software, or they wouldn't have explicitly said you can't. I believe they want an entire raw disk for their storage, so it shouldn't be possible to split one up and use part of it for the hypervisor OS; otherwise they wouldn't have gone out of their way to say not to. Maybe it is possible and they're telling you not to anyway, but for now I'll take them at their word.

    Looking at it, they say they only need 30GB of storage for the hypervisor OS, so I guess I could put a 40GB SSD in there. That shouldn't increase the cost too much or use up valuable chassis space.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  17. #18
    Any idea what the cost of this is going to be? They make it sound like the cost will be tied to the size of your storage in GB, but no actual prices are presented. If the cost is per GB, I'd imagine this tips the scale in favor of SSD: since the storage itself is pretty expensive, a per-GB license cost would be easier to stomach.

    We're already looking at this from an SSD point of view in general, since thin provisioning and deduplication are far more useful for SSD than for regular storage, and since a big bottleneck for VPS or cloud is disk I/O, just saying "we're doing SSD-only" neatly solves that problem. If you're going to charge "cloud" pricing, SSD seems like a good way to justify that cost to people: "Sure, it may cost similar to a dedicated server, but everything, I mean everything, is stored on replicated SSDs, and you get full failover, elastic cloud storage, etc."
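    Toy numbers to make the per-GB licensing point (the prices and the license fee are purely my assumptions, not anything OnApp has announced):

        # Hypothetical street prices in cents per GB, plus a made-up per-GB license fee.
        HDD_GB=10; SSD_GB=100; LIC_GB=5
        echo "license as % of HDD cost: $(( 100 * LIC_GB / HDD_GB ))%"   # 50% - hard to stomach
        echo "license as % of SSD cost: $(( 100 * LIC_GB / SSD_GB ))%"   # 5% - barely noticeable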
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  18. Quote Originally Posted by funkywizard View Post
    Just got accepted to beta:



    So what exactly should I be using for the hypervisor's primary OS? Really don't want to tie up a lot of storage on the OS if I need additional separate disks for the cloud storage portion.
    There are many small mSATA PCIe SSD drives on the market now that can be plugged directly into a PCI-E slot as an OS boot drive.

    Also, most socket 2011 server boards now come with 10x onboard SATA ports, thanks to the C602 chipset, so you can have 8-10 inexpensive large-capacity non-enterprise SATA-II/SATA-III drives per hypervisor, connect them directly to the onboard controller, then dedicate them to the "virtual" SAN.

    The new dual socket 2011 2U twin2 quad-node (SYS-2027TR-HTRF) from Supermicro is also looking good for this type of platform: it can act as 4x high-capacity hypervisors, with 24x 2.5" SATA/SAS/SSD drives contributed to the virtual SAN, sorta killing two birds with one big stone!

  19. #20
    Quote Originally Posted by [email protected] View Post
    There are many small mSATA PCIe SSD drives on the market now that can be plugged directly into a PCI-E slot as an OS boot drive.

    Also, most socket 2011 server boards now come with 10x onboard SATA ports, thanks to the C602 chipset, so you can have 8-10 inexpensive large-capacity non-enterprise SATA-II/SATA-III drives per hypervisor, connect them directly to the onboard controller, then dedicate them to the "virtual" SAN.

    The new dual socket 2011 2U twin2 quad-node (SYS-2027TR-HTRF) from Supermicro is also looking good for this type of platform: it can act as 4x high-capacity hypervisors, with 24x 2.5" SATA/SAS/SSD drives contributed to the virtual SAN, sorta killing two birds with one big stone!
    Thanks for the feedback. mSATA PCIe certainly sounds good (or even a regular SATA SSD if the cost is acceptable) if they're OS-bootable.

    The Supermicro solution also looks compelling for this use case. If you're sticking to SSD-only storage, having that many 2.5" slots is perfect. That said, the official spec requires / strongly encourages 4x 1GbE ports or at least 1x 10GbE port. I know there were some reasonably priced quad-GbE Supermicro boards for the X8 series / X34xx CPUs, but I haven't looked into it for the X9, and that twin squared looks like it only has 2 ports per node.
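    Assuming the 4x 1GbE would end up aggregated on the Linux side (a guess on my part - the spec may equally mean separate storage links), the standard bonding setup would be something like:

        # 802.3ad (LACP) bond across four GbE ports; the switch must be configured to match.
        modprobe bonding mode=802.3ad miimon=100
        ifconfig eth0 down; ifconfig eth1 down; ifconfig eth2 down; ifconfig eth3 down
        echo +eth0 > /sys/class/net/bond0/bonding/slaves
        echo +eth1 > /sys/class/net/bond0/bonding/slaves
        echo +eth2 > /sys/class/net/bond0/bonding/slaves
        echo +eth3 > /sys/class/net/bond0/bonding/slaves
        ifconfig bond0 10.0.0.5 netmask 255.255.255.0 up   # example storage-network address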
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  20. Quote Originally Posted by funkywizard View Post
    Thanks for the feedback. mSATA PCIe certainly sounds good (or even a regular SATA SSD if the cost is acceptable) if they're OS-bootable.

    The Supermicro solution also looks compelling for this use case. If you're sticking to SSD-only storage, having that many 2.5" slots is perfect. That said, the official spec requires / strongly encourages 4x 1GbE ports or at least 1x 10GbE port. I know there were some reasonably priced quad-GbE Supermicro boards for the X8 series / X34xx CPUs, but I haven't looked into it for the X9, and that twin squared looks like it only has 2 ports per node.
    Yes, on the twin2 quad-node you can install a low-profile dual GbE, quad GbE, or dual 10GbE NIC on the riser card in each node, giving you 4-6 GbE ports or 2x GbE + 2x 10GbE ports per node.

  21. #22
    Quote Originally Posted by [email protected] View Post
    Yes, you can install a low-profile dual GbE or dual 10GbE NIC on the riser card in each node, giving you 4x GbE ports or 2x GbE + 2x 10GbE ports per node.
    That's certainly interesting then. Is the dual GbE a Supermicro part, or does it require the typical riser + low-profile PCIe card?
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  22. Quote Originally Posted by funkywizard View Post
    That's certainly interesting then. Is the dual GbE a Supermicro part, or does it require the typical riser + low-profile PCIe card?
    AOC-SG-i2 (dual Intel 82574L GbE)
    AOC-SG-i4 (quad Intel 82576 GbE)
    AOC-EXPX9502FXSR (dual Intel 82598EB 10GbE)

    They can all fit the riser that comes with the twin2 quad-node.

  23. #24
    Quote Originally Posted by [email protected] View Post
    AOC-SG-i2 (dual Intel 82574L GbE)
    AOC-SG-i4 (quad Intel 82576 GbE)
    AOC-EXPX9502FXSR (dual Intel 82598EB 10GbE)

    They can all fit the riser that comes with the twin2 quad-node.
    Nice. The AOC-SG-i2 is actually pretty affordable too. Provantage has it for around $75 and wiredzone for around $80. If the twin squared already comes with the riser installed, that's a plus as I had a heck of a time finding compatible risers that actually worked properly back when I needed to do that.

    The price on the AOC-EXPX9502FXSR hurts ($2,200 or more), but what can you expect for dual 10GbE.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  24. #25
    Join Date
    Jan 2004
    Location
    Pennsylvania
    Posts
    939
    Beta is not open yet, but from the screenshots I've seen I am fairly confident it is gluster-based. With gluster you can set the number of copies of data to keep, i.e. 2, 4, 6, etc. When dealing with dozens of head nodes and hundreds of disks, it is considered a "bad idea" with gluster to keep only 2 copies of data; 4 is preferred. So there is thin provisioning and dedupe, but at the cost of maintaining 4 copies of everything.

    My other concern is that we've been able to make a SATA SAN scale really well by using minimal SSD caching, but with this type of system there is no cache, so you'll most definitely need to watch your SATA storage and need the ability to offer SAS/SSD storage. All this being said, we're super eager for the beta to open up so we can verify/test all of this.
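    For anyone who hasn't used gluster: the copy count is fixed at volume-creation time, along these lines (hostnames and brick paths are made up, and this is gluster itself, not necessarily whatever OnApp built):

        # One brick per node, four-way replication.
        gluster peer probe node2; gluster peer probe node3; gluster peer probe node4
        gluster volume create cloudvol replica 4 \
            node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 node4:/bricks/b1
        gluster volume start cloudvol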
    Matt Ayres - togglebox.com
    Linux and Windows Cloud Virtual Datacenters powered by Onapp / Xen
    Instant Setup, Instant Scalability, Full Lifecycle Hosting Solutions

    www.togglebox.com

  25. #26
    Join Date
    Oct 2002
    Location
    Miami, FL
    Posts
    501
    Quote Originally Posted by CGotzmann View Post
    Hmmm... could give AppLogic a run now that they have a solution to storage performance / redundancy...
    Sure could. Can't wait to see real world performance and reliability.

  26. #27
    Join Date
    Mar 2012
    Posts
    162
    Thanks for the mention of Gluster. I hadn't heard of it.

  27. #28
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,404
    Quote Originally Posted by TheWiseOne View Post
    Beta is not open yet, but from the screenshots I've seen I am fairly confident it is gluster based.
    Just a quick note - our storage solution is not gluster based. I have a team in Cambridge, UK, that has architected and engineered the storage platform from scratch.


    D
    Ditlev Bredahl. CEO,
    OnApp.com & SolusVM.com + Cloud.net & CDN.net

  28. #29
    Quote Originally Posted by TheWiseOne View Post
    Beta is not open yet, but from the screenshots I've seen I am fairly confident it is gluster-based. With gluster you can set the number of copies of data to keep, i.e. 2, 4, 6, etc. When dealing with dozens of head nodes and hundreds of disks, it is considered a "bad idea" with gluster to keep only 2 copies of data; 4 is preferred. So there is thin provisioning and dedupe, but at the cost of maintaining 4 copies of everything. My other concern is that we've been able to make a SATA SAN scale really well by using minimal SSD caching, but with this type of system there is no cache, so you'll most definitely need to watch your SATA storage and need the ability to offer SAS/SSD storage. All this being said, we're super eager for the beta to open up so we can verify/test all of this.
    It would be great if you could specify that 1 or 2 copies of data be stored on one tier (SSD, for example) and extra copies on another tier used only for disaster recovery. If you had 2 copies on SSD, the chance of losing both at once is pretty small as long as the storage is smart enough to keep them on separate servers, and the cost of keeping extra copies on hard disks is almost nothing compared to storing 4 copies on SSD.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  29. #30
    Quote Originally Posted by eming View Post
    Just a quick note - our storage solution is not gluster based. I have a team in Cambridge, UK, that has architected and engineered the storage platform from scratch.


    D
    Thanks for the clarification : )
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  30. #31
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,404
    Quote Originally Posted by funkywizard View Post
    It would be great if you could specify that 1 or 2 copies of data be stored on one tier (SSD, for example) and extra copies on another tier used only for disaster recovery. If you had 2 copies on SSD, the chance of losing both at once is pretty small as long as the storage is smart enough to keep them on separate servers, and the cost of keeping extra copies on hard disks is almost nothing compared to storing 4 copies on SSD.
    That is actually a very, very good idea. You are able to specify tiers of storage, and you can guarantee IOs to specific clients/drives for QoS etc. You are also able to define the number of copies per virtual drive - but currently you can't mix it so that the extra copies land on lower tiers of drives. That makes perfect sense - I'll pass it on to the guys!

    Thanks...


    D
    Ditlev Bredahl. CEO,
    OnApp.com & SolusVM.com + Cloud.net & CDN.net

  31. #32
    Quote Originally Posted by [email protected] View Post

    is this a real game changer for the SAN hardware as we know it?
    http://onapp.com/storage/
    Not sure about a game changer, but it's certainly an indication of the trend moving forward, where storage will become more flexible and vendors will start to deliver these sorts of storage solutions as a service...

    I'm not sure what tech they used - it looks like a cool offering, and it's very cool that more storage options are becoming available. Expect more of this over time.

    Building your SAN is easy. Using a simple web-based UI you can select physical disks from any servers connected to the platform, and combine them into virtual data stores to create your SAN. Disks can be any size: they are simply grouped by performance, which enables you to create tiers of storage based on low performance/high capacity SATA drives, high-performance SSDs, or anything in between. A powerful CLI provides low-level access for sysadmins, too.
    This does sound a lot like ZFS - we have been using it as the backbone of our cloud for quite a while. Better than tiers, though (and you can still tier with ZFS if you like), you can create massive SSD read/write caching layers - we can grow a caching layer by adding free SSD drives found anywhere in our network, or we can grow zpools of SATA/SAS capacity-type storage behind the caches on the fly, again using any drive available in our network.
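    For example, growing the caches and pools on a live system looks like this (device names made up):

        # Bulk SATA capacity, double-parity.
        zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
        zpool add tank cache c0t6d0 c0t7d0        # SSD read cache (L2ARC), added on the fly
        zpool add tank log mirror c0t8d0 c0t9d0   # mirrored SSD intent log for sync writes
        zpool add tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # grow capacity later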

    I don't think this type of solution is unique (heck, ZFS has been around for ages), but the concept of packaging these solutions for service providers is pretty new, and OnApp certainly is - and continues to be - ahead of the curve in delivering solutions to the market specifically geared towards service providers.

  32. #33
    Join Date
    Mar 2003
    Location
    chicago
    Posts
    1,557
    I was thinking it sounded like ZFS with some nice management features.


    Quote Originally Posted by cartika-andrew View Post
    Not sure about a game changer, but it's certainly an indication of the trend moving forward, where storage will become more flexible and vendors will start to deliver these sorts of storage solutions as a service...

    I'm not sure what tech they used - it looks like a cool offering, and it's very cool that more storage options are becoming available. Expect more of this over time.



    This does sound a lot like ZFS - we have been using it as the backbone of our cloud for quite a while. Better than tiers, though (and you can still tier with ZFS if you like), you can create massive SSD read/write caching layers - we can grow a caching layer by adding free SSD drives found anywhere in our network, or we can grow zpools of SATA/SAS capacity-type storage behind the caches on the fly, again using any drive available in our network.

    I don't think this type of solution is unique (heck, ZFS has been around for ages), but the concept of packaging these solutions for service providers is pretty new, and OnApp certainly is - and continues to be - ahead of the curve in delivering solutions to the market specifically geared towards service providers.

  33. #34
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,404
    Quote Originally Posted by cyberhouse View Post
    I was thinking it sounded like ZFS with some nice management features.
    It's not - I've uploaded some more info for you guys here: http://ditlev.onapp.com/storage.zip
    It's the presentation from WorldHostingDays by Julian, who runs the storage team at OnApp... hopefully that helps answer some of the questions here.


    D

  34. #35
    Quote Originally Posted by eming View Post
    It's not - I've uploaded some more info for you guys here: http://ditlev.onapp.com/storage.zip
    It's the presentation from WorldHostingDays by Julian, who runs the storage team at OnApp... hopefully that helps answer some of the questions here.


    D
    You guys really built something proprietary to do what ZFS already does?

    Either way, I'm impressed as always, Ditlev - you guys continually package solutions for our industry. Keep it up, bud!

  35. #36
    Quote Originally Posted by cartika-andrew View Post
    You guys really built something proprietary to do what ZFS already does?

    Either way, I'm impressed as always, Ditlev - you guys continually package solutions for our industry. Keep it up, bud!
    ZFS is kind of proprietary when you consider that it only runs properly on Solaris (or via a low-performance module under Linux). Such a configuration is not entirely useful for the average service provider.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  36. #37
    Quote Originally Posted by funkywizard View Post
    ZFS is kind of proprietary when you consider that it only runs properly on Solaris (or via a low-performance module under Linux).
    Yes, it runs on Solaris, but as iSCSI-attached volumes it doesn't matter whether you are running Linux or Windows or whatever. It's certainly not "low performance" - if you saw the metrics, you would be hard pressed to replicate them with anything. We used NetApp for years, and no matter what caching you used, you aren't coming close.

    Quote Originally Posted by funkywizard View Post
    Such a configuration is not entirely useful for the average service provider.
    You're right - the average provider could not implement this. This is the value companies like OnApp are bringing: they are developing interfaces and solutions specifically geared to service providers. It just seemed logical, based on the functionality, that they would build this on top of something already proven - an open-standards file system and volume manager like ZFS. If they went their own way, so be it. I'm sure it will be an effective solution, but it's not groundbreaking - these things exist. What IS groundbreaking is how they package and build solutions and deliver them specifically for service providers - quite impressive.

  37. #38
    Quote Originally Posted by cartika-andrew View Post
    Yes, it runs on Solaris, but as iSCSI-attached volumes it doesn't matter whether you are running Linux or Windows or whatever. It's certainly not "low performance" - if you saw the metrics, you would be hard pressed to replicate them with anything. We used NetApp for years, and no matter what caching you used, you aren't coming close.



    You're right - the average provider could not implement this. This is the value companies like OnApp are bringing: they are developing interfaces and solutions specifically geared to service providers. It just seemed logical, based on the functionality, that they would build this on top of something already proven - an open-standards file system and volume manager like ZFS. If they went their own way, so be it. I'm sure it will be an effective solution, but it's not groundbreaking - these things exist. What IS groundbreaking is how they package and build solutions and deliver them specifically for service providers - quite impressive.
    I mean that because the storage needs to be on a Solaris box, the typical service provider would need storage nodes separate from everything else, since most of their hardware won't be running Solaris. That wouldn't really work given what OnApp Storage is looking to accomplish here, which is to let you use disks attached to your existing servers.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  38. #39
    Quote Originally Posted by funkywizard View Post
    I mean that because the storage needs to be on a Solaris box, the typical service provider would need storage nodes separate from everything else, since most of their hardware won't be running Solaris. That wouldn't really work given what OnApp Storage is looking to accomplish here, which is to let you use disks attached to your existing servers.
    I understand and agree. OnApp is trying to do something different. It sounded like ZFS to me, but great if it's not.

    Curious to see this in action.

  39. #40
    Join Date
    Mar 2003
    Location
    chicago
    Posts
    1,557
    ZFS runs pretty well on FreeBSD 9. I am running some live tests on some servers, and from what I see ZFS is kicking ass.

    What I would love to see is a great control panel to manage ZFS pools and everything.
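    Until then it's all CLI (pool/device names are just examples):

        zpool status -v tank                     # per-disk health and resilver progress
        zpool list                               # capacity per pool
        zfs list -o name,used,avail,mountpoint   # datasets
        zpool add tank mirror da4 da5            # grow the pool with another mirrored pair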


    Quote Originally Posted by funkywizard View Post
    I mean that because the storage needs to be on a Solaris box, the typical service provider would need storage nodes separate from everything else, since most of their hardware won't be running Solaris. That wouldn't really work given what OnApp Storage is looking to accomplish here, which is to let you use disks attached to your existing servers.
