
Thread: SAN for OnAPP

  1. #126
    Join Date
    Dec 2001
    Location
    Houston Texas
    Posts
    4,420
    Quote Originally Posted by ewitte View Post
    Thought train is headed over to a 4-box NexentaStor cluster using the LSI SAS switch, 20 500GB Constellation drives and SSD caching. The thing with NexentaStor is that more can be added fairly easily. The entire project can scale to $15-20k, but I need at least 3 5620 hypervisors as well as other hardware.
    A friend of mine is running that and we looked at it - they seem to be happy with it. It's not cheap, though.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000

  2. #127
    Quote Originally Posted by ewitte View Post
    Thought train is headed over to a 4-box NexentaStor cluster using the LSI SAS switch, 20 500GB Constellation drives and SSD caching. The thing with NexentaStor is that more can be added fairly easily. The entire project can scale to $15-20k, but I need at least 3 5620 hypervisors as well as other hardware.
    For a small build out like that, you are better off renting space on someone else's SAN.
    Jay

  3. #128
    Join Date
    Jun 2002
    Location
    PA, USA
    Posts
    5,143
    Quote Originally Posted by jayglate View Post
    For a small build out like that, you are better off renting space on someone else's SAN.
    10 TB of raw space is not really small, is it?
    Fluid Hosting, LLC - Enterprise Cloud Infrastructure: Cloud Shared and Reseller, Cloud VPS, and Cloud Hybrid Server

  4. #129
    Join Date
    Sep 2010
    Posts
    66
    Quote Originally Posted by jayglate View Post
    For a small build out like that, you are better off renting space on someone else's SAN.
    If I wanted something slow I could build it for less than $1k, especially considering I already have 3 450GB 15k SAS drives for testing.
    Changes to timeline. My focus in 2011 will be the new baby!
    http://www.vmcloudhost.net - ETA online April/May 2012
    (Onapp)
    http://forum.vmcloudhost.net - Forums are open

  5. #130
    Join Date
    Nov 2007
    Location
    India, USA and Amsterdam
    Posts
    2,581
    Quote Originally Posted by ewitte View Post
    If I wanted something slow I could build it for less than $1k, especially considering I already have 3 450GB 15k SAS drives for testing.
    If you are getting a shared SAN, it doesn't have to be slow. It's a good solution when you are running on a tight budget.

  6. #131
    Join Date
    Sep 2010
    Posts
    66
    Quote Originally Posted by chennaihomie View Post
    If you are getting a shared SAN, it doesn't have to be slow. It's a good solution when you are running on a tight budget.
    How? Anyone willing to run InfiniBand to my cabinet? I want a minimum of 1GB/s (800Mbit) and 10-100k IOPS on a single connection. 10Gbit Ethernet is too expensive, and having 10 or so gigabit Ethernet connections per server is kinda silly and most likely wouldn't work the way intended anyway.
    Changes to timeline. My focus in 2011 will be the new baby!
    http://www.vmcloudhost.net - ETA online April/May 2012
    (Onapp)
    http://forum.vmcloudhost.net - Forums are open

  7. #132
    Quote Originally Posted by ewitte View Post
    How? Anyone willing to run InfiniBand to my cabinet? I want a minimum of 1GB/s (800Mbit) and 10-100k IOPS on a single connection. 10Gbit Ethernet is too expensive, and having 10 or so gigabit Ethernet connections per server is kinda silly and most likely wouldn't work the way intended anyway.
    10 to 100k IOPS is not so hard - lots of drives, yes, or Fusion-io; InfiniBand, umm, maybe.. LOL. But 1GB/s (800Mbit) can easily be achieved via several bonded connections over NFSv4 or iSCSI. If you can run NFSv4 I would suggest it, as NFSv4 is worlds better than NFSv3, has some amazing local read caching, and is generally easier than iSCSI (NFSv3 can be run very, very fast too). Now, if you are looking for 800Mbit to one single server, I think you need some real-world examples to justify that: some of the very largest and most heavily hit cloud providers, who have a lot more money than we do, never get anywhere near 800Mbit to a single host, even under a VERY VERY heavy load on their hypervisors. 300 to 500Mbit to a hypervisor is more reasonable, but 800Mbit from the SAN is EASILY achievable with InfiniBand - see the rough numbers sketched below.
    Jay
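    A rough back-of-the-envelope sketch of the sizing math discussed in the two posts above. The per-link and per-drive figures are illustrative assumptions, not measurements from anyone's SAN:
    Code:
        import math

        # Illustrative planning constants - not vendor specs.
        GBE_LINK_MBIT = 1000      # nominal line rate of one gigabit Ethernet link
        SAS_15K_IOPS = 180        # rough random IOPS for one 15k RPM SAS drive

        def links_needed(target_mbit, efficiency=0.8):
            """Bonded GbE links for a throughput target, assuming ~80% protocol/bonding efficiency."""
            return math.ceil(target_mbit / (GBE_LINK_MBIT * efficiency))

        def drives_needed(target_iops, per_drive_iops):
            """Spindles for a random-IOPS target, ignoring RAID write penalties and any SSD cache."""
            return math.ceil(target_iops / per_drive_iops)

        # 1 GB/s is really ~8000 Mbit, which is why bonded GbE gets painful fast:
        print(links_needed(800))      # -> 1 link covers 800 Mbit
        print(links_needed(8000))     # -> 10 links for a true 1 GB/s
        # The 10-100k IOPS range, on raw spindles with no caching:
        print(drives_needed(10_000, SAS_15K_IOPS))    # -> 56 drives
        print(drives_needed(100_000, SAS_15K_IOPS))   # -> 556 drives
    Those last two numbers are why the discussion keeps circling back to SSD caching and Fusion-io rather than pure spindle counts.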

  8. #133
    Join Date
    Dec 2001
    Location
    Houston Texas
    Posts
    4,420
    Quote Originally Posted by FHDave View Post
    10 TB of raw space is not really small, is it?
    It's not small, but it's not large. It's not big enough to justify their own clustered solution and the support overhead that goes with it.

    That's only 2.5 TB usable in a RAID 100 format.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000
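    For reference, a quick usable-capacity sketch assuming the 20 x 500 GB build mentioned earlier in the thread. Whether "RAID 100" is read as plain striped mirrors (50% overhead) or as an extra layer of mirroring is what moves the usable figure between 5 TB and the 2.5 TB quoted above:
    Code:
        # Usable capacity for 20 x 500 GB raw under a few illustrative layouts.
        RAW_TB = 20 * 0.5   # 10 TB raw

        layouts = {
            "RAID 10 / RAID 100 (striped mirrors, 2 copies)": 0.50,
            "Double-mirrored stripes (4 copies)": 0.25,
            "RAID 6 across all 20 drives": 18 / 20,
        }

        for name, efficiency in layouts.items():
            print(f"{name}: {RAW_TB * efficiency:.1f} TB usable of {RAW_TB:.0f} TB raw")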

  9. #134
    Join Date
    Jun 2002
    Location
    PA, USA
    Posts
    5,143
    RAID 100, 1 GB/s = 800 Mbps... I think we all need some rest.
    Fluid Hosting, LLC - Enterprise Cloud Infrastructure: Cloud Shared and Reseller, Cloud VPS, and Cloud Hybrid Server

  10. #135
    Quote Originally Posted by sailor View Post
    It's not that much, but find a provider that has the good stuff deployed in large quantities and you can get it for a monthly fee that will be a much better payback than doing capex on a smaller scale for your needs.

    Going cheap on your SAN can have dire consequences.
    Hey Sailor,

    I'm not sure I follow your comment? For an active-active pair of Dell EqualLogics, the lowest-end config (while still having redundant cards, etc.) of 16 x 250GB SATA drives is right around $20k each. That's $40k for the pair?

    List price is closer to $60k. Over $60k with taxes.

    Maybe if we purchased more than one or two at once we could get a deeper discount, but I am not sure it would go down much more than it already has...

    To switch my hardware costs from capex to opex, I could always go the leasing route, although that doesn't always make the most sense for my company. Thus I outright own quite a bit of hardware and only lease a small portion.

    I've not found a colo company willing to rent me gear of any sort cheaper than going to the source myself. Even though large companies (colo providers, for example) can likely negotiate better hardware pricing from their suppliers than smaller fish such as myself due to their volume, once you factor in the colo provider's profit margins those hardware savings evaporate.

    We also get the benefits of full control & flexibility.
    Fully Managed Fast Hosting
    In Vancouver & Toronto
    Canadian owned & operated
    ezp.net

  11. #136
    Quote Originally Posted by JordanJ View Post
    That is really cheap, actually, for horizontally scalable enterprise storage. The NetApps we deploy cost SIGNIFICANTLY more than that. Truth is, you get what you pay for, but a lot of what you pay for can only be used by larger enterprises utilizing other linked products.

    I am looking at a few boxes of memory storage that cost ~$100k for 4TB with 250k IOPS. To give you an idea, that 3U box delivers the same IOPS as 1600+ SATA II disks.

    Nowadays you buy IOPS as well as GBs.
    Jordan, I'm curious what PhoenixNAP needs RamSans and NetApps for (aren't you strictly colo)? I'd love to hear about your experience with them.

    In my case, it's not that I feel the cost is too high for the feature set you get with an enterprise SAN, but that the cost is too high for general use cloud hosting platforms. Maybe if you have a high end niche market willing to pay significantly higher pricing for the end products...
    Fully Managed Fast Hosting
    In Vancouver & Toronto
    Canadian owned & operated
    ezp.net
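    A quick sanity check of the "one 3U box = 1600+ SATA II disks" comparison quoted above. The per-disk IOPS figure is the assumption that drives the whole ratio:
    Code:
        # How many 7.2k SATA spindles it takes to match a 250k IOPS box,
        # under a few common ballpark per-disk figures (assumptions, not measurements).
        BOX_IOPS = 250_000

        for per_disk_iops in (80, 100, 150):
            equivalent_disks = BOX_IOPS / per_disk_iops
            print(f"At {per_disk_iops} IOPS/disk: ~{equivalent_disks:,.0f} disks to match {BOX_IOPS:,} IOPS")
    The quoted 1600+ figure corresponds to roughly 150 IOPS per disk; with more conservative per-disk numbers the equivalent spindle count climbs past 2,500.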

  12. #137
    Quote Originally Posted by arisythila View Post
    I believe Justin is trying to set up a conference call between you and me. We've overcome these issues and are achieving closer to 300-400mb/sec per SERVER on our grids. This was one of our major problems with AppLogic. Since we've figured out how to use AppLogic, this hasn't been an issue at all for us.

    Thanks,
    Hey Mike,

    Got your emails, appreciate the feedback. Will be in touch.
    Fully Managed Fast Hosting
    In Vancouver & Toronto
    Canadian owned & operated
    ezp.net

  13. #138
    Quote Originally Posted by jayglate View Post
    For a small build out like that, you are better off renting space on someone else's SAN.
    When you rent space on someone's SAN and run into performance issues, you are generally told "so sorry, you're renting space on a shared SAN, what do you expect?"

    Sometimes, building your own is the only way to go.
    Fully Managed Fast Hosting
    In Vancouver & Toronto
    Canadian owned & operated
    ezp.net

  14. #139
    Quote Originally Posted by lostmind View Post
    When you rent space on someone's SAN and run into performance issues, you are generally told "so sorry, you're renting space on a shared SAN, what do you expect?"

    Sometimes, building your own is the only way to go.
    Not if you clearly outline how much storage and what type of performance you are looking to achieve.
    Jay

  15. #140
    Join Date
    Dec 2001
    Location
    Houston Texas
    Posts
    4,420
    I am talking about renting it from a provider on their SAN.

    They are going to have a much lower cost of operations at scale.

    Yes, you will get more control, which can be a benefit - but that same control can also be a drawback, and you will need guys to support it, who are not cheap either. You can't do everything yourself unless you don't want a life outside of work and want to be on call 24x7 with calls actually coming in. There are pluses and minuses to everything. Every minute you spend on something you decide to insource to save $1 is a minute you don't get back to focus on your core value proposition, which might be earning you $5.

    That is an unwise investment which all too many people make because they don't do a full analysis of the soft or hidden expenses.


    Quote Originally Posted by lostmind View Post
    Hey Sailor,

    I'm not sure I follow your comment? For an active-active pair of Dell EqualLogics, the lowest-end config (while still having redundant cards, etc.) of 16 x 250GB SATA drives is right around $20k each. That's $40k for the pair?

    List price is closer to $60k. Over $60k with taxes.

    Maybe if we purchased more than one or two at once we could get a deeper discount, but I am not sure it would go down much more than it already has...

    To switch my hardware costs from capex to opex, I could always go the leasing route, although that doesn't always make the most sense for my company. Thus I outright own quite a bit of hardware and only lease a small portion.

    I've not found a colo company willing to rent me gear of any sort cheaper than going to the source myself. Even though large companies (colo providers, for example) can likely negotiate better hardware pricing from their suppliers than smaller fish such as myself due to their volume, once you factor in the colo provider's profit margins those hardware savings evaporate.

    We also get the benefits of full control & flexibility.
    Dedicated Servers
    WWW.NETDEPOT.COM
    Since 2000

  16. #141
    Join Date
    Jun 2002
    Location
    PA, USA
    Posts
    5,143
    Quote Originally Posted by jayglate View Post
    Not if you clearly outline how much storage and what type of performance you are looking to achieve.
    What utility do you use to guarantee that somebody can't exceed their assigned IOPS or performance (whatever the metric is)?
    Fluid Hosting, LLC - Enterprise Cloud Infrastructure: Cloud Shared and Reseller, Cloud VPS, and Cloud Hybrid Server

  17. #142
    Quote Originally Posted by FHDave View Post
    What utility do you use to guarantee that somebody can't exceed their assigned IOPS or performance (whatever the metric is)?
    I wouldn't call it a utility, but we can build a dedicated allocation within a shared SAN to meet a customer's guidelines and needs.
    Jay
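    On FHDave's question about enforcement: most SAN-side QoS comes down to some form of per-volume or per-initiator rate limiting. A minimal token-bucket sketch of the accounting involved - the class name and numbers are illustrative, not any vendor's implementation:
    Code:
        import time

        class IopsLimiter:
            """Token bucket: each tenant volume refills at its assigned IOPS rate."""

            def __init__(self, assigned_iops, burst=None):
                self.rate = assigned_iops                  # I/O tokens added per second
                self.capacity = burst or assigned_iops     # largest burst tolerated
                self.tokens = self.capacity
                self.last = time.monotonic()

            def allow_io(self):
                """True if this I/O may proceed now, False if it should be queued or delayed."""
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return True
                return False

        # A tenant volume capped at 500 IOPS with a 1000-I/O burst allowance.
        volume_limit = IopsLimiter(assigned_iops=500, burst=1000)
        admitted = sum(volume_limit.allow_io() for _ in range(2000))
        print(f"{admitted} of 2000 back-to-back I/Os admitted immediately")
    In a real deployment this accounting sits in the array or in the hypervisor's I/O path rather than in application code; the "dedicated allocation" Jay describes is the other approach - carving out resources so tenants cannot contend in the first place.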

  18. #143
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,409
    I actually agree with Jay and Jeff here (and, to get back to the OP's question, "SAN for OnApp"), that it would make sense for a large portion of the hosts on WHT to go with a shared SAN solution for their OnApp setup. Right now margins are very high on cloud hosting, and there is plenty of room for the added long-term cost of a leased SAN. The cloud WILL commoditize in the next 12-18 months, and if you do not have a footprint by then, you may have a hard time getting one.
    So building your own cloud infrastructure and cloud software platform might just be bad business, as you would lose out on the pre-price-erosion cloud era - it is a gold rush right now, so make sure you get started before it is too late. And going with an existing infrastructure will make it a whole lot easier for you.


    D
    Ditlev Bredahl. CEO,
    OnApp.com + Cloud.net & CDN.net

  19. #144
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
    Quote Originally Posted by lostmind View Post
    Jordan, I'm curious what PhoenixNAP needs RamSans and NetApps for (aren't you strictly colo)? I'd love to hear about your experience with them.

    In my case, it's not that I feel the cost is too high for the feature set you get with an enterprise SAN, but that the cost is too high for general use cloud hosting platforms. Maybe if you have a high end niche market willing to pay significantly higher pricing for the end products...

    We are working on some product sets to allow enterprises with existing NetApps to utilize the SnapMirror functionality and back up to a secure facility without having to buy an additional NetApp.

    Also, small enterprises needing centralized storage under 5TB can realize huge cost savings by renting the storage from a datacenter rather than purchasing a full NetApp with only one shelf.

    One of our customers, whom I consult for, is also using it for an OnApp deployment.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services
