Page 1 of 2
Results 1 to 25 of 38

Thread: Building a SAN

  1. #1

    Building a SAN

    I'm just doing research on a project I am working on.
    I'm looking into the best options for building a SAN, the SAN would have to have high I/O, so no SATA.

    One option was NetApp, has anyone had any experience with them?


  3. #3
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,398
We've (OnApp) been through something like 300-400 client SANs in the last 5-6 months, and it seems there are 3-4 different approaches out there.

One - low-cost - route is a whitebox with a bunch of 2TB SATA drives in it plus something like OpenFiler. We've seen that setup more than once, but it still scares me. Not a good idea for redundancy, performance or reliability.

One step up would be a Supermicro (something like the SC836E1-R800B) with 14 SAS 500GB-1TB drives (more spindles per GB -> better). Adding something like Open-E to it will make it easy for you to carve out LUNs etc.

Finally, going up another step (before going NetApp etc.), you could take two of the SANs above, chuck in something like MaxIQ, and you've got a high-performing hardware setup. For redundancy I would suggest you go with StarWind; their active-active setup is sweet (if you can get over the fact it runs on Windows). We've got a lot of OnApp clients with that exact setup, and I can vouch for its performance and reliability.

Obviously the last step up would be to go with something like NetApp, EMC or HP etc. Actually, I've seen some new things coming out from VSI that look really nice as well, and at a very good price.

Finally, in the last 6 months I've been in ongoing talks with the guys from http://acunu.com/ - you should keep an eye on them. They have some VERY interesting technology on the way.


    D
    Ditlev Bredahl. CEO,
    OnApp.com & SolusVM.com + Cloud.net & CDN.net

  4. #4
    Join Date
    Oct 2010
    Location
    Kent, UK
    Posts
    185
There is some interesting work going into hierarchical multi-node systems that approach the problem not from a disk point of view. Most at the moment are custom builds, but the right ones can perform, and be far more reliable, than you would guess from the outside.

The work derives, oddly, not from traditional file system research but from throughput computing.
The entire SAN can be treated as a throughput computing device. The large but slow spindles sit at the end of the memory hierarchy (treated the way memory or virtual memory is treated in HPC). DDR and NV drives are different cache pools, with the CPU (and its memory and caches) and the drive controllers as unreliable compute devices; each server node is then seen as a large compute node.

You can see some of the basic approaches in L2ARC, but that is just the tip of the iceberg, and it gets a lot more interesting once you take the leap from disk thinking to throughput computing, imho.
    Cloud Pixies Ltd. Adding some Pixie magic into the Cloud!

  5. #5
    Join Date
    Oct 2004
    Location
    Earth
    Posts
    419
    Quote Originally Posted by eming View Post
We've (OnApp) been through something like 300-400 client SANs in the last 5-6 months, and it seems there are 3-4 different approaches out there.
    What do most SAN builders usually choose for their disk RAID level? RAID 10 or RAID 6?

    Thanks!

  6. #6
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,398
    Quote Originally Posted by WebGuyz View Post
    What do most SAN builders usually choose for their disk RAID level? RAID 10 or RAID 6?

    Thanks!
I'm not sure I've seen anything but RAID 10.

But it depends on what you are trying to do.
RAID 10 would give you faster reads and writes of the two, BUT it is possible to lose everything if you lose the wrong two drives. On larger arrays you could lose exactly half the drives and retain full operations; but if you lose both drives of the same mirror at the same time... you're in trouble.

However, with RAID 6 your writes could be a bit slower because of the extra parity calculation. But you could lose any two drives and not lose any data.
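The trade-off above can be sketched with some quick arithmetic. This is only a rough illustration: the 14 x 1TB drive count matches the Supermicro build mentioned earlier, and the single-random-second-failure model is an assumption, not a vendor figure.

```python
# Rough sketch of RAID 10 vs RAID 6 on a 14 x 1TB array.
def raid10_usable_tb(n_drives, tb_each):
    # Half the drives are mirror copies.
    return (n_drives // 2) * tb_each

def raid6_usable_tb(n_drives, tb_each):
    # Two drives' worth of capacity goes to parity.
    return (n_drives - 2) * tb_each

n, tb = 14, 1.0
print(raid10_usable_tb(n, tb))   # 7.0 TB usable
print(raid6_usable_tb(n, tb))    # 12.0 TB usable

# "Lose the wrong two drives": after one RAID 10 drive dies, only its
# mirror partner is fatal, so the chance that a random second failure
# kills the array is 1/(n-1). RAID 6 survives ANY two failures.
print(round(1 / (n - 1), 3))     # 0.077
```

So RAID 6 gives you 5TB more usable space and strictly better two-drive fault tolerance; RAID 10 buys you the write speed, which is why it keeps winning for VM workloads.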
    Ditlev Bredahl. CEO,
    OnApp.com & SolusVM.com + Cloud.net & CDN.net

  7. #7
    Quote Originally Posted by WebGuyz View Post
    What do most SAN builders usually choose for their disk RAID level? RAID 10 or RAID 6?

    Thanks!
RAID 10.

I would not advise anything else, not even RAID 60.
    Carlos Rego
    OnApp CVO

    The Cloud Engine

  8. #8
    Join Date
    Jan 2010
    Posts
    91
We are studying how to build a workable, low-cost iSCSI SAN for use with OnApp's solution, and it seems there is no cheap way to do this.

If you only build one SAN, then if something happens to it you lose everything and all your VM clients start calling you. Even if you have a backup on a NAS, you still need time to restore it onto whatever SAN you rebuild.

So that means you need 2 SANs, and 2 SANs means double the cost. One SAN already costs a lot; 2 SANs makes you... wince?

If you spend too much money on storage, you might lose out to the competition, or be unable to offer the large disk space other hosting companies do.

Is there anybody using OnApp who is happy to share how you built your SAN and how much you invested in it?

  9. #9
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,398
    Quote Originally Posted by jameshsi View Post
We are studying how to build a workable, low-cost iSCSI SAN for use with OnApp's solution, and it seems there is no cheap way to do this.

If you only build one SAN, then if something happens to it you lose everything and all your VM clients start calling you. Even if you have a backup on a NAS, you still need time to restore it onto whatever SAN you rebuild.

So that means you need 2 SANs, and 2 SANs means double the cost. One SAN already costs a lot; 2 SANs makes you... wince?

If you spend too much money on storage, you might lose out to the competition, or be unable to offer the large disk space other hosting companies do.

Is there anybody using OnApp who is happy to share how you built your SAN and how much you invested in it?
    This setup:
    • 3U Rackmount Server Chassis SC836E1-R800B
    • 2x Intel Xeon Quad Core E5620 2.40GHz
    • 2x Onboard Intel Gigabit NIC
    • Integrated IPMI 2.0 with KVM and Dedicated LAN
    • 800W Redundant Power Supply
    • 6x 2GB DDR2-667 PC2-5300 Fully Buffered RAM
    • 2x Seagate Barracuda ES.2 500GB 16MB Cache 7200RPM SAS Hard Drive
    • 14x Seagate Barracuda ES.2 1TB 16MB Cache 7200RPM SAS Hard Drive
    • Adaptec 5805ZQ 16 Port SAS/SATAII PCI-E Controller w/MaxIQ and BBU
Isn't that bad really - like $6k or less.
You should also consider going with a hosted infrastructure provider that already has a strong shared SAN setup you can buy 1TB from to get started. That way you wouldn't have to worry about redundancy, capex etc.


    D
    Ditlev Bredahl. CEO,
    OnApp.com & SolusVM.com + Cloud.net & CDN.net

  10. #10
    Join Date
    Apr 2006
    Location
    Phoenix
    Posts
    808
I think a lot of people will be shocked at the IOPS limitations on large SANs.

When you factor in de-dupe, IOPS, redundancy and management tools, NetApp is one of the cheapest ways to go.
    Jordan Jacobs | VP, Products|SingleHop| JJ @SingleHop.com
    Managed Dedicated Servers | Bare-Metal Servers | Cloud Services

  11. #11
Yeah, the IOPS limitations are pretty rough, even with SAS drives.
    Last edited by Uncorrupted-Michael; 12-20-2010 at 03:13 PM.
    --
    I'm retired.
    Check out http://yellowfiber.net for all your needs!

  12. #12
    Join Date
    Jan 2010
    Posts
    91
    Quote Originally Posted by PhPhear View Post
Yeah, the IOPS limitations are pretty rough, even with SAS drives.
What do you guys mean? That it will cost much more than you might imagine if you need big storage?

If you build a cloud for 500 clients and each of them needs 50GB of space, you need 25,000 GB = 25 TB. Is that big to you?
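The capacity sum checks out, but capacity is rarely what runs out first. A quick back-of-envelope sketch: the per-client IOPS figure and the ~80 IOPS per 7200rpm spindle below are assumed illustrative numbers, not measurements.

```python
clients, gb_each = 500, 50
print(clients * gb_each / 1000)      # 25.0 TB -- modest by capacity standards

# IOPS is the real limit: a 7200rpm drive delivers roughly 75-100 random IOPS.
hdd_iops = 80                        # assumed per-spindle figure
iops_per_client = 5                  # assumed light average load per VM
total_iops = clients * iops_per_client
print(total_iops)                    # 2500
print(total_iops / hdd_iops)         # 31.25 spindles needed, before RAID write penalties
```

That is why 25TB of cheap SATA is "big" in the wrong dimension: a handful of 2TB drives covers the space but delivers a fraction of the spindle count the workload needs.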

  13. #13
    Join Date
    Jan 2010
    Posts
    91
    Quote Originally Posted by eming View Post
    This setup:

    Isn't that bad really. Like $6k or less.
    You should also consider going with a hosted infrastructure with a provider that already has a strong shared SAN setup that you can buy a 1TB from to get started. That way you wouldn't have to worry about redundancy, capex etc.


    D
I don't think the above config can come in under USD $6k. Also, it seems you didn't count the Open-E or other software you need to pay for.

Also, how does it perform?

  14. #14
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,398
That SAN performs very well because the MaxIQ SSD cache deals with the majority of the IOPS.
I ignored the price of the SAN software (StarWind/Open-E) since you said this was for OnApp.


    D
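A sketch of why an SSD cache tier like MaxIQ absorbs most of the IOPS: when a fraction of requests hit the SSD, the blended throughput follows the harmonic mean of the two tiers, not the arithmetic one. All figures below are illustrative assumptions, not Adaptec specifications.

```python
def effective_iops(hit_rate, ssd_iops, hdd_iops):
    # Average service time is the hit/miss-weighted mix of tier latencies;
    # throughput is its reciprocal (a harmonic blend of the two tiers).
    avg_time = hit_rate / ssd_iops + (1 - hit_rate) / hdd_iops
    return 1 / avg_time

# Assumed figures: ~20k IOPS SSD tier, ~1.1k IOPS 14-spindle SAS array.
print(round(effective_iops(0.9, 20_000, 1_100)))   # roughly 7x the bare array
```

Note the asymmetry: even a 10% miss rate keeps the result far below the SSD's raw speed, because the slow tier dominates the average service time.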

  15. #15
    Join Date
    Jan 2010
    Posts
    91
    Quote Originally Posted by eming View Post
That SAN performs very well because the MaxIQ SSD cache deals with the majority of the IOPS.
I ignored the price of the SAN software (StarWind/Open-E) since you said this was for OnApp.


    D
I don't follow you - why? Do you mean that if you go with OnApp you don't have to use/pay for the StarWind/Open-E software?

  16. #16
    Join Date
    Jun 2006
    Location
    Cluj Napoca
    Posts
    468
@eming the setup posted above has DDR2 fully buffered DIMMs. Are you sure the E5620 works on a motherboard that supports DDR2 FB-DIMMs? Those DIMMs are pretty expensive now compared to the DDR3 you would usually pair with an E5620. You can also get a motherboard that supports 4 x 1Gbit NICs at a very similar price if you really need more speed.

Anyway, I am curious why there is a need for 2 CPUs in a SAN. The OP needs a cheap SAN, and I would recommend, for example, an X3450, since the difference in price between the 5x and 3x series is pretty big.

If you have a choice, I would suggest looking at how much OnApp, or whatever software you plan on using, really knows about your setup. For example, you can buy a NetApp (pretty expensive and not worth it, in my opinion), but if your software doesn't know how to take full advantage of NetApp, then you shouldn't actually buy one - or at least look for another SAN that can actually do things like fast disk clone/snapshot and so on.

I am curious about Open-E; I've seen VPS.net wishing for Open-E to go out of business, and there must have been a reason for that.

I am not sure I would personally even think of going with OpenFiler or Open-E, but if you have the hardware you can test every solution mentioned here (except NetApp) and draw your own conclusions.
    IntoDNS - Check your DNS health and configuration
    IntoVPS - US Fremont and Dallas;EU - Netherlands and Romania VPS hosting

  17. #17
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,398
    Quote Originally Posted by jameshsi View Post
I don't follow you - why? Do you mean that if you go with OnApp you don't have to use/pay for the StarWind/Open-E software?
    yes, Starwind/Open-E comes free with OnApp. Contact us for more details - let's keep that talk off WHT.

    Quote Originally Posted by Cristi4n View Post
@eming the setup posted above has DDR2 fully buffered DIMMs. Are you sure the E5620 works on a motherboard that supports DDR2 FB-DIMMs? Those DIMMs are pretty expensive now compared to the DDR3 you would usually pair with an E5620. You can also get a motherboard that supports 4 x 1Gbit NICs at a very similar price if you really need more speed.
    good points - will doublecheck the configs.

    Quote Originally Posted by Cristi4n View Post
Anyway, I am curious why there is a need for 2 CPUs in a SAN. The OP needs a cheap SAN, and I would recommend, for example, an X3450, since the difference in price between the 5x and 3x series is pretty big.
Agree - 2 CPUs are not needed, and it would be a way for you to save a bit of $$. Those crucial seconds that startups and reboots might take could warrant the extra CPU, but if you are on a budget, that's an obvious place to save! Good input!

    Quote Originally Posted by Cristi4n View Post

I am curious about Open-E; I've seen VPS.net wishing for Open-E to go out of business, and there must have been a reason for that.
    yeah, they had a pretty rough ride with Open-E, and they are putting their money on Starwind now.
    Starwind and Open-E are two very different beasts. I'd be happy to post more about my experiences with each of them if there is any interest. We've deployed many SAN's on both platforms and I'd be glad to share.

    Quote Originally Posted by Cristi4n View Post
I am not sure I would personally even think of going with OpenFiler or Open-E, but if you have the hardware you can test every solution mentioned here (except NetApp) and draw your own conclusions.
Have you looked into Nexenta?
Another alternative could be the AoE-based Coraid appliances - they've got a great pricing setup (lease options etc.) and good technology.
Keep an eye on Acunu.com as well - they've got greatness on the way; it could be the holy grail of storage...


    D
    Ditlev Bredahl. CEO,
    OnApp.com & SolusVM.com + Cloud.net & CDN.net

  18. #18
    Join Date
    Jun 2006
    Location
    Cluj Napoca
    Posts
    468
@eming yes, I built an SR for Nexenta and XenServer that I am testing now, with everything supported, like cloning, snapshots and so on.
However, Nexenta has some bugs, so I adapted my SR to work with Solaris instead, since I would trust that more than Nexenta for ZFS.

I would be glad if we could have a quick talk about Open-E and StarWind, or other things related to storage.
    Last edited by Cristi4n; 12-21-2010 at 05:46 AM.
    IntoDNS - Check your DNS health and configuration
    IntoVPS - US Fremont and Dallas;EU - Netherlands and Romania VPS hosting

  19. #19
    Quote Originally Posted by eming View Post
Keep an eye on Acunu.com as well - they've got greatness on the way; it could be the holy grail of storage...
    Please elaborate

  20. #20
    Join Date
    Jan 2010
    Posts
    91
eming, you must have a lot of experience with iSCSI SANs, since OnApp has built a lot of clients' SANs. Are there some examples of failures you can share with us?

Currently I want to try OnApp, but I don't have a deep pocket full of money, so we might decide to do a test run of OnApp to build a small cloud using the cheapest iSCSI SAN you can think of. What would that be?

Can it just use RAID 5 or RAID 6 instead of RAID 10?
If we use RAID 5 (Dell R610, 1 CPU, 4GB RAM, 6x 2.5" SATA 1TB)
plus Open-E (but without the expensive Adaptec MaxIQ), what will happen?

  21. #21
    Join Date
    Sep 2005
    Location
    London
    Posts
    2,398
    Quote Originally Posted by jameshsi View Post
Can it just use RAID 5 or RAID 6 instead of RAID 10?
If we use RAID 5 (Dell R610, 1 CPU, 4GB RAM, 6x 2.5" SATA 1TB)
plus Open-E (but without the expensive Adaptec MaxIQ), what will happen?
That would work; however, depending on your client profile you may hit an IOPS bottleneck fairly fast.
RAID 5/6 would not be recommended, though.
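The RAID 5/6 warning comes down to write penalty: every logical write costs multiple physical I/Os (read-modify-write of the parity). A rough model, using the standard textbook penalty factors, a drive count matching the R610 config above, and an assumed ~80 IOPS per 7200rpm SATA spindle:

```python
# Physical I/Os per logical write for common RAID levels:
# RAID 10 writes two mirrors; RAID 5 reads+writes data and parity;
# RAID 6 does the same with two parity blocks.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def array_write_iops(n_drives, per_drive_iops, level):
    return n_drives * per_drive_iops / WRITE_PENALTY[level]

# 6 x 7200rpm SATA drives at an assumed ~80 IOPS each:
for level in ("raid10", "raid5", "raid6"):
    print(level, array_write_iops(6, 80, level))
# raid10 240.0 / raid5 120.0 / raid6 80.0
```

With a write-heavy VM workload, the RAID 5 box delivers half the write IOPS of the same spindles in RAID 10, which is why the bottleneck shows up "fairly fast".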


    D
    Ditlev Bredahl. CEO,
    OnApp.com & SolusVM.com + Cloud.net & CDN.net

  22. #22
    Join Date
    Jan 2010
    Posts
    91
When we hit the IOPS bottleneck, what will it look like? I mean, what will happen? High load? Clients' servers acting very slowly?

  23. #23
    Join Date
    Oct 2010
    Location
    Kent, UK
    Posts
    185
Your best bet for a cheaper SAN is to start with OpenIndiana/Nexenta and use ZFS with its COMSTAR/iSCSI support (both built in). IOPS can be helped with a few SSDs for the ZIL and L2ARC.

Whilst not necessarily easy (though Nexenta do offer a free easy version up to 18TB), it will likely outperform any other budget solution in both performance and price.
    Cloud Pixies Ltd. Adding some Pixie magic into the Cloud!

  24. #24
    Quote Originally Posted by jameshsi View Post
When we hit the IOPS bottleneck, what will it look like? I mean, what will happen? High load? Clients' servers acting very slowly?
Correct: the clients' servers will experience high load because iowait gets too high, slowing down the cloud servers' functions.
    Carlos Rego
    OnApp CVO

    The Cloud Engine

  25. #25
    Join Date
    Mar 2004
    Posts
    426
    Quote Originally Posted by jameshsi View Post
I don't think the above config can come in under USD $6k. Also, it seems you didn't count the Open-E or other software you need to pay for.

Also, how does it perform?
Yes, you can do this config (less software) for $6K or probably a little less. The MaxIQ RAID card is around $1K alone, though. So while you could cheapen it up, your performance would suffer.
Cutting corners on a cloud setup due to budget issues can be a disaster: you could easily have thousands of dollars in a setup that turns out useless for its intended purpose due to low performance.



