
Thread: Building a SAN

  1. #26
    Join Date
    Oct 2010
    Location
    Kent, UK
    Posts
    185
    Budget SAN-a-likes are very doable, but you will need someone who understands the soft/hard storage stack from top to bottom.
    HW RAID really doesn't belong in a SAN (which doesn't stop it turning up in many), but it is easy and simple to set up, I guess...
    Cloud Pixies Ltd. Adding some Pixie magic into the Cloud!

  2. #27
    Join Date
    Mar 2003
    Location
    Kansas City, Missouri
    Posts
    462
    Quote Originally Posted by JordanJ View Post
    I think a lot of people will be shocked at the IOPS limitations on large SANs.

    When you factor in de-dupe, IOPS, redundancy and management tools, NetApp is one of the cheapest ways to go.
    I second this.

    NetApp adds tons of extra value with block-level deduplication, easy SAN replication, flexible volumes and RAID-DP.

    You simply can't find a more flexible _true_ multiprotocol solution. If anyone has any NetApp-related questions, please let me know. Keep in mind that an IOP is not always an IOP; it really depends on the workload and the desired latency.

    NetApp wins hands down against any other solution.
    =>Admo.net Managed Hosting
    => Managed Hosting • Dedicated Servers • Colocation
    => Dark Fiber Access to 1102 Grand, Multiple Public Providers
    => Over •Sixteen• Years of Service

  3. #28
    Join Date
    Dec 2010
    Location
    Dirty Jerzey
    Posts
    38
    I know some people who have been trying to start offering their own cloud services. It's one of the first types of hosting I'm trying to learn. Most complain not because it's a pain to build, but because of the price involved in building a SAN. The most efficient and cost-effective SAN would be a 24-bay chassis with a 5620 proc: that's 24 drives of 1TB or 500GB each, depending on how big you want your SAN.

    Also, check out colocation. You might save yourself a few thousand dollars.

    EDIT: I forgot to mention that RAID is a must as well.

  4. #29
    Quote Originally Posted by FranciscoV View Post
    I know some people ...

    Also, check out colocation. You might save yourself a few thousand dollars.

    EDIT: I forgot to mention that RAID is a must as well.
    Does the RAID you mention have to be RAID 10?

  5. #30
    Join Date
    Oct 2010
    Location
    Kent, UK
    Posts
    185


    Quote Originally Posted by jameshsi View Post
    Does the RAID you mention have to be RAID 10?
    Which RAID level is entirely dependent on how many disk failures you want to absorb. The issue for any SAN (which usually has 10TB+ of disks) is that when a failure occurs, RAID10 absorbs it, but you can't take another failure without catastrophe. And as the drives are working overtime while running one disk down, the odds of another failure actually go up.
    Hence why 2+ disk resilience (RAID6 or better) is becoming more popular.

    RAID10 is probably okay if you have daily backups and you or your clients can handle losing a day of data.
    Cloud Pixies Ltd. Adding some Pixie magic into the Cloud!

  6. #31
    Join Date
    May 2003
    Posts
    1,708
    RAID 6 carries a large performance hit, which is why most people favour RAID 10. One thing you failed to mention is that in a RAID 10 you can lose one drive on each side of the mirror. The immediate mitigation is to have at least two hot spares in the system, along with monitoring that tells you when a hot spare has been pressed into service.
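    The "performance hit" here is the classic RAID write penalty: every host write costs extra back-end disk operations (2 for RAID 10's mirror, 4 for RAID 5's read-modify-write, 6 for RAID 6's double parity). A back-of-the-envelope sketch of how that eats into host-visible IOPS, using illustrative per-disk numbers rather than anything vendor-specific:

```python
# Rough effective-IOPS estimate for a RAID group under a mixed workload,
# using the classic write-penalty figures (RAID10: 2, RAID5: 4, RAID6: 6).
# Per-disk IOPS and the read/write mix below are assumptions, not benchmarks.

def effective_iops(disks, iops_per_disk, write_penalty, read_fraction):
    """Approximate host-visible IOPS: each host write costs
    `write_penalty` back-end operations, each read costs one."""
    raw = disks * iops_per_disk
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

# 24 x 10K SAS drives (~140 IOPS each), 70% read workload:
for name, penalty in [("RAID10", 2), ("RAID5", 4), ("RAID6", 6)]:
    print(name, round(effective_iops(24, 140, penalty, 0.7)))
```

    On this assumed 70% read mix, the same 24 spindles deliver about 2585 host-visible IOPS as RAID 10 but only about 1344 as RAID 6, which is where the "large performance hit" comes from.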
    ~~~~~~~~~~~~~~~~~~~~~
    UrNode - Virtual Solutions
    http://www.UrNode.com

  7. #32
    Join Date
    Mar 2003
    Location
    Kansas City, Missouri
    Posts
    462
    Unfortunately it's not 50 disks in one RAID-6 set. With NetApp solutions you would make two to three RAID groups inside an aggregate.

    This limits your exposure to double disk failure, as you have fewer disks inside any one RAID group. And NetApp's RAID-DP is as fast as RAID-10, hands down.
    =>Admo.net Managed Hosting
    => Managed Hosting • Dedicated Servers • Colocation
    => Dark Fiber Access to 1102 Grand, Multiple Public Providers
    => Over •Sixteen• Years of Service

  8. #33
    Join Date
    Oct 2010
    Location
    Kent, UK
    Posts
    185
    RAID10 can survive 2 failures, BUT not in all cases. The industry hasn't really settled on a good notation once we get into probability-based resiliency. In RAID10, survival is 100% for a single disk failure; two-disk resilience is a function of the size of the array (IIRC it's about (100 - 200/N)%, where N is the total number of drives in the array). The probability keeps falling as further failures occur, up to N/2 failures, beyond which it is 0%.

    How useful hot spares are is related to how quickly a hot spare can be converted into a replacement. For mirrors, it's the time to copy the surviving half of the failed pair (or just the active data, if usage is tracked). For parity systems, it's the time to rebuild the disk onto the hot spare, which is generally much slower because it touches more drives to reconstruct the data.
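    The two-failure figure is easy to check numerically. Under the simplest assumption (both failures strike drives uniformly at random), the array dies only when both failures hit the same mirror pair, which gives an exact survival probability of (N - 2) / (N - 1), in the same ballpark as the percentage quoted above:

```python
import random

# Monte Carlo check of RAID10 two-failure survival, assuming failures
# strike drives uniformly at random. The array dies only if both
# failures land in the same mirror pair.

def survives_two_failures(n_drives, trials=200_000):
    """Fraction of random 2-drive failures a RAID10 array survives."""
    ok = 0
    for _ in range(trials):
        a, b = random.sample(range(n_drives), 2)
        # Drives 2k and 2k+1 form a mirror pair.
        if a // 2 != b // 2:
            ok += 1
    return ok / trials

# Exact value is (n - 2) / (n - 1); e.g. 12 drives -> ~90.9%.
print(survives_two_failures(12))
```

    Note this ignores the correlated-failure effect mentioned earlier (surviving drives working harder during rebuild), which pushes the real-world odds lower.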
    Cloud Pixies Ltd. Adding some Pixie magic into the Cloud!

  9. #34
    Join Date
    Oct 2010
    Location
    Kent, UK
    Posts
    185
    Quote Originally Posted by AdmoNet View Post
    Unfortunately it's not 50 disks in one RAID-6 set. With NetApp solutions you would make two to three RAID groups inside an aggregate.

    This limits your exposure to double disk failure, as you have fewer disks inside any one RAID group. And NetApp's RAID-DP is as fast as RAID-10, hands down.
    Which is why most SAN vendors 'forget' to mention the probability of catastrophic failure. Normal aggregations of RAID (5/6/DP etc.) don't alter the resilience (the worst-case failure they can absorb), only the probability that the worst case happens. No matter how many RAID-DP groups you have, a 3-disk failure within a single group is total failure.

    Methods that can always guarantee more than that are currently confined to specialist areas.
    Cloud Pixies Ltd. Adding some Pixie magic into the Cloud!

  10. #35
    Join Date
    Dec 2010
    Location
    Dirty Jerzey
    Posts
    38
    Quote Originally Posted by DeanoC View Post
    Which RAID level is entirely dependent on how many disk failures you want to absorb. The issue for any SAN (which usually has 10TB+ of disks) is that when a failure occurs, RAID10 absorbs it, but you can't take another failure without catastrophe. And as the drives are working overtime while running one disk down, the odds of another failure actually go up.
    Hence why 2+ disk resilience (RAID6 or better) is becoming more popular.

    RAID10 is probably okay if you have daily backups and you or your clients can handle losing a day of data.
    Yeah, I didn't mention which RAID because I wasn't too sure. I know that RAID10 is known to cause issues with arrays over 10TB, but it seems to work pretty well when everything is in working order.

    On a side note, and it may help the OP or anyone else for that matter: I was discussing with my prospective business partner yesterday that the only bad thing about the cloud is that if the SAN goes down, all of our clients go down with it. We came to the conclusion that we can install R1Soft onto each cloud node. If the SAN ever does go down (*knocks on wood*), this will help keep the clients' data safe and secure within the hypervisor.

  11. #36
    Personally I love EMC equipment. I think the CX4 is by far the most reliable and scalable option, with the deepest VMware integration among all the mid-range storage vendors. The biggest selling point for me was Fully Automated Storage Tiering.

    The EMC unified platform, known as Celerra, is also a nice option. It is basically a CX4 with a NAS filer on top.

  12. #37
    Join Date
    Mar 2003
    Location
    Kansas City, Missouri
    Posts
    462
    Honestly, between EMC and NetApp... NetApp wins on price every time. Also, the last CX-4 I installed had two Windows Storage Server licenses on top. If you trust your data to Gates, by all means go with a glorified server chassis, with some batteries slapped below it, running Windows.

    I don't believe in the block-level model where you simply keep handing out LUNs. I believe the storage system should be able to pool storage (pooling RAID groups) to take advantage of all spindles. The upgrade path shouldn't involve handing out more LUNs; it should involve growing the underlying storage to meet I/O or space requirements.

    To me, EMC seems very old-school: block storage, very static. They say multiprotocol, but are they talking about NFS too? Usually EMC requires you to buy additional "gateway" products to hand out their storage.

    NetApp is a true unified device servicing FC, iSCSI, NFS and CIFS, and it allows you to create virtual filers for multi-tenant environments. It's simply more flexible.

    Here are some SPEC numbers on NetApp vs. EMC: http://bit.ly/bJZpRD

    Thanks!

    Thanks!
    =>Admo.net Managed Hosting
    => Managed Hosting • Dedicated Servers • Colocation
    => Dark Fiber Access to 1102 Grand, Multiple Public Providers
    => Over •Sixteen• Years of Service

  13. #38
    Join Date
    Nov 2009
    Location
    Cincinnati
    Posts
    1,585
    If you're looking at iSCSI, the Dell MD3220i plus three MD1220 drive shelves will give you 96 2.5" drive bays. We use these with 300GB 10K SAS drives, split into six-drive RAID 5 groups. So far the performance has been great, and at that price point you can't beat it.
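    For anyone pricing this out, the usable capacity of that layout is simple arithmetic: RAID 5 gives up one drive's worth of capacity per group. A quick sketch (raw figures before formatting and spares, not vendor specs):

```python
# Usable-capacity math for the setup above: an MD3220i plus three MD1220
# shelves (96 x 2.5" bays), 300 GB drives, six-drive RAID 5 groups.

BAYS = 96
DRIVE_GB = 300
GROUP_SIZE = 6          # six-drive RAID 5 groups
PARITY_PER_GROUP = 1    # RAID 5 loses one drive's capacity per group

groups = BAYS // GROUP_SIZE
usable_gb = groups * (GROUP_SIZE - PARITY_PER_GROUP) * DRIVE_GB
print(groups, usable_gb)   # 16 groups, 24000 GB raw usable
```

    So the full 96-bay build nets roughly 24TB usable across 16 small RAID 5 groups, which also keeps each group's rebuild domain small.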
    'Ripcord'ing is the only way!
