  1. #1
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916

    best raid card for 96 SSD's?

I generally go with the 3ware 9750's for larger RAID builds, but I was looking at the ATTO Technology ESAS-H644-000:

    http://www.newegg.com/Product/Produc...82E16816319015

    Anyone have any feedback on this card with a large array?

  2. #2
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,706
    SAS1 and SATA2 max? Seems limiting.
    I don't think you will find any card that will not be a bottleneck. You should run multiple SAS2/SATA3 HBA's instead and use a ZFS file system or software RAID so you don't have a single card being the bottleneck for 96 SSD's.

  3. #3
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    Quote Originally Posted by CGotzmann View Post
    SAS1 and SATA2 max? Seems limiting.
    I don't think you will find any card that will not be a bottleneck. You should run multiple SAS2/SATA3 HBA's instead and use a ZFS file system or software RAID so you don't have a single card being the bottleneck for 96 SSD's.
NE has the wrong info; that card is SAS2:
    http://www.attotech.com/products/pro...=ESAS-R644-000 ESAS-R644-C00

  4. #4
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    So here is the setup I'm working on,

    4x 24-SSD Hardware Raid 10's
    Those would be combined with a software raid-0

3ware 9750s were going to be the hardware RAID controllers, but I'm open to suggestions.

    I think this would provide the best performance/redundancy at the lowest cost. Any input would be great.
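For illustration, a minimal sketch of how that top-level software RAID 0 could be put together on Linux with mdadm; the /dev/sdX names for the four exported hardware RAID 10 volumes are hypothetical, and the chunk size is only a starting point.

```python
#!/usr/bin/env python3
"""Sketch only: stripe four hardware RAID 10 volumes into one software RAID 0.

The device names are hypothetical -- substitute whatever volumes the four
controllers actually expose to the OS.
"""
import subprocess

# One exported RAID 10 volume per controller (assumed names).
hw_volumes = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

# Build the top-level RAID 0 stripe across the controllers with mdadm.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",                          # plain striping, no parity
     f"--raid-devices={len(hw_volumes)}",
     "--chunk=256",                        # 256K chunk; tune for the workload
     *hw_volumes],
    check=True,
)

# A filesystem then goes on /dev/md0, e.g. mkfs.xfs /dev/md0.
```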

  5. #5
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,706
Which 3ware 9750? Even the 9750-8i only has a single-core 800MHz RoC with 512MB of DDR2 cache.
If you are going to go with LSI/3ware, I would definitely suggest the LSI 9271-8i, which is not all that much more, but you get PCIe 3.0 and a much faster dual-core 800MHz RoC with 1GB of DDR3 cache.

Though that many SSDs in a RAID 10 (24x) will for sure suffer from the card itself being the bottleneck. I would still suggest not purposefully installing your own single point of failure and bottleneck.

Though the SPOF is still an issue even with a high-performance HBA -- the bottleneck isn't.

  6. #6
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    CGotzmann,

800MHz and 512MB of RAM are more than enough to handle a 24-drive RAID 10 array; that has very low overhead compared to RAID 5/6 or 50/60.

Why would I need PCIe 3.0 when PCIe 2.0 runs at 5GT/s per lane? With an x8 PCIe 2.0 slot you get 8GB/s (full duplex).

6Gb/s is ~750MB/s raw, but the Samsung 840 EVOs max out at 540MB/s.
24 x 540MB/s = 12.96GB/s
However, this number is divided in half with RAID 10 because the controller only sends/receives one copy of the data across the PCIe interface and handles the mirroring itself. So 24 drives in RAID 10 have a maximum throughput of 6.48GB/s.
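As a back-of-envelope check on those numbers (my arithmetic, using the figures quoted above; the halving argument applies most directly to writes, since reads can be served from either side of a mirror):

```python
# Back-of-envelope throughput for 24 SSDs behind a single RAID 10 card,
# using the figures quoted above.
ssd_max_mb_s = 540                 # Samsung 840 EVO max sequential, MB/s
drives = 24

aggregate_gb_s = drives * ssd_max_mb_s / 1000
print(f"drive-side aggregate      : {aggregate_gb_s:.2f} GB/s")   # 12.96

# RAID 10: the host sends one copy over PCIe and the card writes both
# mirrors, so only half of the drive-side bandwidth shows up host-side.
host_side_gb_s = aggregate_gb_s / 2
print(f"host-visible (write path) : {host_side_gb_s:.2f} GB/s")   # 6.48
```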
    Last edited by Kiamori; 09-28-2013 at 04:39 PM.

  7. #7
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
Also, you keep talking about a single point of failure. No matter what you have attached to the drives, it's going to be a single point of failure until enterprise drives start coming with built-in redundant connections. I would rather have one single point of failure that takes 5 minutes to replace per 24 drives than 6 single points of failure from using separate HBAs. That just does not make sense.

  8. #8
    Join Date
    Mar 2010
    Location
    Germany
    Posts
    681
I think the raw data rate is around 4Gb/s per lane (5GT/s minus the 8b/10b encoding overhead).
x8 PCIe 2.0 would as such give you pretty much exactly 4GB/s.

Real-world performance is usually less than that.

x16 comes closer in theory, but it's not like x16 adapters are around.

  9. #9
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    Quote Originally Posted by wartungsfenster View Post
I think the raw data rate is around 4Gb/s per lane.
x8 PCIe 2.0 would as such give you pretty much exactly 4GB/s.

Real-world performance is usually less than that.

x16 comes closer in theory, but it's not like x16 adapters are around.
4GB/s is half duplex; almost all cards support full duplex.
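To put numbers on this exchange (my arithmetic, not from either poster): PCIe 2.0 signals at 5GT/s per lane, 8b/10b encoding leaves 4Gb/s of payload per lane, and each direction is independent.

```python
# PCIe 2.0 lane math behind the 4GB/s vs 8GB/s figures in this exchange.
signal_rate_gt_s = 5.0          # PCIe 2.0 transfer rate per lane
encoding_efficiency = 8 / 10    # 8b/10b line coding
lanes = 8

payload_gb_s_per_lane = signal_rate_gt_s * encoding_efficiency / 8   # bytes/s
one_direction = lanes * payload_gb_s_per_lane       # ~4.0 GB/s
full_duplex = 2 * one_direction                     # ~8.0 GB/s both ways

print(f"x8 PCIe 2.0, one direction: {one_direction:.1f} GB/s")
print(f"x8 PCIe 2.0, full duplex  : {full_duplex:.1f} GB/s")
```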

  10. #10
    Join Date
    Oct 2002
    Location
    Vancouver, B.C.
    Posts
    2,656
    Quote Originally Posted by Kiamori View Post
800MHz and 512MB of RAM are more than enough to handle a 24-drive RAID 10 array; that has very low overhead compared to RAID 5/6 or 50/60.
You're right, they're more than enough. They're actually totally unnecessary for RAID 10, as there aren't any RAID calculations involved at all in mirroring or striping; it's just some extra read/write operations that would be done by the RAID card anyway.

Hardware RAID makes sense if you're running Windows, ESX, or some type of SSD-cached configuration and ZFS isn't an option. For an actual all-SSD array on a Linux OS, all the RAID card offers you is some caching in the card's RAM. You can get way more performance spending that same money on other things like RAM or more/better SSDs.

  11. #11
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    ZFS was not an option for the project.
    Area51.mn VPS, Dedicated & Colocated Servers.
    Area51 Computers Custom Servers & Gaming Systems. (Since 1998)
    NetAffect Email & Web Hosting Services. (Since 1996)
    Quality Systems & Service Since 1996

  12. #12
    Join Date
    Dec 2009
    Posts
    88

Have a look at the new LSI 12G HBA cards, possibly with a SAS expander. You could use a mixture of IR firmware RAID and software RAID to get great performance.

  13. #13
    Join Date
    Apr 2010
    Posts
    491
    Quote Originally Posted by Kiamori View Post
    So here is the setup I'm working on,

    4x 24-SSD Hardware Raid 10's
    Those would be combined with a software raid-0

3ware 9750s were going to be the hardware RAID controllers, but I'm open to suggestions.

    I think this would provide the best performance/redundancy at the lowest cost. Any input would be great.
Software RAID 0? Any HBA goes down and you have an outage. Soft RAID 1 over RAID 0s gives you redundancy at the expense of transmitting the writes twice. Using enterprise SAS SSDs with dual ports, and controllers that can deal with it, would be even better.

Overall, I really wonder why you want to drive 96 SATA SSDs; there are very good reasons why large enterprise SSDs have moved to direct PCIe. You are looking for redundancy and performance at the lowest cost, and I really doubt local storage is the right solution for that. Pure SSD for large datasets is almost never the right answer if cost is a factor. A 100% hot, 100% random, symmetric read/write pattern is a very rare use pattern, and when you even come close to approaching those patterns, scale-wide storage is generally a much better fit than a single silo.

  14. #14
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    Quote Originally Posted by silasmoeckel View Post
Software RAID 0? Any HBA goes down and you have an outage. Soft RAID 1 over RAID 0s gives you redundancy at the expense of transmitting the writes twice. Using enterprise SAS SSDs with dual ports, and controllers that can deal with it, would be even better.

Overall, I really wonder why you want to drive 96 SATA SSDs; there are very good reasons why large enterprise SSDs have moved to direct PCIe. You are looking for redundancy and performance at the lowest cost, and I really doubt local storage is the right solution for that. Pure SSD for large datasets is almost never the right answer if cost is a factor. A 100% hot, 100% random, symmetric read/write pattern is a very rare use pattern, and when you even come close to approaching those patterns, scale-wide storage is generally a much better fit than a single silo.
It's for a search engine project; the current bottleneck is IOPS. SSDs solve this. Budget is not really an issue, within reason of course.

  15. #15
    Kiamori,

Is there any specific reason you're unable to parallelize this out over multiple nodes?

If you are able to, then you get the combined total IOPS of the nodes that make up the cluster, with additional redundancy as a potential side effect.

Is this a database you're looking to run for this search engine project?

If it is, I would suggest looking at something like Clustrix for a NewSQL relational DB (MySQL compatible), or MongoDB etc. if it's NoSQL, and setting them up in a horizontal cluster with fast interconnects (10/40Gbps or InfiniBand).

  16. #16
    Join Date
    Apr 2010
    Posts
    491
    Quote Originally Posted by Kiamori View Post
It's for a search engine project; the current bottleneck is IOPS. SSDs solve this. Budget is not really an issue, within reason of course.
Search engines are what MapReduce was made for; Couchbase comes to mind, but there are plenty of other options in the big data space. They all scale wide rather than deep.

In any event, RAID 1 between the cards keeps a single card failure from taking you down.

  17. #17
    Join Date
    May 2009
    Location
    Vaduz/LI
    Posts
    2,771
    Quote Originally Posted by Kiamori View Post
    until enterprise drives start coming with built in redundant connections
    That exists. It works very well.

  18. #18
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    Quote Originally Posted by pclhosting View Post
    Kiamori,

Is there any specific reason you're unable to parallelize this out over multiple nodes?

If you are able to, then you get the combined total IOPS of the nodes that make up the cluster, with additional redundancy as a potential side effect.

Is this a database you're looking to run for this search engine project?

If it is, I would suggest looking at something like Clustrix for a NewSQL relational DB (MySQL compatible), or MongoDB etc. if it's NoSQL, and setting them up in a horizontal cluster with fast interconnects (10/40Gbps or InfiniBand).
Your suggestion is similar to one of the options we initially had on the table.

If the project goes well, each 96-SSD storage array will be a single node in a geo-redundant network. We did look at using XCP (Xen Cloud Platform) and doing lots of small nodes, but the cost then shifts to administration, space, and power for all of the nodes. Since hardware costs are now lower than administrative payroll, space, and power, we've decided to go with higher-end, more energy- and space-efficient hardware instead. A 96-SSD array with hardware RAID fits into our 5U chassis, providing the best performance/$ for total cost.

I've ordered several RAID cards and a highly rated HBA to test performance on different configurations. I'll post the results after I'm done testing.

  19. #19
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    Quote Originally Posted by TheLie View Post
    That exists. It works very well.
    brand, model?

  20. #20
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    Quote Originally Posted by silasmoeckel View Post
Search engines are what MapReduce was made for; Couchbase comes to mind, but there are plenty of other options in the big data space. They all scale wide rather than deep.

In any event, RAID 1 between the cards keeps a single card failure from taking you down.
Yes, you could RAID 0 all of the drives on each card and then software RAID 1 the cards, but that is more likely to cause data loss than RAID 10 on each card with software RAID 0 between the cards. If a card fails, it's a 5-minute replacement to bring it back online, whereas an entire RAID 0 set can fail because of one drive; then you are forced to rebuild that whole set, leaving only the other card and all of its drives without redundancy.

RAID 10 with nightly backups is much more reliable.
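A crude way to compare the two layouts being argued here (my own sketch, assuming independent drive failures and rebuild times proportional to the data that has to be copied):

```python
# Crude comparison of the two layouts, assuming independent drive failures
# and rebuild time proportional to the data that has to be copied.
drives_per_card = 24
drive_tb = 1.0   # hypothetical drive size, only used for scale

# Layout A: hardware RAID 10 per card + software RAID 0 across cards.
# One failed SSD -> resync a single mirror pair; only its partner is critical.
rebuild_a_tb = drive_tb
critical_a = 1

# Layout B: RAID 0 per card + software RAID 1 across cards.
# One failed SSD -> the whole 24-drive leg drops out; the rebuild copies the
# full set, and any failure in the surviving leg during that window loses data.
rebuild_b_tb = drives_per_card * drive_tb
critical_b = drives_per_card

print(f"A: copy ~{rebuild_a_tb:.0f} TB, {critical_a} drive exposed during rebuild")
print(f"B: copy ~{rebuild_b_tb:.0f} TB, {critical_b} drives exposed during rebuild")
```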

  21. #21
    Join Date
    Jan 2010
    Location
    East Lansing, MI
    Posts
    305
    Quote Originally Posted by Kiamori View Post
It's for a search engine project; the current bottleneck is IOPS. SSDs solve this. Budget is not really an issue, within reason of course.
    Would something like FusionIO be of any benefit?

    http://www.fusionio.com/products/iodrive2-duo/

    Alternatively
    http://ocz.com/consumer/revodrive-3-pcie-ssd

  22. #22
    Join Date
    Feb 2005
    Location
    Rochester, MN
    Posts
    916
    Quote Originally Posted by HackedServer View Post
    Would something like FusionIO be of any benefit?

    http://www.fusionio.com/products/iodrive2-duo/

    Alternatively
    http://ocz.com/consumer/revodrive-3-pcie-ssd
At $32,000 USD for 2.4TB, the price/TB makes something like that basically useless.

The OCZ is a waste of space in a server; it's really just a product for gamers.

I'm currently running 4.32 million IOPS with the 96 SSDs in RAID 10 and 46TB available. The cost/TB and IOPS, although expensive, are much better than anything else I've seen so far.
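For scale, a rough sanity check on those figures (my arithmetic; the 1TB drive size is an assumption, not stated in the thread): 96 drives mirrored in RAID 10 leave 48 drives' worth of raw capacity, consistent with ~46TB usable, and 4.32 million IOPS works out to roughly 45,000 IOPS per SSD.

```python
# Rough sanity check on the quoted figures; the 1TB drive size is an assumption.
drives = 96
drive_tb = 1.0
total_iops = 4.32e6

usable_tb = drives * drive_tb / 2       # RAID 10 mirrors everything -> 48TB raw
iops_per_drive = total_iops / drives    # ~45,000 IOPS per SSD

print(f"capacity before formatting: {usable_tb:.0f} TB")
print(f"implied IOPS per drive    : {iops_per_drive:,.0f}")
```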
