  1. #1
    Join Date
    Apr 2006
    Posts
    929

    LSI SSD CacheCade? How does it work?

    Hello, I see that LSI sells a piece of software called CacheCade that improves array performance by using an SSD as a cache in front of the array.

    My idea is to use an LSI 9260-4i and 2x 2TB drives along with a Corsair SSD.

    Does anyone know how this software works and whether it is a good upgrade?

    Thanks.

  2. #2
    Join Date
    Oct 2010
    Posts
    84
    It's not something that I have tried yet, but like you I am keen to see it working; if it does what it claims, the performance benefits could be huge.

    My understanding of how this works is that commonly used data is copied into "hot zones" on the SSD, allowing for very fast read and write performance.

    For the seemingly low cost you could potentially introduce tiered storage to your environment. Doing that in the enterprise is usually big money!

  3. #3
    Join Date
    Apr 2006
    Posts
    929
    Yes, that is the idea. My main concern is whether a failure of the SSD could damage the RAID1 array.

  4. #4
    Join Date
    Oct 2010
    Posts
    84
    I guess that if a drive fails in the RAID1 SSD array then you will be running at risk until the drive is replaced. I'm not sure of the impact of losing the whole SSD RAID, but I would expect you would just lose the ability to cache while your storage would still function.
    Not sure if that is the case or not; perhaps someone who uses it could comment.

  5. #5
    I would not worry about the SSD failing and causing data loss. I would worry about the SSD failing and causing performance loss.

    Generally how this stuff works (Adaptec has it, Intel has it on their Z68/Z77 chipsets, Marvell's cheap controllers have it, all of the big storage players have one or more versions of it, ZFS does it natively, etc.) is that an algorithm analyzes the most frequently accessed data and copies that data to the SSD (or to main memory in other cases). If the cache is not a write cache (i.e. it is not holding writes the way battery/capacitor-backed cache does), then data integrity is basically a non-issue. Usually these algorithms get better at caching over time and also weed out large files that are mostly sequential reads (you can fit more small files in the same space as one big file, and sequential reads on spinning media are actually fast).
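
    To make the hot-data idea a bit more concrete, here is a minimal sketch in Python. It is purely illustrative and not CacheCade's actual algorithm; the capacity, promotion threshold, and sequential-run cutoff are made-up numbers. It promotes blocks that keep getting re-read and leaves long sequential runs to the spinning disks, which handle sequential I/O well.

    from collections import OrderedDict, defaultdict

    class HotBlockReadCache:
        """Toy frequency-based read cache, loosely in the spirit of controller
        SSD caching. Not CacheCade's real algorithm; all thresholds are made up."""

        def __init__(self, capacity_blocks=1024, promote_after=3, seq_cutoff=8):
            self.capacity = capacity_blocks     # how many blocks fit on the "SSD"
            self.promote_after = promote_after  # re-reads before a block counts as hot
            self.seq_cutoff = seq_cutoff        # long sequential runs bypass the cache
            self.read_counts = defaultdict(int)
            self.cache = OrderedDict()          # block number -> cached data, LRU order
            self.last_block = None
            self.seq_run = 0

        def read(self, block, read_from_disk):
            # Track sequential runs; streaming reads are left to the spinning disks.
            if self.last_block is not None and block == self.last_block + 1:
                self.seq_run += 1
            else:
                self.seq_run = 1
            self.last_block = block

            if block in self.cache:             # hit: serve from the SSD
                self.cache.move_to_end(block)
                return self.cache[block]

            data = read_from_disk(block)        # miss: go to the HDD array
            self.read_counts[block] += 1

            # Promote blocks that keep being re-read and are not part of a long stream.
            if self.read_counts[block] >= self.promote_after and self.seq_run < self.seq_cutoff:
                if len(self.cache) >= self.capacity:
                    self.cache.popitem(last=False)  # evict the least recently used block
                self.cache[block] = data
            return data

    Over time a cache like this fills up with the small, frequently re-read blocks, which is why the "caching gets better over time" behavior is plausible.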

    I have a few diagrams and a write-up I did on SSD caching and storage tiering that I've actually used at work when talking to sales folks at big storage firms, since pictures make this kind of thing easy to explain. Very simplified, but it might help with conceptualizing what CacheCade is doing.

  6. #6
    Join Date
    Nov 2003
    Posts
    35
    I had this conversation with LSI sales last week, actually; here's their response for CacheCade 2.0 on the 9260/9265 series:

    -> A CacheCade volume set up for read caching only will result in no data loss if a CacheCade SSD fails
    -> A CacheCade volume set up for read & write caching with a non-redundant cache (single SSD or RAID 0) will result in data loss if an SSD fails
    -> A CacheCade volume set up for read & write caching with a redundant cache (RAID 1) will result in no data loss in the event of an SSD failure

    The LSI CacheCade guide also says the array will go into a Blocked state on an SSD malfunction, suspending all I/O; at that point you can either fix the problem with the SSD (if it's a cabling/soft issue or similar) or discard all of the cached data and accept a corrupted array.

    Keep in mind that data loss only applies when doing read+write caching (2.0); if you're just doing read acceleration and not caching writes on the SSD, there will be no data loss.
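
    In other words, the three scenarios above reduce to one rule: only write-back data sitting on a non-redundant cache device is at risk. A trivial sketch of that rule in Python (my own summary of the list above, nothing from LSI's code):

    def cachecade_data_loss_on_ssd_failure(cache_mode, cache_is_redundant):
        """cache_mode is 'read' or 'read-write'; cache_is_redundant means the
        cache SSDs are mirrored (RAID 1). Returns True if a cache SSD failure
        can lose array data, per the scenarios listed above."""
        if cache_mode == "read":
            return False                  # a read cache only holds copies of array data
        return not cache_is_redundant     # dirty writes on a single SSD / RAID 0 are at risk

    assert cachecade_data_loss_on_ssd_failure("read", cache_is_redundant=False) is False
    assert cachecade_data_loss_on_ssd_failure("read-write", cache_is_redundant=False) is True
    assert cachecade_data_loss_on_ssd_failure("read-write", cache_is_redundant=True) is False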

  7. #7
    Join Date
    Apr 2006
    Posts
    929
    Patrick, that is a very interesting post. It has resolved many of my doubts.

    Has anyone tested read-only SSD caching in a real environment?

  8. #8
    Quote Originally Posted by skywin View Post
    Patrick, that is a very interesting post. It has resolved many of my doubts.

    Has anyone tested read-only SSD caching in a real environment?
    I have done a bit of VDI work with L1/L2ARC and ZFS... absolutely amazing. Things like office apps and Windows 7 files get cached and it really makes a big difference in load times. I have also seen SSD read caching on media servers, and those are less exciting if they are not tuned to move the frequently used big files over.

  9. #9
    Just out of curiosity... does anyone understand how the write caching actually works?

    Specifically, I have two 120GB SSD drives in RAID1, and I'm trying to determine the best configuration for using them with a standard shared/reseller hosting server.

    The argument against using the SSDs as a CacheCade write cache is that eventually the data must still be written to the final storage medium... so all the cache is really doing is delaying the write process.

    I'm really not sure I understand how caching works, and the more I read, the more confused I get. What I'm wondering is whether you have real-world experience that might help me determine the best setup.

    You can read my post about this here:

    http://www.webhostingtalk.com/showthread.php?p=8456003

    What do you think?

  10. #10
    Join Date
    Apr 2010
    Posts
    493
    Quote Originally Posted by mrzippy View Post
    Just out of curiosity... does anyone understand how the write caching actually works?

    Specifically, I have two 120GB SSD drives in RAID1, and I'm trying to determine the best configuration for using them with a standard shared/reseller hosting server.

    The argument against using the SSDs as a CacheCade write cache is that eventually the data must still be written to the final storage medium... so all the cache is really doing is delaying the write process.

    I'm really not sure I understand how caching works, and the more I read, the more confused I get. What I'm wondering is whether you have real-world experience that might help me determine the best setup.

    You can read my post about this here:

    http://www.webhostingtalk.com/showthread.php?p=8456003

    What do you think?
    From the linked thread: if you use the mirror set to do your read/write caching, it will figure out which blocks are the most frequently read and keep a copy on the SSDs. Write caching lets data sit there, catching rewrites of the same block (fairly common with databases) and holding data for long periods, writing out to the drives when they are less active.

    From your listed setup I would worry about the longevity of consumer-grade SSDs; write caching will hammer an SSD on an active system. As long as you keep an eye on it this is not a problem, since you can proactively replace the SSDs. We went with SLC and now eMLC SSDs for this, and have seen some big issues with customers who use MLC SSDs for write caching without closely monitoring them.

    All the SSD caching built into RAID cards is very simplistic compared to the software implementations; ZFS is probably the most commonly used. Read and write caching are different things, which lets you use cheap consumer drives without RAID for the read cache; the write cache still needs high-endurance drives and RAID. Other software goes even further and treats the SSDs as tiered storage, so 100GB of SSD and 900GB of RAID gives you 1TB of usable space, since data is not duplicated between the spinning disks and the SSDs. They can also throw dedupe into the mix, letting you leverage your storage and cache even further.
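
    To illustrate the "catching rewrites of the same block" point above, here is a rough Python sketch, a toy model only (real write-back caches also deal with ordering, crash consistency, and flush policy), showing how a write-back cache turns many rewrites of a hot block into a single back-end write:

    class WriteBackCacheSketch:
        """Toy write-back cache: absorbs rewrites of hot blocks on the 'SSD'
        and flushes only the latest version to the backing disks later."""

        def __init__(self, backend_write):
            self.dirty = {}                  # block number -> latest data, held on the SSD
            self.backend_write = backend_write
            self.absorbed = 0                # rewrites that never reached the spinning disks

        def write(self, block, data):
            if block in self.dirty:
                self.absorbed += 1           # same block rewritten: coalesced on the SSD
            self.dirty[block] = data         # acknowledged once it is on the SSD

        def flush(self):
            for block, data in self.dirty.items():
                self.backend_write(block, data)   # one back-end write per dirty block
            self.dirty.clear()

    # A database-style workload rewriting the same block 1000 times:
    disk_writes = []
    cache = WriteBackCacheSketch(lambda blk, data: disk_writes.append(blk))
    for i in range(1000):
        cache.write(block=42, data=i)
    cache.flush()
    print(len(disk_writes), "disk write(s),", cache.absorbed, "rewrites absorbed")
    # -> 1 disk write(s), 999 rewrites absorbed

    This is also the flip side of the data-loss discussion earlier in the thread: until the flush happens, the only copy of those writes lives on the cache SSD, which is why a write-back cache wants mirrored, high-endurance SSDs.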

  11. #11
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    quoted from the other thread:
    Quote Originally Posted by mrzippy View Post
    Hello,

    We are thinking about building some shared hosting servers (cPanel) with the following specs:

    • E3-1230v2 CPU
    • 32GB RAM
    • 4x 15K SAS drives in RAID10 (900GB total storage)


    We want to make things even faster, so we're considering using SSDs. It's too expensive to simply replace the 4 SAS drives with SSDs, so we thought about these two options:

    1. Upgrade the RAID card to an LSI 9271-8i, add 2x 128GB SSDs in RAID1, and then use that for everything except customers' "/home" folders. (So MySQL and the OS would be on the SSD RAID1 array, and the /home folders would be on the RAID10 array.)
    2. Upgrade the RAID card to an LSI 9271-8i, add 2x 128GB SSDs in RAID1, and then enable CacheCade intelligent read/write caching for the entire SAS RAID10.


    Which of these options do you think would be better? (The additional cost for both options is the same, so I'm curious to know which option would result in the best performance and increased speed for the customers.)

    Thanks!
    you can't do "RAID-1" on cachecade SSD caching. you can put 2x 128G in the caching pool or just assign single 256G SSD as caching drive. mind yourself that RAID card's firmware sees the cachecade caching pool as a giant internal "buffer" and it's totally invisible to OS.

    unless you buy the 9271-8iCC, which includes the cachecade 2.0 software license as well as the required physical hardware key, you will need to spend about $300 to add cachecade to any bare LSI raid card that doesn't come with CC to start with.

    on a per-GB basis, 4x 512GB SSDs in hardware RAID-10 would actually be cheaper than 4x 450G SAS in RAID-10 plus cachecade plus a 1x 256G SSD caching drive! yet the pure-SSD RAID-10 array is going to give much, much higher performance than 4x SAS spinning drives plus SSD caching.

    4x Samsung 512GB 830 series (1024GB RAID-10 volume): 4x $500 = ~ $2000
    LSI 9271-4i (4-port dual-core ROC; no need to use cachecade): ~ $400
    ======
    ~$2400 ($2.34 per GB)
    (can be installed in much cheaper 1U chassis with 4x 2.5" drive bay)

    v.

    4x 450G 15K 3.5" SAS (900GB RAID-10 volume): 4x $275 = ~$1100
    LSI 9271-8iCC (8-port dual-core ROC with Cachecade 2.0 included): ~$800
    1x Samsung 256G 830 SSD (caching drive) in 3.5" sled: ~$240
    (2x 128G SSD in caching pool would cost even more!)
    2U 8-bay chassis: ~ $160 extra
    ======
    ~ $2300 ($2.56 per GB)
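
    For anyone who wants to sanity-check the per-GB figures, it is just total cost divided by usable RAID-10 capacity; a quick Python sketch using the rough street prices quoted above:

    def cost_per_gb(parts, usable_gb):
        """parts maps a description to its price; returns (total cost, $ per usable GB)."""
        total = sum(parts.values())
        return total, round(total / usable_gb, 2)

    all_ssd = {"4x Samsung 830 512GB": 4 * 500, "LSI 9271-4i": 400}
    sas_plus_cachecade = {"4x 450GB 15K SAS": 4 * 275, "LSI 9271-8iCC": 800,
                          "Samsung 830 256GB caching SSD": 240, "2U 8-bay chassis extra": 160}

    print(cost_per_gb(all_ssd, usable_gb=1024))            # -> (2400, 2.34)
    print(cost_per_gb(sas_plus_cachecade, usable_gb=900))  # -> (2300, 2.56)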

    this comparison assumes the ever-popular, "battle-tested", production-worthy Samsung 512G 830 series, and of course you can spend a bit more to get the newest 840 Pro series (about $100 more per drive). you can also opt for the cheaper 840 series (without "Pro"; about $100 less per drive). it's true that the 840 is TLC-based, which means lower performance/endurance than the 840 Pro, but even the 840 TLC SSD would perform far better than any 15K SAS drive, caching or not.

    on top of this, a pure SSD array would save you ~0.4A @ 120V in power charges from the DC per server.

  12. #12
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,615
    Quote Originally Posted by cwl@apaqdigital View Post
    quoted from the other thread:

    you can't do "RAID-1" on cachecade SSD caching.
    Actually, I have set up both RAID0 and RAID1 SSD caches; read-only on the RAID0 setup, of course. Keep in mind the maximum SSD cache pool size is 512GB for some reason.

    My only gripe with CacheCade is that (contrary to marketing claims) there are no utilization statistics or tuning parameters available, even through MegaCLI. You just have to trust that it's working.

  13. #13
    Join Date
    Jul 2004
    Location
    New York, NY
    Posts
    2,181
    We've used CacheCade as well as an all-SSD array. CacheCade is cool, but I'd rather go all SSD. We bought a bunch of the new Samsung 840 512GB drives on sale recently and have a few multi-TB SANs, and the performance is way better than CacheCade would get you.

    CacheVault is the other thing they have, which is the flash-based version of a BBU for power loss.

  14. #14
    Join Date
    Aug 2004
    Location
    Canada
    Posts
    3,785
    We have systems with read/write caching and it is definitely a huge improvement. We also use flashcache in the form of read-only caches. The great thing for us is that CacheCade just works no matter the operating system; with flashcache we're restricted on operating system, along with having to maintain patches and things of that nature for our systems.

    It's true that a complete SSD solution is obviously going to be faster. We like caching when we want a lot of disk space along with IOPS faster than we could get from mechanical drives. We can set up a 6TB disk system and get IOPS faster than the 15K SAS setup that would be required to reach the same amount of disk.

    Quote Originally Posted by FastServ View Post
    My only gripe with CacheCade is that (contrary to marketing claims) there are no utilization statistics or tuning parameters available, even through MegaCLI. You just have to trust that it's working.
    This is the one frustrating thing: you're just supposed to assume it works. On our flashcache systems we know what our hit percentage is, among other statistics, which helps tell us whether it's really making a difference.
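
    For what it's worth, the "is it really helping" question mostly comes down to the read hit ratio. A trivial Python sketch, assuming your caching layer exposes hit/miss counters somewhere (the counter values here are made up; as noted above, CacheCade does not expose them at all):

    def cache_hit_ratio(read_hits, read_misses):
        """Fraction of reads served from the SSD cache instead of the spinning disks."""
        total = read_hits + read_misses
        return read_hits / total if total else 0.0

    # Example counters scraped from whatever stats your caching layer provides:
    print(f"read hit ratio: {cache_hit_ratio(read_hits=915_000, read_misses=85_000):.1%}")
    # -> read hit ratio: 91.5%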

  15. #15
    Quote Originally Posted by cwl@apaqdigital View Post
    this comparison assumes the ever-popular, "battle-tested", production-worthy Samsung 512G 830 series, and of course you can spend a bit more to get the newest 840 Pro series (about $100 more per drive). you can also opt for the cheaper 840 series (without "Pro"; about $100 less per drive). it's true that the 840 is TLC-based, which means lower performance/endurance than the 840 Pro, but even the 840 TLC SSD would perform far better than any 15K SAS drive, caching or not.

    on top of this, a pure SSD array would save you ~0.4A @ 120V in power charges from the DC per server.
    Given what AnandTech has tested regarding the Samsung 840s, I would not use them in a production server. This goes for the Pros as well, unless you want to over-provision them by well over the industry-standard 27%, which is quite a waste.

    Off topic:

    A typical SSD will outperform the fastest 15K drive you can find, use less power, and may be more reliable. If you need proven large-capacity drives, then SAS might be the way to go; think 10K 900GB SAS drives. The largest SSD currently available that isn't internally rigged up as RAID-0 is 512GB, and for some that might just not be enough, especially if you are smart and over-provision them.

  16. #16
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by Cookiesowns View Post
    Given what AnandTech has tested regarding the Samsung 840s, I would not use them in a production server. This goes for the Pros as well, unless you want to over-provision them by well over the industry-standard 27%, which is quite a waste...
    actually, the industry average for consumer-grade SSDs is 7% built-in spare NAND. only enterprise-grade SSDs reserve (over-provision) a larger percentage to improve performance, endurance and IO consistency. the new Intel DC S3700 keeps as much as 30% of its NAND as built-in spare area (264GB actual v. 186GB usable v. "200GB" nominal) and that's the main reason the S3700 has the industry-best IOPS, endurance and consistency!
    http://www.anandtech.com/show/6489/playing-with-op

    over-provisioning is NOT a "waste"; it's a very small price to pay to dramatically improve the performance/endurance of any given SSD. as an end user, you always need to find the happy medium between usable capacity and over-provisioning. AnandTech suggests setting aside spare area as high as 25%, but I think 15% (+ the built-in 7% = 22% total) is probably the happy medium for most users.
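
    To put rough numbers on that, a quick Python sketch using the figures above (the S3700's 186GB "usable" figure is the poster's number, i.e. roughly 200 decimal GB expressed in binary units):

    def spare_area_pct(raw_nand_gb, usable_gb):
        """Percentage of raw NAND held back as spare area (over-provisioning)."""
        return round(100 * (raw_nand_gb - usable_gb) / raw_nand_gb, 1)

    # Intel DC S3700 "200GB" model, using the figures quoted above:
    print(spare_area_pct(raw_nand_gb=264, usable_gb=186))   # -> 29.5, i.e. about 30% spare

    def usable_after_user_op(nominal_gb, user_op_pct=15):
        """Capacity you would actually partition if you leave user_op_pct unallocated,
        on top of the drive's ~7% built-in spare (so ~22% total, as suggested above)."""
        return round(nominal_gb * (1 - user_op_pct / 100))

    print(usable_after_user_op(512))   # -> 435 GB to partition on a 512GB drive at 15% OP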
