  1. #1
    Join Date
    Nov 2006
    Location
    USA
    Posts
    1,274

    LSI CacheCade Performance

    Hi Guys,

    I recently deployed a brand new Supermicro machine using an LSI 9271-8i w/CacheCade & CacheVault; however, I am unable to achieve performance similar to that of a machine I have had in production for ~2 years.

    E3-1240
    Intel Xeon E3-1240 V2
    32GB of RAM
    6x 1TB WD RE4 (WDCWD1003FBYXO)
    2x 120GB Intel SSD (INTELSSDSC2CW12)
    LSI 9265-8i w/CacheCade & BBU

    E5-2620
    Dual Intel Xeon E5-2620 V2
    128GB of RAM
    6x 2TB WD RE4 (WDCWD2000FYYZ0)
    2x 240GB Intel SSD (INTELSSDSC2CW24)
    LSI 9271-8i w/CacheCade & CacheVault

    I ran the following tests with CacheCade both enabled and disabled to establish a baseline, and so far I am getting the results below.

    Notes:
    -CacheCade is configured in RAID 0 for the following tests.
    -Writeback is enabled on both the CacheCade and RAID 10 array.
    -All settings are identical between the two RAID controllers (RAID configuration, stripe size, CacheCade, etc.)
    -Both machines are under very similar disk I/O loads.

    E3-1240 without CacheCade Enabled

    [root@trinity tmp]# dd if=/dev/zero of=test23 bs=64k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    4294967296 bytes (4.3 GB) copied, 11.3512 s, 378 MB/s

    E3-1240 with CacheCade Enabled

    [root@trinity tmp]# dd if=/dev/zero of=test23 bs=64k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    4294967296 bytes (4.3 GB) copied, 6.24271 s, 688 MB/s

    E5-2620 without CacheCade Enabled

    [root@localhost tmp]# dd if=/dev/zero of=test23 bs=64k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    4294967296 bytes (4.3 GB) copied, 13.424 s, 320 MB/s

    E5-2620 with CacheCade Enabled

    [root@localhost tmp]# dd if=/dev/zero of=test23 bs=64k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    4294967296 bytes (4.3 GB) copied, 10.8988 s, 394 MB/s

    As you can see, on the E5-2620 I get little to no performance gain when enabling CacheCade (there is a very minimal gain in the results above, but on average it is within the range of the results without CacheCade enabled), even though I am running a more powerful and newer RAID controller, newer and faster SSDs, and newer, marginally faster RE disks.

    I realize that a 2TB disk may perform a bit slower than a 1TB disk, but I do not see how that would account for a ~300 MB/s difference, not by a long shot.

    More Notes:
    -Both machines are running CentOS 6.5 x64
    -LSI MegaRAID version 06.700.06.00-rh1 on both machines.
    -I have tried RAID 1 & RAID 0 on the CacheCade for the E5-2620, performance is the same.
    -Both systems have all drives connected directly to the LSI card; I am not using a SAS expander on either.
    -The LSI 9271-8i is running the latest available firmware.

    --

    So what am I missing here?

  2. #2
    Join Date
    Apr 2013
    Location
    Pennsylvania
    Posts
    937
    What happens when you change bs and count?
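    For example, something along these lines (illustrative only - adjust the output path to your test mount, and keep the file size well above the controller's cache so you measure the array rather than the DRAM cache):

    dd if=/dev/zero of=test_big bs=1M count=8192 conv=fdatasync    # larger blocks, 8 GB file
    dd if=/dev/zero of=test_small bs=4k count=262144 oflag=direct  # 4k blocks with direct I/O, bypassing the page cache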
    LinuxFox < Linux is in our name!
    Managed SSD VPS • KVM • Dedicated Resources
    Proactive Monitoring • cPanel/WHM
    Lightning Fast Managed VPS with Performance Guarantee!

  3. #3
    Please share the output of the following command for both servers:
    megacli -LDInfo -Lall -Aall
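    If MegaCli is installed (typically under /opt/MegaRAID/MegaCli/ - adjust the path for your install), the cache settings can also be checked with something like:

    /opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -Cache -LAll -aAll   # write policy / read-ahead / IO policy per logical drive
    /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aAll               # adapter and firmware details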

  4. #4
    Join Date
    Nov 2006
    Location
    USA
    Posts
    1,274
    Quote Originally Posted by FoilWeb View Post
    What happens when you change bs and count?
    Dropping the test file size down to 1GB, I saw a little more speed, but I have thus far assumed that was more of a "burst" speed as opposed to sustainable throughput.

    Quote Originally Posted by nikhil500 View Post
    Please share the output of the following command for both servers:
    megacli -LDInfo -Lall -Aall
    I do not currently have the CLI installed, using the remote option + GUI on a remote machine.

    What information would you like me to provide?

  5. #5
    Quote Originally Posted by leckley View Post
    I do not currently have the CLI installed, using the remote option + GUI on a remote machine.

    What information would you like me to provide?
    I just wanted to reconfirm that all the settings are the same and that the caches are enabled.

  6. #6
    Join Date
    Aug 2004
    Location
    Kauai, Hawaii
    Posts
    3,799
    Quote Originally Posted by leckley View Post
    Dropping the test file size down to 1GB, I saw a little more speed, but I have thus far assumed that was more of a "burst" speed as opposed to sustainable throughput.



    I do not currently have the CLI installed, using the remote option + GUI on a remote machine.

    What information would you like me to provide?
    I think the poster is probably wanting to see the disk models and whether the disk cache and RAID controller cache are enabled or disabled. Maybe on one of your arrays you have the disk cache enabled (unsafe) and on the other you don't. Typically for h/w RAID, for safety, you would enable the card cache with BBU/flash protection and disable the cache on the drives.
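
    For reference, something like the following would show and, if needed, disable the drives' own cache (assuming MegaCli is installed; the binary name and path vary between installs):

    MegaCli64 -LDGetProp -DskCache -LAll -aAll     # show the current disk cache setting for each logical drive
    MegaCli64 -LDSetProp -DisDskCache -LAll -aAll  # disable the on-drive write cache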

  7. #7
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,615
    I think your testing is flawed. Cachecade isn't going to invoke itself (move a portion of the array to SSD) with a single threaded sequential read/write operation. At least on modern firmwares...

    What you need to do is high queue-depth operation on a limited portion of data, like a server under heavy database load. Tools like bonnie++, fio, and crystal diskmark (qd32) are what I've always used to test cachecade and you should see a huge difference. I've never seen any difference in sequential performance due to cachecade unless the server was already under some very serious disk load.
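
    For instance, a random-read run at queue depth 32 against a file small enough to fit on the CacheCade SSDs, repeated a couple of times so the hot data has a chance to be promoted (a rough sketch - tune the path, size and runtime to your environment):

    fio --name=cc-test --filename=/tmp/cc-test --size=8G --rw=randread --bs=4k --ioengine=libaio --iodepth=32 --direct=1 --runtime=120 --time_based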
    Fast Serv Networks, LLC | AS29889 | DDOS Protected | Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  8. #8
    Maybe your CacheCade is not working correctly, or maybe it's a bad SSD in your CacheCade array? Your speeds seem to show no difference with and without it enabled. Maybe you can try swapping out your SSDs and see if there is a difference.
    SolaDrive - Enterprise Managed Server Solutions
    Specializing in Managed NVMe VPS & Dedicated Servers in US & UK
    Visit us at SolaDrive.com

  9. #9
    Join Date
    Nov 2006
    Location
    USA
    Posts
    1,274
    Quote Originally Posted by nikhil500 View Post
    I just wanted to reconfirm that all the settings are the same and that the caches are enabled.
    Understood, I appreciate the help and will provide the information as soon as I am able.

    Quote Originally Posted by gordonrp View Post
    I think the poster is probably wanting to see the disk models and whether the disk cache and RAID controller cache are enabled or disabled. Maybe on one of your arrays you have the disk cache enabled (unsafe) and on the other you don't. Typically for h/w RAID, for safety, you would enable the card cache with BBU/flash protection and disable the cache on the drives.
    Gotcha, thanks! I did double-check the on-disk caches earlier and confirmed they are disabled on both systems, so I do not believe that to be the culprit, but nonetheless I will check again!

    Quote Originally Posted by FastServ View Post
    I think your testing is flawed. Cachecade isn't going to invoke itself (move a portion of the array to SSD) with a single threaded sequential read/write operation. At least on modern firmwares...

    What you need to do is high queue-depth operation on a limited portion of data, like a server under heavy database load. Tools like bonnie++, fio, and crystal diskmark (qd32) are what I've always used to test cachecade and you should see a huge difference. I've never seen any difference in sequential performance due to cachecade unless the server was already under some very serious disk load.
    That is certainly possible; however, if that were true I would not expect the E3-1240 machine to show such a dramatic performance difference with and without CacheCade when using the same test.

    That said, I have another E3-1240 machine (identical to the E3-1240 I mentioned above) which is under heavy load, and even so, with CacheCade enabled I am able to achieve 600-700 MB/s on that machine as well without issue using the same testing.

    Quote Originally Posted by SolaDrive - John View Post
    Maybe your CacheCade is not working correctly, or maybe it's a bad SSD in your CacheCade array? Your speeds seem to show no difference with and without it enabled. Maybe you can try swapping out your SSDs and see if there is a difference.
    I am of the opinion that the Cachecade isn't working, but I honestly have a hard time believing that is really possible when the card does not appear to be reporting any other issues.

    I did try with only one of the SSDs for CacheCade (one SSD, then the other), thinking that something might be flawed with one of the drives, but alas the performance did not show any real improvement.

    --

    I greatly appreciate all the feedback guys, thanks a lot!

  10. #10
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,615
    I'm actually surprised you see a difference at all with or without CC on the type of test you're running. CC is designed to improve IOPs, not sequential throughput.
    Fast Serv Networks, LLC | AS29889 | DDOS Protected | Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  11. #11
    Join Date
    Nov 2006
    Location
    USA
    Posts
    1,274
    Quote Originally Posted by FastServ View Post
    I'm actually surprised you see a difference at all with or without CC on the type of test you're running. CC is designed to improve IOPs, not sequential throughput.
    CacheCade in theory should improve read speeds, write speeds & IOPS for the data that is actively residing on the SSDs.

    To my understanding, you should see large increases in read/write throughput when your data is residing on the SSDs, and the only case where I can see that you might not is when the RAID array behind the CacheCade is "faster" than the SSDs.

    Even so, that will be limited if you choose to run say 4x 120GB SSD's in RAID 10 for the Cachecade.

  12. #12
    Quote Originally Posted by leckley View Post
    CacheCade in theory should improve read speeds, write speeds & IOPS for the data that is actively residing on the SSDs.

    To my understanding, you should see large increases in read/write throughput when your data is residing on the SSDs, and the only case where I can see that you might not is when the RAID array behind the CacheCade is "faster" than the SSDs.

    Even so, that will be limited if you choose to run say 4x 120GB SSD's in RAID 10 for the Cachecade.
    Well, it should do that if the file is large; however, from what I understand, the LSI CacheCade algorithm puts data in the cache pool when it starts to exceed a certain number of requests per second, rather than a certain amount of throughput from the drives.

    OP, have you tried moving the card to a different PCI slot, or trying a different cable from the card to the drives? You may have a bad cable from the card to the SSDs; it wouldn't be the first time I've seen a cable go bad.
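
    One other quick thing worth checking (assuming lspci is available) is whether the card negotiated its full PCIe link width/speed in the current slot:

    lspci -vv -d 1000: | grep -E 'LnkCap|LnkSta'   # 1000: matches LSI's PCI vendor ID; compare advertised vs. negotiated link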
    SolaDrive - Enterprise Managed Server Solutions
    Specializing in Managed NVMe VPS & Dedicated Servers in US & UK
    Visit us at SolaDrive.com

  13. #13
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,615
    Quote Originally Posted by SolaDrive - John View Post
    Well, it should do that if the file is large; however, from what I understand, the LSI CacheCade algorithm puts data in the cache pool when it starts to exceed a certain number of requests per second, rather than a certain amount of throughput from the drives.

    OP, have you tried moving the card to a different PCI slot, or trying a different cable from the card to the drives? You may have a bad cable from the card to the SSDs; it wouldn't be the first time I've seen a cable go bad.
    This is basically what I'm getting at... what matters is whether CC moves the data to SSD. It's quite possible a single dd instance isn't going to trigger it, and you're just writing to spinning disks in either case... watch the drive activity while it's running on both servers; you should be able to tell if the data is going straight to the SSDs or to the HDD array. Given the two different generations of cards and different firmwares, CC will not behave exactly the same. If CC directed every large write to the SSDs every time, you'd wear your SSDs out very quickly.

    All that said, the true test is a deep queue depth which CC will almost always recognize and start diverting requests to CC, not so much a single sequential read/write.
    Fast Serv Networks, LLC | AS29889 | DDOS Protected | Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  14. #14
    Join Date
    Nov 2006
    Location
    USA
    Posts
    1,274
    Quote Originally Posted by SolaDrive - John View Post
    Well, it should do that if the file is large; however, from what I understand, the LSI CacheCade algorithm puts data in the cache pool when it starts to exceed a certain number of requests per second, rather than a certain amount of throughput from the drives.

    OP, have you tried moving the card to a different PCI slot, or trying a different cable from the card to the drives? You may have a bad cable from the card to the SSDs; it wouldn't be the first time I've seen a cable go bad.
    Thanks!

    Quote Originally Posted by FastServ View Post
    This is basically what I'm getting at... what matters is whether CC moves the data to SSD. It's quite possible a single dd instance isn't going to trigger it, and you're just writing to spinning disks in either case... watch the drive activity while it's running on both servers; you should be able to tell if the data is going straight to the SSDs or to the HDD array. Given the two different generations of cards and different firmwares, CC will not behave exactly the same. If CC directed every large write to the SSDs every time, you'd wear your SSDs out very quickly.

    All that said, the true test is a deep queue depth which CC will almost always recognize and start diverting requests to CC, not so much a single sequential read/write.
    Thanks!

