Thread: LSI CacheCade Performance
-
01-29-2014, 07:44 PM #1 Web Hosting Master
- Join Date
- Nov 2006
- Location
- USA
- Posts
- 1,274
LSI CacheCade Performance
Hi Guys,
I recently deployed a brand-new Supermicro machine using an LSI 9271-8i w/CacheCade & CacheVault; however, I am unable to achieve performance similar to a machine I have had in production for ~2 years.
E3-1240
Intel Xeon E3-1240 V2
32GB of RAM
6x 1TB WD RE4 (WDCWD1003FBYXO)
2x 120GB Intel SSD (INTELSSDSC2CW12)
LSI 9265-8i w/CacheCade & BBU
E5-2620
Dual Intel Xeon E5-2620 V2
128GB of RAM
6x 2TB WD RE4 (WDCWD2000FYYZ0)
2x 240GB Intel SSD (INTELSSDSC2CW24)
LSI 9271-8i w/CacheCade & CacheVault
I ran the following tests with CacheCade both enabled and disabled to establish a baseline, and so far I am getting the results below.
Notes:
-CacheCade is configured in RAID 0 for the following tests.
-Writeback is enabled on both the CacheCade and RAID 10 array.
-All settings are identical between the two RAID controllers (RAID configuration, stripe size, CacheCade, etc.)
-Both machines are under very similar disk I/O loads.
E3-1240 without CacheCade Enabled
[root@trinity tmp]# dd if=/dev/zero of=test23 bs=64k count=64k conv=fdatasync
65536+0 records in
65536+0 records out
4294967296 bytes (4.3 GB) copied, 11.3512 s, 378 MB/s
[root@trinity tmp]# dd if=/dev/zero of=test23 bs=64k count=64k conv=fdatasync
65536+0 records in
65536+0 records out
4294967296 bytes (4.3 GB) copied, 6.24271 s, 688 MB/s
[root@localhost tmp]# dd if=/dev/zero of=test23 bs=64k count=64k conv=fdatasync
65536+0 records in
65536+0 records out
4294967296 bytes (4.3 GB) copied, 13.424 s, 320 MB/s
[root@localhost tmp]# dd if=/dev/zero of=test23 bs=64k count=64k conv=fdatasync
65536+0 records in
65536+0 records out
4294967296 bytes (4.3 GB) copied, 10.8988 s, 394 MB/s
I realize that a 2TB disk may perform a bit slower than a 1TB, but I don't see anything that would account for a ~300 MB/s difference, not by a long shot.
More Notes:
-Both machines are running CentOS 6.5 x64
-LSI MegaRAID version 06.700.06.00-rh1 on both machines.
-I have tried RAID 1 & RAID 0 on the CacheCade for the E5-2620, performance is the same.
-Both systems have all the drives directly connected to the LSI card; I am not using a SAS expander on either.
-The LSI 9271-8i is running the latest available firmware.
--
So what am I missing here?
-
01-29-2014, 11:55 PM #2 Web Hosting Master
- Join Date
- Apr 2013
- Location
- Pennsylvania
- Posts
- 937
What happens when you change bs and count?
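For example, something like this sweeps block sizes at a fixed total (shrunk to 64 MB here so it finishes quickly; the sizes and /tmp path are just for illustration, scale bs/count back up for a meaningful test):

```shell
# Same total bytes written (64 MB) at three block sizes, so the runs
# are directly comparable; conv=fdatasync flushes before dd reports.
for bs_count in "4k 16384" "64k 1024" "1M 64"; do
    set -- $bs_count
    dd if=/dev/zero of=/tmp/ddtest bs="$1" count="$2" conv=fdatasync 2>&1 | tail -n1
done
rm -f /tmp/ddtest
```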
█ LinuxFox < Linux is in our name!
█ Managed SSD VPS • KVM • Dedicated Resources
█ Proactive Monitoring • cPanel/WHM
█ Lightning Fast Managed VPS with Performance Guarantee!
-
01-30-2014, 12:26 AM #3 WHT Addict
- Join Date
- Aug 2011
- Posts
- 134
Please share the output of the following command for both servers:
megacli -LDInfo -Lall -Aall
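While you're at it, the BBU state is worth grabbing too; a sketch, assuming the default MegaCli64 install path:

```shell
# Logical drive config: RAID level, stripe size, cache policy, and the
# current write policy (WriteBack vs WriteThrough).
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL

# BBU/CacheVault health: a failed or relearning battery can silently
# drop the controller from WriteBack to WriteThrough.
/opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
```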
-
01-30-2014, 07:19 AM #4 Web Hosting Master
- Join Date
- Nov 2006
- Location
- USA
- Posts
- 1,274
Dropping the test file size down to 1GB I saw a little more speed, but I have thus far assumed that was more of a "burst" speed as opposed to sustainable throughput.
I do not currently have the CLI installed; I am using the remote option + GUI from another machine.
What information would you like me to provide?
-
01-30-2014, 01:20 PM #5 WHT Addict
- Join Date
- Aug 2011
- Posts
- 134
-
01-30-2014, 01:21 PM #6 Corporate Member
- Join Date
- Aug 2004
- Location
- Kauai, Hawaii
- Posts
- 3,799
I think the poster probably wants to see the disk models and whether the disk cache and RAID controller cache are enabled or disabled. Maybe on one of your arrays you have the disk cache enabled (unsafe) and on the other you don't. Typically for h/w RAID, for safety, you would enable the card cache with BBU/flash protection and disable the cache on the drives.
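A sketch of checking (and, if unsafe, disabling) the on-disk caches with MegaCli, assuming the default install path:

```shell
# Show the current disk cache policy for every logical drive; look for
# "Disk Cache Policy" (Enabled / Disabled / Disk's Default).
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -DskCache -LAll -aALL

# Disable the on-disk write caches -- the safe setting when the
# controller cache is BBU/flash protected.
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -DisDskCache -LAll -aALL
```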
-
01-30-2014, 01:28 PM #7 Randy
- Join Date
- Aug 2006
- Location
- Ashburn VA, San Diego CA
- Posts
- 4,615
I think your testing is flawed. CacheCade isn't going to invoke itself (move a portion of the array to SSD) with a single-threaded sequential read/write operation, at least on modern firmwares...
What you need to do is a high queue-depth operation on a limited portion of data, like a server under heavy database load. Tools like bonnie++, fio, and CrystalDiskMark (QD32) are what I've always used to test CacheCade, and you should see a huge difference. I've never seen any difference in sequential performance due to CacheCade unless the server was already under some very serious disk load.
Fast Serv Networks, LLC | AS29889 | DDOS Protected | Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
Since 2003 - Ashburn VA + San Diego CA Datacenters
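To illustrate, the sort of fio job I mean looks roughly like this (assuming fio is installed; the file path, size, and runtime are placeholders to adjust for your environment):

```shell
# 4k random reads at queue depth 32 -- the access pattern CacheCade is
# built to accelerate. Run it twice: the second pass should speed up
# dramatically once the hot region has been promoted to the SSDs.
fio --name=cc-test --filename=/test/fio-testfile --size=4G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
```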
-
01-30-2014, 02:31 PM #8
Maybe your CacheCade is not working correctly, or maybe it's a bad SSD in your CacheCade array? Your speeds show no difference with it enabled versus disabled. Maybe you can try changing out your SSDs and see if there is a difference.
█ SolaDrive - Enterprise Managed Server Solutions
█ Specializing in Managed NVMe VPS & Dedicated Servers in US & UK
█ Visit us at SolaDrive.com
-
01-30-2014, 03:33 PM #9 Web Hosting Master
- Join Date
- Nov 2006
- Location
- USA
- Posts
- 1,274
Understood, I appreciate the help and will provide the information as soon as I am able.
Gotcha, thanks! I did double-check the on-disk caches earlier and confirmed they are disabled on both systems, so I do not believe that to be the culprit, but nonetheless I will check again!
That is certainly possible; however, if that were true I would not expect the E3-1240 machine to show such a dramatic performance difference with and without CacheCade when using the same test.
That said, I have another machine (identical to the E3-1240 mentioned above) which is under heavy load, and even so, with CacheCade enabled I am able to achieve 600-700 MB/s on it as well without issue using the same testing.
I am of the opinion that the CacheCade isn't working, but I honestly have a hard time believing that is really possible when the card does not appear to be reporting any other issues.
I did try CacheCade with only one of the SSDs at a time (first one, then the other), thinking one of the drives might be flawed, but alas performance did not show any real improvement.
--
I greatly appreciate all the feedback guys, thanks a lot!
-
01-30-2014, 03:40 PM #10 Randy
- Join Date
- Aug 2006
- Location
- Ashburn VA, San Diego CA
- Posts
- 4,615
I'm actually surprised you see a difference at all with or without CC on the type of test you're running. CC is designed to improve IOPS, not sequential throughput.
-
01-30-2014, 03:48 PM #11 Web Hosting Master
- Join Date
- Nov 2006
- Location
- USA
- Posts
- 1,274
CacheCade, in theory, should improve read speeds, write speeds, and IOPS for the data actively residing on the SSDs.
To my understanding you should see a large increase in read/write throughput when your data is residing on the SSDs; the only case where I can see you may not is when the RAID array behind the CacheCade is "faster" than the SSDs.
Even so, that case will be limited if you choose to run, say, 4x 120GB SSDs in RAID 10 for the CacheCade.
-
01-30-2014, 03:53 PM #12
Well, it should do that if the file is large; however, from what I understand, the LSI CacheCade algorithm promotes data to the cache pool when it exceeds a certain number of requests per second, rather than a certain amount of throughput from the drives.
OP, have you tried moving the card to a different PCI slot, or trying a different cable from the card to the drives? You may have a bad cable between the card and the SSDs; that wouldn't be the first time I saw a cable go bad.
-
01-30-2014, 04:07 PM #13 Randy
- Join Date
- Aug 2006
- Location
- Ashburn VA, San Diego CA
- Posts
- 4,615
This is basically what I'm getting at: what matters is whether CC moves the data to SSD. It's quite possible a single dd instance isn't going to trigger it, and you're just writing to spinning disks in either case. Watch the drive activity while it's running on both servers; you should be able to tell if the data is going straight to the SSDs or the HDD array. Given the two different generations of cards and different firmwares, CC will not behave exactly the same. If CC aimed every large write at the SSDs every time, you'd wear your SSDs out very quickly.
All that said, the true test is a deep queue depth, which CC will almost always recognize and start diverting requests to the cache, not so much a single sequential read/write.
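One way to watch where the I/O is actually landing (assuming the sysstat package is installed for iostat):

```shell
# Extended per-device stats every 2 seconds while the benchmark runs;
# if CacheCade is engaging, activity should shift from the HDD array
# members to the SSDs on repeated passes over the same data.
iostat -xm 2
```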
-
01-30-2014, 04:17 PM #14 Web Hosting Master
- Join Date
- Nov 2006
- Location
- USA
- Posts
- 1,274