  1. #26
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by FastServ View Post
    I'd be wary of enabling cache on a non-capacitor backed SSD, especially on a RAID array... N drives more chances of corruption in power loss... I think the verdict is in, 840Pro is a no-go for RAID...
    The issue of "no capacitors" on consumer-grade SSDs (everything except the Intel 320, really) has been talked to death by all sides in this thread:
    http://www.webhostingtalk.com/showthread.php?t=1234660
    Karl, for one, would disagree with you wholeheartedly and argue it isn't an issue, so I'm not going to repeat myself here!

    I remain a strong believer that you really should (or "shall", by my standard!) use Intel S3700s as the caching drive(s) under CacheCade. The main reason: any CacheCade-ready, spin-drive RAID-10 server is easily $3,000 or more, so spending just $120 extra (100GB S3700) or $250 extra (200GB S3700) to use an S3700 is a no-brainer. Why take any chances on this critical link in the data chain between the RAID core and the main array? Not to mention five years of freedom from worrying about the drive dropping into read-only mode, even if you intend to cycle 1TB of data through a small 100GB S3700 caching drive every single day.

    Using consumer-grade SSDs such as the 830/840 Pro/520 as members of the main array is a very different scenario. I somewhat agree with Karl that you could choose to take your chances there. Karl is correct that we have all been putting spinning drives, which also have no "capacitors", into disk arrays since RAID was invented. If the lack of capacitors on array members were that critical to the health of a disk array, software- or hardware-based, we should have abandoned the whole idea of disk arrays long, long ago.

    Whether the unusually large buffer (512MB on the 840 Pro!) makes it more vulnerable to filesystem corruption in a sudden power-loss event than spinning drives with much smaller buffers (16MB-128MB) is a debate for another day.

    Using enterprise SSDs as member drives for the main array is still very much out of reach for most hosts due to cost. Even a moderate 4x 400GB S3700 setup is FOUR GRAND in drives alone! So you can hardly blame them for making the economic choice of 830s, 840 Pros, 520s, etc. At least I won't!
    Last edited by cwl@apaqdigital; 02-12-2013 at 06:57 PM.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  2. #27
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by George_Fusioned View Post
    Yes, I can also confirm that when I create a RAID10 array out of the 4 x Samsung 840 Pro's, the Disk Cache Policy setting is grayed out.
    hmmmm.... which LSI card are you using?

    We have no issue whatsoever doing so on the 9260/9260CV with the latest BIOS/firmware (v4.9, Jan-08-2013) installed. I confirmed it myself just this morning, and my techs have been doing it for weeks, ever since we discovered the dismal IO rates you get when the buffer on the 840 Pro is left disabled.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  3. #28
    It's a 9266-8i with the latest firmware (3.230.05-2100 Jan 08, 2013).
    Attached Thumbnails: Screenshot_2_13_13_1_10_AM-3.png
    Fusioned - http://www.fusioned.net
    Enterprise & Semi-Dedicated Hosting | CloudLinux, cPanel, LiteSpeed, Acronis | PHP 5.6, 7.2, 7.3, 7.4 & 8.0
    Fully Managed SSD KVM VPS & Dedicated Servers | CloudFlare & Acronis Partner | RIPE LIR

  4. #29
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by George_Fusioned View Post
    It's a 9266-8i with the latest firmware (3.230.05-2100 Jan 08, 2013).
    I'll try to confirm this the next time we have a chance to build one with a 9266-8i or 9271-8i (they share the same BIOS/firmware). Shouldn't be too long a wait...
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  5. #30
    Quote Originally Posted by cwl@apaqdigital View Post
    The issue of "no capacitors" on consumer-grade SSDs (everything except the Intel 320, really) has been talked to death by all sides in this thread:
    http://www.webhostingtalk.com/showthread.php?t=1234660
    Karl, for one, would disagree with you wholeheartedly and argue it isn't an issue, so I'm not going to repeat myself here!
    It depends on the architecture. SandForce has no buffer/RAM at all, by design, both to save cost and to avoid this issue. Marvell and Samsung controllers, for example, do have one. That doesn't mean the buffer will be a problem; it depends on what it is used for and how it is implemented.

    Please take full note of the implementation and design. Don't make a blanket statement without looking in depth at how the controller works.
    Hyperconnezion - your fast path to success
    IT management, Cloud and Data Center Services in Asia Pacific

  6. #31
    It seems the 9271/9266 doesn't allow you to change the disk cache policy setting for 840 Pro arrays.

    People in this thread have already reported being unable to change it with StorCLI and MegaRAID Storage Manager. I can confirm that WebBIOS doesn't provide the option either, and attempting to execute "-LDSetProp -EnDskCache -L0 -a0" in MegaCLI exits with the error message "Not Allowed to Change Disk Cache Policy."

    I'd venture to guess that the 9271/9266 is unable to recognize the existence of a disk cache on the 840Ps. We'll have to file support requests with LSI, and hope they fix it in the next firmware revision.
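    For reference, this is roughly what checking and enabling the on-drive cache looks like from a Linux shell on controllers that do allow it. A hedged sketch only: the adapter/LD numbers (-a0, -L0, /c0/v0) are examples, and the MegaCli/StorCLI binary names vary by package.

        # query the current disk cache policy of every logical drive on adapter 0
        MegaCli64 -LDGetProp -DskCache -LAll -a0

        # attempt to enable the on-drive cache for logical drive 0
        # (this is the command that fails with "Not Allowed to Change Disk Cache Policy"
        # on the affected 9266/9271 setups)
        MegaCli64 -LDSetProp -EnDskCache -L0 -a0

        # rough StorCLI equivalent for controller 0, virtual drive 0
        storcli64 /c0/v0 set pdcache=on
        storcli64 /c0/v0 show all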

  7. #32
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by concerto49 View Post
    It depends on the architecture. Sandforce has no buffer/RAM at all by design to save cost and to stop this issue. Marvell and Samsung controllers for example do. It doesn't mean the buffer will be an issue. It depends on what it is used for and how it is implemented.

    Take a full note on the implementation and design please. Don't make a blanket statement without in depth on how the controller works.
    Haaa... you've got me on this one, big time! Despite your tone, I MUST say a big "THANK YOU" for pointing it out. I'm not at all afraid to admit my own mistakes and ignorance (hopefully very occasional!); after all, bringing facts and truth to the users and visitors of these forums is 10x more important than being embarrassed by my own flaws.

    It is VERY, VERY true that SandForce-based SSDs simply DON'T have an on-drive buffer/cache at all; that is how the SandForce controller is designed and how it functions. This applies to every SSD built on a SandForce controller, from little-known brands all the way up to Intel.

    This critical fact changes things quite a bit:
    1. Since SandForce SSDs need no buffer, there is no buffer for capacitors to protect in the first place, which is why not a single SandForce SSD on the market ships with capacitors.

    2. There are of course other factors to consider when choosing an SSD for a production server, but from a pure power-loss-protection point of view, the Intel 520 series is in fact "safer", and therefore better suited to production servers, than the Samsung 830/840/840 Pro, because there is no buffer to protect on the 520 or any other SandForce-based Intel SSD. All data goes through the SandForce controller in pass-through mode, with nothing temporarily held in a buffer to lose during a power-loss event. It is much like deliberately disabling the write-back cache on a hardware RAID card because no BBU is installed.

    3. Benchmarks of SandForce-based SSDs should also be more "pure" than those of SSDs whose Intel/Samsung/Marvell controllers do use a write buffer (64MB on the Intel 320, 128MB on the Crucial M4, 256MB on the Samsung 830, 512MB on the Samsung 840 Pro, 1GB on the Intel S3700). Whether an on-drive buffer can truly be counted as part of real-world SSD performance is a topic for another day; at least SandForce numbers are raw and undisguised, because there is no buffer/cache to influence the results. As we have all learned by now, disabling/enabling the drive cache on the 840 Pro can mean the difference between 10% and 100% of the reported IO rate, and I suspect a larger or smaller buffer on/off effect would show up on any other SSD equipped with an on-drive cache (a quick way to measure it yourself is sketched right after this list).

    4. If an enterprise-grade S3700 is not affordable to you then, for the same reason as in (2), and even though the Intel 520 has lower average performance than the Samsung 840 Pro, the 520 should be the preferred consumer-grade drive for hosts who fear power-loss-induced data loss, whether used as a caching drive or as a member of the main array. No buffer means no buffered data, and no buffered data means nothing to lose; pure and simple.
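    To actually quantify the drive-cache effect mentioned in point 3, something like the following fio run works; a minimal sketch, assuming fio is installed and /dev/sdX is a scratch device you can safely overwrite. Run it once with the controller's disk cache policy disabled and once with it enabled, and compare the IOPS.

        # 60-second 4K random-write test at QD32, direct IO, against a scratch device
        fio --name=840p-randwrite --filename=/dev/sdX --ioengine=libaio \
            --direct=1 --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
            --runtime=60 --time_based --group_reporting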

    How big a role the reliability of SandForce itself, and its "bad" history of BSODs, should play in the choice of SSD is yet another topic for another day. But on the question of buffer vs. no buffer, and therefore capacitor vs. no capacitor, the matter seems settled: among consumer-grade SSDs, the Intel 520 is simply a better choice than the Samsung 830/840/840 Pro for production servers, higher or lower IOPS figures aside, of course. AND I will start pointing out these pros and cons to my own clients from this point on.

    THANK YOU, concerto49!
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  8. #33
    Quote Originally Posted by cwl@apaqdigital View Post
    3. Benchmarks of SandForce-based SSDs should also be more "pure" than those of SSDs whose Intel/Samsung/Marvell controllers do use a write buffer (64MB on the Intel 320, 128MB on the Crucial M4, 256MB on the Samsung 830, 512MB on the Samsung 840 Pro, 1GB on the Intel S3700). Whether an on-drive buffer can truly be counted as part of real-world SSD performance is a topic for another day; at least SandForce numbers are raw and undisguised, because there is no buffer/cache to influence the results. As we have all learned by now, disabling/enabling the drive cache on the 840 Pro can mean the difference between 10% and 100% of the reported IO rate, and I suspect a larger or smaller buffer on/off effect would show up on any other SSD equipped with an on-drive cache.

    How big a role the reliability of SandForce itself, and its "bad" history of BSODs, should play in the choice of SSD is yet another topic for another day. But on the question of buffer vs. no buffer, and therefore capacitor vs. no capacitor, the matter seems settled: among consumer-grade SSDs, the Intel 520 is simply a better choice than the Samsung 830/840/840 Pro for production servers, higher or lower IOPS figures aside, of course. AND I will start pointing out these pros and cons to my own clients from this point on.

    THANK YOU, concerto49!
    You're welcome, and thanks for the praise.

    3. The SandForce controller uses compression, so it's not exactly "pure" either: it is faster on compressible data and dips on incompressible data. That shows up in benchmarks such as CrystalDiskMark and AS SSD, which measure raw, incompressible random IO. In real-world use, though, the compression helps a lot.

    Intel claims the 1GB buffer in the S3700 is not used for caching user data, only for internal housekeeping. Whether that is accurate, I'm not sure.

    I would say SandForce's firmware issues have largely been fixed by now, especially with Intel's heavily tested custom firmware. It's a mature product. Its current problem isn't the firmware but the speed; it's an aging controller, and the newer Marvell, Samsung and Indilinx parts are a lot faster.

    Samsung 840 TLC:

    Even if you write 20GB/day, that's still more than 5 years on the 128GB model.

    http://www.anandtech.com/show/6459/s...ce-of-tlc-nand
    Hyperconnezion - your fast path to success
    IT management, Cloud and Data Center Services in Asia Pacific

  9. #34
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by concerto49 View Post
    I would say SandForce's firmware issues have largely been fixed by now, especially with Intel's heavily tested custom firmware. It's a mature product. Its current problem isn't the firmware but the speed; it's an aging controller, and the newer Marvell, Samsung and Indilinx parts are a lot faster.

    Samsung 840 TLC:

    Even if you write 20GB/day, that's still more than 5 years on the 128GB model.

    http://www.anandtech.com/show/6459/s...ce-of-tlc-nand
    There were some reported incidents of BSODs with the Intel 520, but those all seemed to involve Windows users. Little to none were reported by Linux users; at least I couldn't find one, despite googling very hard.

    We do have a client who asked us to install 50 x 480GB Intel 520s across 25 dual Xeon E5 hex-core production servers (2 per box, running mdadm RAID-1 on CentOS) under very heavy random IO over the last 6 months. So far so good, and zero BSOD-like incidents. Maybe we just need to start trusting the Intel+SandForce combo more than we used to, especially on Linux boxes.
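    For reference, that per-box layout is nothing exotic, just plain mdadm RAID-1 across the two SSDs; a minimal sketch of that kind of setup, with device names as placeholders:

        # mirror the two SSDs into a single md device, then put a filesystem on it
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        mkfs.ext4 /dev/md0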

    As for the write-endurance discussion, the math is rather simple:
    a 120GB TLC drive rated for 1,000 P/E cycles has roughly 120GB x 1,000 = 120,000GB of total bytes written (TBW) before it goes read-only.
    120,000GB divided by 20GB/day (with a perfect 1x write amplification) gives 6,000 days, or about 16.4 years.

    However, with 3x write amplification (typical of desktop/notebook, non-enterprise use), the endurance drops to about 5.5 years. Given the heavy IO on a production server, even a modest 5x write amp cuts it to 3.3 years; 10x write amp brings it down to about 1.6 years, and cycling 100GB a day even at just 3x write amp leaves a dismal 1.1 years. So it is entirely up to the usage pattern and how well GC/TRIM/over-provisioning keep write amplification down on a production server. Regardless, I will say the MLC-based 840 Pro, if it must be Samsung, is still the much better and safer choice (3,000 P/E cycles), because it allows three times the TBW, never mind the better performance.
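    Here is a small shell sketch of that arithmetic; the numbers are the assumptions above (120GB of NAND, 1,000 P/E cycles, 20GB/day of host writes), so adjust capacity, daily writes and write amplification for your own case.

        #!/bin/sh
        # Rough SSD endurance estimate: TBW limit divided by effective daily NAND writes.
        CAPACITY_GB=120     # drive capacity
        PE_CYCLES=1000      # rated program/erase cycles (TLC here)
        DAILY_GB=20         # host writes per day
        WRITE_AMP=3         # assumed write-amplification factor

        TBW_GB=$((CAPACITY_GB * PE_CYCLES))
        DAYS=$((TBW_GB / (DAILY_GB * WRITE_AMP)))
        echo "~${TBW_GB} GB TBW -> ~${DAYS} days (~$((DAYS / 365)) years) at ${WRITE_AMP}x write amp"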
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  10. #35
    Quote Originally Posted by cwl@apaqdigital View Post
    There were some reported incidents of BSODs with the Intel 520, but those all seemed to involve Windows users. Little to none were reported by Linux users; at least I couldn't find one, despite googling very hard.

    However, with 3x write amplification (typical of desktop/notebook, non-enterprise use), the endurance drops to about 5.5 years. Given the heavy IO on a production server, even a modest 5x write amp cuts it to 3.3 years; 10x write amp brings it down to about 1.6 years, and cycling 100GB a day even at just 3x write amp leaves a dismal 1.1 years. So it is entirely up to the usage pattern and how well GC/TRIM/over-provisioning keep write amplification down on a production server. Regardless, I will say the MLC-based 840 Pro, if it must be Samsung, is still the much better and safer choice (3,000 P/E cycles), because it allows three times the TBW, never mind the better performance.
    OK, more information. The BSODs were not solely SandForce's fault. I watched that whole saga when OCZ first rolled out SandForce 2 drives. It also came down to driver implementation on the Windows side, among other factors; SATA drivers have been updated over time to help combat it, and SandForce introduced workarounds of its own.

    SandForce has had other issues, though, just not Intel-related ones. E.g. they rolled out a firmware update that disabled TRIM and didn't re-enable it for months. Intel had no such issue because it uses custom firmware.

    As to write amplification: it isn't something you simply change. Write amplification depends on the controller. Sure, there are best-case and worst-case scenarios, but it differs per controller and per how the controller handles writes.

    This is why Samsung was able to use TLC in the 840: the controller reduced write amplification.

    It is for the same reason NAND can keep going through die shrinks: controllers have improved enough to reduce write amplification. P/E-cycle ratings were at one stage 5,000, but that number alone means nothing; back then, write amplification was a lot higher.

    As I said, please look at the whole package, not just the NAND and the controller separately.
    Hyperconnezion - your fast path to success
    IT management, Cloud and Data Center Services in Asia Pacific

  11. #36
    To be clear, is the buffer still an issue even when combined with a BBU or CacheVault?
    RamNode - High Performance Cloud VPS
    SSD Cloud and Shared Hosting
    NYC - LA - ATL - SEA - NL - DDoS Protection - AS3842
    Deploy on our SSD cloud today! - www.ramnode.com

  12. #37
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by concerto49 View Post
    OK, more information. The BSODs were not solely SandForce's fault. I watched that whole saga when OCZ first rolled out SandForce 2 drives. It also came down to driver implementation on the Windows side, among other factors; SATA drivers have been updated over time to help combat it, and SandForce introduced workarounds of its own.

    SandForce has had other issues, though, just not Intel-related ones. E.g. they rolled out a firmware update that disabled TRIM and didn't re-enable it for months. Intel had no such issue because it uses custom firmware.

    As to write amplification: it isn't something you simply change. Write amplification depends on the controller. Sure, there are best-case and worst-case scenarios, but it differs per controller and per how the controller handles writes.

    This is why Samsung was able to use TLC in the 840: the controller reduced write amplification.

    It is for the same reason NAND can keep going through die shrinks: controllers have improved enough to reduce write amplification. P/E-cycle ratings were at one stage 5,000, but that number alone means nothing; back then, write amplification was a lot higher.

    As I said, please look at the whole package, not just the NAND and the controller separately.
    Sure, SSD controllers have varying success in minimizing write amp, but factory over-provisioning (OP; roughly 7% on consumer SSDs up to about 32% on the S3700) and user OP can drastically increase write endurance, which has the same net effect as minimizing write amp. 20% user OP can raise the endurance level (i.e. push out the preset TBW limit at which the drive becomes read-only) two- to three-fold. And when the drives sit behind a RAID card that has no GC/TRIM support, user OP is pretty much the only thing you can do to raise the endurance level.
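    The simplest way to add user OP is simply never to write to part of the drive. A hedged sketch, assuming a fresh or secure-erased drive /dev/sdX and a 20% OP target; the sector count in the hdparm variant is illustrative, and hdparm only works with the drive on a plain SATA port, not through most RAID firmware.

        # Option 1: partition only 80% of the device and never touch the rest
        parted -s /dev/sdX mklabel gpt
        parted -s /dev/sdX mkpart primary 0% 80%

        # Option 2: permanently shrink the visible capacity via a host protected area
        # (compute 80% of the drive's real max sector count first; this value is made up)
        hdparm -N p400000000 /dev/sdX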
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  13. #38
    Join Date
    Jan 2004
    Location
    Pennsylvania
    Posts
    942
    Quote Originally Posted by Nick A View Post
    To be clear, is the buffer still an issue even when combined with a BBU or CacheVault?
    A BBU/CacheVault only protects the RAID card's write cache, not the drives' own cache.
    Matt Ayres - togglebox.com
    Linux and Windows Cloud Virtual Datacenters powered by Onapp / Xen
    Instant Setup, Instant Scalability, Full Lifecycle Hosting Solutions

    www.togglebox.com

  14. #39
    Has anyone had any luck improving the performance of their 840 Pros?

    I have an LSI 9286-8e card with FastPath and 8 x 840 Pro 512GB SSDs.

    I initially set this up as RAID 0 and got fantastic speeds: high sequentials and good random IOPS.

    This is a production server, though, and I needed some form of redundancy (plus over 2TB of space), so I configured RAID 6. Sequential speeds seemed about right, but random IOPS were terrible (4K QD32 writes = about 8,000). I know write performance is worse with RAID 6, but I wasn't expecting it to be that bad, and random reads should probably be higher too.

    I've been in contact with LSI but they're taking their time to get back to me. Just e-mailed Samsung too.

    LSI settings: always write back, no read ahead. Like other posters, the drive cache option is greyed out and stuck on 'unchanged', but when I change one of the other settings the log does say the disk cache policy is enabled.
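    One way to cross-check what the controller actually thinks, rather than trusting the GUI, is to query it from the OS. A sketch under the assumption that MegaCli64 is installed and the card is adapter 0; the log filename is just an example.

        # show the current Write/Read/Disk Cache policies for every logical drive
        MegaCli64 -LDInfo -LAll -a0

        # dump the controller event log to see which policy changes were really applied
        MegaCli64 -AdpEventLog -GetEvents -f mega-events.log -a0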

    I've just seen that LSI released a firmware update a couple of days ago. I haven't flashed it yet, as this server is supposed to go live today! Has anyone else? And if you have, have you noticed any improvements? There's nothing in the release notes to suggest the issue might have been fixed.

  15. #40
    Right, I've just installed the firmware. Had a ten-minute window and didn't need to restart anyway.
    The new FW hasn't made a difference. Anyone have any ideas?

  16. #41
    Join Date
    Mar 2008
    Location
    /usr/bin/kvm
    Posts
    261
    Quote Originally Posted by zb1846 View Post
    Right, I've just installed the firmware. Had a ten-minute window and didn't need to restart anyway.
    The new FW hasn't made a difference. Anyone have any ideas?
    Have you tried RAID 10?

  17. #42
    Quote Originally Posted by serverian View Post
    Have you tried RAID 10?
    I'd like to use RAID 10 but can't; I need at least 2.5TB of space.

  18. #43
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by zb1846 View Post
    Has anyone had any luck improving the performance of their 840 Pros?

    I have an LSI 9286-8e card with FastPath and 8 x 840 Pro 512GB SSDs.

    I initially set this up as RAID 0 and got fantastic speeds: high sequentials and good random IOPS.

    This is a production server, though, and I needed some form of redundancy (plus over 2TB of space), so I configured RAID 6. Sequential speeds seemed about right, but random IOPS were terrible (4K QD32 writes = about 8,000). I know write performance is worse with RAID 6, but I wasn't expecting it to be that bad, and random reads should probably be higher too.

    I've been in contact with LSI but they're taking their time to get back to me. Just e-mailed Samsung too.

    LSI settings: always write back, no read ahead. Like other posters, the drive cache option is greyed out and stuck on 'unchanged', but when I change one of the other settings the log does say the disk cache policy is enabled.

    I've just seen that LSI released a firmware update a couple of days ago. I haven't flashed it yet, as this server is supposed to go live today! Has anyone else? And if you have, have you noticed any improvements? There's nothing in the release notes to suggest the issue might have been fixed.
    No luck here either! We also tried the latest 5.6 firmware (Mar-02-13) from LSI on the 9266/9271 cards (dual-core LSI 2208 ROC), but we still can't change "disk cache" on the Samsung 840 Pro from "no change" to "enabled" the way the 9260 cards (single-core LSI 2108 ROC) can.

    We also found that these LSI 2208-based RAID cards can't change the drive-buffer setting on the Intel S3700 enterprise SSD either; it always stays at "no change". So this issue is not exclusive to the Samsung 840/840 Pro.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  19. #44
    Quote Originally Posted by cwl@apaqdigital View Post
    No luck here either! We also tried the latest 5.6 firmware (Mar-02-13) from LSI on the 9266/9271 cards (dual-core LSI 2208 ROC), but we still can't change "disk cache" on the Samsung 840 Pro from "no change" to "enabled" the way the 9260 cards (single-core LSI 2108 ROC) can.

    We also found that these LSI 2208-based RAID cards can't change the drive-buffer setting on the Intel S3700 enterprise SSD either; it always stays at "no change". So this issue is not exclusive to the Samsung 840/840 Pro.
    What do you advise would work (ignoring the Intel 520 and SandForce)?

    Does that mean anything with a cache/buffer fails, i.e. Samsung 830, OCZ Vector, etc.?
    Hyperconnezion - your fast path to success
    IT management, Cloud and Data Center Services in Asia Pacific

  20. #45
    Join Date
    Apr 2000
    Location
    Brisbane, Australia
    Posts
    2,602
    Quote Originally Posted by cwl@apaqdigital View Post
    No luck here either! We also tried the latest 5.6 firmware (Mar-02-13) from LSI on the 9266/9271 cards (dual-core LSI 2208 ROC), but we still can't change "disk cache" on the Samsung 840 Pro from "no change" to "enabled" the way the 9260 cards (single-core LSI 2108 ROC) can.

    We also found that these LSI 2208-based RAID cards can't change the drive-buffer setting on the Intel S3700 enterprise SSD either; it always stays at "no change". So this issue is not exclusive to the Samsung 840/840 Pro.
    What about the LSI 9260 series?
    : CentminMod.com Nginx Installer Nginx 1.25, PHP-FPM, MariaDB 10 CentOS (AlmaLinux/Rocky testing)
    : Centmin Mod Latest Beta Nginx HTTP/2 HTTPS & HTTP/3 QUIC HTTPS supports TLS 1.3 via OpenSSL 1.1.1/3.0/3.1 or BoringSSL or QuicTLS OpenSSL
    : Nginx & PHP-FPM Benchmarks: Centmin Mod vs EasyEngine vs Webinoly vs VestaCP vs OneInStack

  21. #46
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by eva2000 View Post
    What about the LSI 9260 series?
    In terms of setting the "disk cache" option, there is no issue with the LSI 9260 series.

    The 3ware 9750 series is an alternative as well, since the 3ware BIOS always enables the disk cache; there is no option to disable it, AFAIK.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  22. #47
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by cwl@apaqdigital View Post
    In terms of setting the "disk cache" option, there is no issue with the LSI 9260 series.

    The 3ware 9750 series is an alternative as well, since the 3ware BIOS always enables the disk cache; there is no option to disable it, AFAIK.
    An update!

    It seems the 3ware 9750 series does a lot better with the Samsung 840 Pro than any of the LSI 9260/9265/9266/9271 cards.

    Current issues with the combination of Samsung 840 Pro and LSI RAID cards:
    9265/66/71 (dual-core 2208 ROC) - can't enable the 512MB on-SSD buffer, which results in very low write rates, because the 840 Pro's performance depends heavily on its on-drive buffer.
    9260 - low write rate when the block size is smaller than 32K.

    We tested some 840 Pro SSDs on a 3ware 9750-4i card (drive buffer always "enabled"; no user option) this morning and found the write rate was consistently above 700MB/sec on every dd zero-write test we ran, with block sizes of 16K, 32K, 64K, 256K, 512K and 1M; some runs even showed 1.1-1.3GB/sec. Can't say the same about the LSI cards, which in some runs gave write rates as low as roughly 100MB/sec, slower than a single SATA drive.
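    For anyone who wants to repeat that kind of test, a rough sketch of the usual dd zero-write invocation (my assumption, not necessarily the exact commands used above); /mnt/array/ddtest.bin is a placeholder path on the mounted array, and oflag=direct bypasses the page cache so the controller and drives are what get measured.

        # sequential zero-write test at a 64K block size, 4GiB total
        dd if=/dev/zero of=/mnt/array/ddtest.bin bs=64k count=65536 oflag=direct

        # repeat with bs=16k, 32k, 256k, 512k and 1M and compare the reported MB/s
        rm /mnt/array/ddtest.bin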

    Ironically, LSI owns 3ware! So this must be some sort of firmware conflict between LSI and the Samsung 840 Pro with regard to the on-drive buffer.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  23. #48
    Join Date
    Aug 2007
    Location
    L.A., CA
    Posts
    3,710
    Any way to get LSI to revamp the firmware on the 9266-4i for 840Ps?
    EasyDCIM.com - DataCenter Infrastructure Management - HELLO DEDICATED SERVER & COLO PROVIDERS! - Reach Me: chris@easydcim.com
    Bandwidth Billing | Inventory & Asset Management | Server Control
    Order Forms | Reboots | IPMI Control | IP Management | Reverse&Forward DNS | Rack Management

  24. #49
    Join Date
    Dec 2009
    Posts
    2,297
    Mine usually die with "rejecting I/O to offline device" errors.

    This seems to be an LSI 9266-4i / CentOS 6 issue rather than a CacheCade one, though.
    REDUNDANT.COMEquinix Data Centers Performance Optimized Network
    Managed & Unmanaged
    • Servers • Colocation • Cloud • VEEAM
    sales@redundant.com

  25. #50
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by CGotzmann View Post
    Any way to get LSI to revamp the firmware on the 9266-4i for 840Ps?
    I've notified LSI tech support a few times about the "compatibility" issues with the Samsung 840 Pro, yet the latest v5.6 firmware, just released in Mar/2013 for the 2208 dual-core-ROC RAID cards, didn't improve things at all. So, good luck with LSI.

    Samsung's response was even more comical: "The 840 Pro SSD is a consumer-grade product! Please use Samsung enterprise-class SSDs for enterprise servers", or something to that effect.

    Too bad; the Samsung 830s were so smooth and trouble-free.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com
