Results 26 to 50 of 175
-
02-12-2013, 06:42 PM #26Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
The issue of "no capacitors" on consumer-grade SSDs (every model except the Intel 320), and whether it matters at all (Karl would wholeheartedly disagree that it does), has been talked to death by all sides in this thread:
http://www.webhostingtalk.com/showthread.php?t=1234660
so I'm not going to repeat myself here!
I remain a strong believer that you really should (or "shall," by my standard!) use Intel S3700s as the caching drive(s) under CacheCade. The main reason: any CacheCade-ready, spindle-based RAID-10 server will easily run $3,000 or more, so spending just $120 extra (100GB S3700) or $250 extra (200GB S3700) on an S3700 is a no-brainer! Why take any chance on this critical link in the data chain between the RAID core and the main array? Not to mention five years free of worry about the drive dropping to read-only, even if you intend to cycle 1TB of data through a small 100GB S3700 caching drive every single day.
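The 1TB/day figure above can be sanity-checked against Intel's published rating for the S3700 line (10 full drive writes per day over the 5-year warranty); the helper function below is just an illustration, not anything from Intel.

```python
# Endurance sanity check for the caching-drive suggestion above.
# The Intel DC S3700 is rated for 10 full drive writes per day (DWPD)
# over its 5-year warranty; the helper name here is made up.
def dwpd(daily_writes_gb: float, capacity_gb: float) -> float:
    """Drive writes per day implied by a given workload."""
    return daily_writes_gb / capacity_gb

# Cycling 1TB/day through the 100GB model is exactly the rated 10 DWPD:
assert dwpd(1000, 100) == 10.0
# The 200GB model at the same workload runs with 2x headroom:
assert dwpd(1000, 200) == 5.0
```

So the "worry-free for 5 years" claim holds as long as the daily churn stays at or below one drive capacity times ten.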
As for using consumer-grade SSDs such as the 830/840 Pro/520 as members of the main array, that's a very different scenario. I somewhat agree with Karl that you could choose to take your chances there. Karl is correct that we have all been using spinning drives in disk arrays, with no "capacitors" either, since RAID was invented. If the lack of capacitors on array members were that "critical" to the "health" of a disk array, software- or hardware-based, we should have abandoned the whole idea of disk arrays long, long ago.
Whether the ultra-large buffer (512MB on the 840 Pro!) makes an SSD more vulnerable to filesystem corruption in a sudden power-loss event than spinning drives with much smaller buffers (16MB-128MB) is a debate for another day.
Using enterprise SSDs as member drives in the main array is still very much out of reach for most hosts due to cost. Even a moderate 4x 400GB S3700 setup is FOUR GRAND for the drives alone! So you can hardly blame anyone for making the economical choice of 830s, 840 Pros, 520s, etc. At least I won't!
Last edited by cwl@apaqdigital; 02-12-2013 at 06:57 PM.
-
02-12-2013, 07:08 PM #27Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
hmmmm.... which LSI card are you using?
We have no issue whatsoever doing so on the 9260/9260CV with the latest BIOS/firmware (v4.9, Jan 08, 2013) installed. I personally confirmed it just this morning, and my techs have been doing it for weeks, ever since we discovered the dismal IO rate with the buffer disabled on the 840 Pro.
-
02-12-2013, 07:14 PM #28Web Hosting Master
- Join Date
- Nov 2004
- Posts
- 654
It's a 9266-8i with the latest firmware (3.230.05-2100 Jan 08, 2013).
█ Fusioned - http://www.fusioned.net
█ Enterprise & Semi-Dedicated Hosting | CloudLinux, cPanel, LiteSpeed, Acronis | PHP 5.6, 7.2, 7.3, 7.4 & 8.0
█ Fully Managed SSD KVM VPS & Dedicated Servers | CloudFlare & Acronis Partner | RIPE LIR
-
02-12-2013, 07:47 PM #29Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
-
02-12-2013, 10:54 PM #30Web Hosting Master
- Join Date
- Aug 2012
- Posts
- 1,280
It depends on the architecture. SandForce has no buffer/RAM at all, by design, both to save cost and to avoid this issue. Marvell and Samsung controllers, for example, do. That doesn't mean the buffer will be a problem; it depends on what it's used for and how it's implemented.
Please take full note of the implementation and design. Don't make a blanket statement without going in depth on how the controller works.
Hyperconnezion - your fast path to success
IT management, Cloud and Data Center Services in Asia Pacific
-
02-13-2013, 01:03 AM #31New Member
- Join Date
- Feb 2013
- Posts
- 1
Seems the 9271/9266 doesn't allow you to change disk cache policy settings for 840P arrays.
People in this thread have already reported being unable to change it with StorCLI and MegaRAID Storage Manager. I can confirm that WebBIOS doesn't provide the option either, and attempting to execute "-LDSetProp -EnDskCache -L0 -a0" in MegaCLI exits with the error message "Not Allowed to Change Disk Cache Policy."
I'd venture to guess that the 9271/9266 is unable to recognize the existence of a disk cache on the 840Ps. We'll have to file support requests with LSI, and hope they fix it in the next firmware revision.
-
02-13-2013, 07:55 AM #32Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Haaa... you've got me on this one, big time! Despite your tone, I MUST say a big "THANK YOU" for pointing it out. I'm not afraid one bit to admit my own mistakes and ignorance, hopefully very occasional! After all, bringing facts and truth to the users and visitors of these forums is 10x more important than any shame over my own flaws.
It is VERY, VERY true that SandForce-based SSDs simply DON'T have an on-drive buffer/cache at all, because that is how the SandForce controller is designed and functions. This includes every SSD from little-known names to big-time Intel, as long as SandForce is the chosen controller.
This critical fact changes things quite a bit!
1. Since there is no buffer on SandForce SSDs, there is no buffer for capacitors to protect in the first place, and that's why not a single SandForce SSD on the market ships with capacitors.
2. There are of course other factors to consider when choosing an SSD for a production server, but from a pure power-loss-protection point of view, the Intel 520 series is in fact "safer", and therefore better suited to production servers, than the Samsung 830/840/840 Pro, because there is no buffer to protect on the Intel 520 or any other SandForce-based Intel SSD. All data goes through the SandForce controller in "pass-thru" mode, with nothing temporarily stored in a buffer to lose in a power-loss event. It's just like deliberately disabling the write buffer on a hardware RAID card when no BBU is installed.
3. All benchmarks of SandForce-based SSDs should be more "pure" than those of SSDs with Intel/Samsung/Marvell controllers that do rely on a write buffer (64MB on the Intel 320, 128MB on the Crucial M4, 256MB on the Samsung 830, 512MB on the Samsung 840 Pro, and 1GB on the Intel S3700). Whether the on-drive buffer should truly be "counted" as part of real-world SSD performance is a topic for another day. At least SandForce benchmarks are raw, undisguised, and closer to "true form", because there is no buffer/cache to influence the results. As we have all learned by now, drive cache disabled vs. enabled on the 840 Pro can mean the difference between 10% and 100% of the reported IO rate, and I suspect a greater or lesser buffer on/off effect would show up on other cache-equipped SSDs too.
4. If the enterprise-grade S3700 is not affordable to you, then by the same reasoning as in (2), and despite the Intel 520's lower average performance than the Samsung 840 Pro, the Intel 520 should be "preferred" by hosts who are very afraid of power-loss-induced data loss, whether as a caching drive or as a member of the main array. No buffer, no data in it, no data to lose. Pure and simple.
How big a role the reliability of SandForce itself as an SSD controller, and its associated "bad" history of BSODs, should play in SSD choice is another topic for another day. But on the question of buffer vs. no buffer, and therefore capacitor vs. no capacitor, this matter now seems settled: among consumer-grade SSDs, the Intel 520 is simply a better choice than the Samsung 830/840/840 Pro for production servers, higher or lower IOPS figures aside, of course. AND I will start pointing out these pros and cons to my own clients from this point on.
THANK YOU, concerto49!
-
02-13-2013, 08:20 AM #33Web Hosting Master
- Join Date
- Aug 2012
- Posts
- 1,280
You're welcome, and thanks for the praise.
3. The SandForce controller uses compression, so it's not exactly "pure": it's faster on compressible data and dips on incompressible data. This shows up in benchmarks such as CrystalDiskMark and AS SSD, where raw, incompressible random IO is measured. Real-world speaking, though, compression helps a lot.
Intel claims the 1GB buffer in the S3700 is not used for caching user data but for internal housekeeping. Whether that is accurate, I'm not sure.
I would say SandForce's firmware issues have largely been fixed by now, especially in Intel's heavily tested custom firmware. It's a mature product. Its current problem isn't the firmware but the speed: it's an aging controller, and the newer Marvell, Samsung, and Indilinx parts are a lot faster.
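The compressibility effect is easy to demonstrate: a controller that compresses inline simply writes less NAND for compressible data. In this sketch, zlib stands in for SandForce's proprietary engine, so the ratios are illustrative only, not the controller's actual figures.

```python
# Illustration of why inline compression skews SSD benchmarks: far less
# data is physically written for compressible input. zlib is a stand-in
# for the controller's (proprietary) compression engine.
import os
import zlib

block_zero = b"\x00" * 4096        # highly compressible (all zeros)
block_rand = os.urandom(4096)      # effectively incompressible (random)

ratio_zero = len(zlib.compress(block_zero)) / len(block_zero)
ratio_rand = len(zlib.compress(block_rand)) / len(block_rand)

assert ratio_zero < 0.05   # zeros shrink to almost nothing
assert ratio_rand > 0.95   # random data barely compresses at all
```

This is why zero-fill dd tests flatter SandForce drives, while AS SSD's incompressible workload shows the controller's raw speed.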
Samsung 840 TLC:
Even if you write 20GB/day, it's still more than 5 years on 128GB model.
http://www.anandtech.com/show/6459/s...ce-of-tlc-nand
Hyperconnezion - your fast path to success
IT management, Cloud and Data Center Services in Asia Pacific
-
02-13-2013, 09:06 AM #34Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Despite some reported incidents of BSODs on Intel 520s, those all seemed to involve Windows users. Little to none were reported by Linux users; at least I couldn't find one despite googling very hard.
We do have a client who asked us to install 50pcs of the 480GB Intel 520 across 25 dual Xeon E5 hex-core production servers (2 each, running CentOS mdadm RAID-1) under very heavy random IO over the last 6 months. So far so good, with zero BSOD-like incidents. Maybe we just need to start trusting the Intel+SandForce combo more than we used to, especially on Linux boxes.
As to the write-endurance discussion, the math is rather simple!
120GB of TLC at a 1,000 P/E maximum = a 120,000GB TBW (total bytes written) limit.
120,000GB divided by 20GB a day (with a perfect 1x write amplification) means 6,000 days = 16.4 years.
However, at 3x write amp (typical non-enterprise desktop/notebook use), endurance drops to about 5.5 years. Considering the heavy IO on a production server, even just 5x write amp cuts it to 3.3 years; with 10x write amp, or if you need to cycle 100GB a day at just 3x write amp, a dismal 1.6 to 1.1 years is the end result. So it comes down entirely to usage patterns, and to the GC/TRIM/OP implementation keeping write amp down on a production server. Regardless, I will say the 840 Pro's MLC, if it must be Samsung, is still the much better, safer choice (3,000 P/E), because it allows three times the TBW, not to mention the better performance.
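The arithmetic above can be written out as a small sketch; the drive parameters are the ones quoted in this post (120GB, 1,000 P/E), and the write-amp figures are the same assumed scenarios, not measured values.

```python
# Worked version of the endurance math above: a 120GB TLC drive with a
# 1,000 P/E budget gives a fixed TBW that different write-amplification
# factors burn through at different rates.
TBW_GB = 120 * 1000   # 120,000 GB total-bytes-written budget

def endurance_years(daily_host_writes_gb: float, write_amp: float) -> float:
    """Years until the P/E budget is exhausted at a given workload."""
    return TBW_GB / (daily_host_writes_gb * write_amp) / 365

assert round(endurance_years(20, 1), 1) == 16.4    # perfect 1x write amp
assert round(endurance_years(20, 3), 1) == 5.5     # typical desktop 3x
assert round(endurance_years(20, 5), 1) == 3.3     # heavy server 5x
assert round(endurance_years(100, 3), 1) == 1.1    # 100GB/day at 3x
```

The same function with a 3,000 P/E budget triples every figure, which is the 840 Pro MLC argument in a nutshell.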
-
02-13-2013, 09:16 AM #35Web Hosting Master
- Join Date
- Aug 2012
- Posts
- 1,280
OK, more information. The BSODs were not only SandForce's fault. I watched this whole event unfold when OCZ first rolled out SandForce 2 drives. It had to do with driver implementation on the Windows side as well as other factors. SATA drivers have been updated over time to help combat it, and SandForce also introduced workarounds.
SandForce has had other issues, though, not Intel-related. E.g., they rolled out a firmware update that disabled TRIM and didn't re-enable it for months. Intel had no such issues because it used custom firmware.
As to write amplification: it isn't something you change directly. Write amplification is determined by the controller. Sure, there are best- and worst-case scenarios, but it differs per controller and per how the controller handles writes.
This is why Samsung was able to use TLC in the 840: the controller reduced write amplification.
It's for the same reason NAND has been able to go through die shrinks: controllers have improved enough to reduce write amplification. P/E cycles were at one stage 5,000, and that alone meant nothing, because back then write amplification was a lot higher.
As said, please look at the whole package, not just the NAND and the controller separately.
Hyperconnezion - your fast path to success
IT management, Cloud and Data Center Services in Asia Pacific
-
02-13-2013, 11:53 AM #36Solid State
- Join Date
- Aug 2010
- Posts
- 1,687
To be clear, is the buffer still an issue even when combined with a BBU or CacheVault?
██ RamNode - High Performance Cloud VPS
██ SSD Cloud and Shared Hosting
██ NYC - LA - ATL - SEA - NL - DDoS Protection - AS3842
██ Deploy on our SSD cloud today! - www.ramnode.com
-
02-13-2013, 12:01 PM #37Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Sure, SSD controllers have varying success at minimizing write amp, but factory over-provisioning (OP; ~7% on consumer SSDs, up to 32% on the S3700) plus user OP can drastically increase write endurance, which has the same effect as minimizing write amp. 20% user OP can raise the endurance level (i.e., the preset TBW limit at which the drive becomes "read-only") 2-3 fold. And on a RAID card with no support for any of the GC/TRIM machinery, user OP is pretty much the only thing you can do to raise endurance.
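The OP effect can be sketched the same way as the endurance math earlier in the thread: extra spare area lowers effective write amplification, so the same TBW budget covers more host writes. The 1.2x write-amp figure for 20% user OP below is an assumed input chosen to land inside the 2-3x range quoted above, not a vendor number.

```python
# Rough sketch (not a vendor formula) of how user over-provisioning
# stretches endurance by lowering effective write amplification.
def endurance_days(capacity_gb: float, pe_cycles: int,
                   daily_writes_gb: float, write_amp: float) -> float:
    """Days until the NAND's P/E budget is exhausted."""
    tbw_gb = capacity_gb * pe_cycles
    return tbw_gb / (daily_writes_gb * write_amp)

# 120GB TLC drive, 20GB/day of host writes:
base = endurance_days(120, 1000, 20, write_amp=3.0)      # no user OP
with_op = endurance_days(120, 1000, 20, write_amp=1.2)   # assumed WA at 20% OP

assert base == 2000.0            # ~5.5 years
assert with_op / base == 2.5     # inside the 2-3x gain claimed above
```

On a RAID card that blocks TRIM, this lower steady-state write amp from OP is the only endurance lever left, which is the point being made above.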
-
02-13-2013, 12:39 PM #38Web Hosting Master
- Join Date
- Jan 2004
- Location
- Pennsylvania
- Posts
- 942
Matt Ayres - togglebox.com
Linux and Windows Cloud Virtual Datacenters powered by Onapp / Xen
Instant Setup, Instant Scalability, Full Lifecycle Hosting Solutions
www.togglebox.com
-
03-04-2013, 01:18 PM #39New Member
- Join Date
- Mar 2013
- Posts
- 3
Has anyone had any luck improving the performance of their 840 pros?
I have an LSI 9286-8e card with fastpath and 8x840 pro 512GB SSDs.
I initially set this up in a RAID 0 and got fantastic speeds: high sequentials and good random IOPS.
This is a production server, though, and I needed some form of redundancy (plus over 2TB of space), so I configured a RAID 6. Sequential speeds seemed about right, but random IOPS were terrible (4K QD32 writes = about 8,000). I know write performance is worse with RAID 6, but I wasn't expecting it to be that bad. Random reads should probably be higher too.
I've been in contact with LSI but they're taking their time to get back to me. Just e-mailed Samsung too.
LSI settings: always write back, no read ahead. Like other posters, the drive cache option is greyed out and stuck on 'unchanged', but when I configure one of the settings the log does say the disk cache policy is enabled.
I've just seen that LSI released a firmware update a couple of days ago. I've not flashed it yet, as this server is supposed to go live today! Has anyone else? And if you have, have you noticed any improvements? There's nothing in the release notes to suggest the issue has been fixed.
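For what it's worth, the ~8,000 figure is roughly what the RAID 6 small-write penalty predicts. Each random write on RAID 6 triggers a read-modify-write of the data strip plus both parity strips (3 reads + 3 writes = 6 drive IOs). The per-drive IOPS number below is an assumed round figure for illustration, not a measured 840 Pro spec.

```python
# Back-of-envelope check of the RAID 6 small-random-write penalty.
# RMW on RAID 6: read old data + old P + old Q, then write new data +
# new P + new Q, i.e. 6 drive IOs per host write.
def raid6_random_write_iops(n_drives: int, per_drive_write_iops: float,
                            penalty: int = 6) -> float:
    """Approximate array-level random-write IOPS for RAID 6."""
    return n_drives * per_drive_write_iops / penalty

# 8 drives at an assumed ~6,000 sustained 4K write IOPS each (a plausible
# figure with the on-drive cache disabled) lands right on what was seen:
assert raid6_random_write_iops(8, 6000) == 8000.0
```

So the disabled drive cache and the parity penalty together could plausibly explain the result, even before any controller bug.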
-
03-04-2013, 01:51 PM #40New Member
- Join Date
- Mar 2013
- Posts
- 3
Right, just installed the firmware. Had a ten minute window and didn't need to restart anyway.
The new FW hasn't made a difference. Anyone have any ideas?
-
03-04-2013, 02:41 PM #41Web Hosting Guru
- Join Date
- Mar 2008
- Location
- /usr/bin/kvm
- Posts
- 261
-
03-04-2013, 03:51 PM #42New Member
- Join Date
- Mar 2013
- Posts
- 3
-
03-05-2013, 12:33 AM #43Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
No luck here either! We also tried the latest 5.6 firmware (Mar 02, 2013) from LSI on 9266/9271 cards (dual-core LSI 2208 ROC), but we still can't change "disk cache" on the Samsung 840 Pro from "no change" to "enabled" the way the 9260 cards (single-core LSI 2108 ROC) can.
We also found that these LSI 2208-based RAID cards can't change the drive-buffer setting on the Intel S3700 enterprise SSD either; it always stays at "no change". So this issue is not exclusive to the Samsung 840/840 Pro.
-
03-05-2013, 12:35 AM #44Web Hosting Master
- Join Date
- Aug 2012
- Posts
- 1,280
Hyperconnezion - your fast path to success
IT management, Cloud and Data Center Services in Asia Pacific
-
03-05-2013, 12:42 AM #45Web Hosting Master
- Join Date
- Apr 2000
- Location
- Brisbane, Australia
- Posts
- 2,602
: CentminMod.com Nginx Installer Nginx 1.25, PHP-FPM, MariaDB 10 CentOS (AlmaLinux/Rocky testing)
: Centmin Mod Latest Beta Nginx HTTP/2 HTTPS & HTTP/3 QUIC HTTPS supports TLS 1.3 via OpenSSL 1.1.1/3.0/3.1 or BoringSSL or QuicTLS OpenSSL
: Nginx & PHP-FPM Benchmarks: Centmin Mod vs EasyEngine vs Webinoly vs VestaCP vs OneInStack
-
03-05-2013, 11:29 AM #46Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
-
03-21-2013, 05:13 PM #47Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
An update!
It seems the 3ware 9750 series does a lot better with Samsung 840 Pro SSDs than all of the LSI 9260/9265/9266/9271 cards.
Current issues with the combo of Samsung 840 Pro and LSI RAID cards:
9265/66/71 (2208 dual-core ROC): can't enable the on-SSD 512MB buffer, which causes very low write rates, because 840 Pro performance depends heavily on the on-drive buffer.
9260: low write rates when the block size is smaller than 32K.
We tested some 840 Pro SSDs on a 3ware 9750-4i card (drive buffer always "enabled"; no user option) this morning and found the write rate consistently above 700MB/sec on every dd write-zero test we ran, with block sizes of 16K, 32K, 64K, 256K, 512K, and 1M. Some tests even showed 1.1-1.3GB/sec! Can't say the same about the LSI cards, which gave write rates as low as the low-100MB/sec range, slower than a single SATA drive.
Ironically, LSI owns 3ware! So this must be some sort of firmware conflict between the LSI cards and the Samsung 840 Pro with regard to the on-drive buffer.
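For anyone wanting to repeat the comparison, this is a small-scale stand-in for the dd write-zero tests described above. The original runs presumably wrote to the array device itself; this writes a 64MB scratch file (the `ddtest.bin` path is a placeholder) so it is safe to run anywhere, and `conv=fdatasync` makes dd flush before reporting throughput.

```shell
# Sweep dd write-zero throughput across a few block sizes, syncing to
# disk before each rate is reported. Total size per run is 64 MB.
for bs in 16K 64K 1M; do
  count=$(( 64 * 1024 * 1024 / $(numfmt --from=iec "$bs") ))
  dd if=/dev/zero of=ddtest.bin bs="$bs" count="$count" conv=fdatasync 2>&1 | tail -n 1
done
rm -f ddtest.bin
```

Note that zero-fill data flatters any compressing controller, as discussed earlier in the thread, so this only compares like with like across the two RAID cards.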
-
03-21-2013, 06:11 PM #48Web Hosting Master
- Join Date
- Aug 2007
- Location
- L.A., CA
- Posts
- 3,710
Any way to get LSI to revamp the firmware on the 9266-4i for 840Ps?
EasyDCIM.com - DataCenter Infrastructure Management - HELLO DEDICATED SERVER & COLO PROVIDERS! - Reach Me: chris@easydcim.com
Bandwidth Billing | Inventory & Asset Management | Server Control
Order Forms | Reboots | IPMI Control | IP Management | Reverse&Forward DNS | Rack Management
-
03-21-2013, 06:43 PM #49Web Hosting Master
- Join Date
- Dec 2009
- Posts
- 2,297
Mine usually die with "rejecting I/O to offline device".
This seems to be an LSI 9266-4i / CentOS 6 issue and not CacheCade, though.
█ REDUNDANT.COM • Equinix Data Centers • Performance Optimized Network
█ Managed & Unmanaged • Servers • Colocation • Cloud • VEEAM
█ sales@redundant.com
-
03-22-2013, 06:55 AM #50Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
I've notified LSI tech support a few times about the "compatibility" issues with the Samsung 840 Pro, yet the latest v5.6 firmware, just released in March 2013 for the 2208 dual-core ROC-based RAID cards, didn't improve things at all. So, good luck with LSI.
Samsung's response was even more comical: "The 840 Pro is a consumer-grade product! Please use Samsung enterprise-class SSDs for enterprise servers," or something to that effect.
Too bad; the Samsung 830s were so smooth and trouble-free.