Results 1 to 25 of 31
Thread: Why not use SSDs on RAID5?
-
02-24-2015, 05:04 AM #1Aspiring Evangelist
- Join Date
- Apr 2013
- Location
- Boston, MA
- Posts
- 400
Why not use SSDs on RAID5?
I see a lot of people saying to avoid SSDs on RAID5 like it's a curse, but why exactly should I avoid putting 3 SSDs in RAID5, and have 2 SSDs in RAID1 and 1 hot-spare SSD?
Thanks.
-
02-24-2015, 05:53 AM #2Web Hosting Guru
- Join Date
- Jul 2014
- Posts
- 291
-
02-24-2015, 05:57 AM #3~~~~
- Join Date
- May 2008
- Posts
- 3,424
Uptime Monitor - Minimize your downtime by being the first to know about it!
Blacklist Monitor - Are any of your IPs or Domains blacklisted? Find out before it gets to affect you or your clients.
-
02-24-2015, 06:17 AM #4Server sales professional
- Join Date
- Mar 2010
- Location
- Lithuania
- Posts
- 2,767
Have a look at this thread for a range of opinions: http://serverfault.com/questions/513...raid5-with-ssd
Affordable custom Single/DUAL CPU servers EU | Configure
Linux, Windows VPS in LT/UK/NL/USA | Get one now
-
02-24-2015, 06:17 AM #5Junior Guru
- Join Date
- Feb 2014
- Posts
- 229
| | MassiveGRID.com - The High Availability Cloud Provider with Global Coverage
| | Equinix Datacenters, Fully Redundant Power, Network & Enterprise Grade Hardware
| | High Availability PaaS, Cloud Dedicated Servers & Private Cloud Hosting with 100% Uptime SLA
| | 17 years of excellence, 17 years of support, 17 years of speed, stability and evolution!
-
02-25-2015, 04:08 AM #6
I think everyone in this thread is missing the obvious here: RAID 5 amplifies writes dramatically, and SSDs have a limited write endurance. The two are a poor fit, as you're going to wear out the SSDs very quickly in a RAID 5 configuration. RAID 10 is best, and RAID 1 is also acceptable. RAID 5 is best avoided if possible.
IOFLOOD.com -- We Love Servers
Phoenix, AZ Dedicated Servers in under an hour
★ Ryzen 9: 7950x3D ★ Dual E5-2680v4 Xeon ★
Contact Us: sales@ioflood.com ★
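To put rough numbers on the amplification argument, here is a back-of-the-envelope sketch (my own illustration using the classic textbook I/O counts, not a benchmark or anything measured on real controllers) of the device-level I/O generated by one small random write under each layout:

```python
# Device-level I/O generated by ONE small (sub-stripe) random write.
# Textbook counts for illustration only -- real controllers cache and
# coalesce writes, so treat these as upper bounds, not measurements.

def raid10_ops():
    # Data goes to a drive and its mirror: 2 writes, 0 reads.
    return {"reads": 0, "writes": 2}

def raid5_rmw_ops():
    # Read-modify-write: read old data + old parity,
    # then write new data + new parity.
    return {"reads": 2, "writes": 2}

def raid5_naive_ops(n_disks):
    # A naive controller re-reads the rest of the stripe to
    # recompute parity: (n_disks - 2) reads, 2 writes.
    return {"reads": n_disks - 2, "writes": 2}

if __name__ == "__main__":
    for name, ops in [("RAID10", raid10_ops()),
                      ("RAID5 (read-modify-write)", raid5_rmw_ops()),
                      ("RAID5 (naive, 5 disks)", raid5_naive_ops(5))]:
        print(f"{name}: {ops}")
```

Note that in terms of raw device writes, RAID5 and RAID10 both write two devices per update; the extra wear and slowdown come from the reads, the parity churn, and any sub-stripe write amplification inside the SSDs themselves.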
-
02-25-2015, 04:14 AM #7dd if=/dev/null of=/dev/sda
- Join Date
- Aug 2010
- Location
- Belgium
- Posts
- 657
█ AssetGateway
█ Skype da_arco
-
02-25-2015, 04:14 AM #8The Linux Specialist
- Join Date
- Mar 2003
- Location
- /root
- Posts
- 23,981
Specially 4 U
Reseller Hosting: Boost Your Websites | Fully Managed KVM VPS: 3.20 - 5.00 Ghz, Pure Dedicated Power
JoneSolutions.Com is on the net 24/7 providing stable and reliable web hosting solutions, server management and services since 2001
Debian|Ubuntu|cPanel|DirectAdmin|Enhance|Webuzo|Acronis|Estela|BitNinja|Nginx
-
02-25-2015, 04:17 AM #9dd if=/dev/null of=/dev/sda
- Join Date
- Aug 2010
- Location
- Belgium
- Posts
- 657
Yes, mostly the epic rebuild times. The problem with rebuilding is that it's intensive on your disks, so there's a real possibility that the rebuild itself will make another disk fail.
Here, have fun:
https://www.memset.com/tools/raid-calculator/
http://wintelguy.com/raidmttdl.pl
-
02-25-2015, 04:26 AM #10Web Hosting Master
- Join Date
- Feb 2012
- Posts
- 2,103
Like others have said, RAID 5 is intensive during rebuild: rebuild times are slower compared to other RAID levels, and the remaining disks are more likely to fail during the rebuild, in which case you're screwed. Go with RAID 1 or RAID 10 where possible.
█ Clouveo - SSD/NVMe Cloud VPS & Web Hosting
█ Cloud VPS Servers | DDoS Protected | Snapshots | Auto Backups | One Click Apps | Custom ISOs
█ clouveo.com | Locations: [UK] London, [NL] Amsterdam, [US] Los Angeles
-
02-25-2015, 10:26 AM #11WHT Addict
- Join Date
- Feb 2014
- Posts
- 150
I don't understand. A write to a RAID10 array results in a write to two SSDs (actual data + 1 copy). A write to a RAID5 array results in a write to two SSDs as well (actual data + parity). That shouldn't make any difference for the lifespan. Also, with enterprise 10 DWPD SSDs you'd need a lot of writes to wear out your SSDs within their lifetime.
-
02-25-2015, 10:33 AM #12
-
02-25-2015, 10:35 AM #13Junior Guru Wannabe
- Join Date
- Oct 2014
- Location
- Houston
- Posts
- 36
Sort of like the old song Don't Copy That Floppy (https://www.youtube.com/watch?v=up863eQKGUI) - you gotta remember not to "RAID5 your SSD".
I may have just aged myself.
█ NetDepot.com - Dedicated Servers - Cloud Servers
█ Atlanta, Dallas, NYC area - Chicago & Los Angeles (Coming Soon)
█ Fully Automated Cloud | All Dedicated Servers include IPMI & 24x7 Support
█ AIM: RodneyGiles154 | Skype: rodney.giles154
-
02-25-2015, 11:13 AM #14Web Hosting Master
- Join Date
- Aug 2007
- Posts
- 2,157
█ Bobby - PreciselyManaged.com - Precision Hosting Solutions
█ Enterprise Shared, Reseller, VPS, Hybrid, and Dedicated Hosting
█ SpamExperts | CloudLinux | cPanel | Bacula + R1soft | and more!
█ Full proactively managed, and we specialize in hosting small web hosts
-
02-25-2015, 11:22 AM #15Web Hosting Master
- Join Date
- May 2001
- Location
- HK
- Posts
- 3,082
Since everyone is so against RAID 5, under what circumstances does one use it? I keep hearing the same thing whenever RAID 5 comes up in discussion.
I use RAID 5 and 50 and am not seeing a problem.
-
02-25-2015, 11:33 AM #16Web Hosting Master
- Join Date
- Nov 2005
- Posts
- 3,944
-
02-25-2015, 02:39 PM #17
It's based in large part on the number of write operations, because each write op, no matter how small, must reflash flash in page- or erase-block-sized chunks or bigger. Depending on various factors (free/available blocks on the SSD, RAID chunk size, the size of a given write, etc.), you could see pretty substantial write amplification in this scenario, which is why it should be avoided for both longevity and performance reasons.
-
02-25-2015, 02:55 PM #18Web Hosting Master
- Join Date
- Nov 2005
- Posts
- 3,944
Yes, I was thinking the same thing about write amplification after I posted. Assuming you know what you are doing, a higher RAID chunk size and a higher filesystem transaction size / commit timeout could benefit endurance. I think with proper settings you could make a RAID5 outlast a RAID1 by a good amount, assuming you are willing/able to lose a small amount of data on a power loss or crash (because of the longer commit timeout).
-
02-25-2015, 03:09 PM #19Backup Guru
- Join Date
- Feb 2002
- Location
- New York, NY
- Posts
- 4,618
How so? If a naive RAID controller is rewriting data that hasn't changed, then it would cause amplification, but a proper RAID5 implementation shouldn't be doing that.
The main reason to avoid it would be for performance. The overhead of reading old data and old parity on partial stripe writes can slow things down quite a bit. In reality, you should still get decent performance, and maybe that's all you need.
Another option to consider is using all 3 SSDs in a RAID10 by logically splitting each SSD in half, giving you 6 components to work with. You would then have 3 mirrors in your RAID10, where mirror #1 uses 1A and 2A, mirror #2 uses 3A and 1B, and mirror #3 uses 2B and 3B.
Scott Burns, President
BQ Internet Corporation
Remote Rsync and FTP backup solutions
*** http://www.bqbackup.com/ ***
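The 3-drive split-mirror layout described in the post above can be sketched like this (a toy mapping to illustrate the pairing, not any particular controller's implementation):

```python
# Toy illustration of a "RAID10" built from 3 SSDs by splitting each
# drive into an A half and a B half, giving six components and three
# mirrors. The pairing follows the layout described in the post above.

mirrors = [
    ("1A", "2A"),  # mirror #1
    ("3A", "1B"),  # mirror #2
    ("2B", "3B"),  # mirror #3
]

def drives_in_mirror(pair):
    """Physical drive numbers backing a mirror (e.g. '1A' -> drive 1)."""
    return {int(half[0]) for half in pair}

# Sanity check: no mirror keeps both copies on the same physical drive,
# so any single-drive failure still leaves every mirror one live copy.
for pair in mirrors:
    assert len(drives_in_mirror(pair)) == 2, pair
print("each mirror spans two distinct drives: OK")
```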
-
02-25-2015, 06:13 PM #20Junior Guru Wannabe
- Join Date
- Nov 2013
- Posts
- 34
Don't forget that RAID5 adds significant (at least by SSD standards) read/write latency. SSDs in servers are all about low access times and high queue-depth performance. You will see much lower random performance in RAID5 than on a single drive.
-
02-26-2015, 07:34 AM #21WHT Addict
- Join Date
- Feb 2014
- Posts
- 150
No it doesn't. In a 5-disk RAID5 array, a write will result in the following operations:
- Writing the actual data (1 operation)
- Reading the data from the three other data disks in the stripe for the parity calculation (3 operations)
- Writing out the new parity information to the last remaining disk (1 operation)
-
02-26-2015, 03:02 PM #22Backup Guru
- Join Date
- Feb 2002
- Location
- New York, NY
- Posts
- 4,618
It doesn't even have to read from all the other disks in the stripe. It just has to read the old data and the old parity. To write one sector, the RAID reads one sector of old data, reads one sector of old parity, writes one sector of new data, and writes one sector of new parity. XORing the old data with the old parity basically removes the old data from the parity calculation.
RAID10 will do 2 writes. RAID5 will do 2 reads and 2 writes. Depending on the workload, there's a good chance that the old data is already cached, so it may only have to do 1 read and 2 writes.
Scott Burns, President
BQ Internet Corporation
Remote Rsync and FTP backup solutions
*** http://www.bqbackup.com/ ***
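The XOR trick described in the post above is easy to demonstrate (a pure-Python sketch of the parity arithmetic, not real controller code):

```python
import os

# RAID5 read-modify-write parity update on a toy 3-data-disk stripe.
# new_parity = old_parity XOR old_data XOR new_data -- XORing the old
# data back into the parity removes its contribution, so the other
# disks in the stripe never need to be read.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

SECTOR = 16  # toy sector size in bytes
d0, d1, d2 = (os.urandom(SECTOR) for _ in range(3))
parity = xor(xor(d0, d1), d2)          # full-stripe parity

new_d1 = os.urandom(SECTOR)            # overwrite one sector on disk 1
parity = xor(xor(parity, d1), new_d1)  # 2 reads (old d1, old parity) + 2 writes

# The shortcut must agree with recomputing parity from the whole stripe.
assert parity == xor(xor(d0, new_d1), d2)
print("parity update via XOR shortcut matches full recompute")
```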
-
02-27-2015, 07:03 AM #23WHT Addict
- Join Date
- Feb 2014
- Posts
- 150
-
02-27-2015, 03:12 PM #24Junior Guru
- Join Date
- Mar 2005
- Posts
- 184
Mostly because it's dangerous in large arrays. With multi-terabyte RAID arrays on SATA drives with many blocks, you risk discovering another bad block during the resync. If you do, the entire array is most likely gone. RAID 6 gives you double protection and should be used for all large SATA setups: if another drive fails while you resync, nothing bad happens.
You can use RAID5 for 72-146 GB 10-15k drives in small setups (4-6 drives). More than that and you start taking unnecessary risks.
And as always, keep a spare drive available next to your servers at all times. When you use it to replace a failed drive, immediately order a new spare.
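To put a rough number on the bad-block risk, here is a common back-of-the-envelope calculation (illustrative only; it assumes independent, uniformly distributed unrecoverable read errors at the drive's spec-sheet rate, which real drives don't strictly obey):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading back an entire array during a RAID5 rebuild.
# Consumer SATA drives are commonly spec'd at 1 URE per 1e14 bits read.

def p_ure_during_rebuild(bytes_read, ure_rate_bits=1e14):
    bits = bytes_read * 8
    p_ok_per_bit = 1 - 1 / ure_rate_bits
    return 1 - p_ok_per_bit ** bits

if __name__ == "__main__":
    TB = 10**12
    for size_tb in (1, 4, 12):
        p = p_ure_during_rebuild(size_tb * TB)
        print(f"read {size_tb} TB during rebuild -> P(>=1 URE) ~ {p:.0%}")
```

For multi-terabyte rebuilds this probability gets uncomfortably large, which is the arithmetic behind preferring RAID 6 for big SATA arrays.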
-
02-27-2015, 04:11 PM #25Location = SoapBox
- Join Date
- Oct 2003
- Posts
- 6,564
There is nothing principally wrong with RAID5 if used in the right scenarios - it has always been viable. But, as others have pointed out, it does have various drawbacks, and those drawbacks were always weighed against the benefit of getting more capacity out of fewer drives. With the price of storage continuing to drop, the argument simply isn't the same as it was back in the days of 73GB/146GB/300GB SCSI drives. You can now purchase 4TB or larger SATA drives that perform better at lower cost. If it's pure capacity you're after, a RAID10 array of modern drives will cost you less and provide more capacity than a RAID5 array of older drives ever could. So the argument for RAID5 just doesn't make sense any longer - unless, of course, you are really just trying to save on drive cost and have zero concern for write performance in the environment. Otherwise, buy an extra drive or two and go with RAID10/60.
www.cartika.com
www.clusterlogics.com - You simply cannot run a hosting company without this software. Backups, Disaster Recovery, Big Data, Virtualization. 20 years of building software that solves your problems