  1. #1

    SSD drives in a server setting

    Does anyone have experience with SSD drives in a server environment? I've seen now a few offers with SSD (Intel) and wondering if the speed is noticeable?

    Are they worth it? From what I have been reading, they are superior in reliability but have issues with limited write cycles.

    Any feedback would be nice.

    BTW, thank you all for all the great help I have been receiving on this board. I'm learning daily about servers.

  2. #2
    Join Date
    Dec 2007
    Location
    Indiana, USA
    Posts
    19,196
    The speed *is* noticeable and it is definitely worth it if you don't need a *lot* of storage space.

    Limited write cycles are true of any drive; however, most standard drives fail on the read (meaning lost data), whereas SSD drives fail on the write, meaning no data is lost: the sector is marked bad and the data is written elsewhere.

    The Intel drives for servers are SLC (single-level cell) with up to roughly 1,000,000 rewrites per sector, whereas MLC is consumer grade with 10,000 to 100,000 rewrites per sector.
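    To see what that cycle difference means in practice, here is a rough endurance sketch. It assumes ideal wear leveling (writes spread evenly across every cell); the cycle counts match the ranges above, but the 50GB/day write rate is a made-up illustrative workload, not a measured figure:

    ```python
    # Rough endurance sketch assuming ideal wear leveling spreads writes
    # evenly across every cell. The daily write rate is an illustrative
    # assumption, not a measured server workload.

    def years_until_worn(capacity_gb: float, cycles_per_cell: int,
                         writes_gb_per_day: float) -> float:
        """Years until every cell has consumed its rated write cycles."""
        total_writable_gb = capacity_gb * cycles_per_cell
        return total_writable_gb / writes_gb_per_day / 365

    # Hypothetical 64 GB drive absorbing 50 GB of writes per day:
    mlc = years_until_worn(64, 10_000, 50)     # consumer MLC, low end
    slc = years_until_worn(64, 1_000_000, 50)  # enterprise SLC
    print(f"MLC: ~{mlc:.0f} years, SLC: ~{slc:.0f} years")
    ```

    Under these toy numbers even low-end MLC lasts decades; real workloads, write amplification, and imperfect wear leveling shrink that considerably, which is why the cycle rating matters.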
    Michael Denney - MDDHosting.com - Proudly hosting more than 37,700 websites since 2007.
    Ultra-Fast Cloud Shared and Pay-By-Use Reseller Hosting Powered by LiteSpeed!
    cPanel • Free SSL • 100% Uptime SLA • 24/7 Support
    Class-leading support that responds in minutes, not days.

  3. #3
    Thank you for your quick response.

  4. #4
    Join Date
    Dec 2007
    Location
    Indiana, USA
    Posts
    19,196
    Quote Originally Posted by fendter View Post
    Thank you for your quick response.
    No problem

  5. #5
    Join Date
    Mar 2002
    Location
    SFO,MIA,ATL,AMS
    Posts
    650
    SSD drives are the best!

  6. #6
    Join Date
    Jan 2005
    Location
    Richmond, VA
    Posts
    3,119
    Is it known whether the SSD drives are as reliable/less reliable/more reliable than regular hard disks over the long run?
    Daniel B., CEO - Bezoka.com and Ungigs.com
    Hosting Solutions Optimized for: WordPress • Joomla • OpenCart • Moodle
    Data Centers in: Chicago (US), London (UK), Sydney (AU), Sofia (BG), Pori (FI)
    Email Daniel directly: ceo [at] bezoka.com

  7. #7
    Join Date
    Jan 2003
    Location
    Texas, where else?
    Posts
    1,571


    Due to their extreme expense per GB (unless this is your own server and you don't need much space), the few hosts who are using SSDs partition them for the OS and other operations where read/write time is critical (usually read time) and the extreme speed is noticeable, then use fast (10K) or ultra-fast (15K) SAS drives for data.
    However, even with SAS, speed gains are greatest up to 146GB. So it's still relatively over-expensive technology when a properly optimized SATA II drive with a fast spindle speed and a good RAID setup will deliver very good performance at a far lower cost per GB.
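    The cost-per-GB argument can be made concrete with a quick comparison. The $500/160GB Intel figure is mentioned later in this thread; the SAS and SATA prices here are rough period-appropriate assumptions, not vendor quotes:

    ```python
    # Cost-per-GB comparison. Prices are illustrative assumptions for the
    # era of this thread, not vendor quotes.
    drives = {
        "160GB Intel SLC SSD": (500.0, 160),  # price cited in post #8
        "146GB 15K SAS":       (250.0, 146),  # assumed price
        "500GB 7.2K SATA II":  (60.0,  500),  # assumed price
    }
    for name, (price_usd, capacity_gb) in drives.items():
        print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
    ```

    Even with made-up SAS/SATA prices in the right ballpark, the SSD lands at an order of magnitude more per GB than commodity SATA, which is the whole trade-off this post describes.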

    Technology is changing very fast these days, though, so who knows what will be available in a few months; a little over a year ago the first LED TVs were going for around $1,500 for a 12", and now they are that much for a 40".
    SSD technology is advancing rapidly, as is SAS technology, and prices continue to drop as they do.
    But one thing to consider with ultra-fast server drives for the web (or if you are a consumer and someone is using them as a selling point): no matter how fast the server, the bottleneck nowadays is usually at the viewer's end. You can send the data out lightning fast, but the "actual" speed they experience will likely depend more on their ISP and their computer's speed.

    PS: To answer the above, we will have to wait for time to tell, since servers aren't dropped like laptops. Same thing with drives; it's a trade-off: 15K is half again faster, but 10K means that many fewer revolutions on the components every minute (and less heat), and 7,200 RPM SATA IIs have proven very reliable over time.
    Last edited by DDT; 10-04-2009 at 08:53 PM.
    New Idea Hosting NO Overselling-Business-Grade, Shared Only! New-In House Design Team.
    High Speed & Uptime; , DIY Pro-Site Builder-Daily Backups-Custom Plans, All Dual Xeon Quad Intel servers w/ ECC DDR3 RAM SCSI RAID minimums.
    We Concentrate on Shared Hosting ...doing one thing and doing it VERY well

  8. #8
    I'm one of those weird guys who replaces the hard drive in his laptop once a year. Keep in mind that my laptop runs for at least 10 hours a day, 7 days a week, with lots of design work happening. The only reason I haven't gotten an SSD yet is cost: ~$500 for a 160GB Intel.

  9. #9
    Join Date
    Mar 2009
    Location
    Israel
    Posts
    1,212
    Awesome drives; they perform very well for database servers!

  10. #10
    Join Date
    Feb 2003
    Location
    Seattle, WA
    Posts
    541
    So far I have noticed that a single SSD can keep up with my RAID-10 array of 4x500GB Western Digital Greens. However, I get 1TB of disk space plus redundancy for about the price of a 120GB SSD.

  11. #11
    Join Date
    Dec 2007
    Location
    Indiana, USA
    Posts
    19,196
    The only thing I've used an SSD for in a server environment is MySQL database storage... It speeds up MySQL quite a bit and decreases latency, which is very nice, but that's only workable because each server needs only around 8GB or less of database storage on average.

    The performance was great, but it wasn't worth the cost when we could just run RAID-10 with 4 to 10 drives and see similar performance with much more storage.

  12. #12
    Join Date
    Sep 2007
    Location
    Albuquerque, NM
    Posts
    140
    I think once SSD becomes more mainstream it will eventually replace the standard drives of today. It just needs those companies who adopt the technology early, and it will move fairly fast from that point.

    Can't wait for a cheap 500GB SSD =)
    Anthony H. Webmaster of 3Dx Hosting
    Hosting: 3dxhosting.com & My Design Blog: Anthony Hays.com
    We provide low cost standard & custom shared hosting solutions.

  13. #13
    Join Date
    Dec 2007
    Location
    Indiana, USA
    Posts
    19,196
    A 500GB SLC SSD for cheap would be nice; a 500GB MLC SSD for a server??? No thanks...

  14. #14
    Join Date
    Aug 2004
    Location
    Canada
    Posts
    3,785
    Quote Originally Posted by MikeDVB View Post
    A 500GB SLC SSD for cheap would be nice; a 500GB MLC SSD for a server??? No thanks...

    A lot of MLC SSDs are still significantly faster than even SAS drives in RAID configurations. This is especially true for random reads, which is what servers basically do.

    The biggest worry would obviously be the write lifetime of the drive. How much writing are you actually doing? Even old articles about SSDs suggest that on, say, a 64GB drive you'd need to be writing 4GB/hour to wear it out. With the 500GB example that would work out to 31GB/hour or so. In most examples they give a safety factor of 30% or more as well, so it's not a sudden demise, and the wear-leveling technology has improved since a lot of those SSD articles came out. The example I gave yields only 3 years before the drive is toast; an SLC variant would work out to 30 years.
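    The 31GB/hour figure is just linear scaling: wear leveling spreads writes across all cells, so a drive tolerates writes in proportion to its capacity at the same per-cell wear rate. A quick check:

    ```python
    # Wear-leveled write tolerance scales linearly with capacity:
    # if a 64 GB drive tolerates 4 GB/hour, a 500 GB drive at the same
    # per-cell wear rate tolerates (500 / 64) * 4 GB/hour.
    def scaled_write_rate(base_capacity_gb: float, base_rate_gbph: float,
                          new_capacity_gb: float) -> float:
        return base_rate_gbph * new_capacity_gb / base_capacity_gb

    print(scaled_write_rate(64, 4, 500))  # → 31.25, i.e. "31GB/hour or so"
    ```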

    I could see ways you could keep running even past that lifetime. Run a RAID-1, say, and after the first year replace one drive. Then the two drives are a year apart in their wear level, so you'll never lose all the data anyway.
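    That staggered-replacement idea can be sketched as a timeline simulation. The 3-year wear-out horizon here is an assumed figure purely for illustration:

    ```python
    # Sketch: stagger a RAID-1 pair's wear by swapping one member early.
    # LIFETIME is an assumed wear-out horizon, purely for illustration.
    LIFETIME = 3  # years until a drive is assumed worn out

    ages = [0, 0]  # ages (in years) of the two mirror members
    events = []
    for year in range(1, 8):
        ages = [a + 1 for a in ages]
        if year == 1:
            ages[0] = 0  # preemptive swap: stagger the pair by one year
            events.append((year, "swap drive 0 early"))
        for i, age in enumerate(ages):
            if age >= LIFETIME:
                ages[i] = 0  # replace the worn drive; mirror stays intact
                events.append((year, f"replace worn drive {i}"))

    # After the early swap, wear-outs land in different years, so the
    # mirror never loses both members at once.
    for year, what in events:
        print(f"year {year}: {what}")
    ```

    The output shows wear-out replacements alternating in different years, which is the point: the pair never reaches end of life simultaneously.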

    I think it's more of a cost thing right now. If we could get 500GB SSDs at the cost of other drives it'd be an easy choice. Right now, though, that 500GB SSD is what, over $1,000? Get four of them, drop them into a RAID-10, and you'd be laughing. Although people are hitting the limits of 3.0Gbps SATA, so you might not gain much in peak transfer; the gain would be more in random I/O.
    Tony B. - Chief Executive Officer
    Hawk Host Inc. Proudly serving websites since 2004
    Quality Shared and Cloud Hosting
    PHP 5.2.x - PHP 8.1.X Support!

  15. #15
    Join Date
    Dec 2007
    Location
    Indiana, USA
    Posts
    19,196
    Quote Originally Posted by TonyB View Post
    A lot of MLC SSDs are still significantly faster than even SAS drives in RAID configurations. This is especially true for random reads, which is what servers basically do.
    It's reliability, not speed, that makes me prefer SLC over MLC.

    Quote Originally Posted by TonyB View Post
    The biggest worry would obviously be the write lifetime of the drive. How much writing are you actually doing? Even old articles about SSDs suggest that on, say, a 64GB drive you'd need to be writing 4GB/hour to wear it out. With the 500GB example that would work out to 31GB/hour or so. In most examples they give a safety factor of 30% or more as well, so it's not a sudden demise, and the wear-leveling technology has improved since a lot of those SSD articles came out. The example I gave yields only 3 years before the drive is toast; an SLC variant would work out to 30 years.
    Exactly - things are getting better, but no matter what, when you have multiple levels in a cell, if any of those levels fails the cell fails. It's like running RAID-0 with 12 drives: if any one of those 12 drives fails, the whole array goes down... If any of the levels of an MLC cell goes down, the whole cell goes down, although not as catastrophically as an entire RAID-0 array.

    Quote Originally Posted by TonyB View Post
    I could see ways you could keep running even past that lifetime. Run a RAID-1, say, and after the first year replace one drive. Then the two drives are a year apart in their wear level, so you'll never lose all the data anyway.
    You should *never* lose all of the data. As time goes on you simply lose capacity because, as I said before, they fail on the *write*, not on the read. So realistically, unless something really bad happens, you shouldn't ever lose data that is already on the drive.

    Quote Originally Posted by TonyB View Post
    I think it's more of a cost thing right now. If we could get 500GB SSDs at the cost of other drives it'd be an easy choice. Right now, though, that 500GB SSD is what, over $1,000?
    A 256GB SLC SSD was around $800 last I checked...

  16. #16
    Join Date
    Aug 2004
    Location
    Canada
    Posts
    3,785
    Quote Originally Posted by MikeDVB View Post
    Exactly - things are getting better, but no matter what, when you have multiple levels in a cell, if any of those levels fails the cell fails. It's like running RAID-0 with 12 drives: if any one of those 12 drives fails, the whole array goes down... If any of the levels of an MLC cell goes down, the whole cell goes down, although not as catastrophically as an entire RAID-0 array.
    I do not see the big deal here. You can lose bits on a mechanical drive as well; it's no different, and you have error correction for a reason. Wear leveling is designed to keep what you describe from happening except gradually, over a long time. If it happens early on, that would suggest a more troubling issue, such as complete drive failure.


    Quote Originally Posted by MikeDVB View Post
    You should *never* lose all of the data. As time goes on you simply lose capacity because, as I said before, they fail on the *write*, not on the read. So realistically, unless something really bad happens, you shouldn't ever lose data that is already on the drive.

    The point of this idea is that you will never have both drives fail at the same time. It's the same idea as when you build a RAID of mechanical drives and don't take them all from the same batch. With SSDs the wear-out issue would be a problem even if they were from different batches, so the solution I suggest spreads them out: as one reports it's hitting its end of life, you replace just that one, and then down the road you replace the other.

  17. #17
    Join Date
    Dec 2007
    Location
    Indiana, USA
    Posts
    19,196
    Quote Originally Posted by TonyB View Post
    I do not see the big deal here. You can lose bits on a mechanical drive as well; it's no different, and you have error correction for a reason. Wear leveling is designed to keep what you describe from happening except gradually, over a long time. If it happens early on, that would suggest a more troubling issue, such as complete drive failure.
    It's not losing bits that worries me; it's that if a cell does go, it takes many more bits with it on an MLC than on an SLC - bits that may not have been at the point of failure yet. Wear leveling is great and will prolong the life of both SLC and MLC drives, but an SLC drive is always going to outlast an MLC drive even with wear leveling. I'm not disagreeing with you, just pointing out different things.


    Quote Originally Posted by TonyB View Post
    The point of this idea is you will never have both drives that are going to fail at the same time. This is the same idea as when you do raid of mechanical drives you do not take them all from the same batch. With the SSD the failure issue would be a problem even if they were from different batches. So the solution I suggest spreads them out so as one reports it's hitting it's end of life you replace just the one. Then down the road you have to replace the other.
    Definitely a good idea; can't really argue with you there.

  18. #18
    Join Date
    Sep 2009
    Location
    Atlanta, GA
    Posts
    78
    Another thing that is bad about SSDs: with hard drives, if you mess up configuring your RAID, or more than one drive dies before you can get to the colo, you can still get the data back through recovery services (REALLY expensive, but it could totally save your job). If an SSD dies, or a RAID of SSDs fails, you are high and dry.

  19. #19
    Join Date
    Dec 2007
    Location
    Indiana, USA
    Posts
    19,196
    Quote Originally Posted by astarnes View Post
    Another thing that is bad about SSDs: with hard drives, if you mess up configuring your RAID, or more than one drive dies before you can get to the colo, you can still get the data back through recovery services (REALLY expensive, but it could totally save your job). If an SSD dies, or a RAID of SSDs fails, you are high and dry.
    You should do a tad more research on SSDs, imho (I've done extensive research on the technology)... The chance of an SSD failing outright is so slim it's not even really worth discussing. You're more likely to get hit by lightning in the next year than to have a single SSD fail outright without a loss of capacity beforehand to let you know it was on its way out - but we're not discussing how you should stay indoors, because it's not likely to happen. There are no mechanical moving parts in an SSD, and when the drive does fail, it happens slowly over time on writes (i.e., a sector is marked bad, capacity is reduced, and the data is written to another sector), so you won't lose anything.

    So the chances of your SSD RAID array failing catastrophically as you describe are astronomically low. Over time (years) it will gradually lose capacity, at which point you will have to replace drives if you need the space.

  20. #20
    Join Date
    Sep 2009
    Location
    Atlanta, GA
    Posts
    78
    Working in a computer assembly/repair sweatshop is the computer equivalent of being a doctor in the ER: you rapidly develop an extremely fatalistic view of computer hardware. With that said, I must first state that you are correct: SSD failure rates are DRASTICALLY lower than mechanical hard drive failure rates. For consumers who do not normally use RAID to store data, and would never consider the vulgarly expensive data recovery prices if their single disk did die, SSD is certainly a much better option.

    However, business-critical data is worth much, much more. Securing data is all about redundancy: the more of it you have, the safer your data is.

    What I was getting at was not the reliability of SSDs as storage devices, but the fact that you have significantly fewer emergency data recovery options available. With sufficient data redundancy being one of the most neglected things in computing, I don't think SSDs are going to help with people accidentally losing large quantities of important data. With that said, I also love them.

    Finally, I feel that I must address this: "You're more likely to get hit by lightning in the next year than to have a single SSD fail outright without a loss of capacity beforehand to let you know it was on its way out - but we're not discussing how you should stay indoors, because it's not likely to happen."

    I know people who have been hit by lightning. It does happen. What I am saying is, to use your metaphor: SSDs are like getting hit by lightning while stranded on an island. If you do get hit, you won't be going to the hospital, because there just isn't one. Then again, you probably won't get hit...

    I was going to mention this earlier, but I almost forgot: hardware failure is not the only cause of data loss on drives. Don't forget user error, both with RAID functions and with file deletion. SSDs make that data very hard to recover as well.

