
Thread: 1.5TB drives

  1. #1
    Join Date
    Mar 2007
    Posts
    402

    1.5TB drives

    I know there are a lot of experienced hardware guys on here, so I wanted some input on 1.5TB drives. Are they reliable enough to be used in non-mission critical storage servers? 99% of what we do is OEM (Dell) equipment, so I don't test raw hardware much these days.

    I've read a lot of negative things about Seagate lately. Can anyone chime in with specific models they've had positive or negative experiences with, from any vendor? Reading some reviews of the WD 1.5TB Caviar Black drives, there seem to be some weird issues with them going into a recovery cycle.
    iCall Carrier Services - Carrier-grade VoIP services from a licensed CLEC - http://carriers.icall.com
    Domestic termination and origination, toll-free origination, A-Z International termination, dedicated servers, and colocation in our wholly-owned datacenter
    Real-time ordering via our control panel or XML-based API with over 20,000 numbers in stock

  2. #2
    Join Date
    Oct 2009
    Location
    Houston, TX
    Posts
    88
    Quote Originally Posted by voipcarrier View Post
    I know there are a lot of experienced hardware guys on here, so I wanted some input on 1.5TB drives. Are they reliable enough to be used in non-mission critical storage servers? 99% of what we do is OEM (Dell) equipment, so I don't test raw hardware much these days.

    I've read a lot of negative things about Seagate lately. Can anyone chime in with specific models they've had positive or negative experiences with, from any vendor? Reading some reviews of the WD 1.5TB Caviar Black drives, there seem to be some weird issues with them going into a recovery cycle.
    I have used many of the Seagate drives of various capacities that have had high failure rates, but I waited until they started shipping with the "fixed" firmware. I have had no issues with these later batches, even in RAID. Generally speaking, if the drive has a firmware version starting with CC, it should be unaffected by the issues many have been reporting (this applies to any recent drive of any capacity).

    Dell sells the Seagate 1.5TB ST31500341AS model, to which all of the above applies. They sometimes have custom firmware for these drives when installed in a PowerEdge/PowerVault.
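
    For anyone who wants to script that check, here is a rough Python sketch (an assumption-laden example, not something from this thread) that calls smartctl from smartmontools and flags drives whose reported firmware version does not start with CC. The bare /dev/sdX paths and the ST3 model-prefix test are placeholders; adjust them for your own boxes, and note that drives behind a hardware RAID controller usually need smartctl's -d options.
    Code:
#!/usr/bin/env python3
# Rough sketch: flag drives whose firmware does not start with "CC".
# Assumes smartmontools is installed; device paths are hypothetical.
import re
import subprocess

def drive_identity(device):
    # smartctl -i prints identity fields such as "Device Model" and "Firmware Version"
    out = subprocess.run(["smartctl", "-i", device],
                         capture_output=True, text=True).stdout
    fields = dict(re.findall(r"^(Device Model|Firmware Version):\s*(.+)$", out, re.M))
    return fields.get("Device Model", "unknown"), fields.get("Firmware Version", "unknown")

for dev in ("/dev/sda", "/dev/sdb"):   # hypothetical device list
    model, fw = drive_identity(dev)
    if model.startswith("ST3") and not fw.startswith("CC"):
        print(f"{dev}: {model} firmware {fw} - check for a firmware update before RAID use")
    else:
        print(f"{dev}: {model} firmware {fw} - looks fine")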

  3. #3
    Join Date
    Feb 2006
    Location
    Kusadasi, Turkey
    Posts
    3,273
    I have a Seagate 1.5TB ST31500341AS in my home PC; it works faster than my 500 GB WD disks, and no problems so far. But note that this is not a server.

    Although it's better specced than 95% of hosting servers out there, with 4 cores, 8GB RAM, 4x SATA drives and 2x SSD drives. =)
    Fraud Record - Stop Fraud Clients, Report Abusive Customers.
    █ Combine your efforts to fight misbehaving clients.

    HarzemDesign - Highest quality, well designed and carefully coded hosting designs. Not cheap though.
    █ Large and awesome portfolio, just visit and see!

  4. #4
    Join Date
    Mar 2003
    Location
    Kansas City, Missouri
    Posts
    462
    Hello,

    We also wait for a drive to be "in production" for 2-3 months. This allows for kinks like the 1.5TB Seagates to be worked out. We have 12 or so of these 1.5's in production in RAID configurations with no issues.
    =>Admo.net Managed Hosting
    => Managed Hosting • Dedicated Servers • Colocation
    => Dark Fiber Access to 1102 Grand, Multiple Public Providers
    => Over •Sixteen• Years of Service

  5. #5
    Join Date
    Oct 2009
    Location
    Houston, TX
    Posts
    88
    Quote Originally Posted by AdmoNet View Post
    Hello,

    We also wait for a drive to be "in production" for 2-3 months. This allows for kinks like the 1.5TB Seagates to be worked out. We have 12 or so of these 1.5's in production in RAID configurations with no issues.
    One of the good things about the most recent Seagate drives is they are all basically the same drive, just with more platters. The higher capacity models that come later just need the firmware updates that were applicable to the previous models.

    So our waiting period for reliable drives should be getting shorter... hopefully there won't be one at all when they release a 2TB model.

  6. #6
    With the fixed firmware they are great. In the last 6 months there hasn't been a single failure in our 16-drive unit.
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  8. #8
    How about 2TB drives?

  9. #9
    Join Date
    Feb 2006
    Location
    Kusadasi, Turkey
    Posts
    3,273
    Quote Originally Posted by XFactorServers View Post
    How about 2TB drives?
    I believe they are 5900 RPM for now, not as fast as 7200 RPM drives.
    Fraud Record - Stop Fraud Clients, Report Abusive Customers.
    █ Combine your efforts to fight misbehaving clients.

    HarzemDesign - Highest quality, well designed and carefully coded hosting designs. Not cheap though.
    █ Large and awesome portfolio, just visit and see!

  10. #10
    Quote Originally Posted by Harzem View Post
    I believe they are 5900 RPM for now, not as fast as 7200 RPM drives.
    Pretty sure WD has one 2TB model that runs at 7,200 RPM, but it's like $300.

  11. #11
    Join Date
    Jun 2008
    Posts
    1,471
    Quote Originally Posted by XFactorServers View Post
    Pretty sure WD has one 2TB model that runs at 7,200 RPM, but it's like $300.
    Indeed, the Western Digital Caviar Black WD2001FASS 2TB 7200 RPM
    http://www.newegg.com/Product/Produc...82E16822136456

  12. #12
    Join Date
    Feb 2002
    Location
    New York, NY
    Posts
    4,612
    Seagate's initial batches of 1.5TB drives shipped with firmware that caused problems for some people under certain workloads, but that has long been fixed. We have several dozen Seagate 1.5TB drives in our storage servers, and have not had any problems.
    Scott Burns, President
    BQ Internet Corporation
    Remote Rsync and FTP backup solutions
    *** http://www.bqbackup.com/ ***

  13. #13
    Join Date
    Apr 2009
    Location
    Dallas/FortWorth TX
    Posts
    1,675
    Seagates are cheap now; today I saw a promo at Micro Center for just $149 for 1.5TB drives.
    IPStrada - When uptime counts.
    Warren Buffett: Honesty is a very expensive gift; don't expect it from cheap people.

  14. #14
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by IPStrada LLC View Post
    Seagates are cheap now; today I saw a promo at Micro Center for just $149 for 1.5TB drives.
    They have come down rather considerably in cost.

    The 2TB WD Black is a monster when it comes to speed (given its capacity). The big downside, however, is that WD has now disabled the ability to change the TLER values on the larger Black drives. I guess that really forces you into the RE4s if you want the additional stability in an array.

  15. #15
    You can't go wrong with the WD RE3/RE4s. Over the past couple of years I've installed many a WD enterprise drive and have come across only one issue that I can remember: a drive that was DOA.

  16. #16
    Quote Originally Posted by IPStrada LLC View Post
    Seagates are cheap now; today I saw a promo at Micro Center for just $149 for 1.5TB drives.
    At Fry's I saw 2TB drives for $179 a month ago. I think that is a better deal than $149 for 1.5TB.
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  17. #17
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by tulix View Post
    At Fry's I saw 2TB drives for $179 a month ago. I think that is a better deal than $149 for 1.5TB.
    Great drives for the desktop, but don't stick them in a RAID array; you'll be dropping drives like nobody's business.

  18. #18
    Quote Originally Posted by WireSix View Post
    Great drives for the desktop, but don't stick them in a RAID array; you'll be dropping drives like nobody's business.
    Actually we don't use them in the server environment yet, but RaidWeb has had units with them for several months already (SCSI/Fibre external storage units with up to 24 drives each). We were planning to get one of those for the next upgrade of one of our systems but haven't needed it yet.
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  19. #19
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by tulix View Post
    Actually we don't use them in the server environment yet, but RaidWeb has had units with them for several months already (SCSI/Fibre external storage units with up to 24 drives each). We were planning to get one of those for the next upgrade of one of our systems but haven't needed it yet.
    I'm sure they are using the RE4 series drives and not the cheap desktop versions that you or I would pick up at Fry's.

  20. #20
    Quote Originally Posted by WireSix View Post
    I'm sure they are using the RE4 series drives and not the cheap desktop versions that you or I would pick up at Fry's.
    Probably, I don't know. Since I wasn't getting them, I just looked at the price and the capacity and was amazed at the price.
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  21. #21
    Join Date
    Mar 2009
    Posts
    534
    Quote Originally Posted by WireSix View Post
    The 2TB WD Black is a monster when it comes to speed (given its capacity). The big downside, however, is that WD has now disabled the ability to change the TLER values on the larger Black drives. I guess that really forces you into the RE4s if you want the additional stability in an array.
    Oh man, that really stinks that WD blocked the TLER utility. Hopefully someone will figure out a way around that.

    --Chris
    The Object Zone - Your Windows Server Specialists for more than twelve years - http://www.object-zone.net/
    Services: Contract Server Management, Desktop Support Services, IT/VoIP Consulting, Cloud Migration, and Custom ASP.net and Mobile Application Development

  22. #22
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by ObjectZone View Post
    Oh man, that really stinks that WD blocked the TLER utility. Hopefully someone will figure out a way around that.

    --Chris
    They haven't; we've hit this issue with a number of clients that wanted RAID on Black 1TBs.

  23. #23
    Join Date
    Mar 2009
    Posts
    534
    Quote Originally Posted by WireSix View Post
    They haven't; we've hit this issue with a number of clients that wanted RAID on Black 1TBs.
    Recently, I was able to turn on TLER for a pair of Black 750's. Dang.

    So much for there being any other firmware differences between a RE and a standard drive. By blocking use of the TLER tool, they've basically confirmed that there's no other difference and they only want to make more money.

    --Chris
    The Object Zone - Your Windows Server Specialists for more than twelve years - http://www.object-zone.net/
    Services: Contract Server Management, Desktop Support Services, IT/VoIP Consulting, Cloud Migration, and Custom ASP.net and Mobile Application Development

  24. #24
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by ObjectZone View Post
    Recently, I was able to turn on TLER for a pair of Black 750's. Dang.

    So much for there being any other firmware differences between a RE and a standard drive. By blocking use of the TLER tool, they've basically confirmed that there's no other difference and they only want to make more money.

    --Chris
    Yup, it literally was the only difference. It's amazing what a price premium you can charge for the same thing by crippling another version of it and selling it for less.
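
    As a side note for anyone running into the TLER lockout discussed above: on drives that still expose it, the equivalent behaviour can be read, and sometimes set, through SCT Error Recovery Control using smartctl's scterc log. Below is a hedged sketch; the device path is a placeholder, the 70/70 values (7.0 seconds) are the figures commonly associated with the RE series, and drives where the feature is blocked will simply report it as unsupported.
    Code:
#!/usr/bin/env python3
# Hedged sketch: query, and optionally set, SCT Error Recovery Control
# (the mechanism TLER corresponds to) via smartctl from smartmontools.
# Timer values are in tenths of a second; 70 means 7.0 seconds.
import subprocess

def show_erc(device):
    out = subprocess.run(["smartctl", "-l", "scterc", device],
                         capture_output=True, text=True).stdout
    print(out.strip())

def set_erc(device, read_tenths=70, write_tenths=70):
    # Drives that block the feature will report it as unsupported rather than change it.
    subprocess.run(["smartctl", "-l", f"scterc,{read_tenths},{write_tenths}", device])

show_erc("/dev/sda")        # hypothetical device path
# set_erc("/dev/sda")       # uncomment to try setting 7s/7s read/write timeouts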

  25. #25
    Join Date
    Apr 2009
    Location
    whitehouse
    Posts
    656
    2TB at 7200 RPM does not seem ideal. They should jack up the revs proportionally as well to keep I/O times down in a production environment.
    Quote Originally Posted by XFactorServers View Post
    Pretty sure WD has one 2TB model that runs at 7,200 RPM, but it's like $300.
    James B
    Ezeelogin - Setup your Secure Linux SSH Gateway.
    |Manage & Administer Multiple Linux Servers Quickly & Securely.

  26. #26
    Join Date
    Jan 2001
    Location
    Miami, FL
    Posts
    1,072
    WD Caviar Black has our vote.
    Biznesshosting, Inc. DBA VOLICO - Intelligent Hosting Solutions
    East Coast Enterprise Dedicated Servers and Miami Colocation.
    managed and unmanaged dedicated servers. High bandwidth colocation. Managed clusters.

  27. #27
    Quote Originally Posted by BarackObama View Post
    2TB at 7200 RPM does not seem ideal. They should jack up the revs proportionally as well to keep I/O times down in a production environment.
    Yeah, it probably isn't time yet. Stick to the 1TB drives, IMO.
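
    To put rough numbers behind the spindle-speed point quoted above, here is a back-of-envelope sketch (my own arithmetic, not a vendor figure): average rotational latency is half a revolution, and a crude random-IOPS estimate is one request per (average seek time + rotational latency). The 8.5 ms seek time is an assumed value.
    Code:
# Back-of-envelope sketch: how spindle speed feeds into random I/O capability.
def rough_random_iops(rpm, avg_seek_ms=8.5):       # 8.5 ms average seek is an assumption
    rotational_latency_ms = (60_000 / rpm) / 2     # half a revolution, in milliseconds
    return 1000 / (avg_seek_ms + rotational_latency_ms)

for rpm in (5400, 5900, 7200, 10_000):
    print(f"{rpm:>6} RPM -> roughly {rough_random_iops(rpm):.0f} random IOPS per drive")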

  28. #28
    Join Date
    May 2005
    Location
    London, United Kingdom
    Posts
    388
    It would be interesting to see the rebuild time for a failed 2TB drive in RAID 5.
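
    A quick back-of-envelope answer, as a sketch with assumed rebuild rates rather than measured ones: a rebuild has to touch every sector of the replacement drive, so the floor is capacity divided by whatever sustained rate the controller actually manages while still serving other I/O.
    Code:
# Rough rebuild-time estimate for a replaced 2TB drive; the MB/s figures are
# assumptions standing in for an idle array, a busy array, and a rebuild
# throttled down to low priority.
def rebuild_hours(capacity_tb, rebuild_mb_per_s):
    capacity_mb = capacity_tb * 1_000_000          # decimal terabytes, as drives are sold
    return capacity_mb / rebuild_mb_per_s / 3600

for rate_mb_s in (100, 30, 10):
    print(f"2TB at {rate_mb_s:>3} MB/s -> about {rebuild_hours(2, rate_mb_s):.1f} hours")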

  29. #29
    Join Date
    Jan 2001
    Location
    Miami, FL
    Posts
    1,072
    That rebuild can be over a day if the I/O load is high.
    Biznesshosting, Inc. DBA VOLICO - Intelligent Hosting Solutions
    East Coast Enterprise Dedicated Servers and Miami Colocation.
    managed and unmanaged dedicated servers. High bandwidth colocation. Managed clusters.

  30. #30
    Join Date
    Mar 2009
    Posts
    534
    Quote Originally Posted by bizness View Post
    That rebuild can be over a day if the I/O load is high.
    All the more reason to use RAID 6/ADG instead then, I guess. :-)

    --Chris
    The Object Zone - Your Windows Server Specialists for more than twelve years - http://www.object-zone.net/
    Services: Contract Server Management, Desktop Support Services, IT/VoIP Consulting, Cloud Migration, and Custom ASP.net and Mobile Application Development

  31. #31
    With a low-priority rebuild on an external RAID 6 unit with 1.5TB drives, it took 2 weeks to rebuild after one failed drive.
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  32. #32
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by tulix View Post
    With a low-priority rebuild on an external RAID 6 unit with 1.5TB drives, it took 2 weeks to rebuild after one failed drive.
    The ever increasing size of drives and the ever lagging rebuild times just continue to make the case for deploying clustered, high-availability network storage. Take for example a LeftHand (HP) Networks iSCSI SAN.

    Not only do you have host-based RAID with your standard SAS/SATA disks and their associated controllers, but you have network RAID and levels of replication on top of that. During a lower I/O period you could actually drop a physical host node out of your storage cluster, allow the rebuild to take place with 100% I/O availability, and then re-add the unit to the network. Assuming you have at least two copies of the data in the network, this shouldn't impact your overall data availability but should provide a substantially faster local RAID rebuild.

    There would then be the associated network rebuild with the rest of the cluster, but that again would have dedicated I/O, so it would be a quick rejoin as opposed to 1, 2, 3, 4 days or even weeks of rebuild time.

    The ability of the LeftHand/HP platform in particular to deal with faults is one of the primary reasons we selected that platform for our storage. It eliminates the dependence upon hardware you would have in a traditional disk system, with say a failing backplane in an external shelf, a bad RAID card, or even a loss of power to half the backplane in a SuperMicro chassis. We've seen it all happen before.

  33. #33
    Quote Originally Posted by WireSix View Post
    The ever increasing size of drives and the ever lagging rebuild times just continue to make the case for deploying clustered, high-availability network storage. Take for example a LeftHand (HP) Networks iSCSI SAN.

    Not only do you have host-based RAID with your standard SAS/SATA disks and their associated controllers, but you have network RAID and levels of replication on top of that. During a lower I/O period you could actually drop a physical host node out of your storage cluster, allow the rebuild to take place with 100% I/O availability, and then re-add the unit to the network. Assuming you have at least two copies of the data in the network, this shouldn't impact your overall data availability but should provide a substantially faster local RAID rebuild.

    There would then be the associated network rebuild with the rest of the cluster, but that again would have dedicated I/O, so it would be a quick rejoin as opposed to 1, 2, 3, 4 days or even weeks of rebuild time.

    The ability of the LeftHand/HP platform in particular to deal with faults is one of the primary reasons we selected that platform for our storage. It eliminates the dependence upon hardware you would have in a traditional disk system, with say a failing backplane in an external shelf, a bad RAID card, or even a loss of power to half the backplane in a SuperMicro chassis. We've seen it all happen before.
    It is all very interesting; we are exploring a somewhat alternative approach to storage/computing architecture, but the problem is the high cost, a cost that one way or another you have to pass on to your customers. More complex systems cost even more. Just a real commercial cluster file system runs about $64,000. Of course there are other, cheaper solutions where data is duplicated above the drive/cluster level, but then you have a patched-together solution and still a high cost. It is like saying a Bentley Azure is a nice car (I saw one yesterday; you can too if you go to Buckhead), but it costs $369,000.

    What is also bad about a lot of clustered solutions is that they only handle relatively low loads well ("relatively" meaning much more hardware compared to plain DASD). To make it more understandable, imagine running a backup of Facebook. With high I/O, almost any clustered environment gets "screwed up" unless you add a lot of hardware. Example: we had to split one of our MySQL clusters because the synchronization load was too high. The other option was to add 4 more servers (and still potentially have problems).
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  34. #34
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    Quote Originally Posted by tulix View Post
    It is all very interesting; we are exploring a somewhat alternative approach to storage/computing architecture, but the problem is the high cost, a cost that one way or another you have to pass on to your customers. More complex systems cost even more. Just a real commercial cluster file system runs about $64,000. Of course there are other, cheaper solutions where data is duplicated above the drive/cluster level, but then you have a patched-together solution and still a high cost. It is like saying a Bentley Azure is a nice car (I saw one yesterday; you can too if you go to Buckhead), but it costs $369,000.

    What is also bad about a lot of clustered solutions is that they only handle relatively low loads well ("relatively" meaning much more hardware compared to plain DASD). To make it more understandable, imagine running a backup of Facebook. With high I/O, almost any clustered environment gets "screwed up" unless you add a lot of hardware. Example: we had to split one of our MySQL clusters because the synchronization load was too high. The other option was to add 4 more servers (and still potentially have problems).
    Well, the LeftHand solution is fully distributed and you can easily add units to scale both capacity and performance (at very reasonable cost), and you can do this at near-DASD pricing levels. Knowing which DASD units you are referring to, I'll tell you that, if negotiated properly, you could easily deploy a higher-availability, faster storage solution on the LeftHand platform.

  35. #35
    Join Date
    Feb 2004
    Location
    Atlanta, GA
    Posts
    5,627
    I should add that over time we've done deployments on other clustered file systems such as Hadoop, Gluster, etc., and while they have their advantages and disadvantages, ultimately for us they were all dependent upon less than desirable connectivity methods, i.e. not native file systems. Some of the others like GFS and OCFS, while offering the clustering ability, didn't scale well enough on very large data sets, nor did they have the replication/availability components we wanted. We have a particular client with ~40TB of online data, all very small images. Unfortunately, due to their software platform, while something like Hadoop would have been ideal, they weren't able to handle the cost of modifying their platform to work with a system like Hadoop. GFS would simply crumble under the load of an ls in some paths of their directory structure.

  36. #36
    Quote Originally Posted by WireSix View Post
    Well, the LeftHand solution is fully distributed and you can easily add units to scale both capacity and performance (at very reasonable cost), and you can do this at near-DASD pricing levels. Knowing which DASD units you are referring to, I'll tell you that, if negotiated properly, you could easily deploy a higher-availability, faster storage solution on the LeftHand platform.
    I don't think that hard drives, RAID hardware, and a software layer across multiple fault-tolerant devices could be cheaper than just a drive with RAID hardware. Negotiation skills are always very valuable and can be applied to any solution, DASD or not.

    Regarding distributed storage, we are looking into a completely alternative computing/storage platform - not VMware/VPS based. We are evaluating whether we can use that platform in a shared environment only or whether it will be good for our HPC environment too. And of course it comes with additional cost ;(
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  37. #37
    Quote Originally Posted by WireSix View Post
    I should add that over time we've done deployments on other clustered file systems such as Hadoop, Gluster, etc., and while they have their advantages and disadvantages, ultimately for us they were all dependent upon less than desirable connectivity methods, i.e. not native file systems. Some of the others like GFS and OCFS, while offering the clustering ability, didn't scale well enough on very large data sets, nor did they have the replication/availability components we wanted. We have a particular client with ~40TB of online data, all very small images. Unfortunately, due to their software platform, while something like Hadoop would have been ideal, they weren't able to handle the cost of modifying their platform to work with a system like Hadoop. GFS would simply crumble under the load of an ls in some paths of their directory structure.
    We were not able to find a solution for Hadoop in a shared environment - that's why we decided not to go with it. We've tested Oracle's clustered FS and GFS (I think that was part of our hosting division at that point in time) - not enough scalability, at least compared to XFS.
    We've also looked at Ubuntu - you can't get any single node bigger than the physical node - not good for us for that reason.
    They all have pros and cons. You know that we were using large storage devices a long time ago - I can't elaborate more at WHT. We also were and are using devices with huge I/O traffic, and this is where all the "nice and cool" cluster solutions break. Actually even proxy solutions break - it all depends on the traffic pattern.
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  38. #38
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by tulix View Post
    We were not able to find a solution for Hadoop in a shared environment - that's why we decided not to go with it. We've tested Oracle's clustered FS and GFS (I think that was part of our hosting division at that point in time) - not enough scalability, at least compared to XFS.
    We've also looked at Ubuntu - you can't get any single node bigger than the physical node - not good for us for that reason.
    They all have pros and cons. You know that we were using large storage devices a long time ago - I can't elaborate more at WHT. We also were and are using devices with huge I/O traffic, and this is where all the "nice and cool" cluster solutions break. Actually even proxy solutions break - it all depends on the traffic pattern.
    FYI, Ubuntu is a Linux distribution, not a file system of any sort.

  39. #39
    Quote Originally Posted by Spudstr View Post
    FYI, Ubuntu is a Linux distribution, not a file system of any sort.
    Really? I am sorry, I meant the Ubuntu cloud solution - sorry again that I have to clarify that; I thought it was kind of obvious given the topic...
    Professional Streaming services - http://www.tulix.com - info at tulix.com
    Double optimized (AS36820) network, best for live streaming/VoIP/gaming
    The best quality network - AS7219

  40. #40
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,132
    Quote Originally Posted by tulix View Post
    Really? I am sorry, I meant the Ubuntu cloud solution - sorry again that I have to clarify that; I thought it was kind of obvious given the topic...

    Still not a file system; that's just them packaging in Eucalyptus and calling it a cloud.

    Still not a global file system or any sort of clustered file system.
