  1. #41
    It is only possible to change the stripe size when the raid array is created, so this has to be done before the OS is installed.
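With Linux software raid, for example, the chunk size is set with mdadm's --chunk flag at array creation time. A sketch (the device names and drive count here are placeholders, not taken from this thread):
Code:
# create a 4-drive raid 10 with a 2MB (2048KB) chunk; the chunk size is fixed from here on
mdadm --create /dev/md2 --level=10 --raid-devices=4 --chunk=2048 /dev/sd[abcd]3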
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  2. #42
    Join Date
    Mar 2003
    Location
    Kansas City, Missouri
    Posts
    462
    Hello-

For file hosting, why is everyone recommending RAID-10? RAID-10 is great for database I/O workloads where there is great demand for random I/O, for both reads and writes. In a file-server scenario, most of your I/O will be read-related. Why not just use RAID-5 with large disks? When performing large read operations, RAID-5 acts like RAID-0. Write operations will be slower, but it really just depends on the I/O characteristics of the users. Will there be large amounts of data coming into or leaving the system? I imagine more of the data will be outbound (read from the array) than inbound.

I guess it just depends on what files are hosted and how often they are read and written.

    Good luck
    =>Admo.net Managed Hosting
    => Managed Hosting • Dedicated Servers • Colocation
    => Dark Fiber Access to 1102 Grand, Multiple Public Providers
    => Over •Sixteen• Years of Service

  3. #43
    Quote Originally Posted by AdmoNet View Post
    Hello-

For file hosting, why is everyone recommending RAID-10? RAID-10 is great for database I/O workloads where there is great demand for random I/O, for both reads and writes. In a file-server scenario, most of your I/O will be read-related. Why not just use RAID-5 with large disks? When performing large read operations, RAID-5 acts like RAID-0. Write operations will be slower, but it really just depends on the I/O characteristics of the users. Will there be large amounts of data coming into or leaving the system? I imagine more of the data will be outbound (read from the array) than inbound.

I guess it just depends on what files are hosted and how often they are read and written.

    Good luck
    Certainly if you're tight on disk space and you are mostly doing reads, raid 5 can work for this use case.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  4. #44
    Join Date
    Sep 2005
    Location
    Albany, NY
    Posts
    3,754
    I don't really have anything else to contribute to this thread since it has been so meticulously dissected by the posters. I just wanted to praise the people, especially Gabe, who contributed to this thread. It's truly helpful, not only for the OP, but for anyone else who reads this in the future. I hope more threads like this pop up on WHT instead of the usual bashing.
    AYKsolutions.com - High Bandwidth Specialists - 100TB/1Gbps/10Gbps Unmetered
    Over 20 Global Locations - Asia, Mexico, Brazil, Australia, US, CA, EU - Bare Metal and Virtual Cloud. All Managed.
    View our current Specials.
    We are Professional. Painless. Polite.

  5. #45
    Join Date
    Dec 2007
    Posts
    262
Are read and write performance for any given raid system independent of each other? For example, will read performance be affected in a RAID 5 system if all of a sudden there is a spike in writes (uploads)?

From my experience, writes for a file host are 1/5th or 1/10th of read requests at any given time (please see attached).
Attached Thumbnails: raid0.png

  6. #46
You have a certain number of i/o operations per second, per drive, to work with for both reads and writes together. With a raid 5, a read (when the stripe size is large enough and the read request small enough) will usually use one i/o operation on a single drive. A write on a raid 5 will cause both a read and a write to be performed on every drive in the array, which uses up a lot of your 'iops budget', even with relatively few writes. A write on a raid 10 will hit two disks: two iops. On a 4 drive raid 5, a write will do two iops per drive, or 8 iops total. So raid 5 will dramatically hurt performance unless the number of writes is very near zero. Even with writes being only 10% of disk requests, they would be taking up about 50% of all disk activity.
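To put rough numbers on that last point (a quick back-of-the-envelope check, assuming the 4 drive raid 5 above and 100 requests split 90 reads / 10 writes):
Code:
# reads:  90 requests x 1 iop            = 90 iops
# writes: 10 requests x 8 iops (raid 5)  = 80 iops
# share of all disk activity spent on writes:
echo $((80 * 100 / (90 + 80)))   # prints 47, i.e. roughly half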
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  7. #47
    Join Date
    Dec 2007
    Posts
    262
    Thank you.

  8. #48
    Join Date
    Dec 2007
    Posts
    262
    Quote Originally Posted by funkywizard View Post
    4) Disable "atimes" in /etc/fstab
Hi, nano /etc/fstab displays the following information. I don't see noatime.

    Code:
    # /etc/fstab
    # Created by anaconda on Thu Aug  2 12:12:55 2012
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    UUID=f254e85b-65f3-48db-a35e-77f0b8e7caec /                       ext4    defaults        1 1
    UUID=5f566899-e8d1-4af8-b312-b12547f94b0a /boot                   ext2    defaults        1 2
    UUID=9c2cc467-f8a8-4768-8a89-dd6f6efcf405 swap                    swap    defaults        0 0
    tmpfs                   /dev/shm                tmpfs   defaults        0 0
    devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
    sysfs                   /sys                    sysfs   defaults        0 0
    proc                    /proc                   proc    defaults        0 0
    /dev/md2 is the raid0
    Code:
    # blkid
    /dev/md2: UUID="f254e85b-65f3-48db-a35e-77f0b8e7caec" TYPE="ext4"
Should I change the fstab file to something like this? Do I need all these options, or just noatime?

    Code:
UUID=f254e85b-65f3-48db-a35e-77f0b8e7caec /               ext4    errors=remount-ro,discard,noatime,nodiratime 1 1
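(For reference, a minimal version of that line, assuming the only goal is to stop atime updates: noatime already covers directory atimes, making nodiratime redundant, and discard only applies to SSDs.)
Code:
UUID=f254e85b-65f3-48db-a35e-77f0b8e7caec /               ext4    defaults,noatime        1 1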

  9. #49
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,267
    Quote Originally Posted by funkywizard View Post
Yes, they would need to reinstall the OS, making sure to create the raid array before going into the OS installer, because the OS installer won't support modifying the stripe size. Actually, there's no excuse for a 128k stripe, because the default is 256k, so I don't know how they messed that one up so badly.
I looked at it; they used RAID SATA mode and set up an Intel fake raid.

  10. #50
    Quote Originally Posted by Steven View Post
I looked at it; they used RAID SATA mode and set up an Intel fake raid.
Strange, then, that mdadm reports a raid array is present.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  11. #51
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,267
    Quote Originally Posted by funkywizard View Post
Strange, then, that mdadm reports a raid array is present.
    Yeah, I don't know. :-/

  12. #52
    Join Date
    Apr 2009
    Posts
    1,143
Your 1TB might do around 30 megs max if it's multiple users downloading. If you add users uploading as well, you'll most likely end up closer to maxing a 100Mbit link. Raid 10 for storage/serving files and an SSD disk for your encoding - imo

  13. #53
    Join Date
    Dec 2007
    Posts
    262
    Quote Originally Posted by mazedk View Post
Your 1TB might do around 30 megs max if it's multiple users downloading. If you add users uploading as well, you'll most likely end up closer to maxing a 100Mbit link. Raid 10 for storage/serving files and an SSD disk for your encoding - imo
That was the case before funkywizard of IOFLOOD.com (strongly recommended if you want a host that knows i/o performance) and others helped me with this. If you look at my atop output you can see I am doing more than 30 megs.

  14. #54
    Join Date
    Apr 2009
    Posts
    1,143
    Yeah, saw that - was on my iphone in bed just browsing and a bit tired. My bad
    /maze

  15. #55
    Join Date
    Sep 2010
    Location
    Behind you...
    Posts
    355
    Interesting topic about the different raid advantages and disadvantages!
    file1.info :: 50GB secure cloudstorage with filemanager

  16. #56
    Join Date
    Dec 2007
    Posts
    262
    Hello all,

I have gotten a new server, and the host agreed to install software RAID 1 (not fakeraid) with a 2MB stripe size.

Now that they have finished, I get the following output:

    Code:
[root@server ~]# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sda1[0] sdb1[1]
          511988 blocks super 1.0 [2/2] [UU]
    
    md1 : active raid1 sdb2[1] sda2[0]
          1953000316 blocks super 1.1 [2/2] [UU]
          [==>..................]  resync = 11.8% (231827264/1953000316) finish=226.8min speed=126443K/sec
          bitmap: 14/15 pages [56KB], 65536KB chunk
    
    unused devices: <none>
What have they done now!
1. Isn't this chunk size way bigger than the requested 2MB?
2. Also, what does the resync and bitmap info say about this setup?


    Thanks in advance

  17. #57
    Quote Originally Posted by p2prockz View Post
    Hello all,

I have gotten a new server, and the host agreed to install software RAID 1 (not fakeraid) with a 2MB stripe size.

Now that they have finished, I get the following output:

    Code:
[root@server ~]# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sda1[0] sdb1[1]
          511988 blocks super 1.0 [2/2] [UU]
    
    md1 : active raid1 sdb2[1] sda2[0]
          1953000316 blocks super 1.1 [2/2] [UU]
          [==>..................]  resync = 11.8% (231827264/1953000316) finish=226.8min speed=126443K/sec
          bitmap: 14/15 pages [56KB], 65536KB chunk
    
    unused devices: <none>
What have they done now!
1. Isn't this chunk size way bigger than the requested 2MB?
2. Also, what does the resync and bitmap info say about this setup?


    Thanks in advance
Raid 1 doesn't have a stripe size, because there is no striping, only mirroring. The 65536KB chunk it reports is the write-intent bitmap's chunk size, not a stripe size, so that's a misleading piece of information there.
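If you want to see what md itself reports for an array, these are standard commands and safe to run on a live array:
Code:
mdadm --detail /dev/md1   # full view of one array; raid 1 reports no chunk/stripe size
cat /proc/mdstat          # quick summary of all md arrays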
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  18. #58
    Join Date
    Dec 2007
    Posts
    262
    Quote Originally Posted by funkywizard View Post
Raid 1 doesn't have a stripe size, because there is no striping, only mirroring. The 65536KB chunk it reports is the write-intent bitmap's chunk size, not a stripe size, so that's a misleading piece of information there.

Is there any specific raid 1 setup that would help in our case (video hosting for large files, 100MB and up)?

    Thanks again.

  19. #59
    Quote Originally Posted by p2prockz View Post
Is there any specific raid 1 setup that would help in our case (video hosting for large files, 100MB and up)?

    Thanks again.
The only configuration change for raid 1 that would be helpful is to increase the linux readahead value to 512k, which you can do with the following commands:

    blockdev --setra 1024 /dev/sda
    blockdev --setra 1024 /dev/sdb
blockdev --setra 1024 /dev/sda1
blockdev --setra 1024 /dev/sda2
blockdev --setra 1024 /dev/sdb1
blockdev --setra 1024 /dev/sdb2
    blockdev --setra 1024 /dev/md0
    blockdev --setra 1024 /dev/md1

    etc (do this essentially for each partition and each raid volume)

    The above won't "stick" between reboots, so you can make this change permanent by editing /etc/rc.d/rc.local and adding those commands above to it.
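For example, the whole list can go into rc.local as a single loop (a sketch, assuming the same device names as above):
Code:
# /etc/rc.d/rc.local -- re-apply 512KB readahead (1024 x 512-byte sectors) at boot
for dev in /dev/sd[ab] /dev/sd[ab][12] /dev/md[01]; do
    blockdev --setra 1024 "$dev"
done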
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  20. #60
    Join Date
    Dec 2007
    Posts
    262
    Quote Originally Posted by funkywizard View Post
The only configuration change for raid 1 that would be helpful is to increase the linux readahead value to 512k, which you can do with the following commands:

    blockdev --setra 1024 /dev/sda
    blockdev --setra 1024 /dev/sdb
blockdev --setra 1024 /dev/sda1
blockdev --setra 1024 /dev/sda2
blockdev --setra 1024 /dev/sdb1
blockdev --setra 1024 /dev/sdb2
    blockdev --setra 1024 /dev/md0
    blockdev --setra 1024 /dev/md1

    etc (do this essentially for each partition and each raid volume)

    The above won't "stick" between reboots, so you can make this change permanent by editing /etc/rc.d/rc.local and adding those commands above to it.
    Thanks for the help!

  21. #61
Just to be clear, that list needs to cover both drives: sda1 / sda2 on the first disk, and sdb1 / sdb2 on the second.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  22. #62
    Join Date
    Mar 2003
    Posts
    352
Sorry for jumping in. Assuming I am only serving a small number of files, and the total size of all files (100MB) fits well within the system memory capacity (512MB):

Could I then assume that all files will be served from cache/RAM, and that the speed of the disk therefore does not come into the equation?

Would CPU speed ever become a bottleneck? Let's say I am only running on a single-core Atom.

  23. #63
    Quote Originally Posted by ayksolutions View Post
    I don't really have anything else to contribute to this thread since it has been so meticulously dissected by the posters. I just wanted to praise the people, especially Gabe, who contributed to this thread. It's truly helpful, not only for the OP, but for anyone else who reads this in the future. I hope more threads like this pop up on WHT instead of the usual bashing.
    I concur; reading this thread was very informative. Thanks everyone for your contribution here.

  24. #64
    Join Date
    Dec 2007
    Posts
    262
    Hello, I am back here to get some of your expertise regarding a RAID 10 setup.

It was pointed out that a stripe size of 2MB is recommended for software RAID 10, and that for hardware RAID 10 we should go with the maximum stripe size allowed by the controller.

I have an issue where I paid $$ for a HW raid controller which only supports a max stripe size of 256K.

    What should I do in this case to get the best performance?
    Should I cancel the Hardware RAID controller and go with software raid?

    Thanks in advance.

  25. #65
    Quote Originally Posted by p2prockz View Post
    Hello, I am back here to get some of your expertise regarding a RAID 10 setup.

It was pointed out that a stripe size of 2MB is recommended for software RAID 10, and that for hardware RAID 10 we should go with the maximum stripe size allowed by the controller.

I have an issue where I paid $$ for a HW raid controller which only supports a max stripe size of 256K.

    What should I do in this case to get the best performance?
    Should I cancel the Hardware RAID controller and go with software raid?

    Thanks in advance.
    If you're primarily working with large files with sequential access, then definitely yes. For large numbers of simultaneous reads of large files, a 2MB stripe and a 512k readahead will provide at least 4x the performance of a 256k stripe.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  26. #66
    Join Date
    Dec 2007
    Posts
    262
    Thanks a lot for the reply. Wish they had a thanks button for profiles.

  27. #67
    Glad to help : )
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  28. #68
    Join Date
    Jul 2012
    Location
    Europe
    Posts
    21
Wow, it depends on many things. It's very hard for me to answer right away; I might get back to this one.

  29. #69
    Join Date
    Dec 2007
    Posts
    262

    RAID 10 resync

Hi all, I had software raid 10 set up and I see the array is re-syncing. Is resyncing only done when repairing the array, or is it normal for it to run for the first time after the raid 10 installation?



    Code:
[root@server ~]# cat /proc/mdstat
    Personalities : [raid10] [raid1]
    md0 : active raid1 sda1[0] sdc1[2] sdd1[3] sdb1[1]
          487360 blocks [4/4] [UUUU]
    
    md2 : active raid10 sdc3[2] sda3[0] sdb3[1] sdd3[3]
          5843927040 blocks super 1.2 2048K chunks 2 near-copies [4/4] [UUUU]
          [===>.................]  resync = 15.3% (896459072/5843927040) finish=74870.8min speed=1100K/sec
    
    md1 : active raid10 sdc2[2] sda2[0] sdb2[1] sdd2[3]
          15621120 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
    
    unused devices: <none>
What kind of commands can I use to get status related to the RAID 10? Can I get information directly from the HW controller? I am very new to RAID, so any reading material would be helpful.

    Thanks.

  30. #70
    Join Date
    Dec 2007
    Posts
    262
    Quote Originally Posted by funkywizard View Post
    Maybe, but there really isn't a whole lot to it:

    1) If using software raid 10, make sure the raid stripe is 2MB
    2) If using hardware raid 10, make sure the raid stripe is as large as the raid controller will allow
    3) Set the linux readahead to 1/4 of the raid stripe (if using raid 10), or to 512k (if using raid 1 or no raid at all)
    4) Disable "atimes" in /etc/fstab
    5) All else being equal, more ram, more hard drives, or faster rpm drives, are better than less ram, fewer hard drives, or slower rpm drives

    Hello again,

Just wondering, would it be beneficial to set a stripe size even larger than 2MB? For example, since we have files larger than 100MB, can we set the stripe size to, let's say, 4MB or 8MB?

    Thanks.

  31. #71
    Quote Originally Posted by p2prockz View Post
    Hello again,

Just wondering, would it be beneficial to set a stripe size even larger than 2MB? For example, since we have files larger than 100MB, can we set the stripe size to, let's say, 4MB or 8MB?

    Thanks.
There is no additional performance benefit when you use a readahead larger than 512KB, and in fact you can lose performance in some cases, because you'll have to read the same data more than once if your readaheads are taking up too much ram. With a 2MB stripe and a 512k readahead, approximately 25% of your read requests will cross a stripe boundary, so 75% of the time one read request causes one disk i/o on one drive, and 25% of the time you'll get one disk i/o on each of two drives. A 4MB stripe should drop this to 12.5%. So for 10 reads, instead of 12.5 i/o requests, you'd have 11.25 i/o requests. So a 4MB stripe might potentially be a little faster in this use pattern than a 2MB stripe, but it's a pretty small difference, and there might be a performance penalty from having the stripe be too large, so I'd say 2MB is a safe bet. You could use 4MB if you want, and I would expect that to be fine too, but I haven't tested it. Anything above 4MB won't do much for you when you have a 512k readahead, and a readahead above 512k won't do much for you either.
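If you want to sanity-check that arithmetic, it's just the ratio of readahead to stripe size (a quick one-liner, nothing system-specific):
Code:
awk 'BEGIN { ra = 512; for (s = 2048; s <= 4096; s *= 2)
  printf "stripe %4dk: boundary crossings %.1f%%, i/o per 10 reads %.2f\n", s, 100*ra/s, 10*(1 + ra/s) }'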
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  32. #72
    Join Date
    Dec 2007
    Posts
    262
    Thank you again!
