  1. #1

    Random I/O Testing Linux

Can anybody recommend a free Linux utility that will do completely random reads/writes to a raw device (across the whole device, i.e. 100GB-500GB in size) for a set period of time? The size of each read/write should also be random.

I've come across xdd, bonnie++, and iozone, although I'm having difficulty figuring out the options needed to actually do completely random I/O over a large range of data.

NOTE: I'm not looking for an application that works through a filesystem.

    Any recommendations?


  3. #3
To clarify, what I'm doing is testing the performance of a SAN. I've created about 20 LUNs on the SAN, and 20 VMs. Each VM has its own LUN (as a second drive) which I'm doing the testing on. I'm running the tests for a few hours at a time on all VMs at once. My aim is to find out the maximum performance of the SAN, so the tests need to be able to run indefinitely.

I do not really need to see the performance of the individual VMs, as I can see the stats I'm looking for (IOPS and throughput) through the SAN interface.
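For reference, something like the following SSH loop can start the test on all 20 VMs at once (a sketch only; the vm01..vm20 hostnames and run_io_test.sh script are placeholders for whatever you use):

Code:
# kick off the benchmark on every VM in parallel (hypothetical names/script)
for i in $(seq -w 1 20); do
    ssh root@vm$i "nohup /root/run_io_test.sh > /root/io_test.log 2>&1 &"
done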

  4. #4
    Bonnie would be able to do that best.

  5. #5
    Quote Originally Posted by Visbits View Post
    Bonnie would be able to do that best.
Yep, bonnie does seem like it could do what I require. Any idea what options I could use to make the test completely random and run non-stop?

  6. #6
cd to your SAN mount, make a directory called scratch, chown it 500:500, and run this.

That bench takes about 20 minutes on our SAN.

    watch "bonnie++ -d scratch -u 500:500"

We average 230/230 MB/s for read/write on dual gigabit with 1,400 IOPS.

That's an array with 12 x 300 GB 10k 2.5" SAS disks and a few LUNs.
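Putting those steps together (a sketch; the mount point is a placeholder for your own):

Code:
cd /mnt/san                           # your SAN mount point
mkdir scratch
chown 500:500 scratch
# watch re-runs bonnie++ each time it finishes, so the test loops indefinitely
watch "bonnie++ -d scratch -u 500:500"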

  7. #7
    Quote Originally Posted by Visbits View Post
cd to your SAN mount, make a directory called scratch, chown it 500:500, and run this.

That bench takes about 20 minutes on our SAN.

watch "bonnie++ -d scratch -u 500:500"

We average 230/230 MB/s for read/write on dual gigabit with 1,400 IOPS.

That's an array with 12 x 300 GB 10k 2.5" SAS disks and a few LUNs.
Thanks! Do you know whether bonnie++ does random reads and writes by default, or just sequential?

Also, do you know if there is any way to perform this on a physical drive instead of a folder?

  8. #8
It tests everything.

It can only benchmark via a folder; it writes out large files and makes changes to them.
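If you want the test files to cover more of the LUN, the -s flag sets the total size of the test files in MB (a sketch; bonnie++ generally wants this to be at least twice your RAM so caching doesn't skew the results):

Code:
# e.g. 16 GB of test files on a box with 8 GB of RAM
bonnie++ -d scratch -s 16384 -u 500:500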

  9. #9
baryluk's modification of 'seeker' is a good utility for finding the maximum number of random reads/seeks an array can sustain. It's better than the original because it's multithreaded, so it works well with RAID arrays. Here is what I get on a 20x2TB array (RAID 6):

    Code:
    dekabutsu ~ # ./seeker_baryluk /dev/sdd 128
    Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
    Benchmarking /dev/sdd [69749987328 blocks, 35711993511936 bytes, 33259 GB, 34057611 MB, 35711 GiB, 35711993 MiB]
    [512 logical sector size, 512 physical sector size]
    [128 threads]
    Wait 30 seconds..............................
    Results: 1982 seeks/second, 0.504 ms random access time (656556904 < offsets < 35711968218173)
Here is what a 4x256 GB SSD RAID 0 array gets (some other activity was running on the machine):

    Code:
    Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
    Benchmarking /dev/sdd [2015623168 blocks, 1031999062016 bytes, 961 GB, 984191 MB, 1031 GiB, 1031999 MiB]
    [512 logical sector size, 512 physical sector size]
    [128 threads]
    Wait 30 seconds..............................
    Results: 19851 seeks/second, 0.050 ms random access time (2369651 < offsets < 1031998307533)
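To give a rough idea of what seeker measures, here's a single-threaded shell approximation (a sketch only; the real tool issues reads from many threads, which is what lets it saturate a RAID array):

Code:
DEV=/dev/sdd                            # device under test
SECTORS=$(blockdev --getsz "$DEV")      # device size in 512-byte sectors
for i in $(seq 1 1000); do
    # build a large random sector number from 15-bit $RANDOM values,
    # then read one 512-byte sector there, bypassing the page cache
    dd if="$DEV" of=/dev/null bs=512 count=1 \
       skip=$(( ((RANDOM<<30) | (RANDOM<<15) | RANDOM) % SECTORS )) \
       iflag=direct 2>/dev/null
done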

  10. #10
    Thanks for the suggestions, I will give these a go!

  11. #11
Did you try the old-fashioned hdparm and iostat utils?

I had a similar situation where I needed to benchmark a SAN mount. I used these utils to compare SAN-mounted volumes against the local disk:

hdparm -tT /dev/sda1 <-- your device/partition

    Here is an example of mine from a server that is almost as old as me:

# hdparm -tT /dev/hde1

    /dev/hde1:
    Timing cached reads: 824 MB in 2.01 seconds = 410.22 MB/sec
    Timing buffered disk reads: 72 MB in 3.05 seconds = 23.58 MB/sec

    From a more modern HP G6:

# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  446G  2.4G  421G   1% /
/dev/cciss/c0d0p1                 99M   18M   76M  20% /boot
tmpfs                            3.0G     0  3.0G   0% /dev/shm
# hdparm -tT /dev/mapper/VolGroup00-LogVol00

    /dev/mapper/VolGroup00-LogVol00:
    Timing cached reads: 30240 MB in 2.00 seconds = 15156.25 MB/sec
    Timing buffered disk reads: 310 MB in 3.00 seconds = 103.28 MB/sec

To use iostat, you must have the "sysstat" package installed.

# iostat
Linux 2.6.18-194.11.3.el5PAE (ns1.rdvs.com)   03/15/2011

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.00   0.01     0.00     0.03    0.00  99.96

Device:        tps  Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
cciss/c0d0    0.27        0.13        4.31   1966292   67787980
cciss/c0d0p1  0.00        0.01        0.00    211130         44
cciss/c0d0p2  0.27        0.11        4.31   1754938   67787936
dm-0          0.55        0.11        4.31   1753802   67787936
dm-1          0.00        0.00        0.00       576          0

There are numerous flags to use with iostat that should give you easy-to-read output to measure your results.
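For example, extended per-device stats reported in MB/s, refreshed every 5 seconds:

Code:
# -x: extended stats (await, %util, ...), -m: report in MB/s, 5: repeat every 5s
iostat -xm 5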

  12. #12
That is useful too, but hdparm is bad with fast SAS arrays; it won't read enough data, or for long enough, giving lower results. Example:

    Code:
# hdparm -t /dev/sdd
    /dev/sdd:
     Timing buffered disk reads:  2336 MB in  3.00 seconds = 778.51 MB/sec


It's better to use dd with the iflag=direct option.

    Code:
# dd bs=1M iflag=direct count=10000 if=/dev/sdd of=/dev/null
    10000+0 records in
    10000+0 records out
    10485760000 bytes (10 GB) copied, 12.6816 s, 827 MB/s
I have seen hdparm give *much* lower results, in the 500-600 MB/sec range, before as well.
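For the write side, dd can do the same with oflag=direct (a sketch; careful, this overwrites the target device, so only run it against a scratch LUN):

Code:
# WARNING: destructive - this overwrites /dev/sdd
dd bs=1M oflag=direct count=10000 if=/dev/zero of=/dev/sdd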

  13. #13
    fio works pretty well too, though at the file system level.

    http://freshmeat.net/projects/fio/

There are a number of useful sample job files included, or create your own for more fun and excitement.
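For the OP's case, fio can actually be pointed at a raw block device via filename=, and request sizes can be randomized with bsrange. A sketch of a custom job file (the device name, runtime, and read/write mix here are placeholders, and the writes are destructive):

Code:
cat > randrw.fio <<'EOF'
[global]
# asynchronous I/O, bypassing the page cache
ioengine=libaio
direct=1
# run for a fixed hour regardless of how much data moves
time_based=1
runtime=3600
randrepeat=0

[random-rw]
# WARNING: raw device - writes here destroy data
filename=/dev/sdb
# mixed random I/O, 70% reads / 30% writes
rw=randrw
rwmixread=70
# randomize request sizes between 512 bytes and 64 KB
bsrange=512-64k
iodepth=32
EOF
fio randrw.fio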

