  1. #1

    Folder Structure for LOTS of files for best performance.

    I'm creating a web 2.0 application that is going to accept images, and potentially I could have millions of files. I was wondering how many files I should limit to one folder.

    Currently I split up the folders like this:

    So an image ID of 123456789021.jpg would be stored in folder:

    Also note that each ImageID will have 4 files corresponding to different sizes, so each folder will have about 4000 files...not including possible deletions by users.
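    For reference, a minimal sketch of this kind of split (the helper name and the 3-digits-per-level layout are just assumptions, since the exact folder pattern isn't shown here):

    ```python
    import os

    def shard_path(image_id: str, levels: int = 2, width: int = 3) -> str:
        # Take the first `levels` groups of `width` digits from the ID
        # to build a nested directory path, e.g. 123/456/123456789021.jpg
        parts = [image_id[i * width:(i + 1) * width] for i in range(levels)]
        return os.path.join(*parts, image_id + ".jpg")

    print(shard_path("123456789021"))
    ```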

    So my question. Do I need to split up the files this way? Is there a better way to split the files? Do I have to split them at all (so can I have millions of files in one folder)?

    Thanks in advance!

  2. #2
    This actually depends on a number of things, but the short answer is that 4000 files should be fine, for the most part.

    If your webserver is running Linux, your two main filesystem options are ext3 and ReiserFS. From Wikipedia:

    Compared to ext2 and ext3 in version 2.4 of the Linux kernel, when dealing with files under 4k and with tail packing enabled, ReiserFS is often faster by a factor of 10–15. This is of great benefit in Usenet news spools, HTTP caches, mail delivery systems and other applications where performance with small files is critical.

    I'm not sure how large your files are, but if they are small thumbnails, this filesystem might help.

    But like I said above, you should be fine with 4000 files per directory.

    Hope this helps.

  3. #3
    Also, one thing to note is that you can always just give it a try, and if you notice performance issues, make sure you have an alternate plan set up. You might just have to test a bit to see what works best for your application.
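    As a rough way to try it out, something along these lines could time lookups in a directory with a given file count (a quick sketch, not a rigorous benchmark; filesystem caching will skew the numbers):

    ```python
    import os
    import tempfile
    import time

    def time_lookups(num_files: int) -> float:
        """Create num_files small files in one directory, then time stat() calls."""
        with tempfile.TemporaryDirectory() as d:
            for i in range(num_files):
                with open(os.path.join(d, f"{i}.jpg"), "wb") as f:
                    f.write(b"\x00" * 1024)  # 1 KB placeholder "image"
            start = time.perf_counter()
            for i in range(num_files):
                os.stat(os.path.join(d, f"{i}.jpg"))
            return time.perf_counter() - start

    for n in (100, 1000):
        print(n, "files:", time_lookups(n))
    ```

    Running it with increasing file counts should show roughly where lookup times start to climb on your particular filesystem.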

  4. #4
    The file sizes range from under 4k for small thumbnails to 70k for the full-size image (with sizes in between for medium-sized images). I might be using Windows too.

  5. #5
    I'm not too familiar with file handling on Windows. I believe NTFS will handle those file sizes just fine, but don't quote me on that.

  6. #6
    I just found this post on Google. It might help.

  7. #7
    Quote Originally Posted by watchdoghosting
    I'm not too familiar with file handling and Windows. I believe NTFS will handle those files sizes just fine, but don't quote me on that.
    If you do use NTFS, you will need to tweak the following:

    - Expand your MFT
    - Disable 8.3 naming
    - Disable last access time
    - Set an appropriate cluster size
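    For example (run from an elevated prompt; the drive letter and cluster size below are just placeholders, and note that format destroys existing data, since cluster size can only be set when the volume is formatted):

    ```shell
    :: Reserve a larger MFT zone (values 1-4; 2 = 25% of the volume)
    fsutil behavior set mftzone 2

    :: Disable 8.3 short-name generation
    fsutil behavior set disable8dot3 1

    :: Disable last-access timestamp updates
    fsutil behavior set disablelastaccess 1

    :: Cluster size is chosen at format time, e.g. 4 KB clusters
    format E: /FS:NTFS /A:4096
    ```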
    Jay Sudowski // Handy Networks LLC // Co-Founder & CTO

  8. #8
    Cool! Thanks for the info it really helps a lot.

  9. #9
    Not a problem, glad to help
