I can confirm it's the file system that has the limitation. If you are running 64-bit then you don't have to worry about this. The concern I knew about had nothing to do with caching; it had to do with the number of members, since each member has a directory created. And FYI, we are conflating the file limit and the directory limit under one tree, which really comes down to the number of inodes the file system is capable of creating per directory.
I had to do some research, because I could not believe that 45,000-file limit. And it's actually not really true as a hard limit; rather, performance drops as more files accumulate in a directory.
From what I have been able to determine, it all depends on the type of file system in use (ext2, ext3, ReiserFS, etc.) and on whether file indexing is on or not.
I could not find a precise answer on this one, so I question the accuracy of that 45,000-file limit.
For the ext3 filesystem: "There is a limit of 31998 sub-directories per one directory, stemming from its limit of 32000 links per inode."
"The ext2 inode specification allows for over 100 trillion files to
> reside in a single directory, however because of the current
> linked-list directoryimplementation, only about 10-15 thousand files
> can realistically be stored in a single directory. �This is why
> systems such as Squid (http://www.squid-cache.org ) use cache
> directories with many subdirectories - searching through tens of
> thousands of files in one directory is sloooooooow."
source: linuxforums.org
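Just to illustrate the Squid-style workaround that quote mentions, here is a rough Python sketch of hashing files into nested subdirectories so no single directory gets huge. This is my own simplified layout, not Squid's actual scheme; the function names and the two-level hex-pair choice are just assumptions for the example:

    import hashlib
    import os

    def bucketed_path(root, filename, levels=2):
        # Hash the filename and use successive hex pairs of the digest
        # as nested directory names, e.g. cache/3a/7f/member_12345.dat.
        digest = hashlib.md5(filename.encode()).hexdigest()
        parts = [digest[i * 2:i * 2 + 2] for i in range(levels)]
        return os.path.join(root, *parts, filename)

    def store(root, filename, data):
        path = bucketed_path(root, filename)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)

    store("cache", "member_12345.dat", b"example payload")

With two levels of 256 buckets each you get 65,536 leaf directories, so even a million files averages out to only about 15 files per directory.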
It seems that the ext2/ext3 filesystem has a limit of 32,000 links per inode, which in turn limits the number of directories in a single place to 31,998. I need to be able to work with a filesystem with no such limit, or with a limit that is configurable. Is there such a filesystem?
Answer to the above question, from the same forum:
Maybe ReiserFS; I heard it has "dynamic inodes". That will keep ReiserFS from running out of inodes, and maybe avoid this "links per inode" problem too.
If not, perhaps XFS can do it.
source: http://www.issociate.de/board/post/487557/Files_per_directory.html
As far as I know, there's currently no limit to the number of files in a directory in ext3. There IS a limit to the number of files (actually inodes) in the whole filesystem, which is a completely different thing.
According to Wikipedia "If V is the volume size in bytes, then the
default number of inodes is given by V/2^13 (or the number of blocks,
whichever is less)." There's also a limit to the number of
sub-directories in a directory, currently 32000.
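To make that V/2^13 rule concrete, here's a quick back-of-the-envelope calculation (the volume sizes are just arbitrary examples):

    # The V/2^13 rule quoted above works out to one inode per 8 KiB
    # of volume, ignoring the "number of blocks" cap.
    def default_inodes(volume_bytes):
        return volume_bytes // 2**13

    for gib in (10, 100, 500):
        print(f"{gib:>4} GiB volume -> {default_inodes(gib * 2**30):,} inodes")
    # 10 GiB -> 1,310,720; 100 GiB -> 13,107,200; 500 GiB -> 65,536,000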
ext3 also has a limit of 32000 hard links per inode, which means that a directory can't have more than 31998 subdirectories.
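If you want to verify that 31998 figure on your own server rather than take the forums' word for it, a throwaway probe like this will show exactly where mkdir starts failing. Run it on a disposable filesystem; the scratch path is hypothetical:

    import errno
    import os

    base = "/tmp/subdir_limit_test"  # hypothetical scratch location
    os.makedirs(base, exist_ok=True)

    count = 0
    try:
        for i in range(40000):  # comfortably past the reported 31998 limit
            os.mkdir(os.path.join(base, f"d{i}"))
            count += 1
    except OSError as e:
        if e.errno == errno.EMLINK:
            print(f"hit the link limit after {count} subdirectories")
        else:
            raise
    else:
        print(f"created {count} subdirectories without hitting a limit")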
However, the original poster wasn't asking about hard limits, but about efficiency.
If the filesystem wasn't created with the dir_index option, then
having thousands of files in a directory will be a major performance
problem, as any lookups will scan the directory linearly.
Even with the dir_index option, large directories could be an issue. I think you would really need to conduct tests to see exactly how much of an issue it is.
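Along those lines, a rough timing sketch like the one below can show how lookup cost grows with directory size on your particular filesystem and mount options. The file counts and scratch paths are arbitrary, and remember to clean up the test directories afterwards:

    import os
    import random
    import time

    def time_lookups(dirpath, nfiles, probes=1000):
        # Create nfiles empty files, then time os.stat() on a random
        # sample to see how lookup cost scales with directory size.
        os.makedirs(dirpath, exist_ok=True)
        for i in range(nfiles):
            open(os.path.join(dirpath, f"f{i}"), "w").close()
        start = time.perf_counter()
        for _ in range(probes):
            os.stat(os.path.join(dirpath, f"f{random.randrange(nfiles)}"))
        elapsed = time.perf_counter() - start
        print(f"{nfiles:>6} files: {elapsed / probes * 1e6:.1f} us per lookup")

    for n in (1000, 10000, 50000):
        time_lookups(f"/tmp/dirtest_{n}", n)  # hypothetical scratch paths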
So the only solution would be to move to a 64-bit environment or set your file system to ReiserFS; ext2 and ext3 have the sub-directory limitation.
Sorry for the drawn-out post, but if news like this is shared, it needs to be shared in full. And I don't understand why Boonex is not stating the need for a 64-bit environment or a specific file system on the machine.
Regards,
DosDawg
When a GIG is not enough --> Terabyte Dolphin Technical Support - Server Management and Support