On an older Linux box I have, there is an NFS mount of a NetApp. Once there are 100k files in a single directory on this box, new files can no longer be written there. Does anyone know what might be causing this? I have been told symbolic links can still be created.
I will add more specific information as I look it up. Please skip the 'use a better file/folder structure and/or database' suggestions; I know ...
==Answer==
There are two limits that you may have hit.
inodes (unlikely)
You’ve consumed all of the available inodes for that volume. You can confirm this with the commands df -i and maxfiles.
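As a rough, hedged sketch, checking inode usage from the Linux client and from the filer console might look like this (the mount point and volume name are hypothetical, and the `maxfiles` syntax assumes a 7-mode ONTAP console):

```sh
# On the Linux client: show inode usage for the NFS mount.
# IUse% at 100% means the volume has run out of inodes.
df -i /mnt/netapp_vol

# On the NetApp console (7-mode): show the current inode limit
# and usage for the volume.
maxfiles vol0
```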
maxdirsize (more likely)
ONTAP imposes a limit on the size of directories, where size is a function of the directory's metadata and entries (including hard links), not the file contents. This limit defaults (assuming ONTAP > 6.5) to 1% of system RAM, and it exists to keep very large directories from hurting system performance, because linear directory scans require the directory's data structures to be loaded into memory. Quick overview:
- You can check a directory's size with ls -lkd (see the sketch after this list).
- Raising maxdirsize for a volume is a bit of a one-way operation.
- Only raise it in small increments.
- You can't reduce a directory's size by deleting its contents.
- I have these two bookmarked links which contain a lot more information.
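For reference, a hedged sketch of the directory-size and maxdirsize checks, again assuming a 7-mode ONTAP console and hypothetical volume/directory names:

```sh
# On the Linux client: report the size of the directory entry itself
# (its metadata, not its contents) in kilobytes.
ls -lkd /mnt/netapp_vol/bigdir

# On the NetApp console (7-mode): list volume options, including the
# current maxdirsize (in KB)...
vol options vol0

# ...and, if needed, raise maxdirsize by a small increment (value in KB).
vol options vol0 maxdirsize 20480
```

Keep the caveats above in mind: raise maxdirsize only in small steps, and don't expect deleting files to shrink the directory again.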