Recently Linus and friends focused on a throwback to the age of small-RAM PCs: file caching is still managed as a percentage of RAM. On today's big-memory machines that lets huge files pile up in the write cache, clogging the I/O system. I have seen this, but it was never really a problem on my Beast; I could always do something else while Firefox paused…
“Yeah, I think we default to a 10% "dirty background memory" (and allow up to 20% dirty), so on your 16GB machine, we allow up to 1.6GB of dirty memory for writeout before we even start writing, and twice that before we start *waiting* for it.
On 32-bit x86, we only count the memory in the low 1GB (really actually up to about 890MB), so "10% dirty" really means just about 90MB of buffering (and a "hard limit" of ~180MB of dirty).
And that "up to 3.2GB of dirty memory" is just crazy. Our defaults come from the old days of less memory (and perhaps servers that don’t much care), and the fact that x86-32 ends up having much lower limits even if you end up having more memory. You can easily tune it:
echo $((16*1024*1024)) > /proc/sys/vm/dirty_background_bytes
echo $((48*1024*1024)) > /proc/sys/vm/dirty_bytes
or similar. But you’re right, we need to make the defaults much saner.”
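It is worth checking Linus's arithmetic. The sketch below (my own, not from his e-mail) redoes the percentage math for a 16GB machine, assuming "GB" here means GiB:

```shell
#!/bin/sh
# Reproduce the arithmetic from the quote: with the default
# dirty_background_ratio=10 and dirty_ratio=20, a 16 GiB machine
# gets these dirty-memory limits.
ram_bytes=$((16 * 1024 * 1024 * 1024))

background=$((ram_bytes / 10))   # 10% "dirty background" threshold
hard=$((ram_bytes / 5))          # 20% hard limit, where writers block

echo "background: $((background / 1024 / 1024)) MB"   # ~1.6 GB
echo "hard limit: $((hard / 1024 / 1024)) MB"         # ~3.2 GB
```

Note that the `echo ... > /proc/sys/vm/...` commands in the quote last only until reboot; the equivalent `vm.dirty_background_bytes` and `vm.dirty_bytes` keys in /etc/sysctl.conf make the tuning persistent. Setting the `_bytes` variants automatically overrides the `_ratio` ones.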
Isn’t FLOSS grand? A problem is seen and all it takes is an e-mail to fix it…