My hosting solution provides unlimited disk space. It used to be limited but large (600 GB or so), but even that cap has been lifted. There was one catch, though: the number of inodes is supposed to stay under 250K. That's an awful lot if you think about it, since each file and directory consumes an inode (I think). I host several domains on this plan, which also supports unlimited domains, and one of them happened to require lots and lots of files. That was because I was pre-generating static HTML pages rather than generating them on the fly, on demand. My hosting provider is a decent one. I once accidentally ended up with a CGI script that went into an infinite loop, and all I got was a phone call and an email saying the script had been terminated. I fixed the issue, and after I filed a ticket they made the script executable again. As for the inode limit, 250K is not a hard limit. It's OK to exceed it (and I did end up at 255K+), but apparently the backup process will not pick up all the files.
I had been serving the pre-generated static HTML files through a CGI script. So I got rid of the inode problem by bundling all the files in each folder into a single tar.gz (compressed archive) file, then extracting the required file at runtime. This reduced the inode count significantly. Of course, it also adds runtime overhead since the file has to be extracted on each request, but even though I have so many files, traffic to the site is very light, so the extra overhead isn't really an issue.
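For the curious, here is a minimal sketch of that approach in Python. The function and file names are hypothetical, not from my actual script: one helper rolls a folder's files into a sibling tar.gz and deletes the originals (freeing their inodes), and another pulls a single member back out at request time.

```python
import tarfile
from pathlib import Path

def bundle_folder(folder):
    """Bundle every regular file in `folder` into <folder>.tar.gz,
    then delete the originals so they stop consuming inodes."""
    folder = Path(folder)
    archive = folder.parent / (folder.name + ".tar.gz")
    files = [p for p in folder.iterdir() if p.is_file()]
    with tarfile.open(archive, "w:gz") as tar:
        for p in files:
            tar.add(p, arcname=p.name)  # store by bare file name
    for p in files:
        p.unlink()  # remove the original, reclaiming its inode
    return archive

def read_member(archive, name):
    """Extract one member from the archive at request time and
    return its contents as text (what the CGI script would emit)."""
    with tarfile.open(archive, "r:gz") as tar:
        member = tar.extractfile(name)
        return member.read().decode("utf-8")
```

The whole archive is re-read on every request, which is exactly the overhead mentioned above; with light traffic that trade-off is fine.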
Besides a compressed archive on a regular file system, I would like to know whether there are any other efficient alternatives for storing and accessing large chunks of text.