[huskerlug] Re: I ran into my first case of having too many files to rm on.

  • From: Brett Bieber <brett.bieber@xxxxxxxxx>
  • To: huskerlug@xxxxxxxxxxxxx
  • Date: Fri, 4 Jun 2010 06:22:35 -0500

On Thu, Jun 3, 2010 at 8:09 PM, Martin Wolff <mwolffedu@xxxxxxxxx> wrote:
>
> Hi,
> I thought this might be an interesting discussion.
>
> Today, for the first time, I ran into a situation where a GNU program couldn't
> handle what I threw at it.  I had run "rm ./*" on a directory and it failed
> because the expanded command had too many arguments (more than 100,000 files
> existed).
> I then realized that I could just delete the whole directory by doing rm -r
> directorynamehere
> It was only 11 GB in size, but it took forever to delete. I was just curious
> if there were any obvious things I should have done differently.  Perhaps
> there is a better way to trash > 138,000 files?

Aha, I'd be interested in a good solution as well. I run into this
quite frequently with web applications that cache generated output to
the filesystem. Usually the /tmp/ dir has thousands of cache_{md5}
files.

It's a bit trickier when they're in the same dir with other files you
don't want to remove. Usually I chunk it into ranges of files that rm
can handle using bracket globs, e.g. rm /tmp/cache_[0-3]*, or I have a
shell script that just loops through all the files and unlinks them.
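Something along these lines usually does the trick for me (rough
sketch -- it assumes your cache files all match a cache_* prefix, so
adjust the path and pattern to taste):

    # Let find unlink the files itself; no huge argument list is ever built
    find /tmp -maxdepth 1 -type f -name 'cache_*' -delete

    # Or batch the names through xargs, which splits them into chunks
    # small enough for rm to accept (-print0/-0 keeps odd names safe)
    find /tmp -maxdepth 1 -type f -name 'cache_*' -print0 | xargs -0 rm --

    # Plain shell loop: slowest (one rm per file) but dead simple
    for f in /tmp/cache_*; do
        rm -- "$f"
    done

The glob in the loop is fine even with 100,000+ names, since the
"argument list too long" limit only applies when the shell execs an
external command, not to its own internal expansion.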

As for the speed, I know certain filesystems are much faster at
handling lots of small-file deletions, reiserfs in particular. xfs
has very fast large-file deletion and small-file directory listing,
but ymmv.

It's an interesting problem to have.

--
Brett Bieber

----
Husker Linux Users Group mailing list
To unsubscribe, send a message to huskerlug-request@xxxxxxxxxxxxx
with a subject of UNSUBSCRIBE
