RE: Storing blobs in database vs filesystem

  • From: "Matthew Zito" <mzito@xxxxxxxxxxx>
  • To: "ORACLE-L" <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 3 Oct 2006 15:21:45 -0400

Yep, and in our case, we hashed our directory structure in a 
naïve-but-human-readable way: (first letter)/(second letter).  So, a zone name beginning with "gr" in the DNS tree would have been in:  /path/to/filer/g/r/

Obviously, /s/t/ was significantly busier than /y/q/, but still, it kept us 
down to the hundreds of thousands of files per directory at the high end, instead 
of millions.  And even then, we had to use NetApp, as all of the commodity 
filesystems would blow up whenever we had to do mass nameserver reloads.
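The two-level hashing described above can be sketched in a few lines. This is a minimal illustration, not the poster's actual code; the `hashed_path` helper, the base path, and the example name are all made up for the sketch:

```python
import os

def hashed_path(base, name):
    """Map a name to a two-level directory keyed on its first two letters,
    e.g. a name starting with 'gr' lands under base/g/r/."""
    if len(name) < 2:
        raise ValueError("name must be at least two characters")
    return os.path.join(base, name[0], name[1], name)

# A hypothetical zone name "example" would be stored as:
print(hashed_path("/path/to/filer", "example"))
# /path/to/filer/e/x/example
```

With 26 letters this spreads files across up to 676 directories, which is why it only reduces the per-directory count from millions to the hundreds of thousands, and why letter-frequency skew makes /s/t/ far busier than /y/q/.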


> -----Original Message-----
> From: oracle-l-bounce@xxxxxxxxxxxxx 
> [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Kevin Closson
> Sent: Tuesday, October 03, 2006 2:51 PM
> Subject: RE: Storing blobs in database vs filesystem
> I wouldn't say that filesystems are built for efficient I/O 
> to the contents of files. Actually, once a file is open, most 
> of the hard work of a filesystem is over. Depending on how 
> files are allocated (maps, extents, whatever), the 
> positioning and read/write calls are really no big deal. It 
> is the manipulation of file attributes that is tough for 
> files. I'd say there are not many (if any) filesystems that 
> can readily handle millions of files in a directory.
> Video rendering farms
> do often need such a structure, so we are working on 
> optimizations for that here, but it is not our bread and 
> butter either.
