Just a couple of corrections:

> I think that there is a little more to it than this.
> First off, some amount of attributes are stored in the directory structure
> (the inode). 700 bytes or so. So many times, the attributes cost you *no*

It's around 700 bytes for file systems initialized with 1K blocks. For 2K
block size file systems it is 1024+700, for 4K blocks....

> file system space. That space is used, regardless. Secondly, if there are
> more than 700 or so bytes, you would have to read in another block

It's worse than that. If there are attributes that cannot be embedded into
the inode, an attribute directory is allocated, so BFS internally will have
to fetch the inode for the attribute directory, and later fetch the
directory data for walking it (the attribute directory looks structurally
like a regular directory). Then, for each attribute you are walking, the
inode of that attribute will have to be fetched to find out its size. Note
that those attributes look structurally like regular files.

> to find out how big the file is. That would *dramatically* increase the
> time it takes to read a directory. Third, while attributes *CAN* be used
> to hold mountains of data, it is, IMHO, a bad idea. They aren't designed to
> do so well.

That's right. While the structure allows for unbounded attribute
proliferation, performance and space usage will suffer badly. Note that
each attribute not embedded will take up 2 blocks of its own (not counting
the storage requirements for the attribute directory): 1 for the attribute
inode and 1 for the attribute data itself. An optimization that was on the
TODO list was to embed the attribute data in the attribute inode for small
enough attributes.

Summing it up: while having huge attributes is OK (performance and storage
wise), having tons of small attributes is bad (storage wise, and
performance wise if you want to scan all attributes).

manuel,
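To make the "2 blocks per non-embedded attribute" cost concrete, here is a
back-of-the-envelope sketch (not real BFS code; the minimum attribute
directory overhead of 2 blocks is my assumption) of the on-disk cost described
above:

```python
def attr_storage_blocks(n_nonembedded_attrs: int) -> int:
    """Rough block count for attributes that don't fit in the file's inode.

    Per the description above: each non-embedded attribute costs 2 blocks
    (1 attribute inode + 1 data block), and the attribute directory itself
    needs at least its own inode plus 1 data block (assumed minimum).
    """
    if n_nonembedded_attrs == 0:
        return 0  # everything embedded in the inode: no extra blocks
    attr_dir_overhead = 2  # attribute directory inode + 1 data block (assumed)
    per_attr = 2           # attribute inode + attribute data block
    return attr_dir_overhead + n_nonembedded_attrs * per_attr

# 100 tiny attributes on a 1K-block file system:
print(attr_storage_blocks(100))  # -> 202 blocks (~202 KB) for a few KB of data
```

So even a few bytes of payload per attribute ends up costing two full blocks
each, which is why many small attributes hurt while one big attribute is fine.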