> From: axeld@xxxxxxxxxxxxxxxx
>
> I'm not sure what you're aiming at (I guess it's solving a visibility
> problem of ongoing transactions), but I would think that this would
> mostly complicate the code. Is there any compelling reason for doing it
> that way exactly?
>
> A journal can only really work if you write complete transactions to
> it, not partial ones. The only advantage I see in what you do is that
> you no longer need to keep a block in memory once it has been committed
> to the journal.
> In a real file system, this time span shouldn't be too large anyway,
> and usually, most file system blocks remain unchanged. Furthermore, the
> chances that the changed blocks are still in memory is relatively high
> anyway. Therefore, I don't see much value in letting all read
> operations go through the log as well.

Hi,

I may be wrong, but as I understand the journal, there is a possibility of reading stale blocks once a transaction has been committed to the journal (i.e. all altered blocks have been written to the journal) but hasn't been written back to the file system yet. The block is probably still cached in memory; routing reads through the log just guarantees that a read maps to the newest version of the block.

Maybe I'm missing something about how the block cache handles transactions. Does it do this mapping for me automatically?

Thanks,
Janito