On 2009-11-23 at 17:20:15 [+0100], Alexandre Deckner <alex@xxxxxxxxxxxx> wrote:
> Axel Dörfler wrote:
> >> So there's currently no way to catch such cases in a robust way. What
> >> can we do about that?
> >
> > I think the best solution would be to introduce a flag that tells a
> > live query that you want to monitor all query results automatically.
> > That's the only way of preventing the race condition between sending
> > and receiving the query changes.
>
> I don't mind starting a watch for each query result, but in fact most of
> my questioning is due to performance / complexity concerns.
> It seems that node monitoring is quite cheap, if not free, but I'm
> not so sure; there's that limit (and the BeBook advice) that makes it
> sound like it has a hidden cost or scales badly.
> I tried to grok the sources a bit, but I wouldn't mind some explanations :)
> For example, what is the cost of watching a directory / each node
> separately?

On a 32 bit machine watching a node (not watched by anyone else) costs 72
bytes of kernel heap. There's a minimal CPU overhead for checking whether
an event for a watched node occurred and some overhead for actually
sending a node monitoring message. Watching a directory's contents is just
watching a single node for a specific event.

> Why is there a watch limit?

The mechanism requires a kernel resource (kernel heap), so a soft limit is
in order to protect the system.

CU, Ingo
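
For readers following along: the per-result watching discussed above can be sketched with the standard Storage Kit and Node Monitor API. This is a hypothetical illustration only (the helper name `WatchQueryResults` and the chosen watch flags are assumptions, not code from the thread), it compiles only against the Haiku/BeOS headers, and it deliberately exhibits the race the proposed auto-monitor query flag would close:

```cpp
// Sketch: iterate a live query's initial results and start a node watch
// on each one. Assumes the usual BQuery/watch_node() API from
// <Query.h> and <NodeMonitor.h>; Haiku-only, not portable code.
#include <Entry.h>
#include <Messenger.h>
#include <Node.h>
#include <NodeMonitor.h>
#include <Query.h>

void WatchQueryResults(BQuery& query, const BMessenger& target)
{
	BEntry entry;
	while (query.GetNextEntry(&entry) == B_OK) {
		node_ref nodeRef;
		BNode node(&entry);
		if (node.GetNodeRef(&nodeRef) == B_OK) {
			// Per the reply above, each node watched by no one else
			// costs about 72 bytes of kernel heap on a 32-bit machine.
			watch_node(&nodeRef, B_WATCH_STAT | B_WATCH_ATTR, target);
		}
	}
	// Race: a node can start matching the query between GetNextEntry()
	// and watch_node() -- exactly the window the suggested
	// "monitor all query results" flag would eliminate. Watching a
	// directory's contents, by contrast, is a single
	// watch_node(..., B_WATCH_DIRECTORY, ...) call on the directory node.
}
```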