On Mon, 10 Mar 2003, Erik Jaesler wrote:
> Axel Dörfler wrote:
> > Erik Jaesler <erik@xxxxxxxxxxxxxx> wrote:
> >
> > > map for an unused token. How about this compromise solution: continue
> > > to place "returned" tokens on the stack, but don't use them until the
> > > token counter maxes out. Also, use a queue instead of a stack to add
> >
> > There is just one little problem with that method (depends on the
> > implementation, though :-)): when the token counter overflows, there
> > are about 4 billion entries on the stack... hm... that doesn't sound
> > very nice, does it? :-)
>
> Well, I suppose if I used a *little* intelligence I might think to put
> some kind of cap on the size of the queue. ;) Perhaps a few hundred or
> a thousand tokens? My thinking is that while you probably aren't *ever*
> going to overflow the counter and use the entire capacity of the queue
> at one time, if you find yourself enjoying the extended up-time that is
> often the privilege of the BeOS user =) you may, over time, overflow
> the counter. It would be a bummer if from there on out, every allocated
> handler token had to be checked against the map to make sure it wasn't
> being used -- the queue gives us a fast way to get tokens we *know*
> aren't being used. Granted, your box would have to be up a *very* long
> time before you overflowed the counter; nevertheless, it would be nice
> if the general case was that you either got a token based on the counter
> or from the queue rather than having to do the check.

As I wrote in my previous mail, I believe a single check (i.e. a lookup
in the hash map) is cheap -- an `add if not already existing', if
available, would come at no additional cost at all -- and I actually
suspect it to be even cheaper than removing a token from the queue.
Anyway, even with a million used tokens, the token space is still rather
empty: on average you would only need one additional check per 4000
allocations.
However, a token is allocated when a BHandler is created, which is
usually not a time-critical operation in itself. So even if the million
used tokens form one big cluster -- and a million hash table lookups
won't take more than a fraction of a second on modern machines anyway --
that shouldn't do much harm.

CU, Ingo