On Thu, 05 May 2011 00:28:14 +0200 Oliver Tappe <zooey@xxxxxxxxxxxxxxx> wrote:
> Anyway, the bad news is: the suggested Mercurial extension doesn't exist
> and my hack doesn't work. Now that I have thought of dropping the
> page/inode/dentry caches and trying again, I have learned that a 'git log
> --decorate >log' takes more than 4 minutes with those tags and only 4
> seconds without those tags (both measured with empty caches).
>
> Starting gitk takes more than 5 minutes with those tags from cold caches
> and a couple of seconds without them. So, nice idea, but bad side-effect ;-)

Instead of scrapping the idea completely, we could consider reducing the number of tags, e.g. by deleting older ones. Staggering the tags might be an option:

* Every single one of the 100 most recent revisions has a tag.
* Every 10th of the preceding 1000 revisions has a tag.
* Every 100th of the preceding 10000 revisions has a tag.
* ...

That adds 100 tags per order of magnitude.(*) Currently that would be fewer than 400 in total instead of 40000+, which I assume would solve the performance issue. The server could tag continuously as originally intended and weed out older tags, maybe once a day.

As far as I can see, removing older tags doesn't affect usability much. When building an image (unless one is intentionally building an old revision), the tags will still have full resolution, so one gets the exact revision number built into the image. When examining older revisions, the staggered tags can be used with the tag-relative commit name notation. There doesn't seem to be an additive notation, so somewhat inconveniently one would have to write "r24900~73" to denote r24827 ("r24800+27" would have been nicer).

CU, Ingo

(*) To simplify things, I'd tag 100 revisions and further down to the next modulo-aligned number. E.g. at r40327 the single-step tags would not only go down to r40228, but further down to r40220. So it's up to 109 tags per order of magnitude.
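P.S.: To make the staggering rule (including the modulo-alignment tweak from the footnote) concrete, here is a quick Python sketch — not actual server-side weeding code, just an illustration; the function name and the assumption of linear r<number> revision numbers are mine:

```python
def kept_tags(head, orders=4):
    """Revisions that keep their tag under the staggered scheme:
    every revision in the most recent 100, every 10th in the
    preceding 1000, every 100th in the preceding 10000, and so on.
    Per the footnote, each band is extended downwards to the next
    number aligned to the following (coarser) step."""
    kept = set()
    step = 1
    upper = head
    for _ in range(orders):
        lower = upper - 100 * step + 1  # band of 100, 1000, 10000, ... revisions
        lower -= lower % (step * 10)    # extend down to the coarser alignment
        for rev in range(upper, max(lower, 1) - 1, -1):
            if rev % step == 0:
                kept.add(rev)
        if lower <= 1:
            break
        upper = lower - 1
        step *= 10
    return kept
```

At the current head from the footnote (r40327), this keeps the single-step tags down to r40220 as described, and 340 tags in total — comfortably below the ~400 estimated above.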