[hashcash] compact stamps optimization

  • From: bas <beware@xxxxxxxxx>
  • To: hashcash@xxxxxxxxxxxxx
  • Date: Wed, 07 Dec 2005 23:19:11 +0100

<quote>
Yes, tend to agree.  Looks like there are enough cases where it will be
2x that it's not worth that much to optimize the other compact cases.
(Well, to say it the other way around: there are only 3 lengths where
you can speed it up -- the 123^4, 12^34, 1^234 above.)

Of course counters like this 1234^ (and indeed all the way back to
1234xxxxxxxx^) are doomed, because the 9 padding chars come afterwards,
so it will be 2x on all of those.
</quote>

I think one can *always* make compact stamps and not have to process two SHA-1 blocks for each try, because the only thing that changes on every try is the least significant base64 character of the counter. That character is either in the first block (if the whole string fits in one block) or in the second block (if the string is longer).
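The caching described here can be sketched as follows (a minimal pure-Python SHA-1 with an illustrative stamp layout; the field values and the `try_counter` helper are made up for the example, not taken from the hashcash source). The fixed prefix fills the first 64-byte block exactly, so its compression ("midstate") is computed once; each try then compresses only the second block:

```python
import struct, hashlib

# SHA-1 initial state (FIPS 180-4)
IV = (0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0)

def _rol(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1_compress(state, block):
    """One SHA-1 compression of a single 64-byte block."""
    w = list(struct.unpack(">16I", block))
    for i in range(16, 80):
        w.append(_rol(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1))
    a, b, c, d, e = state
    for i in range(80):
        if i < 20:
            f, k = (b & c) | (~b & d), 0x5A827999
        elif i < 40:
            f, k = b ^ c ^ d, 0x6ED9EBA1
        elif i < 60:
            f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
        else:
            f, k = b ^ c ^ d, 0xCA62C1D6
        a, b, c, d, e = ((_rol(a, 5) + f + e + k + w[i]) & 0xFFFFFFFF,
                         a, _rol(b, 30), c, d)
    return tuple((s + v) & 0xFFFFFFFF for s, v in zip(state, (a, b, c, d, e)))

def sha1_pad(msg):
    """Append the standard SHA-1 padding: 0x80, zeros, 64-bit bit length."""
    return (msg + b"\x80" + b"\x00" * ((55 - len(msg)) % 64)
            + struct.pack(">Q", 8 * len(msg)))

# Hypothetical stamp layout: a fixed prefix padded to exactly 64 bytes fills
# block 1; the changing counter (plus the SHA-1 padding) lives in block 2.
prefix = b"1:20:051207:someone@example.com::abcdefgh:".ljust(64, b"x")
midstate = sha1_compress(IV, prefix)  # computed once, reused for every try

def try_counter(counter):
    """Hash prefix+counter, recomputing only the second 64-byte block."""
    tail = sha1_pad(prefix + counter)[64:]  # one block, for counters <= 55 bytes
    return struct.pack(">5I", *sha1_compress(midstate, tail))
```

With this layout the whole counter sits in the second block, so even a carry into higher counter characters never forces the first block to be recomputed; the digest matches `hashlib.sha1(prefix + counter)` exactly.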

The only changes that would require recalculating both blocks are:
1: the string becoming longer (because the counter gains a character), so it spills into a second block;
2: a counter character other than the least significant one changing, when that character sits in the first block.


What does the padding refer to? What am I missing?

I think if it's possible at all to optimize -Z2 so that it's more like a 1.1x slowdown instead of 2x, that would be a good idea: it should become the default, and the existing -Z1 setting would become obsolete.
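As a rough sanity check of that hoped-for slowdown (my own arithmetic, not from the thread, assuming a sequentially incremented base64 counter whose least significant character sits in the second block while higher characters may sit in the first):

```python
# Fraction of tries where the increment carries past the least significant
# base64 character, forcing the cached first block to be recomputed when a
# higher counter character lives there: one try in 64.
def carry_fraction(n_tries, base=64):
    return sum(1 for t in range(1, n_tries + 1) if t % base == 0) / n_tries

# Expected SHA-1 compressions per try with the first-block midstate cached:
# always block 2, plus the occasional carry into block 1.
def amortized_blocks(base=64):
    return 1.0 + 1.0 / base

print(carry_fraction(4096))   # 0.015625 == 1/64
print(amortized_blocks())     # 1.015625 -- vs 2.0 with no caching
```

So even in the pessimistic layout where a carry does touch the first block, the amortized cost is about 1.016 compressions per try, comfortably under the 1.1x figure mentioned above.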
