[hashcash] Re: hashcash-1.15 released

  • From: Adam Back <adam@xxxxxxxxxxxxxxx>
  • To: Simon Josefsson <jas@xxxxxxxxxxx>
  • Date: Sun, 16 Jan 2005 05:09:15 -0500

On Sun, Jan 16, 2005 at 10:57:56AM +0100, Simon Josefsson wrote:
> Adam Back <adam@xxxxxxxxxxxxxxx> writes:
> > There was towards end of last year some discussion about big stamps
> > being ugly.  So I implemented the previously unimplemented -Z option
> > to allow stamp compression and released as 1.15:
> Thanks!
> Perhaps a silly question, but will older versions understand the new
> tokens?

Yes.  Verifying stamps is much simpler -- just SHA-1 hash the stamp and
count the number of leading bits which are zero.
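A minimal sketch of that check in Python (illustrative only -- a real
verifier would also check the version, date, and resource fields and
consult a double-spend database):

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    # number of leading zero bits in the digest
    n = int.from_bytes(digest, "big")
    return len(digest) * 8 - n.bit_length()

def verify(stamp: str, bits: int) -> bool:
    # a stamp meets difficulty `bits` if its SHA-1 hash
    # has at least that many leading zero bits
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits
```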

> The slowdown is somewhat painful, though.  I have changed from 23-bit
> tokens to 21 bits by default.
> [...]
> I'd like to make the shortest header format the default in Gnus, but
> if it is 2-4 times slower (which my benchmarks suggest, depending on
> compile flags) I don't think it is reasonable.
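(For scale: each bit of difficulty doubles the expected minting work, so
dropping from 23 to 21 bits makes average minting about 4x cheaper:)

```python
def expected_tries(bits: int) -> int:
    # each candidate counter matches `bits` leading zero bits with
    # probability 2**-bits, so expected hash trials are 2**bits
    return 2 ** bits

# 23-bit vs 21-bit stamps: a 4x difference in expected work
print(expected_tries(23) // expected_tries(21))  # -> 4
```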

It is a bit better if you build with OpenSSL, as you then benefit from
the assembler SHA-1 implementation (rather than the pure C SHA-1
library included with hashcash).  To do this, run:

        make x86 CFLAGS=-DOPENSSL LDFLAGS=-lcrypto

There is scope to adapt Jonathan's fastmint code directly to mint
compact stamps -- it is just more complicated -- so I may make that
change in future.  I believe Jonathan estimated it should be only
around 15% slower if done that way, and I think that slowdown would be
acceptable.

> I was hoping the header size decrease could be achieved by simply
> changing formatting somehow, instead of changing the internal hashing
> mechanism.  Make
> hashcash token: 1:20:050116:foo::r8TXV8BUV1CkoBcP:2xkK
> be equivalent to
> hashcash token: 1:20:050116:foo::r8TXV8BUV1CkoBcP:000000000000002xkK
> i.e., remove leading zeros.
> Can't this be done without changing the hash computation?  Even if you
> are willing to drop backwards compatibility?

Well, yes: if one is willing to require some lossless compression
pre-processing before minting, and the corresponding decompression
prior to verification, then that could be done without sacrificing any
speed.

However it will drop backwards compatibility.
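Such a scheme might look like the following sketch.  The fixed counter
width here is a hypothetical convention, not part of the hashcash spec;
agreeing on it protocol-wide is exactly the compatibility break:

```python
def compress(stamp: str) -> str:
    # strip leading '0' characters from the final (counter) field
    head, _, counter = stamp.rpartition(":")
    return head + ":" + counter.lstrip("0")

def decompress(stamp: str, width: int = 18) -> str:
    # restore the counter to the agreed fixed width before hashing;
    # `width` is a hypothetical convention for illustration only
    head, _, counter = stamp.rpartition(":")
    return head + ":" + counter.rjust(width, "0")
```

A verifier would decompress and then hash as before, so minting speed
is unchanged; but old verifiers, which hash the wire form directly,
would see stamps that almost never meet the difficulty target.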

I think keeping backwards compatibility is more important, especially
as most of the speed loss can be recovered by adapting the fastmint
code instead of falling back to the SHA-1 library.

Try 1.16 -- it has -Z2 implemented (the -Z2 is the same as -Z1 in
1.15).

