[hashcash] Re: status of hashcash version 1?

  • From: hal@xxxxxxxxxx ("Hal Finney")
  • To: hashcash@xxxxxxxxxxxxx
  • Date: Sun, 22 Aug 2004 15:27:45 -0700 (PDT)

Jonathan Morton writes:
> This came to light when someone suggested a Dworkian algorithm based on 
> a modified RC4 and a 16MB array.

That was actually the precise algorithm from Dwork, Goldberg and Naor,
Crypto 03.

> Such an algorithm, unlike hashcash, 
> would be *impossible* to run on my Mac IIcx (circa 1988), which is 
> similar to machines handed out to disadvantaged folk quite recently in 
> the American desert.
>
> Yet workstations are now, or will soon be, available which have a total 
> cache size large enough to provide a substantial performance boost with 
> a 16MB random-access working set - a boost which would not be available 
> with hashcash.  Furthermore, it would be ridiculously easy to build a 
> dedicated machine using SRAM or fast graphics memory, which would also 
> have a substantial advantage.

In some discussion at the Crypto conference this week, Jon Callas pointed
out that in addition, memory bandwidth improvements have been picking
up recently.  Processors have gotten so fast that memory bandwidth
is now becoming the bottleneck in performance.  This is making system
designers pay more attention to the problem and work harder on speeding
it up, since that is where the payoff is in total system performance.
So we are seeing 800 MHz system buses and similar enhancements, after
those technologies had been relatively quiescent for several years.
This means that we are likely to see greater differentials in memory
performance between new and old, high-end and low-end, systems than was
true a few years ago.

> Then, unlike hashcash, the size of the array cannot be smoothly evolved 
> upwards to cope with increasing capabilities.  It also bloats the 
> download sizes and memory requirements of all applications that use it. 

On the other hand, with regard to download size, it might be that
eventually machines would ship with a large random file, so no one
would have to download it.  It's also possible to generate the random
file pseudo-randomly, as long as a slow enough algorithm is used that
it doesn't let you short-circuit the disk accesses.  Either of these
would avoid the cost of downloading it.
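As a rough sketch of that second option: the large file can be derived from a small seed, with each block made deliberately expensive to compute (here by iterating SHA-1 some number of times, a placeholder for whatever slow function you'd actually pick), so that recomputing a block on demand is slower than just fetching it from memory or disk. The function name and parameters are illustrative, not from any actual implementation.

```python
import hashlib

def make_random_file(seed: bytes, n_blocks: int, iters: int = 1000) -> bytes:
    """Derive a pseudo-random buffer from a small seed.

    Each 20-byte block costs `iters` hash iterations to derive, so an
    attacker can't cheaply recompute blocks instead of storing them --
    the point being that the big file never needs to be downloaded.
    """
    out = bytearray()
    for i in range(n_blocks):
        h = seed + i.to_bytes(4, "big")   # per-block starting value
        for _ in range(iters):
            h = hashlib.sha1(h).digest()  # slow, sequential derivation
        out += h                          # 20 bytes per block
    return bytes(out)
```

Everyone derives the identical file from the published seed, so verifiers and provers agree on its contents without shipping it around.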

>   This Mac IIcx only has 40MB of disk space, total - would a typical 
> user of such a machine want to spend 40% of their disk space on a new 
> e-mail program?  No, I didn't think so.

Well, that's a pretty unrealistic example today.  Maybe a cell phone or
something would have problems with the memory, although such low-powered
systems will struggle with any proof-of-work scheme.

There is also the point (I don't remember if it was raised here) that
constant-speed POW systems aren't really any more fair than ones where
the more powerful machine generates them faster.  Whatever way you do it,
if there is some architecture which gives you more POWs per dollar spent,
then that is where the spammers would focus their efforts.  If somehow
you came up with a design that took no longer to generate stamps on
a micro system than a high end PC, then spammers would build farms
of those small systems and generate tokens much more cheaply than the
average user.  By this reasoning the best system is one where, as much
as possible, the cost per stamp is constant across all architectures.
I suspect that hashcash more closely approximates this goal than the
Dwork memory based POW.
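For comparison, minting a hashcash stamp is just a brute-force search for
a counter whose SHA-1 digest has enough leading zero bits -- pure CPU
work, no big memory working set.  A minimal sketch (the stamp fields here
are simplified stand-ins for the real version-1 format, and the date and
rand values are hard-coded for illustration):

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()  # zero bits in first nonzero byte
        break
    return bits

def mint(resource: str, difficulty: int) -> str:
    """Search counters until SHA-1(stamp) has `difficulty` leading
    zero bits; expected cost is about 2**difficulty hash operations."""
    for c in count():
        stamp = f"1:{difficulty}:040822:{resource}::rand:{c}"
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= difficulty:
            return stamp
```

The cost per stamp is dominated by raw hash throughput, which varies less
across architectures (per dollar) than memory-system performance does --
which is the sense in which hashcash may come closer to the
constant-cost-per-stamp ideal.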

Hal Finney
