> You need to create a file mempow.dat of 16 MiB + 1 KiB in size.
> As described in the comments I concatenated two 10 MB files from
> random.org, but for timing purposes you can probably use anything.
> Just copy a file that is > 16 MB in size to mempow.dat.

I assume that if this were deployed, every sender would need to have the same file. That would mean very large downloads for mail clients - a huge barrier to adoption.

There's a further general problem with memory-bound algorithms: it's impossible to choose a memory footprint that is small enough for consumer machines yet too large to be optimised away by a high-end workstation.

There are still a huge number of Win98 machines out there with barely enough RAM to load the OS; they would be crippled by an additional 16 MB working set just to send e-mail. I also know of a real example: low-end Macs - often with only 8 MB of total RAM - being distributed to deprived individuals to keep them in touch with relatives and support organisations. Such old machines would be totally incapable of running a Dworkian algorithm, rather than merely slow, as they would be with a hash-based algorithm.

At the same time, it's technically plausible to build a high-end workstation with 16 MB of high-speed, low-latency cache, which would have a massive performance advantage over a DRAM-limited system. Such machines may already be available - the IBM POWER4 processor module contains a total of 8 MB of shared L2 cache, and I haven't reviewed IBM's POWER5 workstation specs yet. Even if not, a determined spam ring could easily build such a machine themselves, based on, say, an ARM MPCore processor bank and an SRAM-based memory array. Once designed, such a machine would not be greatly more expensive to produce than a headless PC (not by orders of magnitude, anyway), but could be as much as 50x as fast.
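As the quoted instructions note, the file's contents don't matter for timing purposes, so the random.org download can be skipped. A minimal sketch that generates a mempow.dat of exactly 16 MiB + 1 KiB from local pseudo-random bytes (my own shortcut, not the original author's procedure):

```python
import os

# Target size from the quoted instructions: 16 MiB + 1 KiB.
TARGET_SIZE = 16 * 1024 * 1024 + 1024

# Write pseudo-random bytes in 1 MiB chunks to avoid holding
# the whole file in memory at once.
with open("mempow.dat", "wb") as f:
    remaining = TARGET_SIZE
    while remaining > 0:
        chunk = os.urandom(min(remaining, 1 << 20))
        f.write(chunk)
        remaining -= len(chunk)

actual_size = os.path.getsize("mempow.dat")
```

os.urandom is not "true" randomness like random.org, but for benchmarking memory access patterns any incompressible-looking data should behave the same.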
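To make the cache argument concrete: a memory-bound proof-of-work forces a long chain of data-dependent random reads into a large table, so the cost per stamp is dominated by memory latency, not CPU speed. The toy sketch below is my own illustration of that access pattern, not Dwork et al.'s actual construction; all names (make_table, memory_bound_stamp) and parameters are invented for illustration. A machine whose cache held the entire table would pay L2 latency instead of DRAM latency on every step, which is exactly the 50x-class advantage discussed below.

```python
import hashlib

# Working-set size under discussion (scaled down in practice for testing).
TABLE_SIZE = 16 * 1024 * 1024  # 16 MiB

def make_table(seed: bytes, size: int = TABLE_SIZE) -> bytearray:
    """Deterministically expand a shared seed into a large lookup table,
    standing in for the shared mempow.dat file."""
    table = bytearray(size)
    block = seed
    pos = 0
    while pos < size:
        block = hashlib.sha256(block).digest()
        table[pos:pos + 32] = block[:size - pos]
        pos += 32
    return table

def memory_bound_stamp(table: bytearray, message: bytes,
                       steps: int = 1 << 16) -> str:
    """Random walk through the table. Each address depends on the value
    just read, so the reads cannot be prefetched or pipelined: the loop
    is bounded by memory latency."""
    state = hashlib.sha256(message).digest()
    idx = int.from_bytes(state[:4], "big") % len(table)
    acc = hashlib.sha256(message)
    for _ in range(steps):
        word = table[idx:idx + 32]
        acc.update(word)
        # Next index depends on the fetched data (data-dependent access).
        idx = (idx * 1103515245 + word[0] + 12345) % len(table)
    return acc.hexdigest()
```

On an 8 MB Win98-era machine the table doesn't fit in RAM at all and the walk degenerates into swapping; on a workstation with 16 MB of SRAM cache every read is a cache hit - which is the asymmetry the rest of this post argues makes the scheme unworkable.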