John Honan sent an email off-list about the stamp definition. This is somewhat academic and independent of the stamp format, as that is already fixed for version 1. But one aspect that came up is that there are two possible stamp definitions:

A. (as in v1) value = claimed_bits if claimed_bits <= measured_bits

B. value = claimed_bits if claimed_bits == measured_bits

In B the measured bits have to match the claimed bits exactly (> claimed_bits would be rejected).

The question is which is better? Which has lower standard deviation is, I think, the right question. Clearly computing an n-bit stamp of type B costs more than an n-bit type A stamp, because you have to throw away collisions of more than n bits. But that is OK, as we just pick a convenient bit size. What happens to the standard deviation -- is it better with type B or type A?

Adam
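As an illustrative sketch of the two acceptance rules (the helper names `measured_bits`, `valid_type_a`, and `valid_type_b` are hypothetical, not v1 spec wording), with leading zero bits of a SHA-1 hash standing in for the partial collision measure:

```python
import hashlib

def measured_bits(stamp: str) -> int:
    """Number of leading zero bits in the SHA-1 hash of the stamp string."""
    digest = hashlib.sha1(stamp.encode()).digest()
    value = int.from_bytes(digest, "big")
    return 160 - value.bit_length()  # SHA-1 digests are 160 bits

def valid_type_a(stamp: str, claimed_bits: int) -> bool:
    """Type A (as in v1): any collision of at least claimed_bits is accepted."""
    return measured_bits(stamp) >= claimed_bits

def valid_type_b(stamp: str, claimed_bits: int) -> bool:
    """Type B: the measured size must equal the claim exactly; overshooting
    (> claimed_bits) is rejected, so a minter must discard those candidates
    and keep searching."""
    return measured_bits(stamp) == claimed_bits
```

For a uniformly random hash, measured_bits >= n with probability 2^-n, while measured_bits == n with probability 2^-(n+1), so a type B minter throws away roughly half of the candidates a type A minter would accept -- one way to see why a type B n-bit stamp costs more to compute.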