On Wed, 2008-06-25 at 17:09 +0100, David Chadwick wrote:
> Hi Erik
>
> the signature can actually be based on any encoding, since it is
> generated from a hash of the encoded byte string. There is no
> requirement to decode and re-encode to check a signature. You simply
> need to hash the received byte string, decrypt the signature bits, and
> compare the two hashes.
>
> The confusion over DER came when it was wrongly assumed, due to the 7
> layer model, that the presentation layer would decode the byte string
> and the application layer would never have access to the byte string.
> But this is wrong in all practical implementations. The standard
> should have been changed to remove any mention of DER, but because all
> implementations use DER it was considered too destabilising to remove
> this. But the looser wording probably reflects the fact that DER is
> not essential now.

It is true that for a SIGNED { Type } value the encoding of the value is
carried along with the signature, so the latter can be validated using
that encoded octet string (as long as it is available).

However, this does not work for the SIGNATURE { Type } or HASH { Type }
constructions, where the value that has been signed might be a "virtual"
value, composed of several different items. Examples of these are the
X.400 MessageOriginAuthenticationCheck and the hash within the X.500
AttributeIntegrityInfo.

On the other hand, "always re-encode" falls foul of the cases where the
ASN.1 type of a value is not known. Simply having a BER encoding is not
enough to form the DER encoding: not only because of SET/SET OF
ordering, but because any value encoded with IMPLICIT tags hides the
underlying primitive types. Perhaps the only way round this is for each
individual case to specify precisely how the value to be hashed is
constructed.

regards

David Wilson

-----
www.x500standard.com: The central source for information on the X.500
Directory Standard.
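As a postscript, both points above can be sketched in a few lines of
Python. This is illustrative only: real verification would also decrypt
the signature value with the signer's public key to recover the signed
digest (omitted here), and the hex octets are made-up example encodings
of an OCTET STRING "hi", not a real signed value.

```python
import hashlib

# Point 1: verify by hashing the octets exactly as received --
# no decode/re-encode step is needed.
received = bytes.fromhex("04026869")  # primitive BER OCTET STRING "hi"
digest_at_signing = hashlib.sha256(received).digest()
assert hashlib.sha256(received).digest() == digest_at_signing

# Point 2: why re-encoding is fragile. The same abstract value has more
# than one valid BER form, and the forms hash differently, so a verifier
# that re-encodes may compute a different digest than the signer did.
constructed = bytes.fromhex("2480040268690000")  # constructed, indefinite length
assert hashlib.sha256(received).digest() != hashlib.sha256(constructed).digest()
```

The second assertion is exactly why either the original encoded octets
must be carried with the signature (as SIGNED { Type } does), or the
construction of the hashed value must be pinned down precisely.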