Diffie-Hellman parameter checking

Steven J. Murdoch yadis+Steven.Murdoch at cl.cam.ac.uk
Wed Sep 28 06:24:59 PDT 2005

Another thing I have noticed is that the OpenID specification does not
mention how received Diffie-Hellman parameters (p, g and g^x) should be
checked. The Perl implementation does limited checking (p>10, g>1,
g^x!=0) and the Python implementation doesn't appear to do any.
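To make the above concrete, here is a minimal sketch (my own
illustration in Python, not taken from either implementation) of the
limited checks the Perl module performs on received DH parameters:

```python
def check_dh_params(p, g, gx):
    """Reject obviously bad Diffie-Hellman parameters (p, g, g^x)."""
    if p <= 10:       # p must be a large prime; this only catches tiny values
        return False
    if g <= 1:        # g = 0 or g = 1 makes every public value trivial
        return False
    if gx % p == 0:   # g^x == 0 mod p can never come from a valid exponent
        return False
    return True

assert check_dh_params(23, 5, 8)       # plausible toy parameters pass
assert not check_dh_params(7, 5, 3)    # p too small, rejected
assert not check_dh_params(23, 1, 8)   # degenerate generator rejected
assert not check_dh_params(23, 5, 0)   # g^x == 0 rejected
```

As the name suggests, these checks are only sanity tests; they do not
establish that p is prime or that g generates a large subgroup.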

This has caused security vulnerabilities in Tor[1] and IKE[2 (12.9)]
where the system is attacked by a man in the middle. OpenID does not
appear to be designed to protect against these sorts of attacks,
although I think an explicit statement of the threat model would be
useful. So I don't think this is an immediate problem for OpenID, but
it may be as more features are added.

Not checking Diffie-Hellman parameters means that the consumer can
choose weak parameters, and so let g^xy be easily sniffed. Provided
this only hurts the consumer and not anyone else, this is OK.

It also allows a man in the middle (MitM) to change the parameters so
that the consumer and server think the key exchange happened
successfully, but actually the key is trivially guessable. There are a
number of ways to do this, but the simplest is to modify the g^x sent
by the consumer (X') and the g^y sent by the server (Y') to both be 1.
Then the consumer thinks the DH secret is Y'^x = 1^x = 1 and the
server thinks the DH secret is X'^y = 1^y = 1. Now the consumer and
server have a shared key which the MitM also knows.
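The arithmetic of this substitution can be checked with toy numbers
(my own illustration; these values are far too small to be secure):

```python
p, g = 23, 5      # tiny toy group, for illustration only
x, y = 6, 15      # consumer's and server's secret exponents

X_attacked = 1    # MitM replaces the consumer's g^x with 1
Y_attacked = 1    # MitM replaces the server's g^y with 1

consumer_secret = pow(Y_attacked, x, p)  # 1^x mod p == 1
server_secret = pow(X_attacked, y, p)    # 1^y mod p == 1

# Both ends agree on the "secret", and the MitM knows it too.
assert consumer_secret == server_secret == 1
```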

The reason this attack doesn't really cause a problem in OpenID is
that a MitM can already break the system in other ways. For example,
if the connection between the identity server and consumer can be
redirected, the attacker can point the consumer at a malicious
authentication server which says yes when it shouldn't.

If the MitM is between the consumer and authentication server then he
could modify the DH parameters as above, but even if good parameter
checking is done, he can still replace X' with a valid value g^x' for
an exponent x' which he knows, and similarly replace Y' with g^y'. A
side effect of this attack is that the consumer and server will have
different versions of the DH secret. This means that for the
checkid_setup step to succeed, the attacker would need to MitM this
too, whereas by setting X' and Y' to 1, he only needs to MitM one
connection.

It could be argued that this is a good reason to do parameter checking
(two MitMs, separated in time and space, are harder to mount than
one), but the attacker can do something else to avoid this. The HMAC
key is enc_mac_key XOR DH secret*, where the DH secret should be g^xy.
In a MitM attack, the consumer thinks the DH secret is g^y'x and the
server thinks it is g^x'y, and where the DH parameters are valid,
these are different. The attacker knows g^x, g^y, x' and y', so when
he receives HMAC key XOR g^x'y from the server he sends HMAC key XOR
g^xy', where g^xy' == g^y'x, which is the consumer's version of the DH
secret. The consumer can thus decrypt enc_mac_key, and both consumer
and server will have the same HMAC key, so no future transactions need
to be MitMed.
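The translation step can also be checked with toy numbers. This is my
own illustration; as the footnote says, real OpenID XORs against
SHA1(DH secret), which I omit here, and these values are far too small
to be secure:

```python
p, g = 23, 5
x, y = 6, 15          # consumer's and server's honest exponents
xp, yp = 3, 7         # attacker's substituted exponents x' and y'

gx, gy = pow(g, x, p), pow(g, y, p)  # honest public values, seen by the MitM

server_secret = pow(pow(g, xp, p), y, p)    # server's view:   g^(x'y)
consumer_secret = pow(pow(g, yp, p), x, p)  # consumer's view: g^(y'x)

hmac_key = 9                            # toy HMAC key chosen by the server
from_server = hmac_key ^ server_secret  # enc_mac_key as sent by the server

# The attacker strips the server's secret and re-encrypts under the
# consumer's, using only values he knows: g^(x'y) == (g^y)^x' and
# g^(xy') == (g^x)^y' == g^(y'x).
recovered = from_server ^ pow(gy, xp, p)
to_consumer = recovered ^ pow(gx, yp, p)

assert recovered == hmac_key                      # MitM learns the HMAC key
assert to_consumer ^ consumer_secret == hmac_key  # consumer decrypts correctly
```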

In the above cases, checking the DH parameters doesn't buy you much
(but wouldn't do any harm). However, I can think of some protocol
changes where it would make an improvement.

For example, suppose the session key derivation process were changed
so that the HMAC key was simply SHA1(DH secret). Then during a MitM,
both ends would get different versions of the HMAC key, and to make
the other transactions work, the attacker would need to MitM these
too. This does increase the difficulty of the attack, since the MitM
needs to be performed at two different times and two different places
in order to be undetectable. This change would break the current way
the Perl module avoids saving per-association state (hashing a secret
and the handle to derive the HMAC key), but a different mechanism
should work, for example encrypting the HMAC key to form the handle.
Implementations which simply store HMAC keys would work as normal.
This is a fairly obscure improvement and there may be better ways to
do it (I am still thinking about this).
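A sketch of this hypothetical derivation (my own illustration, not the
actual OpenID key derivation; the integer-to-bytes encoding here is an
arbitrary choice for demonstration):

```python
import hashlib

def derive_hmac_key(dh_secret: int) -> bytes:
    """Hypothetical derivation: HMAC key = SHA1(DH secret)."""
    return hashlib.sha1(str(dh_secret).encode()).digest()

# Under the x'/y' MitM described earlier, the two ends hold different
# DH secrets (toy values here), so they derive different HMAC keys and
# later transactions fail unless the attacker MitMs those too.
consumer_key = derive_hmac_key(4)   # consumer's g^(y'x), toy value
server_key = derive_hmac_key(18)    # server's g^(x'y), toy value
assert consumer_key != server_key
```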

If signing and/or non-repudiation are brought in, then MitM attacks
will be relevant, so checking for bad DH parameters would be
important in that case.

So in conclusion, I don't know of any significant case where failing
to check for bad DH parameters is currently a vulnerability, but it
does make the system more fragile. Checking for good DH parameters
would not cause that much overhead (the result can be cached) and may
make the protocol more secure if extended. For information on checking
DH parameters, see Chapters 12 and 15 of [2]. Checking g^x can be done
through implementation changes alone, but checking p and g will
probably require another protocol field to store the prime q, where
p = 2q+1. The details are tricky, so looking at other implementations
would also be worthwhile.
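A rough sketch (my own, with an assumed interface; not from any OpenID
library) of the kind of safe-prime check that an extra q field would
enable:

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def check_safe_prime_group(p: int, q: int, g: int) -> bool:
    """Verify p = 2q+1 with p, q prime, and g in the order-q subgroup."""
    if p != 2 * q + 1:
        return False
    if not (is_probable_prime(p) and is_probable_prime(q)):
        return False
    # g must not be 1 or -1 mod p, and must lie in the subgroup of order q
    return 1 < g < p - 1 and pow(g, q, p) == 1

assert check_safe_prime_group(23, 11, 4)       # 23 = 2*11 + 1; 4 has order 11
assert not check_safe_prime_group(23, 11, 22)  # 22 == -1 mod 23, small order
```

In practice real parameters are 1024 bits or more, so the primality
results would need to be cached as suggested above.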

Hope this helps,
Steven Murdoch.

[1] http://archives.seul.org/or/announce/Aug-2005/msg00002.html
[2] "Practical Cryptography" by Niels Ferguson and Bruce Schneier

* It is actually enc_mac_key XOR SHA1(DH secret), but I have omitted
  this to save space.

w: http://www.cl.cam.ac.uk/users/sjm217/