Progress and some thoughts
meepbear *
meepbear at hotmail.com
Thu Jun 23 02:57:59 PDT 2005
>If you're really paranoid, you can keep a per domain success/fail counter,
>and refuse to accept domains that fail too often. This would still let
>malicious agents supply evil urls, but at least you would not hit them that
>often.
Looking at logs of everything people try to put into forms or query strings,
and reading about exploits, has made me very paranoid :).
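The per-domain success/fail counter suggested above could be sketched roughly like this; all names and the threshold are hypothetical, not part of any OpenID library:

```python
# Sketch of a per-domain success/fail counter: domains whose fetches
# fail too often are refused. MAX_FAILURES is an assumed threshold.
from collections import defaultdict
from urllib.parse import urlparse

MAX_FAILURES = 5

_failures = defaultdict(int)

def domain_allowed(url):
    """Return False once a domain has failed too many times."""
    host = urlparse(url).hostname or ""
    return _failures[host] < MAX_FAILURES

def record_result(url, ok):
    """Reset the counter on success, bump it on failure."""
    host = urlparse(url).hostname or ""
    if ok:
        _failures[host] = 0
    else:
        _failures[host] += 1
```

A real consumer would want to persist and expire these counters rather than keep them in memory, but the shape of the check is the same.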
>If fetching a url can do bad things without any authentication, I don't
>think that's OpenID's fault. You could include X-Forwarded-For headers in
>the consumer, so the administrator of an attacked site could have something
>else to go on.
I put in a User-Agent of "OpenId consumer proxying for {IP} using proxy
{IP}" for now, since I'm not sure X-Forwarded-For really gets logged in all
cases but I know UA strings do. I also add a Referer header when following
openid.server/openid.delegate URLs, though that doesn't really prevent
anything.
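Attaching those headers with the standard library might look like this; the exact header wording and function name are just illustrative:

```python
# Sketch of building a fetch request that identifies the real client
# in the User-Agent and carries a Referer, as described above.
from urllib.request import Request

def build_request(url, client_ip, proxy_ip, referer=None):
    headers = {
        "User-Agent": "OpenId consumer proxying for %s using proxy %s"
                      % (client_ip, proxy_ip),
    }
    if referer:
        headers["Referer"] = referer
    return Request(url, headers=headers)
```

The resulting request can then be handed to urllib.request.urlopen (ideally with a timeout) to do the actual fetch.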
http://www.example.com/myblog/ looks fine but can still 302 anywhere
http://www.example.com/info.php?id=1;%20DELETE%20FROM%20sometablename; is an
obvious SQL injection attempt
http://www.example.com/cgi-bin/formmail.pl?etc is an attempt to exploit
FormMail or some other script
http://someone.someisp.com:1234/ will show up as a TCP port probe on
someone's firewall
With no visible feedback on the result of the fetch, an attacker's options
are limited, but you could still get the consumer to do things it's not
supposed to do. In the end I could avoid all of this by only accepting URLs
from a safe list, but that rather defeats the whole intent of OpenID if I'm
willing to id A but not B.
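For completeness, the safe-list fallback is trivial to express; the list contents here are placeholders:

```python
# Minimal version of the safe-list check mentioned above: only URLs
# whose host appears in an allow-list are fetched at all.
from urllib.parse import urlparse

SAFE_DOMAINS = {"www.example.com"}  # hypothetical allow-list

def on_safe_list(url):
    return urlparse(url).hostname in SAFE_DOMAINS
```

As noted above, this trades away OpenID's decentralization, which is why it is only a last resort.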
>For those running this on bigger sites, it's probably worth mentioning that
>the consumer should likely be expressly prohibited from accessing
>'internal' sites, possibly by placing the consumer machines on a different
>network segment.
Good catch, I hadn't even considered that :).
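Network segmentation is the stronger fix, but the consumer can also refuse 'internal' targets in code by resolving the host and rejecting private, loopback, and link-local addresses. A minimal sketch, assuming Python's standard library:

```python
# Sketch of refusing fetches to internal addresses at the consumer.
# Treats malformed or unresolvable URLs as unsafe.
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal(url):
    host = urlparse(url).hostname
    if host is None:
        return True  # malformed URL: refuse
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable: refuse
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return True
    return False
```

Note this check alone doesn't stop a 302 redirect to an internal address after the first fetch, so it has to be re-applied to every hop the consumer follows.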