[Dnssec-deployment] cases - was Re: Barbie sez: "Algorithm rollovers are HARD!"

Michael Sinatra michael at rancid.berkeley.edu
Tue Aug 17 09:43:33 EDT 2010


On 08/17/10 06:00, Edward Lewis wrote:
> At 23:07 -0700 8/15/10, Michael Sinatra wrote:
>
>> The problem is that this method can lead to validation failures and zone
>> bogusity if not done exactly right. The reason is the need to accommodate
>> those implementations that enforce (probably correctly) a strict
>> interpretation of the last paragraph of section 2.2 of RFC 4035:
>>
>> "There MUST be an RRSIG for each RRset using at least one DNSKEY of
>> each algorithm in the zone apex DNSKEY RRset. The apex DNSKEY RRset
>> itself MUST be signed by each algorithm appearing in the DS RRset
>> located at the delegating parent (if any)."
>>
>> Here's the wrinkle: Although I can remove all algo 5 DS records in the
>> parent, as long as I have an algo 5 DNSKEY in the DNSKEY RRSet, I MUST
>> continue to sign with algo 5 (using, of course, one of the algo 5 keys
>> in that DNSKEY RRSet). Moreover, clients who have cached the DNSKEY
>> records WILL experience validation failures until TTL+propagation delay
>> after you remove the algo 5 DNSKEYs, *if* you stop signing with those
>> keys when you remove them. I have verified this with such an
>> implementation (unbound 1.4.6, which strictly enforces RFC 4035
>> section 2.2).
>>
>> The irony is that RFC 4035 section 2.2 is designed to prevent algorithm
>> downgrade attacks, but it also makes *upgrading* algorithms more
>> difficult.
>
> I'm having a little trouble coming to understand this as a protocol
> issue, so I gotta ask something. It might be a validation algorithm issue.

I didn't intend to imply that it was a protocol issue, only that it is 
an unintended difficulty arising from the protocol specification's 
desire to avoid downgrade attacks.  But it is useful to think about 
whether and how the protocol might be more accommodating.

> The issue is when a cache receives a set and attempts to validate it.
>
> Here are the cases:
>
>
> The cache has:-->   old-alg-only   both-alg-keys   new-alg-only
> The cache gets-v
>
> Old sig only        I              II              III
> Both sigs           IV             V               VI
> New sig only        VII            VIII            IX
>
> Cases I, II, IV, V, VI, VIII, and IX are no problem, right?

If I am reading your cases correctly (and I may well not be, given the 
earliness of the hour on the US west coast), cases II and VIII should 
also be a problem, since there are keys with algorithms that are not 
represented in the signatures, thereby violating the last paragraph of 
Section 2.2.
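
To make that concrete, here is a rough sketch in Python (purely 
illustrative -- the function and names below are mine, not taken from 
any actual validator) of how a strict reading of section 2.2 sorts 
your nine cases:

# Purely illustrative: models a strict reading of RFC 4035 section 2.2.
# OLD and NEW stand for, e.g., algorithm 5 and whatever replaces it.

def strictly_validates(cached_dnskey_algs, received_rrsig_algs):
    # At least one received signature must match a key the cache holds,
    # or nothing can be verified at all (cases III and VII).
    if not cached_dnskey_algs & received_rrsig_algs:
        return False
    # Every algorithm in the apex DNSKEY RRset must be represented among
    # the signatures (last paragraph of section 2.2; cases II and VIII).
    if not cached_dnskey_algs <= received_rrsig_algs:
        return False
    return True

OLD, NEW = "old", "new"
cases = {                   # (DNSKEY algs the cache has, RRSIG algs it gets)
    "I":    ({OLD},      {OLD}),
    "II":   ({OLD, NEW}, {OLD}),
    "III":  ({NEW},      {OLD}),
    "IV":   ({OLD},      {OLD, NEW}),
    "V":    ({OLD, NEW}, {OLD, NEW}),
    "VI":   ({NEW},      {OLD, NEW}),
    "VII":  ({OLD},      {NEW}),
    "VIII": ({OLD, NEW}, {NEW}),
    "IX":   ({NEW},      {NEW}),
}

for name, (keys, sigs) in cases.items():
    print(name, "ok" if strictly_validates(keys, sigs) else "FAILS")

Under that reading, cases II, III, VII, and VIII all come out as 
failures.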

> Case III - the signature's key is not around anymore. In this case
> validation will fail. If we include RFC 2181's trustworthiness
> calculation, the cache should be aware that it should seek a more
> authoritative version of the set. I am assuming this case only happens
> when the subject cache has learned this from another cache.

Does the cache have any way of knowing that this case is different from 
case VII?  It simply knows that it has a signature whose algorithm 
doesn't match any key in the apex DNSKEY RRset.

> Case VII - the problem is that the cache holds the set with the old key
> and perhaps that has the highest trustworthiness (the authority itself).
> This is the one case we have to avoid - and I believe that through TTL
> management we can.
>
> Is there a problem in one of the other cases?

Since you also have to avoid cases II and VIII, which will produce 
validation failures (at least with unbound--some versions of BIND will 
validate), the administrator of an authoritative zone must carefully 
negotiate the following path through your cases, allowing time between 
each stage for TTL expiration: I -> IV -> V -> VI -> IX.  This is 
different from an ordinary KSK rollover, where the admin doesn't really 
have to worry about these issues as long as s/he correctly manages the 
DS records at the parent.
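
To spell that sequence out, here is another purely illustrative sketch 
(Python again; the choice of algorithm 8 as the new algorithm is only an 
example, and none of the names come from any real tool).  Each stage is 
a published state of the zone, and you wait at least TTL plus 
propagation delay before moving on to the next one:

# Illustrative sketch of the rollover path I -> IV -> V -> VI -> IX.
# Each stage is (algorithms in the published DNSKEY RRset,
#                algorithms the zone's RRsets are signed with).
OLD, NEW = "algo 5", "algo 8"   # algo 8 is just an example of a new algorithm

stages = [
    ("I",  {OLD},      {OLD}),       # starting point: old algorithm only
    ("IV", {OLD},      {OLD, NEW}),  # add new-algorithm signatures first
    ("V",  {OLD, NEW}, {OLD, NEW}),  # then publish the new-algorithm DNSKEYs
    ("VI", {NEW},      {OLD, NEW}),  # withdraw old DNSKEYs, keep both sets of sigs
    ("IX", {NEW},      {NEW}),       # finally drop the old-algorithm signatures
]

for name, keys, sigs in stages:
    # Caches may still hold the previous stage's DNSKEY RRset, so each
    # transition has to wait out the old TTLs before the next change.
    print("stage %s: DNSKEY algs %s, signed with %s; wait TTL + propagation"
          % (name, sorted(keys), sorted(sigs)))

The point is that the new-algorithm signatures go in before the new 
DNSKEYs are published, and the old-algorithm signatures come out only 
after the old DNSKEYs have expired from caches.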

michael

