[Dnssec-deployment] cases - was Re: Barbie sez: "Algorithm rollovers are HARD!"

Michael Sinatra michael at rancid.berkeley.edu
Tue Aug 17 11:13:41 EDT 2010


On 08/17/10 07:17, Edward Lewis wrote:
> At 6:43 -0700 8/17/10, Michael Sinatra wrote:
>
>> I didn't intend to imply that it was a protocol issue, only an unintended
>> difficulty caused by the protocol specification's desire to avoid
>> downgrade attacks. But it is useful to think about if/how the protocol
>> might be more accommodating.
>
> It's my job to make sure the protocol is right... ;) so if it sounds
> like it might be a protocol problem I look at it closely.

That's why I want to be careful in how much blame I ascribe to the 
protocol specification.  I understand why it's there and I do want it to 
do the right thing, but I also want operators and deployers to 
understand that it has a consequence in this case.

[snip]

> I see.
>
> The problem is that the specification is not conveying the right
> message. (I know because I was heavily into the crafting of the
> requirement.)
>
> The motivation for the specification statement is to avoid a downgrade
> vulnerability. If it were optional to generate a signature, then it
> would be easy to fool a validator into dropping its checking procedure.
> By requiring a signer to generate a signature for each algorithm, a
> stripped signature can never lead a validator to assume that the data
> was intentionally unsigned. That is, a validator should
> assume a record, in a secure subtree, is signed unless proven otherwise.
>
> The reason one of "every algorithm" is mentioned is to cover the case
> where some population of validators did not implement a particular
> cryptographic algorithm. If a validator saw the DS set and validated it,
> the validator could then flip through the DS records and determine if
> there was any key whose algorithm was understood. It's possible that a
> zone has a DS record but a validator would proceed as if the zone was
> not signed because the signatures would be unparseable.
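
(If I'm reading that right, the check amounts to something like the
following -- a quick Python sketch; the names and the particular set of
supported algorithms are mine, not from any implementation:)

SUPPORTED_ALGORITHMS = {5, 8, 10}  # e.g. RSASHA1, RSA/SHA-256, RSA/SHA-512

def any_ds_algorithm_understood(ds_rrset):
    # If no DS record carries an algorithm this validator implements, it
    # proceeds as if the zone were unsigned rather than marking it bogus.
    return any(ds.algorithm in SUPPORTED_ALGORITHMS for ds in ds_rrset)
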
>
> The problem with the specification language is that I didn't give a
> thought to a mismatch of an old key set and a new data set (or vice
> versa). This can only happen in a cache. The requirement was written in
> the context of the signer, which always has access to the current key
> and data sets.
>
> What should be happening at the validator is that if it can validate the
> set with an available signature and a key on hand, that is sufficient.
> A validator should not worry about what might be other stripped
> signatures. A validator should not consider cases II and VIII as failed
> states - it is not INTENDED that a validator should check for other
> algorithms.
>
> I've said before that DNSSEC has to be loose in its judgement. The DNS
> is loose. If DNSSEC gets too pedantic, then we crack the DNS.
> Validators have to establish "some way" of validating data and if any
> one path works, cling to it. DNSSEC isn't supposed to make sure every
> nook and cranny is clean, just that the data is verifiable.
>
> I've also said that DNSSEC is about the protection of the cache, not the
> authority; the authority is merely offering up ancillary attestations
> that a validator can use to establish confidence in the data. In this
> case, the signer is told to generate all the signatures because you
> don't know which one the validator will pick to use; the validator knows
> that there should be at least one signature it can use if the set is
> signed.
>
> DNSSEC is supposed to be liberal in accepting data that has a shred of a
> trust chain. DNSSEC isn't there to deny based on "technicalities."
>
> Cases II and VIII, the cases where a cache has keys of several
> algorithms for a zone yet sees just one usable signature, should result
> in a thumbs-up from validation.
>
> Implementors may feel that's too loose, but it was what was meant in the
> design of the protocol extensions. To clean up the specification, it
> should be emphasized that although the signer is supposed to supply one
> signature of every algorithm, the validator only needs one working
> signature to okay the data.
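
If that is the intended validator behavior, then the rule is basically the
following (a hedged Python sketch of my reading, not of any particular
implementation; verify stands in for whatever crypto routine the resolver
actually uses):

def validate_rrset(rrset, rrsigs, dnskeys, verify):
    # Accept the data if ANY (RRSIG, DNSKEY) pair checks out; do NOT demand
    # a verifiable signature for every algorithm seen in the DNSKEY/DS sets.
    for rrsig in rrsigs:
        for key in dnskeys:
            if key.algorithm == rrsig.algorithm and key.key_tag == rrsig.key_tag:
                if verify(rrset, rrsig, key):
                    return "SECURE"
    # Only when no signature at all verifies is the set bogus; cases II and
    # VIII never get here as long as one algorithm's chain works.
    return "BOGUS"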

Would it be reasonable for the cache, in a situation where it saw an 
algorithm mismatch between a cached DNSKEY and a newly-fetched RRSIG, to 
re-fetch (ONCE only) the DNSKEY and then declare a validation failure if 
it still didn't match?  (And vice-versa: If a newly-fetched DNSKEY RRSet 
didn't match a cached RRSIG, then it would re-fetch the RRSIG.)  Is that 
too much work for the validator?  And you'd have to be careful not to get 
into a fetching loop, like Microsoft once did with lame delegations in 
their implementation...
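
Something like this is what I have in mind, building on the sketch above
(again just a sketch, and it only shows the DNSKEY re-fetch direction; the
RRSIG direction would be the mirror image):

def validate_with_one_refetch(rrset, rrsigs, cached_dnskeys, fetch_dnskeys,
                              verify):
    result = validate_rrset(rrset, rrsigs, cached_dnskeys, verify)
    if result == "BOGUS":
        # Exactly one re-fetch of the DNSKEY RRset, in case the cached copy
        # straddles an algorithm rollover.  No retry loop -- that is the
        # Microsoft-and-lame-delegations failure mode.
        result = validate_rrset(rrset, rrsigs, fetch_dnskeys(), verify)
    return result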

It seems the issue is, how does the admin of the authoritative zone 
signal what algorithms are to be used for signing?  Currently, it is the 
presence of the key in the DNSKEY RRset.  It also could be the presence 
of the DS record(s) in the parent and/or trust anchor in the cache.  But 
that is not currently the case--even if my only trust anchor/DS record 
is algorithm 10, the presence of an algorithm 5 DNSKEY in the zone apex 
means there must be both algorithm 5 and 10 RRSIGs.
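
Put another way, the only signal the signer and validator have today is
something like this (sketching it in the same vein; names are mine):

def required_signing_algorithms(dnskey_rrset):
    # Today the signal is simply the set of algorithms present in the apex
    # DNSKEY RRset, regardless of what the DS records or trust anchors say.
    return {key.algorithm for key in dnskey_rrset}

# An apex with algorithm-5 and algorithm-10 keys yields {5, 10}: the signer
# must produce both algorithm 5 and algorithm 10 RRSIGs for every RRset,
# even if the only trust anchor is algorithm 10.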

A third way to signal is by setting some (currently nonexistent and 
unspecified) flags in the DNSKEY records, which would mean more protocol 
changes.  The idea is that the admin can signal that s/he no longer 
wants the key/algorithm to be a requirement for validation, which would 
allow for more graceful algorithm rollovers.  But that's a lot of work.

Assuming we're stuck with DNSKEY, the administrator has to take into 
account cached DNSKEY and RRSIG sets, and/or set their TTLs to 
ridiculously low levels to gracefully roll the algorithm.  But if the 
caches could do a little more work to try to better determine what the 
administrator is trying to do (by some careful refetching), then we may 
be able to avoid cases II and VIII.
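
For what it's worth, the arithmetic the administrator ends up doing looks
roughly like this (my own back-of-the-envelope numbers, purely
illustrative):

# How long the old algorithm must linger after the new one is fully
# published, so that no cache can hold an old DNSKEY RRset alongside
# new-only RRSIGs (or the reverse).
dnskey_ttl      = 3600     # TTL of the apex DNSKEY RRset, seconds
max_rrsig_ttl   = 86400    # largest TTL of any signed RRset in the zone
propagation_lag = 7200     # worst case for all authoritatives to update

safe_wait = propagation_lag + max(dnskey_ttl, max_rrsig_ttl)
print(safe_wait)           # 93600 seconds -- a bit over a day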

michael

