A few quick questions and feedback #2

Open
sondreb opened this issue Feb 23, 2023 · 9 comments

Comments

sondreb commented Feb 23, 2023

What is the rationale for having a patching action? Wouldn't it be enough to just replace the whole DID document on each update?

  • Should relays validate updates to DIDs?
  • Clients either need to trust relays or need to fetch the full history (full replacements or patches) to validate.
  • If a relay "loses" or deletes a DID, stolen keys can be used to create new DID documents that are returned to clients.
  • How would a client decide which relay returns the most valid DID documents?

I guess perhaps the "created_at" could be used to retrieve the DID document at a specific date?

To reduce the problem of stolen keys creating a new DID history, the pubkey SHOULD carry some hint about where clients should resolve (kind 9325) events from, one that is approved/trusted by the user. E.g. I could run my own corporate nostr relay, and all employees would have, in their profile or relay list, some hint that maps back to the "corporate relay" which can't be manipulated.
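
Just to illustrate the kind of hint I mean (the "authority" tag name and its placement are purely my own placeholder, not something from any NIP), it could be as simple as a tag on the profile event:

```typescript
// Purely illustrative: an "authority relay" hint on the user's kind 0 profile
// event. The "authority" tag name is a placeholder, not an existing NIP tag.
const profileEvent = {
  kind: 0,
  pubkey: '<user pubkey hex>',
  created_at: 1677150000,
  content: JSON.stringify({ name: 'alice' }),
  tags: [
    // Relays the owner trusts to serve their kind 9325 DID events.
    ['authority', 'wss://relay.corp.example'],
    ['authority', 'wss://relay2.corp.example'],
  ],
};
```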

csuwildcat commented Feb 23, 2023

I think these are great questions, most of which are rooted in the inherent limitations/issues of using an ephemeral trusted-node floodsub system for reliable, secure decentralized identifiers and PKI lineage. Patching or not won't alleviate these issues, so it's great people are noting them, because the contrast with reliable, secure identifier methods will be easier to spot the more people examine the fundamental requirements.

Couple things I'll note related to the points you raised:

  • Any key history compromise at any point destroys the security of your ID, because the network cannot reliably deal with branched states in the absence of a decentralized sequencing oracle.

  • Use of a centralized oracle / corporate trusted node defeats the entire purpose, because at that point the trusted node operator can pwn your DID at will by denying resolution or replaying an inaccurate Tip - N past state.

Decentralized identifier systems are hard, and hopefully this method can help folks understand the problems you alluded to.

mistermoe (Member) commented Feb 24, 2023

@sondreb

What is the rationale for having a patching action? Wouldn't it be enough to just replace the whole DID document on each update?

Good question. Are you suggesting a new event with a full DID doc every time a change is made, or are you referring to replaceable events?

Afaict, replaceable events (described in NIP-16) require that the event be signed with the same pubkey, which means there would need to be a new event any time a key was rolled.

Regarding a new event with the full DID doc for every change, I'd be interested to hear what others think. Imo, the trade-off is chunkier events in exchange for potentially being able to grab just the latest event, but you'd still need to get the remaining events if a key was rolled in order to guarantee integrity.
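
To make that trade-off a bit more concrete, the two shapes could look roughly like this (a sketch only: the kind number 9325 comes from this thread, the content layout is just an assumption on my part):

```typescript
// Sketch only: contrasting a full-document update with a patch-style update.
// Kind 9325 is from this thread; the content layout is assumed, not spec'd.
const fullDocUpdate = {
  kind: 9325,
  content: JSON.stringify({
    op: 'replace',
    // Entire DID document in every event: chunkier, but the latest event is
    // self-contained (until a key roll forces you to walk the chain anyway).
    didDocument: { id: 'did:nostr:<pubkey>', verificationMethod: [] },
  }),
};

const patchUpdate = {
  kind: 9325,
  content: JSON.stringify({
    op: 'patch',
    // JSON Patch (RFC 6902) style delta: smaller events, but a resolver must
    // replay the whole chain to reconstruct the current document.
    patches: [
      { op: 'add', path: '/service/0', value: { type: 'Relay', serviceEndpoint: 'wss://relay.example' } },
    ],
  }),
};
```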

If a relay "loses" or deletes a DID, stolen keys can be used to create new DID documents that are returned to clients.

how does a relay deleting DID events imply that keys are stolen?

To reduce the problem of stolen keys creating a new DID history, the pubkey SHOULD carry some hint about where clients should resolve (kind 9325) events from, one that is approved/trusted by the user. E.g. I could run my own corporate nostr relay, and all employees would have, in their profile or relay list, some hint that maps back to the "corporate relay" which can't be manipulated.

DID history does not have any private keys in it.

mistermoe (Member) commented Feb 24, 2023

@csuwildcat

Any key history compromise at any point destroys the security of your ID

is this not true of every DID method? if your recovery key is compromised, isn't it simply a race of who rolls first? seems like that applies to any did method. not just what's proposed here

mistermoe (Member) commented Feb 24, 2023

@sondreb

How would a client decide which relay returns the most valid DID Documents?

can you provide more context around what you mean by "most valid"? each event can be independently verified by any client using sig and pubkey
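
For reference, here's roughly what that independent check looks like per NIP-01 (sketch only; the library choice is mine, not anything from this repo):

```typescript
// Sketch of independent event verification per NIP-01: recompute the id from
// the serialized event and check the BIP-340 signature against the pubkey.
import { schnorr } from '@noble/curves/secp256k1';
import { sha256 } from '@noble/hashes/sha256';
import { bytesToHex, utf8ToBytes } from '@noble/hashes/utils';

interface NostrEvent {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
}

function verifyEvent(ev: NostrEvent): boolean {
  const serialized = JSON.stringify([0, ev.pubkey, ev.created_at, ev.kind, ev.tags, ev.content]);
  const id = bytesToHex(sha256(utf8ToBytes(serialized)));
  if (id !== ev.id) return false;                  // id must match the event contents
  return schnorr.verify(ev.sig, ev.id, ev.pubkey); // sig must be valid for this pubkey
}
```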

mistermoe (Member) commented Feb 24, 2023

@csuwildcat

in ION, for example, your root entropy never needs to be floating hot on devices

i might be missing something here, but can you point out exactly where/how you're inferring the need to have root entropy floating around hot on devices that is specific to this thought experiment? key management is entirely orthogonal.

mistermoe (Member) commented Feb 24, 2023

sry not sure how that relates to #2 (comment). looking to address the first claim you made (floating root entropy) before moving on to the rest of your comment

so, can you point out exactly where/how you're inferring the need to have root entropy floating around hot on devices? genuinely curious where i alluded to that in what's been scribbled so far

mistermoe (Member) commented Feb 24, 2023

Presumably, anyone using the nostr derivation path described in NIP-06 (or just an HD wallet in general) would derive the key they're using. Can't imagine anyone concerned with security is typing their mnemonic into every form field they can find on the internet.
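
For example, something along these lines, so only the derived child key ever touches an app (sketch; library choice and the default account index are my own assumptions):

```typescript
// Sketch of NIP-06-style derivation (m/44'/1237'/<account>'/0/0); library
// choice and the default account index are assumptions, not from this repo.
import { mnemonicToSeedSync } from '@scure/bip39';
import { HDKey } from '@scure/bip32';
import { bytesToHex } from '@noble/hashes/utils';

function deriveNostrKey(mnemonic: string, account = 0): string {
  const seed = mnemonicToSeedSync(mnemonic);
  const child = HDKey.fromMasterSeed(seed).derive(`m/44'/1237'/${account}'/0/0`);
  if (!child.privateKey) throw new Error('derivation failed');
  return bytesToHex(child.privateKey); // the key the app signs with; the mnemonic stays in the wallet
}
```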

Ok, onto your other points.

sondreb (Author) commented Feb 27, 2023

DID history does not have any private keys in it.

The idea was to have a hint about where to resolve the DID Document on the nostr protocol. It should be "forced", so that the owner of an identity tells clients "my DID authority is these relays". This would mitigate many potential attacks: if my primary key is lost, nobody can re-create a different history on a different relay - unless, of course, they replace the event that holds the hint (link) to my authority relays... It does make key rotation easier, but it won't mitigate a lost key that is being abused.

is this not true of every DID method? if your recovery key is compromised, isn't it simply a race of who rolls first? seems like that applies to any did method. not just what's proposed here

If the recovery key is gone there will be trouble, but I think the primary concern is leakage of the primary (actively used) key. That's the scenario that will happen for most users: their keys get leaked through clients. If the solution supports a recovery key, then a private key in the wild can be mitigated.

can you provide more context around what you mean by "most valid"? each event can be independently verified by any client using sig and pubkey

The problem scenario I gave was: if the primary key is leaked, someone can create a fake DID operation history on a relay that I didn't use before. And if someone controls a relay, they can fake a history that extends beyond my own history, and then nobody will be able to actually verify which one is correct.

Recovery key solves that though.

@csuwildcat also elaborates on this well above.

At minimum, the "Nostr DID protocol" should deconstruct to a basic DID Document similar to did:key, but I think that can already be done using the nostr key with the did:key scheme. That's one use case that will be useful for many users: being able to easily log in to Web5 services using their existing nostr key.
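
Something like this is what I have in mind for the basic deconstruction (illustrative only; the did:nostr method name and the verification method type are assumptions, not a spec):

```typescript
// Illustrative only: a minimal DID document derived purely from a nostr
// pubkey, in the spirit of did:key. Method name and key type are assumed.
function didDocFromNostrPubkey(pubkeyHex: string) {
  const did = `did:nostr:${pubkeyHex}`;
  return {
    '@context': 'https://www.w3.org/ns/did/v1',
    id: did,
    verificationMethod: [{
      id: `${did}#key-0`,
      type: 'SchnorrSecp256k1VerificationKey2019', // assumed type name
      controller: did,
      publicKeyHex: pubkeyHex,
    }],
    authentication: [`${did}#key-0`],
  };
}
```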

It's interesting work, and I think it just needs to be accepted that it comes with reduced security; you truly need something like ION, attested on Bitcoin or something similar, to get the best possible security. Even with ION, from web clients you still need a certain degree of trust, as most users are relying on third parties to host and serve the DID/Bitcoin data.

I think there must be some special event that clients can observe when keys are leaked, or maybe it will be fine if a single JSON document retrieved from relays is enough to perform public key validation/lookup.

Take NIP-05 as an example: in the client I work on, I only perform NIP-05 verification when an individual profile is opened, not when rendering a thread or large lists of users. Perhaps there could be a solution that works similarly to NIP-05, where the profile holds a specific DID that clients can look up and verify. And then I think clients MUST begin to pin trust on the profile, meaning that if an updated kind 0 (metadata) event is received for user X, the client should do more than simply validate the signature: it should look up the latest DID Document and perform more rigid validation before it actually approves the replacement locally. Just thinking out loud like I always do :-)
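
Roughly what I'm picturing for that kind 0 check (sketch only; both helpers are hypothetical, the point is just the order of checks):

```typescript
// Thinking-out-loud sketch of "pin trust on the profile": a kind 0 replacement
// is accepted only if the signature checks out AND the signing key is still
// authorized by the latest DID Document. Both helpers are hypothetical.
type MetadataEvent = { kind: number; pubkey: string };

async function acceptMetadataUpdate(
  incoming: MetadataEvent,
  verifySig: (ev: MetadataEvent) => boolean,
  resolveDidKeys: (pubkey: string) => Promise<string[]>, // keys the latest DID doc authorizes
): Promise<boolean> {
  if (incoming.kind !== 0) return false;              // only gate kind 0 (metadata) replacements
  if (!verifySig(incoming)) return false;             // 1. signature check, as clients already do
  const keys = await resolveDidKeys(incoming.pubkey); // 2. look up the latest DID Document
  return keys.includes(incoming.pubkey);              // 3. the signing key must still be authorized
}
```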

csuwildcat commented Feb 27, 2023

DID history does not have any private keys in it.

I wasn't suggesting that; I was saying that if any past private key is leaked, even many rotations/years later, the DID is fully compromised - you must abandon it. This isn't possible if your lineage is immutably locked into a linear sequencing oracle that the attacker with the long-dead key can't branch/modify.

The idea was to have a hint about where to resolve the DID Document on the nostr protocol. It should be "forced", so that the owner of an identity tells clients "my DID authority is these relays". This would mitigate many potential attacks: if my primary key is lost, nobody can re-create a different history on a different relay - unless, of course, they replace the event that holds the hint (link) to my authority relays... It does make key rotation easier, but it won't mitigate a lost key that is being abused.

I think you should all deeply ponder the ramifications of this statement: "The owner of an identity tells clients that my DID authority is these relays" <-- thar be dragons of great size and ferocity. The first big problem is that you just turned this into a trusted-authority model where your DID state can be manipulated by those trusted authorities (e.g. returning a Tip - N state that looks legit but is from the past). Beyond that, rotation of that set of trusted authorities over time introduces issues of its own. Game theory is likely to be a harsh mistress to trusted-node schemes like these in practice, as they tend to devolve into consolidated operator silos, which results in users picking just one or two trusted nodes (not much different than OIDC trusted provider IDs in the end). If you like this model, I'll note it is trending towards basically cloning KERI's existing DID scheme, so you might check that out.

is this not true of every DID method? if your recovery key is compromised, isn't it simply a race of who rolls first? seems like that applies to any did method. not just what's proposed here

If the recovery key is gone there will be trouble, but I think the primary concern is leakage of the primary (actively used) key. That's the scenario that will happen for most users: their keys get leaked through clients. If the solution supports a recovery key, then a private key in the wild can be mitigated.

The recovery key should be rotatable too, and it can suffer the same issues highlighted in other areas of these replies: if a recovery key rotated off long ago is ever found, exposed, or broken for numerous reasons, the DID is fully compromised. This doesn't occur in systems like ION, where you can laugh at an attacker who gains access to any past keys, because good luck reorging Bitcoin to change its history.

can you provide more context around what you mean by "most valid"? each event can be independently verified by any client using sig and pubkey

"Most valid": I am an attacker who gained access to a key rotated off long ago and present you with a branched state. It'll appear valid, but you'll have to decide which is truly valid.

Recovery key solves that though.

In a perfect world where no recovery keys rolled long ago are ever exposed, no crypto library vulnerabilities are ever found, and QC is never able to break EC, you may be OK, but if any of those happen, all these IDs are instantly, irrevocably vaporized. Not so with more robust DID constructions, because even those powerful adversaries cannot change Bitcoin's txn history, in which a DID's lineage sequence is anchored.
