
Private Computation Abstraction (MPC and TEEs) #14

Merged · 9 commits into main · Oct 21, 2022

Conversation

@eriktaubeneck (Collaborator)

This pull request attempts to add "private computation" as an abstraction, which can (potentially) be achieved by both MPC and TEEs. This avoids the need to include a specific technology in the standard, and enables web platform vendors (i.e. implementers) to decide which constructions to support.

It also proposes that web platform vendors can choose their specific privacy budget allotment (which is somewhat tied to the ability to support multiple private computation constructions).

@martinthomson (Collaborator) left a comment

Lots of comments here. I'm not happy with the tail end of this, but I'm running out of steam for doing suggestions.

The build-up is fairly solid though. I've some points on framing throughout.

I couldn't identify where to put the DP piece in; that might need some more thought.

(10 review comments on threat-model/readme.md)
@eriktaubeneck (Collaborator, Author)

Thanks Martin, very helpful feedback. Merged a bunch of the suggestions in, and will spend some more time continuing to iterate.

@bmayd commented Oct 18, 2022

Is there value in noting the potential for combining private computation methods, i.e., TEEs and MPC being leveraged for different aspects of a workflow?

@martinthomson (Collaborator)

@bmayd, if we choose something other than TEE, then that technology could be used to fortify the operational security of any alternative. I don't believe that this would be a strict requirement from our perspective as a prospective client of these services, but it is an option that becomes available if we are less than perfectly certain about the operational practices of a potential MPC participant (for example).

@palenica

Nit: this doc talks briefly about privacy budgeting in the privacy section. It also talks about trust and security in the MPC and TEE private computation options, but only in the context of keeping the data and computation private. Would it make sense to explicitly cover how to make privacy budgeting trusted?
Keeping un-replayable privacy budget state tends to require a trusted party (or multiple trusted parties who check on each other) as well as some sort of access control to prevent parties from consuming (or observing) each other's budgets.
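To make the budgeting concern concrete, here is a minimal single-party sketch of the kind of budget ledger being described. The class name, the per-key budget model, and the epsilon values are all illustrative, not part of the proposal; a real deployment would keep this state inside a trusted party (or replicated across mutually checking parties) so queriers cannot reset or replay it, and would add access control so parties cannot consume or observe each other's budgets.

```python
from dataclasses import dataclass, field

@dataclass
class BudgetLedger:
    """Toy privacy budget ledger (illustrative only).

    Tracks epsilon consumed per key; a real system would persist
    this un-replayably inside the trusted computation environment.
    """
    epsilon_total: float
    spent: dict = field(default_factory=dict)  # key -> epsilon consumed so far

    def try_consume(self, key: str, epsilon: float) -> bool:
        """Charge `epsilon` against `key`'s budget, or refuse.

        Refusal charges nothing, so a denied query reveals only
        that the remaining budget is insufficient.
        """
        used = self.spent.get(key, 0.0)
        if used + epsilon > self.epsilon_total:
            return False
        self.spent[key] = used + epsilon
        return True

# Usage: a second query against the same key is refused once the
# allotment would be exceeded; other keys have independent budgets.
ledger = BudgetLedger(epsilon_total=1.0)
ledger.try_consume("site-a", 0.6)   # accepted
ledger.try_consume("site-a", 0.6)   # refused: would exceed 1.0
ledger.try_consume("site-b", 0.6)   # accepted: separate key
```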

@bmayd commented Oct 19, 2022

From the introduction:

In the presence of this adversary, APIs should aim to achieve the following goals:
- Privacy: Clients (and, more specifically, the vendors who distribute the clients) trust that (within the threat models) the API is purpose constrained. That is, all parties learn nothing beyond the intended result (e.g., a differentially private aggregation function computed over the client inputs).
- Correctness: Parties receiving the intended result trust that the protocol is executed correctly. Moreover, the amount that a result can be skewed by malicious input is bounded and known.

I suggest instead of Privacy, we make the first bullet:

Purpose Limitation: User-agents are reasonably assured that the API is purpose constrained such that no party can acquire data outputs other than what is intended and expected by the user-agent, given the inputs it provides.

Add bullets for verifiable input and auditability:

Verifiable Input: Parties using the API are reasonably assured that data provided by user-agents is accurate, reliable and honest.

Auditability: Parties providing data to, or receiving data from, the API can receive reports identifying: when, how, by whom and to whom data was communicated; and when, how and by whom data was processed.

Correctness: Parties receiving the intended result can verify that the protocol is executed correctly and that the amount a result can be skewed, intentionally by adding noise, or by malicious input, is bounded, known and reported.
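As a concrete illustration of the "differentially private aggregation function" and the bounded, intentional noise these goals refer to, here is a minimal Laplace-mechanism sketch. The function name and parameter values are illustrative, not from the proposal; inputs are clipped so the sum's sensitivity is known, which is what makes the skew bound reportable.

```python
import random

def dp_sum(values, epsilon=1.0, clip=1.0):
    """Differentially private sum via the Laplace mechanism (sketch).

    Each input is clipped to [0, clip], so the sensitivity of the
    sum is `clip`; adding Laplace noise with scale clip/epsilon
    then yields epsilon-DP for the clipped sum, and the noise
    magnitude is a known, bounded-in-expectation quantity.
    """
    clipped = [min(max(v, 0.0), clip) for v in values]
    scale = clip / epsilon
    # Difference of two iid exponentials (mean `scale`) is Laplace(0, scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return sum(clipped) + noise
```

Note how clipping also bounds the damage from malicious input: a single dishonest client can shift the result by at most `clip`, which speaks to the Correctness bullet above.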

(2 review comments on threat-model/readme.md)
@martinthomson (Collaborator) left a comment

A solid start. More comments, but I think that you should consider merging and filing some issues for any shortfall. Not long until we next meet and you want to give people time to digest this.

(10 review comments on threat-model/readme.md)
@csharrison (Collaborator) left a comment

Thanks @eriktaubeneck for this writeup. In general it looks good to me, although I have one major comment about the minimum size of coordinator networks.

(11 review comments on threat-model/readme.md)
Adding suggestions from @martinthomson and @csharrison. Thanks both!

Co-authored-by: Charlie Harrison <[email protected]>
Co-authored-by: Martin Thomson <[email protected]>
@eriktaubeneck (Collaborator, Author)

Thank you @martinthomson @bmayd @marianapr @palenica @csharrison for all the feedback. THIS IS STILL A DRAFT, so I'm going to merge it at this point, incorporating that feedback.

We also opened a bunch of new issues, where we can continue to discuss some of these individual points:
#15, #16, #17, #18, #19, #20, #21, #22, #23, #24, #25

@AramZS we might have a whole mess of agenda topics for our next meeting 😜

@eriktaubeneck merged commit f4c7a57 into main on Oct 21, 2022
@eriktaubeneck deleted the private-computation-abstraction branch on October 21, 2022 at 20:10
6 participants