
Potential attack through error-correcting codes #1

Open
appascoe opened this issue Nov 18, 2020 · 15 comments

@appascoe
Contributor

appascoe commented Nov 18, 2020

Hey, @michaelkleber,

We just had a few minutes to chat yesterday during the IWABG call, and I want to make sure your critique of MURRE gets the attention it deserves, so I'm opening this issue. I also want to make sure I understand the critique to its fullest.

Correct me if I'm wrong, but this is how I interpreted your objections: error-correcting codes (ECCs) can be used to effectively eliminate the noise introduced by MURRE through the clever construction of strings to be hashed, such that no additional privacy is gained. This holds for any level of noise, and it would push epsilon to infinity.

I don't think I expressed my rebuttal well during the meeting. I have a few points to make. MURRE has two parameters, p and M, to control the overall privacy level.

Let's start with p. It seems to me that if I were to set p = 0 (flip a fair coin for every dimension), the resultant vector would be completely random and thus privacy preserving. It's unclear to me how any ECC could be used to recover the original signal in a scenario like this. Now, this is a very extreme example, but it indicates to me that MURRE is indeed a privacy-preserving mechanism. We may disagree on what an appropriate value of p should be, and the higher the p, the more likely an ECC is to succeed, but the MURRE mechanism at least has the facility to mitigate these attacks in principle.

Then there's M. To take another extreme case, set M = 1. We can interpret this single dimension as just "was there at least one string to hash or not." Note that even in this situation, if we set p = 1 (always tell the truth), we calculate an infinite epsilon. So while the metric itself seems to indicate that "there's no privacy at all," nothing of true import is actually revealed. This is why the MURRE document goes to some length discussing that values of epsilon must be interpreted in the context of what the underlying dataset represents. High epsilons are not inherently bad.

That being said, I concede that the MURRE mechanism can be used to track individuals, just not at scale. For example, in the M = 1, p = 1 case, I could decide that for exactly one user, I will provide a string, and for all other users nothing. When I look through my dataset, if I see any 1s, I know that this user is in the dataset, which, under the strict definitions of differential privacy, results in an infinite epsilon. There's simply no plausible deniability.

But let's not consider such an edge case. Before publication, internally at NextRoll, @polvorin suggested the following attack: the extractor detects that an impression was served on publisher.example and adds the string "site:publisher.example" to the set. But it can also generate "site:publisher.example+", "site:publisher.example++", etc., to rip apart the independence assumption (which, I grant, is a major assumption). In effect, the DSP is using the vector as a Bloom filter. I grant that, without sufficient noise, a DSP could, with very high probability, discover that a particular training example in the dataset comes from publisher.example.
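
For concreteness, here is a minimal sketch of that dependent-string trick. The hash function (SHA-256), the dimensionality M, and the number of redundant variants are assumptions I chose for illustration, not values from the MURRE spec:

```python
import hashlib

M = 2**20  # assumed dimensionality of the hashed feature vector

def dimension(feature_string: str) -> int:
    """Hash a feature string into one of M dimensions (SHA-256 used for illustration)."""
    digest = hashlib.sha256(feature_string.encode()).digest()
    return int.from_bytes(digest[:8], "big") % M

# The extractor emits the honest feature plus redundant variants of it.
redundant = ["site:publisher.example" + "+" * i for i in range(4)]
positions = [dimension(s) for s in redundant]
print(positions)
# If most of these dimensions are set in a reported vector, the DSP can infer,
# despite per-dimension noise, that the example came from publisher.example.
```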

However, given the standard cardinalities of features we see in ad tech, we think this is unlikely to be an issue at scale, for a few reasons:

  1. The DSP could attempt to construct a very sophisticated extractor in order to guarantee that the strings it produces get hashed to desirable dimensions. I strongly suspect that this would be a giant extractor that the browser could refuse to download, or, if it attempted to run it, would likely time out anyway. This is effectively equivalent to inverting a hash function.

  2. For DSPs to recover the strings that went into the hash, they could, offline, pump the strings they would expect to see through the hash function (including the Bloom filter attack) and construct a lookup table to probabilistically deanonymize the feature vector. This is also effectively inverting a hash function and, in many ways, is in the same spirit as Dovekey. However, we have analyzed the Dovekey mechanism and find it to be intractable.

  3. The addition of noise complicates matters. Supposing that a DSP did something like the Bloom filter attack with four hashes, adding 256 more random dimensions results in a 260-choose-4 ~= 186 million space of possible items that could have resulted in this vector (leaving out any other potential features that also would have been hashed in). Now, of course, there may be no other feature strings that would hash into this set. But in order to know this, the DSP needs to compute something akin to 2).

  4. It depends on the dimension M, of course, but hash collisions in and of themselves provide some privacy protection, not unlike some k-anonymity. Suppose that the DSP just produces strings of user IDs to hash, say on the order of 2^32 ~= 4 billion IDs. The standard approximation for the false positive rate of a Bloom filter is (1 - exp(-kn/M))^k, with an optimal number of hashes k of (M log 2) / n. Note that because the sizes of M we're talking about are much smaller than the number of insertions (the maximum we tested was M = 2^27), the optimal number of hashes is 1. So we'd expect a false positive rate of simply (1 - exp(-2^32 / 2^27)) = 1 - exp(-32) = 99.99999999999873% (pretty sure there's some floating point fuzz in there at this point); a quick numeric check is sketched after this list. (Of course, we'd expect that there would be on average 32 users in each bucket, which is not far off the level of k-anonymity we've discussed in the past.) In a way, MURRE is almost self-protecting when a DSP attempts to use the mechanism to track users granularly at scale.
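
The numeric check mentioned in point 4, using only the numbers quoted above (n = 2^32 inserted ID strings, M = 2^27 dimensions) and the standard Bloom filter approximations:

```python
import math

n = 2**32   # inserted user-ID strings
M = 2**27   # vector dimensionality (bits)

k_opt = max(1, round((M / n) * math.log(2)))  # optimal k would round to 0; use at least 1
fp = (1 - math.exp(-k_opt * n / M)) ** k_opt  # standard false-positive approximation
print(k_opt)          # -> 1
print(f"{fp:.14%}")   # -> matches the ~99.99999999999873% above
print(n / M)          # -> 32.0 expected insertions per dimension
```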

All this being said, yes, the edge case of identifying a particular user is real. Differential privacy measures the worst-case scenario. That's all well and good and makes sense under a strict mathematical definition. I think, however, that the ad tech use case should be more concerned about the properties of the average case; DSPs are not particularly interested in tracking very specific individuals. Even if they were, there are cheaper, easier techniques for doing so than mounting an attack on the MURRE mechanism.

Thwarting mass tracking of users at scale, I think, is the more pressing problem facing ad tech today. I believe MURRE has some legs here in making gains toward this goal while preserving the ML optimization that props up value.

@bartek-siudeja

I think the self-protecting property also applies to the no-collisions setup with a big M, at least assuming the Bloom filter attack is not practical. Having a hash that does not repeat much is the same as having a rare, randomly added hash. Unless you know each ID exactly, you cannot distinguish a dimension that showed up 2^10 times for a reason from one added randomly in a setup where p ~ 1 - 2^10 / data_size (this number is really about 1, and probably wrong, but the point is that you would need to truly one-hot encode all IDs). One can fight this with the "Bloom filter attack" (dependent dimensions), but it would have to be applied to all IDs, not just one, and that means way more fill on the whole hash space, so hashes repeat a lot more.

@michaelkleber

Hi Andrew. Sure, I think we can focus on the Bloom-filter-esque technique that @polvorin suggested. The simplest error-correcting code is just "send the same bit T times in a row", and "site:publisher.example++"-etc is a version of this.

To maliciously track users at scale in a MURRE world, there's no need to incorporate the information "this ML signal happened on domain foo.com" into the trail itself — that's wasteful. Instead it's sufficient to send the ML signals directly to my 3p server, along with a per-domain user ID, e.g. stored in a first-party cookie on the site where the signal event took place. The trail's job would be to provide enough information to join up that per-domain user ID on all the different domains. (Once that ID-joining is successful, the malicious party can do all the machine learning in the world, on plaintext data — and all the user profiling in the world too, of course.)

So let's say that on each domain I visit, I'm assigned a 64-bit random number. The malicious use of MURRE is to learn all the different random numbers that go with the same browser.

Let's say when I'm on foo.com, I get the 64-bit ID that begins "110....". Pick any old hash function and take hash("foo.com") mod 1000, let's say the result is 123. Then the 64 strings of information to maliciously communicate via MURRE are "on-domainhash-123-the-ID-bit-one-is-1", "on-domainhash-123-the-ID-bit-two-is-1", "on-domainhash-123-the-ID-bit-three-is-0", etc.

When the server receives an encoded trail, it only needs a small lookup table to decode this: just the 1000 × 64 × 2 = 128,000 hashes of the strings "on-domainhash-[000-to-999]-the-ID-bit-[one-to-sixtyfour]-is-[0 or 1]". (The thousand different domainhash values are just to probabilistically keep the bits from foo.com and bar.com from clobbering each other.) This isn't about inverting a hash function, this is just about looking up words drawn from a small codebook.
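
A rough sketch of that codebook, to make the size concrete; the hash function, the value of M, and the exact string format are stand-ins chosen for illustration:

```python
import hashlib

M = 2**20  # assumed MURRE vector dimensionality

def dimension(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big") % M

# Precompute the full codebook once, offline: 1000 * 64 * 2 = 128,000 entries.
codebook = {}
for domainhash in range(1000):
    for bit in range(64):
        for value in (0, 1):
            s = f"on-domainhash-{domainhash:03d}-the-ID-bit-{bit:02d}-is-{value}"
            codebook[dimension(s)] = (domainhash, bit, value)

def decode(reported_dimensions):
    """Map the set dimensions of a reported vector back to (domainhash, bit, value) claims."""
    return [codebook[d] for d in reported_dimensions if d in codebook]
```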

Of course the bit flipping means some bits wouldn't get transmitted correctly due to noise. That's where error correction is needed. To keep it simple, sure, just encode each string redundantly 100 times. If you flip bits with probability p=1/3, the chance that the majority-of-100 is the true value is 99.93%. Of course you can crank up p or clamp down on M, but the channel capacity for this ID-joining attack is just not very large at all.
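
The repetition-code figure is easy to sanity-check with nothing but the standard library; p here is the flip probability, as in the paragraph above:

```python
from math import comb

def majority_correct(p_flip: float, copies: int = 100) -> float:
    """Probability that a strict majority of the copies arrive unflipped."""
    p_keep = 1 - p_flip
    return sum(comb(copies, k) * p_keep**k * p_flip**(copies - k)
               for k in range(copies // 2 + 1, copies + 1))

print(f"{majority_correct(1/3):.4%}")  # ~99.9% for p = 1/3 and 100 copies
```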

@appascoe
Contributor Author

We had some long discussions at NextRoll about this. I think we're still gathering our thoughts for a more thorough response, but here's a quick brief of our initial thoughts.

This attack appears credible, notwithstanding the configuration of p and M (the values that would blunt it may not be useful for ML). But the attack is also interesting for some other reasons:

  1. Using this technique precludes the ability to do sophisticated ML at ad tech scales. A DSP could implement this, but, in a way, it would be cutting off its nose to spite its face. As discussed, the cardinalities of features in ad tech are so large that the collisions would start to destroy the information contained in the ID. If this is the only mechanism to receive granular, robust training data, the DSP can track at the expense of running ineffectual campaigns and wasting large sums of money.

  2. The bucketing of domains you propose itself provides some level of anonymity. A thousand hashes for millions of domains does supply some meaningful plausible deniability that a particular domain was actually visited. Of course, you can increase the domain hash space, but as you point out, this will also result in more collisions in the feature vector, which would cause issues for the identification mechanism. Perhaps some further analysis here is warranted to see what size of hash space is problematic under a given M.

  3. This is not, in particular, an attack on MURRE, but rather an attack on the notion that DSPs have the capability of defining their own ML features and receiving those for a granular dataset. We need to think about this more deeply, but at least my first impression is that, without some drastic obfuscation, a malicious actor could likely construct features to encode whatever they want. Drastic obfuscation seems unlikely to work for a couple of reasons: a) degradation of the ML model performance, and/or b) inability to perform inference unless the (necessarily deterministic) obfuscation mechanism is provided to the DSP; providing that mechanism to the DSP allows for some type of codebook attack. This point is probably our biggest concern because, as previously stated many times, the efficacy of ML is really what keeps publisher revenues up.

  4. This attack, in some ways, is also an attack on differential privacy for ad tech in general. Usually the assumption in differential privacy is that the database is representative of the data to be used for a specific purpose or analysis, and that users provide their entries to the database themselves. As mentioned in point 1, this attack skirts the use of ML in favor of identification. Pretty much everything we've discussed for differential privacy in ad tech revolves around the browser, running some provided code, submitting entries to the database; i.e., the user isn't actually in control of the response. This can result in a requirement that the differential privacy mechanism be more stringent in its epsilon than what would otherwise be acceptable for the intended use case.

For example, similar types of attacks could be lodged against the Aggregate Reporting API. As a sketch of an idea, suppose that an attacker just absolutely doesn't care about getting a spend number out. They assign to a set of users flat spend reporting numbers in the set {1, 2, 4, 8, 16...} (a superincreasing knapsack). Any sum of these spends perfectly encodes which users were shown an impression. To deal with mean zero noise, the attacker duplicates reports for the publisher under names such as publisher.example, publisher.example+, publisher.example++, etc. (Of course, these names can be more obfuscated than this to evade detection.) The attacker then reads the spend reports for all of these, takes the mean to reduce noise, rounds to the nearest integer, and can be sure that a certain user was shown an ad on the given publisher.
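
A small simulation of this sketch, with made-up users, a Gaussian stand-in for the mean-zero noise, and arbitrary parameters chosen only for illustration:

```python
import random

users = ["u0", "u1", "u2", "u3", "u4"]
spend = {u: 2**i for i, u in enumerate(users)}   # 1, 2, 4, 8, 16 (superincreasing)
shown = {"u1", "u3"}                             # ground truth the attacker wants to learn

true_sum = sum(spend[u] for u in shown)          # 2 + 8 = 10

# Duplicate the "publisher" under many names and average the noisy spend reports.
reports = [true_sum + random.gauss(0, 4.0) for _ in range(200)]
recovered = round(sum(reports) / len(reports))

# Decode the superincreasing knapsack: each bit of the recovered sum names a user.
exposed = {u for i, u in enumerate(users) if recovered >> i & 1}
print(recovered, exposed)   # -> 10 {'u1', 'u3'} with very high probability
```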

This attack is, of course, defeatable with enough noise or by limiting the number of queries to the database. However, without a clear means of detecting that such an attack is occurring, that noise and query limitation must be applied to all potential reports and queries to the database, which destroys the utility of the mechanism for other participants.

@bartek-siudeja

bartek-siudeja commented Nov 19, 2020

I think a really perverse case is to just attack a single cookie and use spend to get exact impressions. Given a known noise mean and variance, one can simply attach a fixed spend to that cookie of 1000 × (mean + 3 × std_dev) and then read off the exact impression counts on all domains. Just put 0 on all other cookies, or maybe some small number if 0s are filtered out of reports. So no amount of noise is enough, unless it is unknown and changes in response to our actions.
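
A tiny numeric version of this, assuming the attacker knows the noise is roughly Gaussian with a known standard deviation; all values are made up:

```python
import random

std_dev = 100.0
big_spend = 1000 * (0 + 3 * std_dev)   # fixed spend attached to the one targeted cookie

impressions = 7                        # ground truth on some domain
report = impressions * big_spend + random.gauss(0, std_dev)
print(round(report / big_spend))       # -> 7; the noise is negligible next to big_spend
```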

@michaelkleber

Hey Andrew, I think I partially agree and partially disagree with your "initial thoughts" comments:

  1. Using this technique precludes the ability to do sophisticated ML at ad tech scales...

I don't think that's correct. In my proposed attack, the malicious tracker assigns a unique 64-bit ID to each (user, site) pair — so "Michael Kleber on foo.com" is tracker-id=1234567890987654321. Then every ML signal associated with my activity on that particular site gets sent directly to the tracker, with that ID attached. The MURRE attack would be used to learn which pairs of IDs correspond to the same person on different sites, but all the ML can still happen.

  2. The bucketing of domains you propose itself provides some level of anonymity...

No, there is no plausible deniability, for the same reason as in 1. The events for me-on-foo.com are being reported directly to the tracker. The domain-hash thing is just so that the MURRE report can easily include my 64-bit foo.com ID and also my 64-bit bar.com ID, since (99.9% of the time) hash("foo.com") and hash("bar.com") will be different mod 1000.

  3. This is not, in particular, an attack on MURRE, but rather an attack on the notion that DSPs have the capability of defining their own ML features and receiving those for a granular dataset.

If those features (a) come from many different domains, and (b) are received at the event level rather than aggregated, then I agree, this kind of attack is a risk. It may be possible to overcome this problem; the whole point of approaches like Federated Learning is that they offer ways to train ML models even while the training data is kept private.

In some sense, the problem is that you didn't propose a system for gathering ML features, you proposed a system for gathering arbitrary granular cross-domain data, which you planned to use for ML features. It might work well, if used as intended! But if there's nothing to force using it that way, then it's really just a data collection system with built-in noise... and anyone who circumvents that noise can just use it for tracking.

  4. This attack, in some ways, is also an attack on differential privacy for ad tech in general.[...] For example, similar types of attacks could be lodged against the Aggregate Reporting API.

I agree that any reporting APIs need to consider this kind of threat model. The attack you propose is exactly why DP adds noise with magnitude proportional to the largest single event contribution. But as you point out, that's only one component of its protection story, not an automatic win.

@appascoe
Contributor Author

appascoe commented Nov 23, 2020

Hi, Michael,

  1. I understand the tracking attack, but it is unclear to me how the ML features would be sent to the malicious tracker. If you're using the MURRE mechanism, the hashes of the ML features will trash the data (as per my analysis above). If you're suggesting some other side channel, I'd be curious to know your thinking on how all the data get from point A to point B. There's also the point that using some other identity solution, such as forced logins with email hashing, would be cheaper and easier to execute, sidestepping the browser privacy mechanisms entirely.

  2. Correct me if I'm wrong, but it seems that in order for this to be effective, the publisher would have to communicate their ID back to the tracker through a backend server-to-server call.

  3. At NextRoll, we're also currently investigating federated learning techniques. However, we're still thinking over this attack and how it applies. It seems that, under arbitrary code, a malicious tracker can compute gradients in such a way as to also deanonymize users... if they don't actually care about using the gradients for optimization. Anyway, as regards MURRE, my thinking has moved to how such a tracking attack may be detected and thwarted, thus forcing condoned usage. I have no concrete proposal thus far, but it would probably rely on some statistical analyses of the un-noised vector. It seems that if some features are hammered over and over again, tracking is likely. @mbowiewilson points out that there's another way of mitigating this: the feature extraction JS is public and available for auditing.

  4. Yes, the L1-sensitivity is needed to understand the amount of noise to add to a result. But that's the issue: an attacker can construct systems for tracking that have a large L1-sensitivity, and thus the only responses are a) increase the noise, which makes the system useless for everyone, particularly for matters like spend, b) accept the risk of deanonymization which seems to be a no-go, or c) detect abuse and block it. As you mention, this is only an L1-sensitivity attack, let alone dealing with attacks that exacerbate the query release problem, as I alluded to above.

I suppose what I'm saying is, if the Aggregate Reporting API is proposed as a valid path forward (on which many existing proposals are hinged), then, at the present level of information, it would seem that MURRE is an equally valid path forward.

@csharrison

Yes, the L1-sensitivity is needed to understand the amount of noise to add to a result. But that's the issue: an attacker can construct systems for tracking that have a large L1-sensitivity, and thus the only responses are a) increase the noise, which makes the system useless for everyone, particularly for matters like spend, b) accept the risk of deanonymization which seems to be a no-go, or c) detect abuse and block it. As you mention, this is only an L1-sensitivity attack, let alone dealing with attacks that exacerbate the query release problem, as I alluded to above.

Our current thinking for aggregate reporting is to fix a maximum contribution, with the assumption that all users will apply scaling factors to their inputs to produce the maximally efficient encoding of their data. E.g., if the maximum contribution is fixed at 65536 and an advertiser only cares about reporting values between $0 and $100, they could apply a scaling factor of 655 to their inputs. Likewise for advertisers that need reporting for values greater than the maximum.
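
A minimal sketch of that scaling idea, using the numbers from the comment (65536 cap, $0-$100 range); the helper names are illustrative, not from the proposal:

```python
MAX_CONTRIBUTION = 65536

def encode(value_dollars: float, max_value_dollars: float) -> int:
    """Scale a value into [0, MAX_CONTRIBUTION] as efficiently as the range allows."""
    scale = MAX_CONTRIBUTION // int(max_value_dollars)   # 655 for a $100 cap
    return min(MAX_CONTRIBUTION, round(value_dollars * scale))

def decode(contribution: int, max_value_dollars: float) -> float:
    scale = MAX_CONTRIBUTION // int(max_value_dollars)
    return contribution / scale

print(encode(37.50, 100))                 # -> 24562
print(decode(encode(37.50, 100), 100))    # -> ~37.50
```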

@appascoe
Contributor Author

At first glance, that doesn't appear to be sufficient to me.

  1. The L1-sensitivity is a function of the function that is run over the data. The sizes of the inputs don't particularly matter. I suppose in practice the Aggregate Reporting API is in control of the functions it intends to evaluate over the data; that effectively makes the maximum contribution a no-op.

  2. Limiting the noise, I think, exacerbates the query release problem. The less noise reported, the fewer duplicate reports I need to average the noise out, and hence the less I can be allowed to query, which is also an attack on the utility of the mechanism.

@csharrison

For the summing we do in the aggregate API, the sensitivity is a direct function of the maximum input, which describes the maximum distance any input database could be from any neighbor after applying the sum.

You're right about the query release problem. In our proposal we discuss fixing a limit on the number of times a particular record can be queried in the system, which can be used to compute per-record DP bounds. There is a simple relationship between the noise and the number of queries you can support if you use naive composition, although there are more sophisticated techniques we are also researching.
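
As a sketch of that relationship (not the proposal's actual mechanism), here is Laplace noise calibrated to an assumed L1 sensitivity, with naive composition splitting a per-record budget across Q allowed queries; all parameter values are assumptions:

```python
import random

def laplace(scale: float) -> float:
    """Laplace(0, scale) as the difference of two exponentials with mean `scale`."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

L1_SENSITIVITY = 65536     # assumed max contribution of one record to the sum
EPSILON_TOTAL = 4.0        # assumed per-record privacy budget
Q = 8                      # assumed max times a record may be queried

epsilon_per_query = EPSILON_TOTAL / Q             # naive composition
noise_scale = L1_SENSITIVITY / epsilon_per_query  # Laplace scale for each query

true_sum = 1_234_567
answers = [true_sum + laplace(noise_scale) for _ in range(Q)]
print(noise_scale, answers)
# Doubling Q doubles the per-query noise scale needed to keep the same total budget.
```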

@appascoe
Contributor Author

appascoe commented Nov 23, 2020

Yes, summing is a very direct computation for L1-sensitivity, but if you're doing that, you may as well scale the noise with the maximum input. There's no need to limit the range of the inputs if you're calculating the maximum value anyway.

Limiting the number of times a single record can be queried isn't sufficient with an attack described above:

To deal with mean zero noise, the attacker duplicates reports for the publisher under names such as publisher.example, publisher.example+, publisher.example++, etc. (Of course, these names can be more obfuscated than this to evade detection.) The attacker then reads the spend reports for all of these, takes the mean to reduce noise, rounds to the nearest integer, and can be sure that a certain user was shown an ad on the given publisher.

@csharrison

Yes, summing is a very direct computation for L1-sensitivity, but if you're doing that, you may as well scale the noise with the maximum input. There's no need to limit the range of the inputs if you're calculating the maximum value anyway.

This is not the case. If you scale the noise by the maximum input, it can reveal the scale of the maximum input, which can leak sensitive information (imagine trying to detect whether the one user who bought a $1B house participated in a database where everyone else is buying small-ticket items).

Limiting the number of times a single record can be queried isn't sufficient with an attack described above

Ah yes, if you can get the client to send duplicate separate reports all including the same data, this is possible, although we do intend to bound the privacy loss in this case by introducing user-level rate limiting or something similar. We have some discussion on these topics here

@appascoe
Contributor Author

Is that true? Unless you also report back the distribution of the noise, I don't think you can make that claim. The receiver of the report, in your $1B house example, wouldn't know whether the database contained a single large purchase, or several, or even that the data are dominated by small-ticket items.

User-level rate limiting is also probably not sufficient. An attacker could potentially spoof being a set of users, or simply rely on good old-fashioned collusion among multiple parties.

@csharrison

Is that true? Unless you also report back the distribution of the noise, I don't think you can make that claim. The receiver of the report, in your $1B house example, wouldn't know whether the database contained a single large purchase, or several, or even that the data are dominated by small-ticket items.

Yes, I believe this is true. This is discussed in some detail here, and intuitively it makes sense. If you have two databases, D (which contains only small values) and D' (which adds a user with an extremely large value), then a mechanism that uses local sensitivity (i.e., noise proportional to the max actual input) applied to D and D' will produce very different output distributions that should intuitively be distinguishable, and they get more distinguishable as the one user's large value increases.
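
A quick simulation of that intuition, with made-up values; `noisy_sum_local` scales the noise to the largest value actually present, which is exactly the behavior being warned against:

```python
import random

def laplace(scale: float) -> float:
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

epsilon = 1.0
D  = [50, 20, 75, 10]            # small-ticket purchases only
Dp = D + [1_000_000_000]         # the same database plus the $1B house

def noisy_sum_local(db):
    # Local-sensitivity-style scaling: noise proportional to the max value actually present.
    return sum(db) + laplace(max(db) / epsilon)

print([round(noisy_sum_local(D)) for _ in range(5)])    # all within a few hundred of 155
print([round(noisy_sum_local(Dp)) for _ in range(5)])   # typically off by hundreds of millions
```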

User-level rate limiting is also probably not sufficient. An attacker could potentially spoof being a set of users, or simply rely on good old-fashioned collusion among multiple parties.

I don't fully follow the attack but feel free to open an issue if you think it applies in our case. I was imagining the browser client enforcing these limits so at the very least you could place bounds on the contributions from any honest browser.

@appascoe
Contributor Author

I think that's a misapplication of differential privacy. You shouldn't have two separate databases, but rather treat them as a single large database. In that case, the L1-sensitivity should be the largest value across the two sub-databases, i.e. queries against the subset of entries with the smaller values should be subject to the same large noise. And that's the problem with adding noise on a per-record level: it treats each record as its own database.

I think I see why you want some standardized range: you don't want to treat everything as one large database. The problem is that for some of these individual databases, you're adding more noise than is necessary to prevent deanonymization. This, of course, depending on the data, can be deleterious to any analysis of those data.

As for opening an issue, we'll craft something up that's written more clearly and do so.

@csharrison

Sorry, I think there's a misunderstanding. There is one database D, but to achieve traditional DP you need your bounds to hold for any database D and neighbor D'. You can't inspect the database first to see if it is one with low local sensitivity. This forces you to use the global sensitivity (the maximum distance between any two neighboring databases), which you can bound by clamping inputs to a range.
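
A minimal sketch of the clamping approach, with an assumed range and budget; the point is that the noise scale is fixed a priori and holds for any database, not just the observed one:

```python
import random

def laplace(scale: float) -> float:
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

CLAMP_MAX = 65536   # chosen up front, independent of the data
epsilon = 1.0

def dp_sum(values):
    clamped = [min(max(v, 0), CLAMP_MAX) for v in values]
    # Global L1 sensitivity of this sum is CLAMP_MAX, whatever the data look like.
    return sum(clamped) + laplace(CLAMP_MAX / epsilon)

print(round(dp_sum([120, 300, 1_000_000_000])))  # the outlier contributes at most CLAMP_MAX
```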

The approach you are looking for is more generally called an "instance-based noise" mechanism, where the noise comes from the exact instance of the database rather than from the universe of databases. I am not super familiar with all the techniques here, but it could be a possible research direction. This paper is another I found here which expands on the findings of the one in my last post.
