
[Feature] Add several differential privacy algorithms #369

Open
wants to merge 6 commits into master
Conversation

DavdGao
Collaborator

@DavdGao DavdGao commented Sep 8, 2022

Main modifications:

  • Add the functional mechanism for differential privacy, including:
    • a trainer named fmtrainer
    • two models for linear regression and logistic regression under the functional mechanism
  • Support the Gaussian and Laplace DP mechanisms by adding new optimizers
  • Add unit tests and scripts for the DP mechanisms
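The Gaussian/Laplace optimizer support described above can be sketched as a gradient-perturbation step. This is a minimal, framework-free sketch, not the PR's actual API: the function name, the `sigma` parameter, and the seeded RNG are all illustrative assumptions.

```python
import math
import random


def dp_noisy_grad(grad, clip=1.0, sigma=1.0, mechanism="gaussian", rng=None):
    """Clip a gradient vector to l2 norm `clip`, then add calibrated noise.

    Illustrative sketch only: `sigma` scales the noise relative to the
    clipping bound; a real implementation derives it from (epsilon, delta).
    """
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip / max(norm, 1e-12))  # l2 clipping factor
    clipped = [g * scale for g in grad]
    if mechanism == "gaussian":
        noise = [rng.gauss(0.0, sigma * clip) for _ in clipped]
    else:  # "laplace": inverse-CDF sampling with scale b = sigma * clip
        b = sigma * clip
        noise = []
        for _ in clipped:
            u = rng.random() - 0.5
            noise.append(-b * math.copysign(1.0, u) * math.log(1 - 2 * abs(u)))
    return [c + n for c, n in zip(clipped, noise)]
```

With `sigma=0` this reduces to plain l2 gradient clipping, which makes the clipping logic easy to test in isolation.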

@DavdGao DavdGao added the Feature New feature label Sep 8, 2022
Collaborator

@xieyxclack xieyxclack left a comment


LGTM, but I have not checked the implementation details of Functional Mechanism: Regression Analysis under Differential Privacy. Please also refer to the inline comments for minor suggestions, thx!

"""Returns the l2 norm of the weights.

Arguments:
    p (int): The order of the norm.
"""
Collaborator


'p=2'?

Collaborator Author


Modified accordingly.


In the original functional mechanism paper, noise is added to the objective function only once, whereas your implementation adds noise at every iteration. Is this supported by the theory?
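For contrast, the paper's one-shot approach perturbs the polynomial coefficients of the Taylor-expanded objective a single time before training, with no further noise per iteration. A minimal sketch of that coefficient perturbation (the function name, flat coefficient layout, and seeded RNG are illustrative assumptions, not the paper's or the PR's code):

```python
import math
import random


def perturb_objective_coeffs(coeffs, epsilon, sensitivity, rng=None):
    """Add Laplace(sensitivity / epsilon) noise to each coefficient once.

    Training then minimizes the fixed noisy objective; no additional
    noise is injected during the iterations themselves.
    """
    rng = rng or random.Random(0)
    b = sensitivity / epsilon  # Laplace scale
    noisy = []
    for c in coeffs:
        u = rng.random() - 0.5  # inverse-CDF Laplace sampling
        noisy.append(c - b * math.copysign(1.0, u) * math.log(1 - 2 * abs(u)))
    return noisy
```

As epsilon grows (weaker privacy), the Laplace scale shrinks and the noisy coefficients approach the originals.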


def forward(self, ctx):
    l2_norm = 0.
    for param in ctx.model.parameters():
        # accumulate the squared l2 norm over all model parameters
        l2_norm += param.pow(2).sum()
    return l2_norm.sqrt()
Collaborator


An interesting discussion about whether l2 regularization should be applied to the beta and gamma of BN layers: https://discuss.pytorch.org/t/weight-decay-in-the-optimizers-is-a-bad-idea-especially-with-batchnorm/16994.
IMO, we can keep the implementation the same as that of torch.norm.

Collaborator Author


The current torch.norm calculates the l2 norm over all parameters, including the BN weight and bias.
For now, we use the parameter skip_bn to avoid calculating the l2 norm for BN layers (those with 'bn' in their names).
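The skip_bn behavior can be sketched with plain name matching over named parameters. This is a framework-free sketch: parameters are modeled as a dict of flat float lists standing in for model.named_parameters(), and the substring check mirrors the "'bn' in its name" rule described above.

```python
import math


def l2_norm(named_params, skip_bn=True):
    """l2 norm over all parameter values, optionally skipping BN layers.

    named_params: dict mapping parameter name -> list of float values,
    a stand-in for the (name, tensor) pairs of model.named_parameters().
    """
    total = 0.0
    for name, values in named_params.items():
        if skip_bn and "bn" in name:
            continue  # skip batch-norm weight/bias, per skip_bn
        total += sum(v * v for v in values)
    return math.sqrt(total)
```

For example, with params = {"fc.weight": [3.0, 4.0], "bn1.weight": [10.0]}, skipping BN gives an l2 norm of 5.0, while skip_bn=False also includes the BN weight.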

Collaborator Author


Conflicts are resolved.

# Conflicts:
#	federatedscope/core/auxiliaries/optimizer_builder.py
#	federatedscope/register.py
3 participants