[Feature] Add several differential privacy algorithms #369
base: master
Conversation
LGTM, but I have not checked the implementation details of Functional Mechanism: Regression Analysis under Differential Privacy. And please refer to the inline comments for minor suggestions, thx!
"""Returns the l2 norm of weight | ||
|
||
Arguments: | ||
p (int): The order of norm. |
'p=2'?
Modified accordingly.
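For context, a hedged sketch of what such a norm helper might look like once the default order is documented as p=2, as suggested above; the function name param_norm and its body are illustrative assumptions, not the PR's code.

```python
import torch


def param_norm(model, p=2):
    """Returns the l_p norm of the model weights.

    Arguments:
        p (int): The order of the norm (default: 2, i.e. the l2 norm).
    """
    # Flatten all parameters into one vector and take its p-norm.
    flat = torch.cat([param.reshape(-1) for param in model.parameters()])
    return torch.norm(flat, p=p)
```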
In the functional mechanism's original paper, the noise addition is performed only once on the objective function, while your implementation performs noise addition in each iteration. Is this supported by the theory?
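To make the contrast concrete, here is a minimal sketch of the two noise-injection schedules being discussed. The function names and the Laplace scale are illustrative assumptions, not the PR's code: the original functional mechanism perturbs the objective's polynomial coefficients once before training, whereas the questioned variant draws fresh noise at every iteration.

```python
import torch


def perturb_objective_once(coeffs, sensitivity, epsilon):
    # Functional-mechanism style: Laplace noise is added to the polynomial
    # coefficients of the objective a single time, before any training step.
    scale = sensitivity / epsilon
    noise = torch.distributions.Laplace(0., scale).sample(coeffs.shape)
    return coeffs + noise


def perturb_loss_each_step(loss, sensitivity, epsilon):
    # The questioned variant: fresh Laplace noise is added to the loss at
    # every iteration, which requires a different privacy analysis.
    scale = sensitivity / epsilon
    noise = torch.distributions.Laplace(0., scale).sample()
    return loss + noise
```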
def forward(self, ctx):
    l2_norm = 0.
    for param in ctx.model.parameters():
An interesting discussion about whether l2 regularization should be applied to beta and gamma of BN layers: https://discuss.pytorch.org/t/weight-decay-in-the-optimizers-is-a-bad-idea-especially-with-batchnorm/16994.
IMO, we can keep the implementation the same as that of torch.norm.
The current torch.norm calculates the l2 norm over all parameters, including the BN weight and bias. For now, we use the parameter skip_bn to avoid calculating the l2 norm for BN layers (those with 'bn' in their name).
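A minimal sketch of this idea, assuming the skip_bn flag and the name-based filtering described above (the helper name and exact filtering rule are illustrative, not the PR's exact implementation):

```python
import torch


def l2_norm(model, skip_bn=True):
    # Accumulate the squared l2 norm over parameters, optionally skipping
    # BatchNorm weights/biases identified by 'bn' in the parameter name.
    squared_sum = torch.tensor(0.)
    for name, param in model.named_parameters():
        if skip_bn and 'bn' in name:
            continue
        squared_sum = squared_sum + torch.sum(param ** 2)
    return torch.sqrt(squared_sum)
```

The resulting term could then be added to the training loss, e.g. loss = task_loss + weight_decay * l2_norm(ctx.model, skip_bn=True), which mirrors the weight-decay discussion linked above.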
Conflicts are resolved.
# Conflicts:
#   federatedscope/core/auxiliaries/optimizer_builder.py
#   federatedscope/register.py
Main modifications:
fmtrainer