Added
An implementation of AdamW to tff.learning.optimizers.
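A minimal sketch of building and applying the new optimizer, assuming it is exposed through a build_adamw factory mirroring the existing build_adam (the hyperparameter values are illustrative only):

```python
import tensorflow as tf
import tensorflow_federated as tff

# Assumes the new optimizer follows the existing build_* factory pattern;
# the hyperparameter values here are illustrative only.
optimizer = tff.learning.optimizers.build_adamw(
    learning_rate=0.001, weight_decay=0.004)

weights = (tf.constant([1.0, 2.0]),)
gradients = (tf.constant([0.1, 0.1]),)

# TFF optimizers are functional: initialize state from specs, then step.
state = optimizer.initialize((tf.TensorSpec([2], tf.float32),))
state, weights = optimizer.next(state, weights, gradients)
```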
Changed
Support None gradients in tff.learning.optimizers. This mimics the
behavior of tf.keras.optimizers: gradients that are None are skipped,
and their corresponding weights and optimizer state (e.g. momentum)
are not updated.
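A minimal sketch of the skipping behavior, using the functional optimizer API (build_sgdm with momentum is just one concrete choice):

```python
import tensorflow as tf
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.1, momentum=0.9)

specs = [tf.TensorSpec([2], tf.float32), tf.TensorSpec([2], tf.float32)]
weights = [tf.constant([1.0, 1.0]), tf.constant([2.0, 2.0])]
# The second gradient is None, so the second weight (and its momentum
# slot) is left untouched, mirroring tf.keras.optimizers.
gradients = [tf.constant([0.5, 0.5]), None]

state = optimizer.initialize(specs)
state, new_weights = optimizer.next(state, weights, gradients)
# new_weights[1] is unchanged from weights[1].
```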
The behavior of DPGroupingFederatedSum::Clamp: it now sets negative
values to 0, and the associated test code has been updated. Rationale:
the sensitivity calculation for DP noise is calibrated for
non-negative values.
Updated tutorials to use tff.learning.optimizers in conjunction with tff.learning computations.
tff.simulation.datasets.TestClientData now only accepts dictionaries
whose leaf nodes are not tf.Tensors.
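For example, a structure that satisfies the restriction uses numpy arrays (or Python lists) at the leaves:

```python
import numpy as np
import tensorflow_federated as tff

# Leaf values are numpy arrays, not tf.Tensors.
client_data = tff.simulation.datasets.TestClientData({
    'client_0': {'x': np.array([[1.0], [2.0]]), 'y': np.array([0, 1])},
    'client_1': {'x': np.array([[3.0]]), 'y': np.array([1])},
})
dataset = client_data.create_tf_dataset_for_client('client_0')
```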
Fixed
A bug where tff.learning.optimizers.build_adafactor would update its step
counter twice upon every invocation of .next().
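A sketch of the fixed behavior: each call to .next() now advances the internal step counter exactly once (hyperparameters illustrative):

```python
import tensorflow as tf
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_adafactor(learning_rate=0.01)
specs = (tf.TensorSpec([2], tf.float32),)
state = optimizer.initialize(specs)
# Previously this single step advanced the step counter twice, skewing
# the decay schedule; it now advances once per invocation.
state, weights = optimizer.next(
    state, (tf.constant([1.0, 2.0]),), (tf.constant([0.1, 0.1]),))
```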
A bug where tensor learning rates for tff.learning.optimizers.build_sgdm
would fail with mixed dtype gradients.
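A sketch of the previously failing case, assuming a float32/float64 mix as the mixed-dtype scenario:

```python
import tensorflow as tf
import tensorflow_federated as tff

# A tensor-valued learning rate combined with mixed dtype gradients.
optimizer = tff.learning.optimizers.build_sgdm(
    learning_rate=tf.constant(0.1))

specs = [tf.TensorSpec([1], tf.float32), tf.TensorSpec([1], tf.float64)]
weights = [tf.constant([1.0]), tf.constant([1.0], dtype=tf.float64)]
gradients = [tf.constant([0.5]), tf.constant([0.5], dtype=tf.float64)]

state = optimizer.initialize(specs)
state, weights = optimizer.next(state, weights, gradients)  # no longer fails
```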
A bug where different optimizers had different behavior on empty weights
structures. TFF optimizers now consistently accept and function as no-ops on
empty weight structures.
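A sketch of the now-uniform no-op behavior (build_adagrad is just one concrete choice):

```python
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_adagrad(learning_rate=0.1)
state = optimizer.initialize([])
# All TFF optimizers now accept empty structures and return them unchanged.
state, weights = optimizer.next(state, [], [])
assert weights == []
```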
A bug where tff.simulation.datasets.TestClientData.dataset_computation
yielded datasets of indeterminate shape.
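One way to observe the fix is through the computation's type signature; assuming the element shapes surface in the result type, they should now be fully defined:

```python
import numpy as np
import tensorflow_federated as tff

client_data = tff.simulation.datasets.TestClientData(
    {'a': {'x': np.array([[1.0], [2.0]], dtype=np.float32)}})

# The result type should now carry fully defined element shapes rather
# than unknown dimensions.
print(client_data.dataset_computation.type_signature)
```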
Removed
tff.jax_computation, use tff.jax.computation instead.
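The migration is a rename; a minimal sketch of the replacement decorator:

```python
import jax.numpy as jnp
import tensorflow as tf
import tensorflow_federated as tff

# Previously: @tff.jax_computation(tff.types.TensorType(tf.float32))
@tff.jax.computation(tff.types.TensorType(tf.float32))
def add_half(x):
  return jnp.add(x, 0.5)
```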
tff.profiler, as this API is not used.
Various stale tutorials.
The structure parameter from tff.program.SavedModelFileReleaseManager's get_value method.
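A sketch of the updated call, assuming an asyncio driver and a root_dir-based manager; only the key is passed now:

```python
import asyncio
import tensorflow_federated as tff

async def read_released_value():
  manager = tff.program.SavedModelFileReleaseManager(root_dir='/tmp/program')
  # The structure parameter is gone; values are retrieved by key alone.
  return await manager.get_value(key=0)

value = asyncio.run(read_released_value())
```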
Support for tf.keras.optimizers in tff.learning; use tff.learning.optimizers instead.
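A sketch of the corresponding migration in an algorithm builder; model_fn stands in for an existing model-building function:

```python
import tensorflow_federated as tff

# Previously:
#   client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.01)
learning_process = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn=model_fn,  # assumed: your existing model factory
    client_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=0.01),
    server_optimizer_fn=tff.learning.optimizers.build_sgdm(learning_rate=1.0),
)
```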