- What I did
I copied the Adam optimizer from the main Keras repo and modified it according to the AdaMod paper.
- How I did it
Mainly, I added an exponential moving average of the adaptive per-parameter learning rate (the base learning rate divided by the square root of the bias-corrected 2nd-moment estimate), controlled by the new beta_3 parameter. This exponential average is then used as an upper bound on the adaptive learning rate at each step. The 1st- and 2nd-moment bias corrections had to be separated into two statements because, according to the AdaMod paper, the bias-corrected 1st moment is applied only after this upper bounding. See the AdaMod paper for further details.
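For illustration, here is a minimal NumPy sketch of the update rule described above. It is not the Keras code from this PR; the function name `adamod_update`, the state layout, and the default hyperparameter values are assumptions made for the example.

```python
import numpy as np

def adamod_update(param, grad, m, v, s, t,
                  lr=1e-3, beta_1=0.9, beta_2=0.999, beta_3=0.999,
                  eps=1e-8):
    """One AdaMod step for a single parameter (NumPy sketch, not the PR code)."""
    # Standard Adam moment estimates.
    m = beta_1 * m + (1.0 - beta_1) * grad
    v = beta_2 * v + (1.0 - beta_2) * np.square(grad)

    # Bias corrections, kept as two separate terms: the 2nd-moment correction
    # feeds the step size below, while the 1st-moment correction is applied
    # only after the step size has been upper-bounded.
    m_hat = m / (1.0 - beta_1 ** t)
    v_hat = v / (1.0 - beta_2 ** t)

    # Adaptive per-parameter step size, as in Adam.
    step = lr / (np.sqrt(v_hat) + eps)

    # Exponential moving average of the step size (the new beta_3 term),
    # used as an upper bound on the current step size.
    s = beta_3 * s + (1.0 - beta_3) * step
    step = np.minimum(step, s)

    # Apply the bias-corrected 1st moment after the bounding.
    param = param - step * m_hat
    return param, m, v, s
```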
- How you can verify it
I added a unit test in the same fashion as the other unit tests in the optimizers directory.
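As a rough, self-contained sanity check (not the unit test added in this PR), the update rule can also be exercised on a simple quadratic; `adamod_update` here refers to the hypothetical sketch above:

```python
# Minimize f(x) = (x - 3)^2 with the adamod_update sketch above.
x, m, v, s = 0.0, 0.0, 0.0, 0.0
for t in range(1, 5001):
    grad = 2.0 * (x - 3.0)             # gradient of (x - 3)^2
    x, m, v, s = adamod_update(x, grad, m, v, s, t, lr=1e-2)
assert abs(x - 3.0) < 0.1              # should settle close to the minimum
```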
This pull request fixes #531