Releases · GFNOrg/torchgfn
New Replay Buffer, Computation Caching, Helper Functions, and Tutorials
- License / README updates.
- Updates to package requirements.
- Addition of a Prioritised Replay Buffer.
- GFlowNets can now optionally save `logprobs` or `estimator_outputs` to prevent unnecessary re-computation (depending on whether you are performing on-policy or off-policy learning).
- Added `self.logF*_parameters()` methods to help when passing the log-flow parameters to a dedicated optimizer (separately from, say, `pf` and `pb`).
- Helper functions (e.g., `stack_states`).
- Improved tutorials: new use-cases and improved notebooks.
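To illustrate the idea behind the new Prioritised Replay Buffer, here is a minimal pure-Python sketch. This is not torchgfn's implementation; the class name, eviction rule, and priority scheme are assumptions chosen for clarity.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal sketch: items with higher priority are sampled more often.
    Illustrative only -- not the torchgfn implementation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []       # stored trajectories / transitions
        self.priorities = []  # one non-negative priority per item

    def add(self, item, priority):
        if len(self.items) >= self.capacity:
            # Evict the lowest-priority entry to make room.
            idx = min(range(len(self.priorities)), key=self.priorities.__getitem__)
            self.items.pop(idx)
            self.priorities.pop(idx)
        self.items.append(item)
        self.priorities.append(priority)

    def sample(self, k):
        # Sample proportionally to priority (with replacement).
        return random.choices(self.items, weights=self.priorities, k=k)


buf = PrioritizedReplayBuffer(capacity=3)
for traj, reward in [("a", 0.1), ("b", 5.0), ("c", 1.0), ("d", 2.0)]:
    buf.add(traj, reward)
# The lowest-priority trajectory "a" has been evicted; sampling now favours
# the high-reward trajectories that remain.
batch = buf.sample(4)
```

In a GFlowNet setting the priority would typically be derived from the trajectory's reward or loss, so rare high-reward trajectories are replayed more often.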
v1.2 Substantial Updates to Environment Definition and Sampling
- Sampling now saves estimator outputs to avoid recomputation.
- The user no longer has to define a class factory when defining environments.
- New examples added.
- Other small quality-of-life improvements to prevent silent bugs (these often require the user to define the expected sampling behaviour more explicitly).
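The "save estimator outputs to avoid recomputation" idea can be sketched generically (pure Python, not torchgfn code): outputs computed while sampling are stored alongside the sampled states, so the loss can reuse them on-policy instead of running a second forward pass. All names here are hypothetical.

```python
class CountingEstimator:
    """Stand-in for a neural estimator; counts forward passes so we can
    verify that cached outputs are reused instead of recomputed."""

    def __init__(self):
        self.calls = 0

    def __call__(self, state):
        self.calls += 1
        return state * 2.0  # placeholder "network output"


def sample_trajectory(estimator, states, cache_outputs=True):
    """Roll through `states`, optionally caching each estimator output."""
    outputs = [estimator(s) for s in states]
    return (states, outputs) if cache_outputs else (states, None)


def compute_loss(estimator, states, cached_outputs=None):
    """Reuse cached outputs when available (on-policy); recompute when they
    are missing or stale (e.g., off-policy training)."""
    if cached_outputs is None:
        cached_outputs = [estimator(s) for s in states]
    return sum(cached_outputs)


est = CountingEstimator()
states, cached = sample_trajectory(est, [1.0, 2.0, 3.0])
loss = compute_loss(est, states, cached)
# est.calls is still 3: the cached outputs were reused, no second forward pass.
```

Off-policy, the cached outputs no longer match the current policy, which is why the caching is optional rather than always on.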
Version 1.1.1
- Bug fix: #134
- From now on, the published version (on PyPI) and the release should correspond to the `stable` branch.
Version 1.1
Minor API changes from v1, for simplicity.
Version 1.0
Major API change. More flexibility in environment creation.
torchgfn v0.2
This version includes all the functionality used by the other codebases that rely on this library to date.
With the given code, results published in several papers can be reproduced.
This should be the last version before v1, which will support more generic environments.
In this version, the name of the repo (as well as the PyPI package and the docs) has changed to `torchgfn`.
Version 0.1
This version supports simple discrete environments, and is used in published research papers.