Releases: keras-team/keras
Keras 2.1.3
Areas of improvement
- Performance improvements (esp. convnets with TensorFlow backend).
- Usability improvements.
- Docs & docstrings improvements.
- New models in the `applications` module.
- Bug fixes.
API changes
- The `trainable` attribute in `BatchNormalization` now disables the updates of the batch statistics (i.e. if `trainable == False` the layer will now run 100% in inference mode).
- Add `amsgrad` argument in `Adam` optimizer.
- Add new applications: `NASNetMobile`, `NASNetLarge`, `DenseNet121`, `DenseNet169`, `DenseNet201`.
- Add `Softmax` layer (removing the need to use a `Lambda` layer in order to specify the `axis` argument).
- Add `SeparableConv1D` layer.
- In `preprocessing.image.ImageDataGenerator`, allow `width_shift_range` and `height_shift_range` to take integer values (absolute number of pixels).
- Support `return_state` in `Bidirectional` applied to RNNs (`return_state` should be set on the child layer).
- The string values `"crossentropy"` and `"ce"` are now allowed in the `metrics` argument (in `model.compile()`), and are routed to either `categorical_crossentropy` or `binary_crossentropy` as needed.
- Allow `steps` argument in `predict_*` methods on the `Sequential` model.
- Add `oov_token` argument in `preprocessing.text.Tokenizer`.
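As a rough illustration of the new metric aliases, the routing can be sketched in plain Python (`resolve_crossentropy_alias` is a hypothetical helper mirroring the described behavior, not the actual Keras internals):

```python
# Hypothetical sketch: route the generic "crossentropy"/"ce" aliases
# to a concrete crossentropy metric based on the output shape.
def resolve_crossentropy_alias(metric, output_shape):
    if metric not in ("crossentropy", "ce"):
        return metric  # not an alias: pass through unchanged
    # A single output unit implies binary crossentropy; anything
    # wider implies categorical crossentropy.
    if output_shape[-1] == 1:
        return "binary_crossentropy"
    return "categorical_crossentropy"
```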
Breaking changes
- In `preprocessing.image.ImageDataGenerator`, `shear_range` has been switched to use degrees rather than radians (for consistency). This should not actually break anything (neither training nor inference), but keep this change in mind in case you see any issues with regard to your image data augmentation process.
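Recovering the old radian-based value from a degree-based `shear_range` is a plain unit conversion (the helper name below is illustrative, not a Keras API):

```python
import math

# Hypothetical helper: convert a degree-based shear_range (Keras >= 2.1.3)
# back to the radian value that earlier versions expected.
def shear_range_to_radians(shear_range_degrees):
    return math.radians(shear_range_degrees)
```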
Credits
Thanks to our 45 contributors whose commits are featured in this release:
@Dref360, @OliPhilip, @TimZaman, @bbabenko, @bdwyer2, @berkatmaca, @caisq, @decrispell, @dmaniry, @fchollet, @fgaim, @gabrieldemarmiesse, @gklambauer, @hgaiser, @hlnull, @icyblade, @jgrnt, @kashif, @kouml, @lutzroeder, @m-mohsen, @mab4058, @manashty, @masstomato, @mihirparadkar, @myutwo150, @nickbabcock, @novotnj3, @obsproth, @ozabluda, @philferriere, @piperchester, @pstjohn, @roatienza, @souptc, @spiros, @srs70187, @sumitgouthaman, @taehoonlee, @tigerneil, @titu1994, @tobycheese, @vitaly-krumins, @yang-zhang, @ziky90
Keras 2.1.2
Areas of improvement
- Bug fixes and performance improvements.
- API improvements in Keras applications, generator methods.
API changes
- Make `preprocess_input` in all Keras applications compatible with both Numpy arrays and symbolic tensors (previously only supported Numpy arrays).
- Allow the `weights` argument in all Keras applications to accept the path to a custom weights file to load (previously only supported the built-in `imagenet` weights file).
- `steps_per_epoch` behavior change in generator training/evaluation methods:
  - If specified, the specified value will be used (previously, in the case of a generator of type `Sequence`, the specified value was overridden by the `Sequence` length).
  - If unspecified and if the generator passed is a `Sequence`, we set it to the `Sequence` length.
- Allow `workers=0` in generator training/evaluation methods (will run the generator in the main process, in a blocking way).
- Add `interpolation` argument in `ImageDataGenerator.flow_from_directory`, allowing a custom interpolation method for image resizing.
- Allow `gpus` argument in `multi_gpu_model` to be a list of specific GPU ids.
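The new `steps_per_epoch` resolution order can be sketched in plain Python (`resolve_steps_per_epoch` is a hypothetical helper mirroring the described behavior, not the actual Keras code):

```python
# Hypothetical sketch of the 2.1.2 resolution order for steps_per_epoch.
def resolve_steps_per_epoch(steps_per_epoch, generator):
    if steps_per_epoch is not None:
        return steps_per_epoch  # an explicit value now always wins
    if hasattr(generator, "__len__"):
        return len(generator)   # Sequence: fall back to its length
    raise ValueError("steps_per_epoch is required for plain generators")
```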
Breaking changes
- The change in `steps_per_epoch` behavior (described above) may affect some users.
Credits
Thanks to our 26 contributors whose commits are featured in this release:
@Alex1729, @alsrgv, @apisarek, @asos-saul, @athundt, @cherryunix, @dansbecker, @datumbox, @de-vri-es, @drauh, @evhub, @fchollet, @heath730, @hgaiser, @icyblade, @jjallaire, @knaveofdiamonds, @lance6716, @luoch, @mjacquem1, @myutwo150, @ozabluda, @raviksharma, @rh314, @yang-zhang, @zach-nervana
Keras 2.1.1
This release amends release 2.1.0 to include a fix for an erroneous breaking change introduced in #8419.
Keras 2.1.0
This is a small release that fixes outstanding bugs that were reported since the previous release.
Areas of improvement
- Bug fixes (in particular, Keras no longer allocates devices at startup time with the TensorFlow backend. This was causing issues with Horovod.)
- Documentation and docstring improvements.
- Better CIFAR10 ResNet example script and improvements to example scripts code style.
API changes
- Add `go_backwards` to cuDNN RNNs (enables the `Bidirectional` wrapper on cuDNN RNNs).
- Add ability to pass `fetches` to `K.Function()` with the TensorFlow backend.
- Add `steps_per_epoch` and `validation_steps` arguments in `Sequential.fit()` (to sync it with `Model.fit()`).
Breaking changes
None.
Credits
Thanks to our 14 contributors whose commits are featured in this release:
@Dref360, @LawnboyMax, @anj-s, @bzamecnik, @datumbox, @diogoff, @farizrahman4u, @fchollet, @frexvahi, @jjallaire, @nsuh, @ozabluda, @roatienza, @yakigac
Keras 2.0.9
Areas of improvement
- RNN improvements:
  - Refactor RNN layers to rely on atomic RNN cells. This makes the creation of custom RNNs very simple and user-friendly, via the `RNN` base class.
  - Add ability to create new RNN cells by stacking a list of cells, allowing for efficient stacked RNNs.
  - Add `CuDNNLSTM` and `CuDNNGRU` layers, backed by NVIDIA's cuDNN library for fast GPU training & inference.
  - Add RNN sequence-to-sequence example script.
  - Add `constants` argument in `RNN`'s `call` method, making RNN attention easier to implement.
- Easier multi-GPU data parallelism via `keras.utils.multi_gpu_model`.
- Bug fixes & performance improvements (in particular, native support for NCHW data layout in TensorFlow).
- Documentation and examples improvements.
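The cell/wrapper split behind the RNN refactor can be illustrated in plain Python (the classes below are hypothetical toys, not the Keras `RNN` API): the cell computes one timestep, and the wrapper owns the time loop.

```python
# Toy cell: carries a running sum as its state, one step at a time.
class AccumulatorCell:
    state_size = 1

    def call(self, x, states):
        new_state = states[0] + x       # one-step state transition
        return new_state, [new_state]

# Toy wrapper: iterates the cell over the whole input sequence.
def run_rnn(cell, inputs, initial_state=0):
    states = [initial_state]
    outputs = []
    for x in inputs:                    # the wrapper owns the time loop
        out, states = cell.call(x, states)
        outputs.append(out)
    return outputs, states
```

Stacking cells (as `StackedRNNCells` does) then amounts to composing several such cells inside a single `call`.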
API changes
- Add "fashion mnist" dataset as `keras.datasets.fashion_mnist.load_data()`.
- Add `Minimum` merge layer as `keras.layers.Minimum` (class) and `keras.layers.minimum(inputs)` (function).
- Add `InceptionResNetV2` to `keras.applications`.
- Support `bool` variables in TensorFlow backend.
- Add `dilation` to `SeparableConv2D`.
- Add support for dynamic `noise_shape` in `Dropout`.
- Add `keras.layers.RNN()` base class for batch-level RNNs (used to implement custom RNN layers from a cell class).
- Add `keras.layers.StackedRNNCells()` layer wrapper, used to stack a list of RNN cells into a single cell.
- Add `CuDNNLSTM` and `CuDNNGRU` layers.
- Deprecate `implementation=0` for RNN layers.
- The Keras progbar now reports time taken for each past epoch, and average time per step.
- Add option to specify the resampling method in `keras.preprocessing.image.load_img()`.
- Add `keras.utils.multi_gpu_model` for easy multi-GPU data parallelism.
- Add `constants` argument in `RNN`'s `call` method, used to pass a list of constant tensors to the underlying RNN cell.
Breaking changes
- Implementation change in `keras.losses.cosine_proximity` results in a different (correct) scaling behavior.
- Implementation change for samplewise normalization in `ImageDataGenerator` results in a different normalization behavior.
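The corrected scaling can be sketched in NumPy (an illustration of the described behavior, not the actual Keras source): with L2-normalized inputs, the loss lies in [-1, 1] and reaches -1 when predictions point in the same direction as the targets.

```python
import numpy as np

# NumPy sketch of a correctly scaled cosine proximity loss.
def cosine_proximity(y_true, y_pred):
    y_true = y_true / np.linalg.norm(y_true, axis=-1, keepdims=True)
    y_pred = y_pred / np.linalg.norm(y_pred, axis=-1, keepdims=True)
    # Negative cosine similarity: minimized (-1) for aligned vectors.
    return -np.sum(y_true * y_pred, axis=-1)
```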
Credits
Thanks to our 59 contributors whose commits are featured in this release!
@alok, @Danielhiversen, @Dref360, @HelgeS, @JakeBecker, @MPiecuch, @MartinXPN, @RitwikGupta, @TimZaman, @adammenges, @aeftimia, @ahojnnes, @akshaychawla, @alanyee, @aldenks, @andhus, @apbard, @aronj, @bangbangbear, @bchu, @bdwyer2, @bzamecnik, @cclauss, @colllin, @datumbox, @deltheil, @dhaval067, @durana, @ericwu09, @facaiy, @farizrahman4u, @fchollet, @flomlo, @fran6co, @grzesir, @hgaiser, @icyblade, @jsaporta, @julienr, @jussihuotari, @kashif, @lucashu1, @mangerlahn, @myutwo150, @nicolewhite, @noahstier, @nzw0301, @olalonde, @ozabluda, @patrikerdes, @podhrmic, @qin, @raelg, @roatienza, @shadiakiki1986, @smgt, @souptc, @taehoonlee, @y0z
Keras 2.0.8
The primary purpose of this release is to address an incompatibility between Keras 2.0.7 and the next version of TensorFlow (1.4). TensorFlow 1.4 isn't due for a while yet, but the sooner the PyPI release has the fix, the fewer people will be affected when upgrading to the next TensorFlow version when it is released.
No API changes for this release. A few bug fixes.
Keras 2.0.7
Areas of improvement
- Bug fixes.
- Performance improvements.
- Documentation improvements.
- Better support for training models from data tensors in TensorFlow (e.g. Datasets, TFRecords). Add a related example script.
- Improve TensorBoard UX with better grouping of ops into name scopes.
- Improve test coverage.
API changes
- Add `clone_model` method, enabling the construction of a new model given an existing model to use as a template. Works even in a TensorFlow graph different from that of the original model.
- Add `target_tensors` argument in `compile`, enabling the use of custom tensors or placeholders as model targets.
- Add `steps_per_epoch` argument in `fit`, enabling training of a model from data tensors in a way that is consistent with training from Numpy arrays.
- Similarly, add `steps` argument in `predict` and `evaluate`.
- Add `Subtract` merge layer, and associated layer function `subtract`.
- Add `weighted_metrics` argument in `compile` to specify metric functions meant to take into account `sample_weight` or `class_weight`.
- Make the `stop_gradients` backend function consistent across backends.
- Allow dynamic shapes in the `repeat_elements` backend function.
- Enable stateful RNNs with CNTK.
Breaking changes
- The backend methods `categorical_crossentropy`, `sparse_categorical_crossentropy`, `binary_crossentropy` had the order of their positional arguments (`y_true`, `y_pred`) inverted. This change does not affect the `losses` API. This change was done to achieve API consistency between the `losses` API and the backend API.
- Move constraint management to be based on variable attributes. Remove the now-unused `constraints` attribute on layers and models (not expected to affect any user).
Credits
Thanks to our 47 contributors whose commits are featured in this release!
@5ke, @alok, @Danielhiversen, @Dref360, @NeilRon, @abnerA, @acburigo, @airalcorn2, @angeloskath, @athundt, @brettkoonce, @cclauss, @denfromufa, @enkait, @erg, @ericwu09, @farizrahman4u, @fchollet, @georgwiese, @ghisvail, @gokceneraslan, @hgaiser, @inexxt, @joeyearsley, @jorgecarleitao, @kennyjacob, @keunwoochoi, @krizp, @lukedeo, @milani, @n17r4m, @nicolewhite, @nigeljyng, @nyghtowl, @nzw0301, @rapatel0, @souptc, @srinivasreddy, @staticfloat, @taehoonlee, @td2014, @titu1994, @tleeuwenburg, @udibr, @waleedka, @wassname, @yashk2810
Keras 2.0.6
Areas of improvement
- Improve generator methods (`predict_generator`, `fit_generator`, `evaluate_generator`) and add data enqueuing utilities.
- Bug fixes and performance improvements.
- New features: new `Conv3DTranspose` layer, new `MobileNet` application, self-normalizing networks.
API changes
- Self-normalizing networks: add `selu` activation function, `AlphaDropout` layer, `lecun_normal` initializer.
- Data enqueuing: add `Sequence`, `SequenceEnqueuer`, `GeneratorEnqueuer` to `utils`.
- Generator methods: rename arguments `pickle_safe` (replaced with `use_multiprocessing`) and `max_q_size` (replaced with `max_queue_size`).
- Add `MobileNet` to the applications module.
- Add `Conv3DTranspose` layer.
- Allow custom print functions for the model's `summary` method (argument `print_fn`).
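The `selu` activation itself is simple to state; here is a NumPy sketch using the constants published by Klambauer et al. in "Self-Normalizing Neural Networks" (illustrative, not the Keras implementation):

```python
import numpy as np

# Published selu constants (alpha and lambda from the SNN paper).
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # Scaled exponential linear unit: linear for x > 0, saturating
    # exponential for x <= 0, scaled so activations self-normalize.
    x = np.asarray(x, dtype=np.float64)
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
```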
Keras 2.0.5
- Add beta CNTK backend.
- TensorBoard improvements.
- Documentation improvements.
- Bug fixes and performance improvements.
- Improve style transfer example script.
API changes:
- Add `return_state` constructor argument to RNNs.
- Add `skip_compile` option to `load_model`.
- Add `categorical_hinge` loss function.
- Add `sparse_top_k_categorical_accuracy` metric.
- Add new options to the `TensorBoard` callback.
- Add `TerminateOnNaN` callback.
- Generalize the `Embedding` layer to N (>=2) input dimensions.
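The `categorical_hinge` loss follows the standard multi-class hinge formulation; here is a NumPy sketch (illustrative, assuming one-hot targets, not the Keras source):

```python
import numpy as np

# NumPy sketch of the categorical (multi-class) hinge loss:
# max(0, 1 + best wrong-class score - true-class score).
def categorical_hinge(y_true, y_pred):
    pos = np.sum(y_true * y_pred, axis=-1)          # true-class score
    neg = np.max((1.0 - y_true) * y_pred, axis=-1)  # best wrong-class score
    return np.maximum(0.0, neg - pos + 1.0)
```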
Keras 2.0.4
- Documentation improvements.
- Docstring improvements.
- Update some examples scripts (in particular, new deep dream example).
- Bug fixes and performance improvements.
API changes:
- Add `logsumexp` and `identity` to backend.
- Add `logcosh` loss.
- New signature for `add_weight` in `Layer`.
- `get_initial_states` in `Recurrent` is now `get_initial_state`.
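The `logcosh` loss can be sketched in NumPy (illustrative, not the Keras implementation); it behaves like `0.5 * error**2` for small errors and like `abs(error) - log(2)` for large ones, making it a smooth, outlier-robust alternative to mean squared error:

```python
import numpy as np

# NumPy sketch of the logcosh regression loss: mean of log(cosh(error)).
def logcosh(y_true, y_pred):
    err = y_pred - y_true
    return np.mean(np.log(np.cosh(err)), axis=-1)
```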