
Releases: keras-team/keras

Keras Release 2.10.0

02 Sep 21:31
b80dd12

Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.10.0 for more details.

Full Changelog: v2.9.0...v2.10.0

Keras Release 2.10.0 RC1

02 Sep 20:31
b80dd12
Pre-release

Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.10.0-rc3 for more details.


Keras Release 2.9.0

13 May 20:03
07e1374

Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.9.0 for more details.

Full Changelog: v2.8.0...v2.9.0

Keras Release 2.9.0 RC2

22 Apr 18:03
07e1374
Pre-release

Full Changelog: v2.9.0-rc1...v2.9.0-rc2

Keras Release 2.9.0 RC1

18 Apr 17:43
27e3966
Pre-release

What's Changed

  • Cherrypick Keras DTensor related updates into keras 2.9 by @qlzh727 in #16379

Full Changelog: v2.9.0-rc0...v2.9.0-rc1

Keras Release 2.9.0 RC0

04 Apr 17:50
Pre-release

Please see https://github.com/tensorflow/tensorflow/blob/r2.9/RELEASE.md for Keras release notes.

Major Features and Improvements

  • tf.keras:
    • Added tf.keras.applications.resnet_rs models. This includes the ResNetRS50, ResNetRS101, ResNetRS152, ResNetRS200, ResNetRS270, ResNetRS350 and ResNetRS420 model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies
    • Added tf.keras.optimizers.experimental.Optimizer. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on tf.keras.optimizers.experimental.Optimizer. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols tf.keras.optimizers.Optimizer/Adam/etc will point to the new optimizers, and the previous generation of optimizers will be moved to tf.keras.optimizers.legacy.Optimizer/Adam/etc.
    • Added L2 unit normalization layer tf.keras.layers.UnitNormalization.
    • Added tf.keras.regularizers.OrthogonalRegularizer, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
    • Added tf.keras.layers.RandomBrightness layer for image preprocessing.
    • Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout, but can only view the logs. You can use tf.keras.utils.disable_interactive_logging() to write the logs to ABSL logging. You can also use tf.keras.utils.enable_interactive_logging() to change it back to stdout, or tf.keras.utils.is_interactive_logging_enabled() to check if interactive logging is enabled.
    • Changed default value for the verbose argument of Model.evaluate() and Model.predict() to "auto", which defaults to verbose=1 for most cases and defaults to verbose=2 when used with ParameterServerStrategy or with interactive logging disabled.
    • Argument jit_compile in Model.compile() now applies to Model.evaluate() and Model.predict(). Setting jit_compile=True in compile() compiles the model's training, evaluation, and inference steps to XLA. Note that jit_compile=True may not necessarily work for all models.
    • Added DTensor-related Keras APIs under the tf.keras.dtensor namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide at https://www.tensorflow.org/ for more details about DTensor.
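Several of the additions above can be exercised together. The following is a minimal sketch assuming TensorFlow 2.9+; the shapes, factors, and random data are illustrative, not taken from the release notes:

```python
# Minimal sketch exercising several 2.9 additions: RandomBrightness,
# OrthogonalRegularizer, UnitNormalization, and verbose="auto".
# Assumes tensorflow>=2.9; shapes and hyperparameters are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.RandomBrightness(factor=0.2),  # new image preprocessing layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(
        16,
        # New regularizer encouraging orthogonality between kernel rows.
        kernel_regularizer=tf.keras.regularizers.OrthogonalRegularizer(factor=0.01),
    ),
    tf.keras.layers.UnitNormalization(),  # L2-normalizes along the last axis
    tf.keras.layers.Dense(10),
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

x = tf.random.uniform((8, 32, 32, 3))
y = tf.random.uniform((8,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)
model.evaluate(x, y, verbose="auto")  # verbose now defaults to "auto"
```

Note that `tf.keras.optimizers.SGD` is used here rather than the `tf.keras.optimizers.experimental` symbols, since (as described above) the top-level names are slated to point at the reworked optimizers in a later release.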

Full Changelog: v2.8.0-rc0...v2.9.0-rc0

Keras Release 2.8.0

03 Feb 05:13

Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.8.0 for more details.

Keras Release 2.8.0 RC1

18 Jan 17:54
Pre-release

What's Changed

  • Compute LSTM and GRU via cuDNN for RaggedTensors. by @foxik in #15862

Full Changelog: v2.8.0-rc0...v2.8.0-rc1

Keras Release 2.8.0 RC0

22 Dec 18:26
Pre-release

Please see https://github.com/tensorflow/tensorflow/blob/r2.8/RELEASE.md for Keras release notes.

  • tf.keras:
    • Preprocessing Layers
      • Added a tf.keras.layers.experimental.preprocessing.HashedCrossing
        layer which applies the hashing trick to the concatenation of crossed
        scalar inputs. This provides a stateless way to try adding feature crosses
        of integer or string data to a model.
      • Removed keras.layers.experimental.preprocessing.CategoryCrossing. Users
        should migrate to the HashedCrossing layer or use
        tf.sparse.cross/tf.ragged.cross directly.
      • Added additional standardize and split modes to TextVectorization.
        • standardize="lower" will lowercase inputs.
        • standardize="string_punctuation" will remove all punctuation.
        • split="character" will split on every unicode character.
      • Added an output_mode argument to the Discretization and Hashing
        layers with the same semantics as other preprocessing layers. All
        categorical preprocessing layers now support output_mode.
      • All preprocessing layer output will follow the compute dtype of a
        tf.keras.mixed_precision.Policy, unless constructed with
        output_mode="int" in which case output will be tf.int64.
        The output type of any preprocessing layer can be controlled individually
        by passing a dtype argument to the layer.
    • tf.random.Generator for keras initializers and all RNG code.
      • Added 3 new APIs for enabling/disabling/checking the usage of
        tf.random.Generator in the Keras backend, which will be the new backend
        for all RNG in Keras. We plan to switch on the new code path by default
        in tf 2.8, and the behavior change will likely cause some breakage on
        the user side (e.g. if a test is checking against some golden number).
        These 3 APIs allow users to disable the new behavior and switch back to
        the legacy behavior if they prefer. In the future (e.g. tf 2.10), we
        expect to remove the legacy code path (stateful random ops) entirely,
        and these 3 APIs will be removed as well.
    • tf.keras.callbacks.experimental.BackupAndRestore is now available as
      tf.keras.callbacks.BackupAndRestore. The experimental endpoint is
      deprecated and will be removed in a future release.
    • tf.keras.experimental.SidecarEvaluator is now available as
      tf.keras.utils.SidecarEvaluator. The experimental endpoint is
      deprecated and will be removed in a future release.
    • Metrics update and collection logic in default Model.train_step() is now
      customizable via overriding Model.compute_metrics().
    • Losses computation logic in default Model.train_step() is now
      customizable via overriding Model.compute_loss().
    • jit_compile added to Model.compile() on an opt-in basis to compile the
      model's training step with XLA. Note that
      jit_compile=True may not necessarily work for all models.
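The compute_loss() hook and the opt-in jit_compile flag can be sketched as follows; this is a minimal illustration assuming TensorFlow 2.8+, and the 0.01 activity penalty added to the loss is purely illustrative:

```python
# Sketch of overriding Model.compute_loss() and opting in to XLA via
# jit_compile=True. Assumes tensorflow>=2.8; the penalty term is illustrative.
import tensorflow as tf

class PenalizedModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

    def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None, **kwargs):
        # Default losses computation, plus a custom penalty on the predictions.
        base = super().compute_loss(x=x, y=y, y_pred=y_pred,
                                    sample_weight=sample_weight, **kwargs)
        return base + 0.01 * tf.reduce_mean(tf.square(y_pred))

model = PenalizedModel()
model.compile(optimizer="sgd", loss="mse", jit_compile=True)  # opt-in XLA compile
x = tf.random.uniform((16, 4))
y = tf.random.uniform((16, 1))
model.fit(x, y, epochs=1, verbose=0)
```

Overriding compute_metrics() works analogously for the metrics update and collection logic in the default train_step().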

What's Changed

  • Cleanup legacy Keras files by @qlzh727 in #14256
  • Sync OSS keras to head. by @qlzh727 in #14300
  • Update build script for GPU build. by @copybara-service in #14336
  • Move the LossReduction class from tf to Keras. by @copybara-service in #14362
  • Update keras API generate script. by @copybara-service in #14418
  • Adding extra target that are needed by PIP package dependency. by @copybara-service in #14421
  • Add test related targets to PIP package list. by @copybara-service in #14427
  • Sync OSS keras to head. by @copybara-service in #14428
  • Update visibility setting for keras/tests to enable PIP package testing. by @copybara-service in #14429
  • Remove items from PIP_EXCLUDED_FILES which is needed with testing PIP. by @copybara-service in #14431
  • Split bins into num_bins and bin_boundaries arguments for discretization by @copybara-service in #14507
  • Update pbtxt to use _PRFER_OSS_KERAS=1. by @copybara-service in #14519
  • Sync OSS keras to head. by @copybara-service in #14572
  • Sync OSS keras to head. by @copybara-service in #14614
  • Cleanup the bazelrc and remove unrelated items to keras. by @copybara-service in #14616
  • Sync OSS keras to head. by @copybara-service in #14624
  • Remove object metadata when saving SavedModel. by @copybara-service in #14697
  • Fix static shape inference for Resizing layer. by @copybara-service in #14712
  • Make TextVectorization work with list input. by @copybara-service in #14711
  • Remove deprecated methods of Sequential model. by @copybara-service in #14714
  • Improve Model docstrings by @copybara-service in #14726
  • Add migration doc for legacy_tf_layers/core.py. by @copybara-service in #14740
  • PR #43417: Fixes #42872: map_to_outputs_names always returns a copy by @copybara-service in #14755
  • Rename the keras.py to keras_lib.py to resolve the name conflict during OSS test. by @copybara-service in #14778
  • Switch to tf.io.gfile for validating vocabulary files. by @copybara-service in #14788
  • Avoid serializing generated thresholds for AUC metrics. by @copybara-service in #14789
  • Fix data_utils.py when name ends with .tar.gz by @copybara-service in #14777
  • Fix lookup layer oov token check when num_oov_indices > len(vocabulary tokens) by @copybara-service in #14793
  • Update callbacks.py by @jvishnuvardhan in #14760
  • Fix keras metric.result_state when the metric variables are sharded variable. by @copybara-service in #14790
  • Fix typos in CONTRIBUTING.md by @amogh7joshi in #14642
  • Fixed ragged sample weights by @DavideWalder in #14804
  • Pin the protobuf version to 3.9.2 which is same as the TF. by @copybara-service in #14835
  • Make variable scope shim regularizer adding check for attribute presence instead of instance class by @copybara-service in #14837
  • Add missing license header for leakr check. by @copybara-service in #14840
  • Fix TextVectorization with output_sequence_length on unknown input shapes by @copybara-service in #14832
  • Add more explicit error message for instance type checking of optimizer. by @copybara-service in #14846
  • Set aggregation for variable when using PS Strategy for aggregating variables when running multi-gpu tests. by @copybara-service in #14845
  • Remove unnecessary reshape layer in MobileNet architecture by @copybara-service in #14854
  • Removes caching of the convolution tf.nn.convolution op. While this provided some performance benefits, it also produced some surprising behavior for users in eager mode. by @copybara-service in #14855
  • Output int64 by default from Discretization by @copybara-service in #14841
  • add patterns to .gitignore by @haifeng-jin in #14861
  • Clarify documentation of DepthwiseConv2D by @vinhill in #14817
  • add DepthwiseConv1D layer by @fsx950223 in #14863
  • Make model summary wrap by @Llamrei in #14865
  • Update the link in Estimator by @hirobf10 in #14901
  • Fix int given for float args by @SamuelMarks in #14900
  • Fix RNN, StackedRNNCells with nested state_size, output_size TypeError issues by @Ending2015a in #14905
  • Fix the use of imagenet_utils.preprocess_input within a Lambda layer with mixed precision by @anth2o in #14917
  • Fix docstrings in MultiHeadAttention layer call argument return_attention_scores. by @guillesanbri in #14920
  • Check if layer has _metrics_lock attribute by @DanBmh in #14903
  • Make keras.Model picklable by @adriangb in #14748
  • Fix typo in docs by @seanmor5 in #14946
  • use getter setter by @fsx950223 in #14948
  • Close _SESSION.session in clear_session by @sfreilich in #14414
  • Fix keras nightly PIP package build. by @copybara-service in #14957
  • Fix EarlyStopping stop at first epoch when patience=0 ; add auc to au… by @DachuanZhao in...

Keras Release 2.7.0

03 Nov 16:24
2c48a3b

Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.7.0 for more details.