
Update tensorflow requirement from !=2.6.0,!=2.6.1,<2.15.0,>=2.0.0 to >=2.0.0,!=2.6.0,!=2.6.1,<2.19.0 #1023

Conversation

dependabot[bot]
Contributor

@dependabot dependabot bot commented on behalf of github Nov 18, 2024

Updates the requirements on tensorflow to permit the latest version.
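The effect of the range change in the PR title can be checked directly with the `packaging` library (a minimal sketch; `packaging` is the library pip itself uses for version specifiers):

```python
from packaging.specifiers import SpecifierSet

# The old and new tensorflow requirements from this PR.
old = SpecifierSet(">=2.0.0,!=2.6.0,!=2.6.1,<2.15.0")
new = SpecifierSet(">=2.0.0,!=2.6.0,!=2.6.1,<2.19.0")

# The update's only effect is raising the upper bound to permit 2.15-2.18.
print("2.18.0" in old)  # False: above the old <2.15.0 cap
print("2.18.0" in new)  # True: the latest release is now permitted
print("2.6.0" in new)   # False: the exclusions are unchanged
```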

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.18.0

Release 2.18.0

TensorFlow

Breaking Changes

  • tf.lite

    • C API:
      • An optional fourth parameter was added to TfLiteOperatorCreate as a step toward a cleaner API for TfLiteOperator. The function TfLiteOperatorCreate was added recently, in TensorFlow Lite version 2.17.0 (released on 7/11/2024), so we do not expect much code to be using it yet. Any breakage can be resolved by passing nullptr as the new fourth parameter.
  • TensorRT support is disabled in CUDA builds for code health improvement.

  • Hermetic CUDA support is added.

    Hermetic CUDA uses a specific downloadable version of CUDA instead of the user’s locally installed CUDA. Bazel will download CUDA, CUDNN and NCCL distributions, and then use CUDA libraries and tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions.

Known Caveats

Major Features and Improvements

  • TensorFlow now supports and is compiled with NumPy 2.0 by default. Please see the NumPy 2 release notes and the NumPy 2 migration guide.
    • Note that NumPy's type promotion rules have been changed (see NEP 50 for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes in results.
    • TensorFlow will continue to support NumPy 1.26 until 2025, aligning with the community standard deprecation timeline.
  • tf.lite:
    • The LiteRT repo is live (see announcement), which means that in the coming months there will be changes to the development experience for TFLite. The TF Lite Runtime source will be moved later this year, and sometime after that we will start accepting contributions through that repo.
  • SignatureRunner is now supported for models with no signatures.
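The NEP 50 change above is easy to observe with NumPy alone: under the new rules a Python scalar no longer promotes a float32 operand to float64. A minimal sketch (the dtype of the result depends on which NumPy major version is installed):

```python
import numpy as np

x = np.float32(3.0)

# Under NEP 50 (NumPy >= 2.0) the Python scalar 3.0 adopts the float32
# dtype of `x`, so the result stays float32; under NumPy 1.x value-based
# promotion, the same expression yields a float64.
result = x + 3.0
print(np.__version__, result.dtype)

# Promotion between two concrete dtypes is unchanged in both versions:
assert np.result_type(np.float32, np.float64) == np.float64
```

This is exactly the kind of silent precision change the release notes warn about: code that relied on the old implicit upcast to float64 may now accumulate float32 rounding error.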

Bug Fixes and Other Changes

  • tf.data

    • Add an optional synchronous argument to map, to specify that the map should run synchronously, as opposed to being parallelized when options.experimental_optimization.map_parallelization=True. This saves memory compared to setting num_parallel_calls=1.
    • Add an optional use_unbounded_threadpool argument to map, to specify that the map should use an unbounded threadpool instead of the default pool sized by the number of cores on the machine. This can improve throughput for map functions that perform IO or otherwise release the CPU.
    • Add tf.data.experimental.get_model_proto to allow users to peek into the analytical model inside of a dataset iterator.
  • tf.lite

    • Dequantize op supports TensorType_INT4.
      • This change includes per-channel dequantization.
    • Add support for stablehlo.composite.
    • EmbeddingLookup op supports per-channel quantization and TensorType_INT4 values.
    • FullyConnected op supports TensorType_INT16 activation and TensorType_Int4 weight per-channel quantization.
  • tf.tensor_scatter_update, tf.tensor_scatter_add, and scatter ops of other reduce types

    • Support bad_indices_policy.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Akhil Goel, akhilgoe, Alexander Pivovarov, Amir Samani, Andrew Goodbody, Andrey Portnoy, Anthony Platanios, bernardoArcari, Brett Taylor, buptzyb, Chao, Christian Clauss, Cocoa, Daniil Kutz, Darya Parygina, dependabot[bot], Dimitris Vardoulakis, Dragan Mladjenovic, Elfie Guo, eukub, Faijul Amin, flyingcat, Frédéric Bastien, ganyu.08, Georg Stefan Schmid, Grigory Reznikov, Harsha H S, Harshit Monish, Heiner, Ilia Sergachev, Jan, Jane Liu, Jaroslav Sevcik, Kaixi Hou, Kanvi Khanna, Kristof Maar, Kristóf Maár, LakshmiKalaKadali, Lbertho-Gpsw, lingzhi98, MarcoFalke, Masahiro Hiramori, Mmakevic-Amd, mraunak, Nobuo Tsukamoto, Notheisz57, Olli Lupton, Pearu Peterson, pemeliya, Peyara Nando, Philipp Hack, Phuong Nguyen, Pol Dellaiera, Rahul Batra, Ruturaj Vaidya, sachinmuradi, Sergey Kozub, Shanbin Ke, Sheng Yang, shengyu, Shraiysh, Shu Wang, Surya, sushreebarsa, Swatheesh-Mcw, syzygial, Tai Ly, terryysun, tilakrayal, Tj Xu, Trevor Morris, Tzung-Han Juang, wenchenvincent, wondertx, Xuefei Jiang, Ye Huang, Yimei Sun, Yunlong Liu, Zahid Iqbal, Zhan Lu, Zoranjovanovic-Ns, Zuri Obozuwa

Changelog

Sourced from tensorflow's changelog.

Release 2.18.0

TensorFlow

Breaking Changes

  • tf.lite

    • Interpreter:
      • tf.lite.Interpreter now warns of its future deletion and redirects users to its new location at ai_edge_litert.interpreter. See the migration guide for details.
    • C API:
      • An optional fourth parameter was added to TfLiteOperatorCreate as a step toward a cleaner API for TfLiteOperator. The function TfLiteOperatorCreate was added recently, in TensorFlow Lite version 2.17.0 (released on 7/11/2024), so we do not expect much code to be using it yet. Any breakage can be resolved by passing nullptr as the new fourth parameter.
  • TensorRT support is disabled in CUDA builds for code health improvement.

  • TensorFlow now supports and is compiled with NumPy 2.0 by default. Please see the NumPy 2 release notes and the NumPy 2 migration guide.

    • Note that NumPy's type promotion rules have been changed (see NEP 50 for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes in results.
    • TensorFlow will continue to support NumPy 1.26 until 2025, aligning with the community standard deprecation timeline.
  • Hermetic CUDA support is added.

    Hermetic CUDA uses a specific downloadable version of CUDA instead of the user’s locally installed CUDA. Bazel will download CUDA, CUDNN and NCCL distributions, and then use CUDA libraries and tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions.

  • Remove the EnumNamesXNNPackFlags function in tensorflow/lite/acceleration/configuration/configuration_generated.h.

    This change is a bug fix in the automatically generated code, produced by the new flatbuffers generator. The flatbuffers library is updated to 24.3.25 in tensorflow/tensorflow@c17d64d; the new version includes google/flatbuffers#7813, which fixed an underlying flatbuffers code-generator bug.

Known Caveats

Major Features and Improvements

  • tf.lite:
    • The LiteRT repo is live (see announcement), which means that in the coming months there will be changes to the development experience for TFLite. The TF Lite Runtime source will be moved later this year, and sometime after that we will start accepting contributions through that repo.
    • SignatureRunner is now supported for models with no signatures.

Bug Fixes and Other Changes

  • tf.data

    • Add an optional synchronous argument to map, to specify that the map should run synchronously, as opposed to being parallelized when options.experimental_optimization.map_parallelization=True. This saves memory compared to setting num_parallel_calls=1.
    • Add an optional use_unbounded_threadpool argument to map, to specify that the map should use an unbounded threadpool instead of the default pool sized by the number of cores on the machine. This can improve throughput for map functions that perform IO or otherwise release the CPU.
    • Add tf.data.experimental.get_model_proto to allow users to peek into the analytical model inside of a dataset iterator.
  • tf.lite

    • Dequantize op supports TensorType_INT4.
      • This change includes per-channel dequantization.
    • Add support for stablehlo.composite.
    • EmbeddingLookup op supports per-channel quantization and TensorType_INT4 values.
    • FullyConnected op supports TensorType_INT16 activation and TensorType_Int4 weight per-channel quantization.
    • Enable per-tensor quantization support in dynamic range quantization of TRANSPOSE_CONV layer. Fixes TFLite converter bug.

... (truncated)

Commits
  • 6550e4b Merge pull request #78464 from tensorflow/rtg0795-patch-1
  • 7e0c244 Merge pull request #78463 from tensorflow-jenkins/version-numbers-2.18.0-21101
  • 35624d2 Update RELEASE.md to move TFLite SignatureRunner to the right section
  • 8d2c5e1 Update version numbers to 2.18.0
  • d5f4a3f Merge pull request #77589 from tensorflow-jenkins/version-numbers-2.18.0rc2-1...
  • 7cbcbf3 Update version numbers to 2.18.0-rc2
  • 84c9398 Merge pull request #77576 from tensorflow/r2.18-be4f646ec43
  • 8fca5e7 PR #17430: [ROCm] Use unique_ptr for TupleHandle in pjrt_se_client
  • 2c3c798 Merge pull request #77025 from tensorflow-jenkins/version-numbers-2.18.0rc1-2...
  • 10693a4 Update version numbers to 2.18.0-rc1
  • Additional commits viewable in compare view

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Updates the requirements on [tensorflow](https://github.com/tensorflow/tensorflow) to permit the latest version.
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](tensorflow/tensorflow@v2.0.0...v2.18.0)

---
updated-dependencies:
- dependency-name: tensorflow
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the "dependencies" label (Pull requests that update a dependency file) Nov 18, 2024
Contributor Author

dependabot bot commented on behalf of github Nov 18, 2024

Dependabot tried to add @jklaise and @mauicv as reviewers to this PR, but received the following error from GitHub:

POST https://api.github.com/repos/SeldonIO/alibi/pulls/1023/requested_reviewers: 422 - Reviews may only be requested from collaborators. One or more of the users or teams you specified is not a collaborator of the SeldonIO/alibi repository. // See: https://docs.github.com/rest/pulls/review-requests#request-reviewers-for-a-pull-request

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

Member

@sakoush sakoush left a comment


LGTM

@@ -35,7 +35,7 @@ jobs:
     strategy:
       matrix:
         os: [ ubuntu-latest ]
-        python-version: [ '3.8', '3.9', '3.10', '3.11']
+        python-version: ['3.9', '3.10', '3.11']
Member

should we also add 3.12 (and potentially 3.13)?

Collaborator

I am planning other PRs: one to completely remove 3.8 and another to add 3.12.

@@ -35,7 +35,7 @@ jobs:
     strategy:
       matrix:
         os: [ ubuntu-latest ]
-        python-version: [ '3.8', '3.9', '3.10', '3.11']
+        python-version: ['3.9', '3.10', '3.11']
         include: # Run windows tests on only one python version
           - os: windows-latest
             python-version: '3.11'
Member

set it to at least 3.12?

@@ -1,8 +1,8 @@
 import pytest
 from pytest_lazyfixture import lazy_fixture
+import torch
Member

why is torch required?

Collaborator

It is used by one of the fixtures.

setup.py Outdated
# `keras 3` becomes the default for `tensorflow >= 2.16.0`
# which is not yet supported by `transformers`
'tensorflow': [
'tensorflow>=2.0.0, !=2.6.0, !=2.6.1, <2.19.0',
Member

should the min version for tf be 2.16.0 now?

setup.py Outdated
@@ -17,14 +17,20 @@ def readme():
'numba>=0.50.0, !=0.54.0, <0.60.0', # Avoid 0.54 due to: https://github.com/SeldonIO/alibi/issues/466
],

'tensorflow': ['tensorflow>=2.0.0, !=2.6.0, !=2.6.1, <2.15.0'],
# `keras 3` becomes the default for `tensorflow >= 2.16.0`
# which is not yet supported by `transformers`
Member

Update the comment, as I am not sure how it relates to the version you are setting, e.g. for tf_keras.

@RobertSamoilescu RobertSamoilescu merged commit e04a354 into master Dec 5, 2024
11 of 19 checks passed
@dependabot dependabot bot deleted the dependabot/pip/tensorflow-gte-2.0.0-and-neq-2.6.0-and-neq-2.6.1-and-lt-2.19.0 branch December 5, 2024 15:07