Update tensorflow requirement from !=2.6.0,!=2.6.1,<2.15.0,>=2.0.0 to >=2.0.0,!=2.6.0,!=2.6.1,<2.19.0 #1023

Merged
Commits (30)
4000b36  Update tensorflow requirement (dependabot[bot], Nov 18, 2024)
39adc9f  Upperbound spacy (RobertSamoilescu, Nov 28, 2024)
5ec4b7b  Upperbound tf to 2.16 because of keras 3 and transformers compatibility (RobertSamoilescu, Nov 28, 2024)
ba44c42  Fixed PDP tests (RobertSamoilescu, Nov 29, 2024)
cc47fc5  Updated alibi-testing branch for testing purposes (RobertSamoilescu, Nov 29, 2024)
b3641a8  Modified keras import (RobertSamoilescu, Nov 29, 2024)
9eeb657  Revert "Modified keras import" (RobertSamoilescu, Nov 29, 2024)
eee40d5  Included keras legacy (RobertSamoilescu, Nov 29, 2024)
5a1f10c  Fixed setup.py (RobertSamoilescu, Dec 2, 2024)
b17b03e  Fixed IG tests (RobertSamoilescu, Dec 2, 2024)
3e17adf  Fixed cfrl tests (RobertSamoilescu, Dec 2, 2024)
4af6c22  Fixed cfrl saving (RobertSamoilescu, Dec 2, 2024)
ca0dd4d  Replace tf.keras with legacy keras (RobertSamoilescu, Dec 2, 2024)
138c212  Fixed cfproto tests (RobertSamoilescu, Dec 3, 2024)
1db3f74  Refactored and fixed tf1 tests (RobertSamoilescu, Dec 3, 2024)
8623ecc  Fixed flake8 issues (RobertSamoilescu, Dec 3, 2024)
a916ec7  Fixed mypy issues (RobertSamoilescu, Dec 3, 2024)
9c0007e  Removed python3.8 from ci (RobertSamoilescu, Dec 3, 2024)
48f2fab  Fixed tf1 tests (RobertSamoilescu, Dec 3, 2024)
791f7af  Improved linting (RobertSamoilescu, Dec 3, 2024)
93e81e1  Included test makefile entry (RobertSamoilescu, Dec 3, 2024)
64beec2  Included tf_legacy for doc examples (RobertSamoilescu, Dec 3, 2024)
1f3bdc9  Temp trigger the test for notebooks on push (RobertSamoilescu, Dec 3, 2024)
8f86bd2  Removed python 3.8 from test_all_notebooks (RobertSamoilescu, Dec 3, 2024)
ea0f2cb  Fixed test make command (RobertSamoilescu, Dec 3, 2024)
77a0a46  Included tf_legacy to more notebooks (RobertSamoilescu, Dec 3, 2024)
a39d198  Removed legacy keras import (RobertSamoilescu, Dec 5, 2024)
f9db0d3  Fixed path dependent tree shap notebook (RobertSamoilescu, Dec 5, 2024)
f715c9d  Reverted dev requirements and test notebooks workflow (RobertSamoilescu, Dec 5, 2024)
9d4088e  Set lowerbound for tf (RobertSamoilescu, Dec 5, 2024)
Files changed
6 changes: 2 additions & 4 deletions .github/workflows/ci.yml
@@ -35,7 +35,7 @@ jobs:
     strategy:
       matrix:
         os: [ ubuntu-latest ]
-        python-version: [ '3.8', '3.9', '3.10', '3.11']
+        python-version: ['3.9', '3.10', '3.11']
Member: should we also add 3.12 (and potentially 3.13)?

Collaborator: I am planning separate PRs: one to completely remove 3.8 and another to add 3.12.

         include: # Run windows tests on only one python version
           - os: windows-latest
             python-version: '3.11'
Member: should this be set to at least 3.12?

@@ -68,9 +68,7 @@ jobs:
           limit-access-to-actor: true

       - name: Test with pytest
-        run: |
-          pytest -m tf1 alibi
-          pytest -m "not tf1" alibi
+        run: make test

       - name: Upload coverage to Codecov
         uses: codecov/codecov-action@v4
5 changes: 5 additions & 0 deletions Makefile
@@ -64,3 +64,8 @@ check_licenses:
 tox-env=default
 repl:
 	env COMMAND="python" tox -e $(tox-env)
+
+.PHONY: test
+test:
+	TF_USE_LEGACY_KERAS=1 pytest -m "tf1" alibi
+	pytest -m "not tf1" alibi
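Note on the new target (background, not part of the diff): from TensorFlow 2.16 onward, `tf.keras` resolves to Keras 3 by default, and setting `TF_USE_LEGACY_KERAS=1` (with the `tf-keras` package installed) switches it back to the legacy Keras 2 implementation that the `tf1`-marked tests exercise. A minimal in-process sketch of the same switch:

    import os

    # Must be set before TensorFlow is first imported.
    os.environ["TF_USE_LEGACY_KERAS"] = "1"

    import tensorflow as tf

    # tf.keras now points at the legacy Keras 2 package (tf_keras).
    print(tf.keras.__name__)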
2 changes: 1 addition & 1 deletion alibi/datasets/default.py
@@ -141,7 +141,7 @@ def load_cats(target_size: tuple = (299, 299), return_X_y: bool = False) -> Unio
     for member in tar.getmembers():
         # data
         img = tar.extractfile(member).read()  # type: ignore[union-attr]
-        img = PIL.Image.open(BytesIO(img))
+        img = PIL.Image.open(BytesIO(img))  # type: ignore[attr-defined]
         img = np.array(img.resize(target_size))
         img = np.expand_dims(img, axis=0)
         images.append(img)
3 changes: 3 additions & 0 deletions alibi/explainers/backends/tensorflow/cfrl_base.py
@@ -89,6 +89,9 @@ def __len__(self) -> int:
         return self.X.shape[0] // self.batch_size

     def __getitem__(self, idx) -> Dict[str, np.ndarray]:
+        if idx >= self.__len__():
+            raise IndexError("Index out of bounds.")
+
         if hasattr(self, 'num_classes'):
             # Generate random targets for classification task.
             tgts = np.random.randint(low=0, high=self.num_classes, size=self.batch_size)
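A note on the new bounds check (rationale inferred from the Keras 3 migration, not stated in the PR): newer Keras data utilities iterate a sequence by index and expect an `IndexError` to signal exhaustion, mirroring Python's sequence protocol, whereas legacy `tf.keras` stopped strictly at `__len__`. A toy sketch of that protocol:

    class Batches:
        """Toy fixed-length sequence following the same protocol."""

        def __init__(self, n_batches: int):
            self.n_batches = n_batches

        def __len__(self) -> int:
            return self.n_batches

        def __getitem__(self, idx: int) -> int:
            if idx >= len(self):
                # Signal exhaustion to consumers that iterate by index.
                raise IndexError("Index out of bounds.")
            return idx

    # A plain `for` loop uses indexed iteration and stops at IndexError.
    assert list(Batches(3)) == [0, 1, 2]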
2 changes: 1 addition & 1 deletion alibi/explainers/partial_dependence.py
@@ -368,7 +368,7 @@ def _partial_dependence(self,
             'average': averaged_predictions,
             'individual': predictions
         })
-        return pd
+        return pd  # type: ignore[return-value]

     @abstractmethod
     def _compute_pd(self,
2 changes: 1 addition & 1 deletion alibi/explainers/permutation_importance.py
@@ -348,7 +348,7 @@ def explain(self,  # type: ignore[override]
                                 sample_weight=sample_weight)

         # build and return the explanation object
-        return self._build_explanation(feature_names=feature_names,  # type: ignore[arg-type]
+        return self._build_explanation(feature_names=feature_names,
                                        individual_feature_importance=individual_feature_importance)

     @staticmethod
6 changes: 3 additions & 3 deletions alibi/explainers/tests/test_anchor_image.py
@@ -1,8 +1,8 @@
 import pytest
 from pytest_lazyfixture import lazy_fixture
-import torch
 import numpy as np
 import tensorflow as tf
+import torch

 from alibi.api.defaults import DEFAULT_META_ANCHOR, DEFAULT_DATA_ANCHOR_IMG
 from alibi.exceptions import PredictorCallError, PredictorReturnTypeError

Member: why is torch required?

Collaborator: It is used by one of the fixtures.
@@ -44,7 +44,7 @@ def func(image: np.ndarray) -> np.ndarray:

 @pytest.mark.parametrize('predict_fn', [lazy_fixture('models'), ], indirect=True)
 @pytest.mark.parametrize('models',
-                         [("mnist-cnn-tf2.2.0",), ("mnist-cnn-tf1.15.2.h5",), ("mnist-cnn-pt1.9.1.pt",)],
+                         [("mnist-cnn-tf2.18.0.keras",), ("mnist-cnn-pt1.9.1.pt",)],
                          indirect=True,
                          ids='models={}'.format
                          )

@@ -99,7 +99,7 @@ def test_sampler(predict_fn, models, mnist_data):

 @pytest.mark.parametrize('predict_fn', [lazy_fixture('models'), ], indirect=True)
 @pytest.mark.parametrize('models',
-                         [("mnist-cnn-tf2.2.0",), ("mnist-cnn-tf1.15.2.h5",), ("mnist-cnn-pt1.9.1.pt",)],
+                         [("mnist-cnn-tf2.18.0.keras",), ("mnist-cnn-pt1.9.1.pt",)],
                          indirect=True,
                          ids='models={}'.format
                          )
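The model fixtures drop the TF1-era `.h5` checkpoint and move the remaining TensorFlow model to a `.keras` suffix, the native zip-archive format that Keras 3 saves by default. A round-trip sketch (model and file name hypothetical):

    import tensorflow.keras as keras

    inputs = keras.Input(shape=(4,))
    outputs = keras.layers.Dense(1)(inputs)
    model = keras.Model(inputs, outputs)

    model.save("model.keras")  # native Keras archive format
    restored = keras.models.load_model("model.keras")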
27 changes: 19 additions & 8 deletions alibi/explainers/tests/test_cfrl.py
@@ -2,8 +2,10 @@

 import pytest
 from pytest_lazyfixture import lazy_fixture
+
 import numpy as np
 from numpy.testing import assert_allclose
+
 import tensorflow as tf
 import tensorflow.keras as keras
@@ -260,8 +262,8 @@ def tf_keras_iris_explainer(models, iris_data, rf_classifier):
     ])

     # need to define a wrapper for the decoder to return a list of tensors
-    class DecoderList(tf.keras.Model):
-        def __init__(self, decoder: tf.keras.Model, **kwargs):
+    class DecoderList(keras.Model):
+        def __init__(self, decoder: keras.Model, **kwargs):
             super().__init__(**kwargs)
             self.decoder = decoder
@@ -295,11 +297,20 @@ def call(self, input: Union[tf.Tensor, List[tf.Tensor]], **kwargs):
     return explainer


-@pytest.mark.parametrize('models', [('iris-ae-tf2.2.0', 'iris-enc-tf2.2.0')], ids='model={}'.format, indirect=True)
-@pytest.mark.parametrize('rf_classifier',
-                         [lazy_fixture('iris_data')],
-                         indirect=True,
-                         ids='clf=rf_{}'.format)
+@pytest.mark.parametrize(
+    'models',
+    [
+        ('iris-ae-tf2.18.0.keras', 'iris-enc-tf2.18.0.keras')
+    ],
+    ids='model={}'.format,
+    indirect=True
+)
+@pytest.mark.parametrize(
+    'rf_classifier',
+    [lazy_fixture('iris_data')],
+    indirect=True,
+    ids='clf=rf_{}'.format
+)
 def test_explainer(tf_keras_iris_explainer, iris_data):
     explainer = tf_keras_iris_explainer

@@ -317,7 +328,7 @@ def test_explainer(tf_keras_iris_explainer, iris_data):
     # Fit the explainer
     explainer.fit(X=iris_data["X_train"])

-    # Construct explanation object.
+    # # Construct explanation object.
     explainer.explain(X=iris_data["X_test"], Y_t=np.array([2]), C=None)
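Side note on the `DecoderList` wrapper above: the CFRL backend expects the decoder to return a list of tensors (one per heterogeneous output head), while the test autoencoder's decoder returns a single tensor. Its `call` body is elided from the diff; a sketch under the assumption that it simply wraps the decoder output in a one-element list:

    class DecoderList(keras.Model):
        def __init__(self, decoder: keras.Model, **kwargs):
            super().__init__(**kwargs)
            self.decoder = decoder

        def call(self, input, **kwargs):
            # Wrap the single decoder output in a one-element list.
            return [self.decoder(input, **kwargs)]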
117 changes: 61 additions & 56 deletions alibi/explainers/tests/test_integrated_gradients.py
@@ -1,10 +1,11 @@
 from functools import partial

-import numpy as np
 import pytest
-import tensorflow as tf
+import numpy as np
 from numpy.testing import assert_allclose
-from tensorflow.keras import Model
+
+import tensorflow as tf
+import tensorflow.keras as keras

 from alibi.api.interfaces import Explanation
 from alibi.explainers import IntegratedGradients
@@ -39,7 +40,7 @@

 # classification labels
 y_classification_ordinal = (X[:, 0] + X[:, 1] > 1).astype(int)
-y_classification_categorical = tf.keras.utils.to_categorical(y_classification_ordinal)
+y_classification_categorical = keras.utils.to_categorical(y_classification_ordinal)

 y_train_classification_ordinal = y_classification_ordinal[:N_TRAIN]
 y_train_classification_categorical = y_classification_categorical[:N_TRAIN, :]
@@ -59,13 +60,13 @@ def ffn_model(request):
     Simple feed-forward model with configurable data, loss function, output activation and dimension
     """
     config = request.param
-    inputs = tf.keras.Input(shape=config['X_train'].shape[1:])
-    x = tf.keras.layers.Dense(20, activation='relu')(inputs)
-    x = tf.keras.layers.Dense(20, activation='relu')(x)
-    outputs = tf.keras.layers.Dense(config['output_dim'], activation=config['activation'])(x)
+    inputs = keras.Input(shape=config['X_train'].shape[1:])
+    x = keras.layers.Dense(20, activation='relu')(inputs)
+    x = keras.layers.Dense(20, activation='relu')(x)
+    outputs = keras.layers.Dense(config['output_dim'], activation=config['activation'])(x)
     if config.get('squash_output', False):
-        outputs = tf.keras.layers.Reshape(())(outputs)
-    model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
+        outputs = keras.layers.Reshape(())(outputs)
+    model = keras.models.Model(inputs=inputs, outputs=outputs)
     model.compile(loss=config['loss'],
                   optimizer='adam')

@@ -80,17 +81,17 @@ def ffn_model_multi_inputs(request):
     Simple multi-inputs feed-forward model with configurable data, loss function, output activation and dimension
     """
    config = request.param
-    input0 = tf.keras.Input(shape=config['X_train_multi_inputs'][0].shape[1:])
-    input1 = tf.keras.Input(shape=config['X_train_multi_inputs'][1].shape[1:])
+    input0 = keras.Input(shape=config['X_train_multi_inputs'][0].shape[1:])
+    input1 = keras.Input(shape=config['X_train_multi_inputs'][1].shape[1:])

-    x = tf.keras.layers.Flatten()(input0)
-    x = tf.keras.layers.Concatenate()([x, input1])
+    x = keras.layers.Flatten()(input0)
+    x = keras.layers.Concatenate()([x, input1])

-    x = tf.keras.layers.Dense(20, activation='relu')(x)
-    outputs = tf.keras.layers.Dense(config['output_dim'], activation=config['activation'])(x)
+    x = keras.layers.Dense(20, activation='relu')(x)
+    outputs = keras.layers.Dense(config['output_dim'], activation=config['activation'])(x)
     if config.get('squash_output', False):
-        outputs = tf.keras.layers.Reshape(())(outputs)
-    model = tf.keras.models.Model(inputs=[input0, input1], outputs=outputs)
+        outputs = keras.layers.Reshape(())(outputs)
+    model = keras.models.Model(inputs=[input0, input1], outputs=outputs)
     model.compile(loss=config['loss'],
                   optimizer='adam')

@@ -106,13 +107,13 @@ def ffn_model_subclass(request):
     """
     config = request.param

-    class Linear(Model):
+    class Linear(keras.Model):

         def __init__(self, output_dim, activation):
             super(Linear, self).__init__()
-            self.dense_1 = tf.keras.layers.Dense(20, activation='relu')
-            self.dense_2 = tf.keras.layers.Dense(20, activation='relu')
-            self.dense_3 = tf.keras.layers.Dense(output_dim, activation)
+            self.dense_1 = keras.layers.Dense(20, activation='relu')
+            self.dense_2 = keras.layers.Dense(20, activation='relu')
+            self.dense_3 = keras.layers.Dense(output_dim, activation)

         def call(self, inputs, mask=None):
             if mask is not None:
@@ -124,10 +125,10 @@ def call(self, inputs, mask=None):
             outputs = self.dense_3(x)
             return outputs

-    model = Linear(config['output_dim'], activation=config['activation'])
-    model.compile(loss=config['loss'],
-                  optimizer='adam')
-
+    model_input = keras.layers.Input(shape=config['X_train'].shape[1:])
+    model_output = Linear(config['output_dim'], activation=config['activation'])(model_input)
+    model = keras.Model(inputs=model_input, outputs=model_output)
+    model.compile(loss=config['loss'], optimizer='adam')
     model.fit(config['X_train'], config['y_train'], epochs=1, batch_size=256, verbose=1)

     return model
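A note on this fixture change (rationale inferred, not stated in the PR): a purely subclassed model has no symbolic graph, so under Keras 3 its layers expose no defined `input`/`output` tensors, which the layer-targeted IntegratedGradients tests rely on. Calling the subclass on a symbolic input builds it inside a functional model:

    # Sketch with hypothetical shapes: the subclassed block becomes a single
    # layer of the functional model.
    model_input = keras.layers.Input(shape=(8,))
    model_output = Linear(2, activation='softmax')(model_input)
    model = keras.Model(inputs=model_input, outputs=model_output)

    # model.layers is [InputLayer, Linear]; the block's own sublayers sit one
    # level deeper, at model.layers[1].layers.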
@@ -141,14 +142,15 @@ def ffn_model_subclass_list_input(request):
     """
     config = request.param

-    class Linear(Model):
+    class Linear(keras.Model):

         def __init__(self, output_dim, activation):
             super(Linear, self).__init__()
-            self.flat = tf.keras.layers.Flatten()
-            self.concat = tf.keras.layers.Concatenate()
-            self.dense_1 = tf.keras.layers.Dense(20, activation='relu')
-            self.dense_2 = tf.keras.layers.Dense(output_dim, activation)
+
+            self.flat = keras.layers.Flatten()
+            self.concat = keras.layers.Concatenate()
+            self.dense_1 = keras.layers.Dense(20, activation='relu')
+            self.dense_2 = keras.layers.Dense(output_dim, activation)

         def call(self, inputs):
             inp0 = self.flat(inputs[0])
@@ -158,12 +160,14 @@ def call(self, inputs):
             outputs = self.dense_2(x)
             return outputs

-    model = Linear(config['output_dim'], activation=config['activation'])
-    model.compile(loss=config['loss'],
-                  optimizer='adam')
-
+    inputs = [
+        keras.layers.Input(shape=config['X_train_multi_inputs'][0].shape[1:]),
+        keras.layers.Input(shape=config['X_train_multi_inputs'][1].shape[1:])
+    ]
+    output = Linear(config['output_dim'], activation=config['activation'])(inputs)
+    model = keras.Model(inputs=inputs, outputs=output)
+    model.compile(loss=config['loss'], optimizer='adam')
     model.fit(config['X_train_multi_inputs'], config['y_train'], epochs=1, batch_size=256, verbose=1)

     return model


@@ -174,18 +178,15 @@ def ffn_model_sequential(request):
     """
     config = request.param
     layers = [
-        tf.keras.layers.InputLayer(input_shape=config['X_train'].shape[1:]),
-        tf.keras.layers.Dense(20, activation='relu'),
-        tf.keras.layers.Dense(config['output_dim'], activation=config['activation'])
+        keras.layers.InputLayer(input_shape=config['X_train'].shape[1:]),
+        keras.layers.Dense(20, activation='relu'),
+        keras.layers.Dense(config['output_dim'], activation=config['activation'])
     ]
     if config.get('squash_output', False):
-        layers.append(tf.keras.layers.Reshape(()))
-    model = tf.keras.models.Sequential(layers)
-    model.compile(loss=config['loss'],
-                  optimizer='adam')
-
+        layers.append(keras.layers.Reshape(()))
+    model = keras.models.Sequential(layers)
+    model.compile(loss=config['loss'], optimizer='adam')
     model.fit(config['X_train'], config['y_train'], epochs=1, batch_size=256, verbose=1)

     return model


@@ -547,7 +548,7 @@ def test_integrated_gradients_binary_classification_layer_subclass(ffn_model_sub
                                                                    target):
     model = ffn_model_subclass
     if layer_nb is not None:
-        layer = model.layers[layer_nb]
+        layer = model.layers[1].layers[layer_nb]
     else:
         layer = None
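This follows from the fixture change above: the subclassed `Linear` block is now nested inside a functional wrapper, so `model.layers` is `[InputLayer, Linear]` and the block's own sublayers are reached one level deeper via `model.layers[1].layers[layer_nb]`.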

@@ -614,14 +615,18 @@ def test_integrated_gradients_regression(ffn_model, method, baselines):
 def test_run_forward_from_layer(layer_nb,
                                 run_from_layer_inputs):
     # One layer ffn with all weights = 1.
-    inputs = tf.keras.Input(shape=(16,))
-    out = tf.keras.layers.Dense(8,
-                                kernel_initializer=tf.keras.initializers.Ones(),
-                                name='linear1')(inputs)
-    out = tf.keras.layers.Dense(1,
-                                kernel_initializer=tf.keras.initializers.Ones(),
-                                name='linear3')(out)
-    model = tf.keras.Model(inputs=inputs, outputs=out)
+    inputs = keras.Input(shape=(16,))
+    out = keras.layers.Dense(
+        8,
+        kernel_initializer=keras.initializers.Ones(),
+        name='linear1'
+    )(inputs)
+    out = keras.layers.Dense(
+        1,
+        kernel_initializer=keras.initializers.Ones(),
+        name='linear3'
+    )(out)
+    model = keras.Model(inputs=inputs, outputs=out)

     # Select layer
     layer = model.layers[layer_nb]

@@ -630,11 +635,11 @@ def test_run_forward_from_layer(layer_nb,
     dummy_input = np.zeros((1, 16))

     if run_from_layer_inputs:
-        x_layer = [tf.convert_to_tensor(np.ones((2,) + (layer.input_shape[1:])))]
+        x_layer = [tf.convert_to_tensor(np.ones((2,) + (layer.input.shape[1:])))]
         expected_shape = (2, 1)
         expected_values = 128
     else:
-        x_layer = tf.convert_to_tensor(np.ones((3,) + (layer.output_shape[1:])))
+        x_layer = tf.convert_to_tensor(np.ones((3,) + (layer.output.shape[1:])))
         expected_shape = (3, 1)
         expected_values = 8
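Background on the `input_shape`/`output_shape` change (a Keras 3 migration detail, not spelled out in the PR): Keras 3 removed the `layer.input_shape` and `layer.output_shape` convenience properties, so the shape is read off the symbolic tensors instead. Assuming the two-layer model built above:

    layer = model.layers[1]          # Dense(8, name='linear1')

    # Keras 2 style (raises AttributeError in Keras 3):
    # layer.input_shape              # -> (None, 16)

    # Keras 3 style, as used in the diff:
    print(layer.input.shape)         # -> (None, 16)
    print(layer.output.shape)        # -> (None, 8)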
6 changes: 3 additions & 3 deletions alibi/explainers/tests/test_partial_dependence.py
@@ -302,10 +302,10 @@ def test_grid_points_error(adult_data, use_int):
 def assert_feature_values_equal(exp_alibi: Explanation, exp_sklearn: Bunch):
     """ Compares feature values of `alibi` explanation and `sklearn` explanation. """
     if isinstance(exp_alibi.data['feature_names'][0], tuple):
-        for i in range(len(exp_sklearn['values'])):
-            assert np.allclose(exp_alibi.data['feature_values'][0][i], exp_sklearn['values'][i])
+        for i in range(len(exp_sklearn['grid_values'])):
+            assert np.allclose(exp_alibi.data['feature_values'][0][i], exp_sklearn['grid_values'][i])
     else:
-        assert np.allclose(exp_alibi.data['feature_values'][0], exp_sklearn['values'][0])
+        assert np.allclose(exp_alibi.data['feature_values'][0], exp_sklearn['grid_values'][0])


 def get_alibi_pd_explanation(predictor: BaseEstimator,
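The key rename tracks scikit-learn itself: the Bunch returned by `sklearn.inspection.partial_dependence` exposes the grid under `grid_values` from scikit-learn 1.3 onward, and the legacy `values` key was deprecated and subsequently removed. A quick illustration (dataset and model chosen arbitrarily):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import partial_dependence

    X, y = load_iris(return_X_y=True)
    clf = RandomForestClassifier(n_estimators=10).fit(X, y)

    pd_sklearn = partial_dependence(clf, X, features=[0])
    print(pd_sklearn['grid_values'])  # grid of feature values
    print(pd_sklearn['average'])      # averaged predictions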