
Fix np.float and np.float32 to float. #1257

Open · wants to merge 2 commits into main

Conversation

BrunoBelucci

Description

This PR fixes #1236 and #1207.

As the NumPy error message explains, np.float was a deprecated alias for the builtin float. The alias was originally deprecated in NumPy 1.20 and has since been removed, which causes the AttributeError reported in the linked issues.
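
For reference, a minimal sketch of the kind of replacement involved (generic example, not the project's actual code):

```python
import numpy as np

# Before (fails on NumPy >= 1.24, where the alias was removed):
# eps = np.finfo(np.float).eps

# After: either the builtin float (which NumPy treats as float64) ...
eps64 = np.finfo(float).eps        # ~2.22e-16
# ... or an explicit 32-bit dtype, as discussed in the review below.
eps32 = np.finfo(np.float32).eps   # ~1.19e-07
```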

Checklist

  • Linked issues (if existing)
  • Amended changelog for large changes (and added myself there as contributor)
  • Added/modified tests
  • Used pre-commit hooks when committing to ensure that code is compliant with hooks. Install hooks with pre-commit install.
    To run hooks independent of commit, execute pre-commit run --all-files

Make sure to have fun coding!

@graille

graille commented Feb 15, 2023

np.float is indeed deprecated, but I'm not sure np.float32 is also deprecated. I am pretty sure float is np.float64, so you should keep the np.float32 expressions, since GPUs usually work with 32-bit floating-point numbers.
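
This is easy to verify (illustrative snippet, not part of the PR):

```python
import numpy as np

# Only the plain aliases (np.float, np.int, np.bool, ...) were deprecated
# and later removed; the sized dtypes np.float32 / np.float64 still exist.
assert np.dtype(float) == np.float64      # builtin float maps to 64-bit
print(np.finfo(np.float64).eps)           # 2.220446049250313e-16
print(np.finfo(np.float32).eps)           # 1.1920929e-07
```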

@@ -470,7 +470,7 @@ def _set_parameters(
        if isinstance(y_center, torch.Tensor):
            eps = torch.finfo(y_center.dtype).eps
        else:
            eps = np.finfo(np.float).eps
Author commented:

32 or 64 bits? Why not y_center.dtype?
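
One way to follow that suggestion (a hypothetical sketch, not what the PR does; _eps_for is an invented name) would be to take eps from y_center's own dtype and only fall back to a fixed precision for non-float inputs:

```python
import numpy as np

def _eps_for(y_center) -> float:
    # Hypothetical helper: mirror the torch branch by reading the dtype
    # of the value itself instead of hard-coding a precision.
    dtype = np.asarray(y_center).dtype
    if np.issubdtype(dtype, np.floating):
        return float(np.finfo(dtype).eps)
    # Non-float input (e.g. integer targets): use a conservative 32-bit eps.
    return float(np.finfo(np.float32).eps)
```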

@@ -785,7 +785,7 @@ def fit(self, y: pd.Series, X: pd.DataFrame):
            self
        """
        y = self.preprocess(y)
        eps = np.finfo(np.float).eps
Author commented:

32 or 64 bits? We cannot use y.dtype because y and X may have different types.
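
If a data-dependent eps were still wanted here, one option (purely illustrative, not part of this PR; _eps_from is an invented name) is to promote the dtypes of y and X and take eps from the promoted floating type:

```python
import numpy as np
import pandas as pd

def _eps_from(y: pd.Series, X: pd.DataFrame) -> float:
    # Promote both dtypes so a float32 y combined with float64 features
    # still yields a sensible precision; default to float64 otherwise.
    promoted = np.result_type(y.to_numpy(), X.to_numpy())
    if not np.issubdtype(promoted, np.floating):
        promoted = np.dtype(np.float64)
    return float(np.finfo(promoted).eps)
```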

@BrunoBelucci
Author

> np.float is indeed deprecated, but I'm not sure np.float32 is also deprecated. I am pretty sure float is np.float64, so you should keep the np.float32 expressions, since GPUs usually work with 32-bit floating-point numbers.

You are absolutely right, I had assumed the opposite, namely that float was np.float32. So I should not have changed np.float32 to float, but it is still not clear to me whether eps = np.finfo(float).eps is meant to be 32 or 64 bits.

@BrunoBelucci
Author

After some thought, I decided to change np.float to np.float32, on the assumption that we usually work with GPUs: the eps of np.float64 is effectively 0 in np.float32 arithmetic (it gets absorbed), whereas the eps of np.float32 never is.
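
A quick check of this absorption effect (illustrative only):

```python
import numpy as np

x = np.float32(1.0)
eps64 = np.float32(np.finfo(np.float64).eps)   # ~2.22e-16, representable but tiny
eps32 = np.finfo(np.float32).eps               # ~1.19e-07

print(x + eps64 == x)   # True: float64's eps is absorbed in float32 arithmetic
print(x + eps32 == x)   # False: float32's eps still changes the value
```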

@EthanReid

Is this update still in progress?

Successfully merging this pull request may close these issues.

AttributeError: module 'numpy' has no attribute 'float'. Did you mean: 'cfloat'?