
Add testing env without optional dependencies #170

Closed
wants to merge 10 commits

Conversation

cbrnr
Contributor

@cbrnr cbrnr commented Apr 7, 2016

Fixes #166

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

Seems like the envs are working - but of course the first test in the minimal env has already failed because there is no sklearn :-).

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

Actually, only two tests seem to be affected: test_var_sklearn.py and test_xvschema.py. How do we proceed @mbillingr? Decorate all tests that require sklearn?

@mbillingr
Member

Do we have to decorate each function separately or can we decorate the whole class?

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

No idea - I guess decorators only work with functions?

@mbillingr
Member

You can also decorate classes (at least in Python 3, don't know about 2.7).

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

This would be nice, because in principle we could skip the whole Python file in some cases...

Can you check if the decorators from my last commit could work? How would you apply them to classes?

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

Before this whole decorator thing gets out of control - since we wrap all our tests in classes derived from unittest.TestCase anyway, can't we just require the import in the setUp method? Then maybe there's no need to create decorators at all.
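
A minimal sketch of that idea (the class and test names here are illustrative, not code from this PR):

import unittest


class TestWithOptionalDep(unittest.TestCase):
    def setUp(self):
        # Require the optional dependency here; every test in this class
        # is skipped automatically when the import fails.
        try:
            import sklearn  # noqa: F401
        except ImportError:
            self.skipTest("scikit-learn is not installed")

    def test_something(self):
        pass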

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

There is @unittest.skipIf - maybe we can use this decorator for our needs?

@mbillingr
Member

Before this whole decorator thing gets out of control - since we wrap all our tests in classes derived from unittest.TestCase anyway, can't we just require the import in the setUp method? Then maybe there's no need to create decorators at all.

Which means we would have to repeat these boilerplate checks.
If not too many tests are affected, this would be OK.

There is @unittest.skipIf - maybe we can use this decorator for our needs?

Maybe. We'd still have to pass a condition.
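
For illustration, the condition could be computed once at the top of a test module and then passed to the decorator (a sketch with made-up names, not code from this PR):

import unittest

try:
    import sklearn  # noqa: F401 -- only checking availability
    HAVE_SKLEARN = True
except ImportError:
    HAVE_SKLEARN = False


class TestXvSchema(unittest.TestCase):
    @unittest.skipUnless(HAVE_SKLEARN, "scikit-learn is not installed")
    def test_crossvalidation(self):
        pass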

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

The solution in my last commit works. However, the following tests fail because they rely on all backends:

  • scot.tests.test_ooapi.Test_MVARICA.testFunctionality
  • scot.tests.test_ooapi.Test_MVARICA.testModelIdentification
  • scot.tests.test_ooapi.Test_MVARICA.test_plotting
  • scot.tests.test_parallel.TestFunctions.test_output
  • scot.tests.test_plainica.TestICA.testModelIdentification
  • scot.tests.test_plotting
  • scot.tests.test_varica.TestMVARICA.testModelIdentification

And probably even more.

This shows that our optional dependencies are heavily worked into the tests. It will probably be quite painful to tease these tests apart. The alternative is to make sklearn and matplotlib required deps.

@mbillingr
Member

I would rather solve this problem by changing the backends so that they do not register themselves with the manager if their requirements are not met.

e.g. backend_sklearn.py (line 96+)

try:
    import sklearn
    backend.register('sklearn', generate)  # register only if sklearn imports
except ImportError:
    pass  # sklearn not installed: leave this backend unregistered

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

Very good - even cleaner with an else branch. What else?
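
The else variant would look roughly like this (a sketch, assuming backend and generate as defined in backend_sklearn.py):

try:
    import sklearn  # noqa: F401
except ImportError:
    pass  # sklearn missing: leave this backend unregistered
else:
    # only reached if the import succeeded
    backend.register('sklearn', generate)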

@mbillingr
Member

Nice! I didn't know you could use else with try/except.

@cbrnr
Contributor Author

cbrnr commented Apr 7, 2016

I wonder

  1. why the non-MKL env doesn't work (again!)
  2. where the error in the 3.5 MKL env occurs (the log isn't very specific)

@mbillingr
Member

  1. It's this awful libgfortran again 😠
  2. The log can't be more specific because the error happens in run_tests.sh, line 14. These lines should take into account that the libraries might not be available...

@cbrnr
Contributor Author

cbrnr commented Apr 11, 2016

Re 2, I've fixed the broken run_tests.sh script. What do we do about 1? It seems like Anaconda is not really interested in keeping specific old package versions around - in our 2.7 env, it automatically updates

  • scipy from 0.13.3 to 0.17.0
  • sklearn from 0.15.0 to 0.15.2
  • numpy from 1.8.2 to 1.9.3

I think the best thing to do would be to just use whatever package versions the current Anaconda distribution ships (and completely ignore non-MKL because the Anaconda default is MKL). To test the oldest supported packages, we should create a non-Anaconda-based Python env (e.g. installed via Ubuntu packages).

I.e., I suggest the following testing envs:

  • Anaconda Python 3 (latest versions of everything)
  • Anaconda Python 2 (latest versions of everything)
  • Python 3.X (need to test which Python 3 still works) and oldest package versions installed via apt/pip
  • Python 2.7 and oldest package versions installed via apt/pip

@cbrnr
Contributor Author

cbrnr commented Apr 11, 2016

  • How do we deal with scot.tests.test_ooapi.TestMVARICA.test_plotting?
  • The whole parallel processing business only works if joblib or sklearn is available. sklearn is an optional dependency, and we don't even mention joblib as an optional requirement. How do we deal with this (and the related tests in scot.tests.test_parallel.TestFunctions)?
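
For context, the usual fallback pattern for such a parallel helper looks roughly like this (a generic sketch, not SCoT's actual scot.parallel code):

def parallel_map(func, iterable, n_jobs=1):
    """Apply func to every item, using joblib when it is available."""
    try:
        from joblib import Parallel, delayed
    except ImportError:
        try:
            # older scikit-learn versions ship a bundled copy of joblib
            from sklearn.externals.joblib import Parallel, delayed
        except ImportError:
            return [func(item) for item in iterable]  # serial fallback
    return Parallel(n_jobs=n_jobs)(delayed(func)(item) for item in iterable)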

@cbrnr
Contributor Author

cbrnr commented Apr 11, 2016

@mbillingr do you know why the tests now fail after I've moved the imports into setUp?

@mbillingr
Member

do you know why the tests now fail after I've moved the imports into setUp?

Because they are imported locally.
Well, OK... they are imported, but the variables that refer to the modules are local to the setUp function. For solutions see StackOverflow.

@cbrnr
Contributor Author

cbrnr commented Apr 12, 2016

Hm. I thought they were known to the object. What do you think is the best solution here? Make plt global?

In general, I have the feeling that mixing core tests with optional tests is rather ugly. Can't we do something nicer, such as completely separate core tests from optional tests?

@mbillingr
Member

What do you think is the best solution here? Make plt global?

Global would work, but that is kinda ugly. self.plt could work too, but may require many changes.
It's also possible to catch import errors and skip certain tests if plt does not exist.
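
The self.plt variant would look roughly like this (a sketch, not the PR's code):

import unittest


class TestPlotting(unittest.TestCase):
    def setUp(self):
        import matplotlib.pyplot as plt
        # plt is local to setUp, so store it on the instance; the test
        # methods can then reach it as self.plt.
        self.plt = plt

    def test_plotting(self):
        self.assertTrue(hasattr(self.plt, "figure"))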

Can't we do something nicer, such as completely separate core tests from optional tests?

Sure. We even have the option to separate them at different levels:

  1. Some test methods are optional and can be skipped (I guess that's the current approach)
  2. Make another class for optional tests
  3. Make separate files for optional tests

@cbrnr
Contributor Author

cbrnr commented Apr 12, 2016

Hm. With option 2, we wouldn't have to change the file structure, and we could try/except import errors to see if the tests in the optional class should be run. Shall we go for it?
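
Roughly along these lines (a sketch of option 2 with made-up class names):

import unittest

try:
    import sklearn
except ImportError:
    sklearn = None


class TestCore(unittest.TestCase):
    """Tests that must pass in the minimal environment."""
    def test_basic(self):
        pass


@unittest.skipIf(sklearn is None, "scikit-learn is not installed")
class TestOptional(unittest.TestCase):
    """Tests that need the optional dependency; skipped otherwise."""
    def test_sklearn_backend(self):
        pass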

@mbillingr
Member

Hm. With option 2, we wouldn't have to change the file structure, and we could try/except import errors to see if the tests in the optional class should be run. Shall we go for it?

👍

@cbrnr
Contributor Author

cbrnr commented Feb 7, 2024

Closing; if we ever need this, it is probably easier to start from scratch.

@cbrnr cbrnr closed this Feb 7, 2024
@cbrnr cbrnr deleted the min_test_env branch February 7, 2024 13:42