
test() got an unexpected keyword argument 'test_dataloaders' #362

Open
ltetrel opened this issue Apr 27, 2022 · 4 comments
Comments

@ltetrel
Contributor

ltetrel commented Apr 27, 2022

Hi,

We are hitting an issue when running the evaluation on the training set. The same issue appears in your Google Colab notebook:
https://colab.research.google.com/github/neuralaudio/hear-eval-kit/blob/main/heareval_evaluation_example.ipynb

Data

We were testing on all datasets available on Zenodo (which, by the way, are hard to download because transfers are slow and frequently interrupted; you might consider removing requester-pays and opening the gs or s3 bucket to the public).
But for this issue, and to be able to compare with your Colab, we focused on hear2021-mridangam_tonic-v1.5-full-48000.tar.gz.

Environment

Here is the link to our repo: https://github.com/courtois-neuromod/soundnetbrain_hear
The hearvalidator passed without errors, and we are using a custom model. We also tried with hearbaseline.

heareval==2021.1.1
hearvalidator==2021.0.5
hearbaseline==2021.1.0

Error log

Getting embeddings for split ['fold00'], which has 1396 instances.
  0%|                                                                                                                                       | 0/20 [00:21<?, ?it/s]
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/heareval/predictions/runner.py", line 181, in <module>
    runner()
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/heareval/predictions/runner.py", line 140, in runner
    task_predictions(
  File "/usr/local/lib/python3.8/dist-packages/heareval/predictions/task_predictions.py", line 1407, in task_predictions
    test_results[test_fold_str] = task_predictions_test(
  File "/usr/local/lib/python3.8/dist-packages/heareval/predictions/task_predictions.py", line 1108, in task_predictions_test
    test_results = trainer.test(
TypeError: test() got an unexpected keyword argument 'test_dataloaders'

Thank you,

@ltetrel
Contributor Author

ltetrel commented Apr 27, 2022

Also, oddly enough, I tried the Google Colab maybe a week ago and it was working. But I think Colab keeps the cache of a previous execution to save some CPU/GPU usage. Luckily I have a copy of those logs (which show an execution at 2022-01-21 11:08:56, even though I ran it last week).
heareval_colab.log

@ltetrel
Contributor Author

ltetrel commented Apr 28, 2022

Should be fixed by #363

@jorshi
Contributor

jorshi commented Apr 29, 2022

Hi @ltetrel ,

Thanks for pointing this issue out. It looks like this parameter of Trainer.test was removed in PyTorch Lightning 1.6: Lightning-AI/pytorch-lightning#10325
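For context: Lightning renamed the keyword from test_dataloaders to dataloaders, and the old name stopped working in 1.6. For code that must run against both old and new Lightning versions, a small version-tolerant wrapper is one option. This is only a sketch (run_test is a hypothetical helper, not part of heareval or Lightning); it inspects the installed Trainer.test signature and picks whichever keyword it accepts:

```python
import inspect


def run_test(trainer, module, dataloaders):
    """Hypothetical compatibility helper: call trainer.test() with
    whichever dataloader keyword the installed Lightning version
    accepts. The argument was renamed from `test_dataloaders` to
    `dataloaders`, and the old name was removed in Lightning 1.6."""
    params = inspect.signature(trainer.test).parameters
    if "dataloaders" in params:
        # Newer Lightning API
        return trainer.test(module, dataloaders=dataloaders)
    # Legacy API (pre-rename)
    return trainer.test(module, test_dataloaders=dataloaders)
```

Pinning the Lightning version in setup.py is of course the simpler fix; the wrapper is only worthwhile if supporting a range of Lightning releases is a goal.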

And thank you for submitting a PR to fix the issue.

Regarding the HEAR data, sorry you experienced difficulties getting it from Zenodo. I wish we could remove requester-pays from the gs and s3 buckets, but it is just not financially feasible for us right now. I will bring this up with the HEAR committee, though, and see if we can find an alternative solution.

Thanks again for your contribution!
