
Fix nn pca_metric computation and update tests #3138

Merged: 9 commits into SpikeInterface:main, Jul 15, 2024

Conversation

chrishalcrow (Collaborator):

Fixes a bug where nn_isolation and nn_noise_overlap were not being calculated.

In the compute_pc_metrics function, which calls the pca_metrics_one_unit function, the PCA computations were wrapped in try/except statements. These silently swallowed the errors, so the computations were failing while the tests still passed.
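
To illustrate the failure mode, here is a minimal sketch (with placeholder names, not the actual SpikeInterface code) of how a broad try/except turns a hard failure into a silently-passing test:

    import numpy as np

    def pca_metric_one_unit_sketch(pcs):
        # stand-in for the real per-unit computation; simulate the failure
        raise ValueError("nn metric could not be computed")

    results = {}
    for unit_id in [0, 1, 2]:
        try:
            results[unit_id] = pca_metric_one_unit_sketch(None)
        except Exception:
            results[unit_id] = np.nan  # the error is swallowed here

    # every value is NaN, yet nothing has raised, so a test that only
    # checks the call completes will still pass
    assert all(np.isnan(v) for v in results.values())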

The nn_isolation and nn_noise_overlap computations don't fit easily into the quality-metrics parallelisation, because they require waveforms. Hence they had to be moved so that they are calculated separately, without parallelisation. It would be a good idea to speed them up soon!
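
Conceptually, the split looks something like the following (a simplified sketch with hypothetical helper names, not the real compute_pc_metrics internals):

    from concurrent.futures import ThreadPoolExecutor

    def pca_metrics_for_unit(unit_id):
        # stand-in for the per-unit PCA metrics, which parallelise cleanly
        return {"isolation_distance": 1.0, "l_ratio": 0.1}

    def nn_metrics_for_unit(unit_id, waveforms):
        # stand-in for nn_isolation / nn_noise_overlap, which need waveforms
        return {"nn_isolation": 0.9, "nn_noise_overlap": 0.05}

    unit_ids = [0, 1, 2]
    waveforms = {u: None for u in unit_ids}  # placeholder waveform store

    # per-unit PCA metrics go through the worker pool...
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = dict(zip(unit_ids, pool.map(pca_metrics_for_unit, unit_ids)))

    # ...while the waveform-dependent nn metrics run in a plain serial loop
    for u in unit_ids:
        results[u].update(nn_metrics_for_unit(u, waveforms[u]))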

The tests involving the nn metrics were taking a long time (5+ minutes on my machine), so I swapped in the small_sorting_analyzer, which sped things up. I also tweaked the small_sorting_analyzer to make sure all quality metrics had non-NaN, non-trivial results for both the PCA and non-PCA quality metrics.

chrishalcrow added the bug (Something isn't working) and qualitymetrics (Related to qualitymetrics module) labels on Jul 4, 2024
chrishalcrow requested a review from samuelgarcia on Jul 4, 2024 at 08:28

for metric_name in res1.columns:
    if metric_name != "nn_unit_id":
        assert not np.all(np.isnan(res1[metric_name].values))
chrishalcrow (Collaborator Author):

I added a non-NaN test here.

for k in res1.columns:
    mask = ~np.isnan(res1[k].values)
    if np.any(mask):
        assert np.array_equal(res1[k].values[mask], res2[k].values[mask])
chrishalcrow (Collaborator Author):

Deleted these tests, as the test above exercises these functions more rigorously, within compute_pc_metrics.

@@ -0,0 +1,37 @@
import pytest

alejoe91 added this to the 0.101.0 milestone on Jul 9, 2024
zm711 (Collaborator) commented Jul 11, 2024:

Also, in general on Windows we are having super variable test times, and quality metrics is one of the modules that can be especially variable. In #3173, one of the Windows actions took 64 minutes. I'm wondering if there is some sort of Windows-specific thing in the tests that is causing this. Since you've been digging around in these metrics, has anything jumped out as a slowdown for testing?

chrishalcrow (Collaborator Author):

Goodness, those times shouldn't be that long. Especially because, before this gets merged, it isn't even calculating the slow metrics... Could it be something to do with n_jobs? Lots of the quality_metrics tests are run with n_jobs=2, although the GitHub CI machines should have at least 2 cores. We should investigate...

zm711 (Collaborator) commented Jul 12, 2024:

We changed the way n_jobs works to limit it to the number of cores on the machine. I think some of the runners have 4 cores, so we could think about bumping some tests up to 3. But yeah, I'm not sure; I need to dig into it more. We are also seeing crazy slowdowns in sorting components sometimes, which is what you saw locally for sc2, right? Or was it the opposite, where locally it was fast and in CI it was slow?
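
For reference, the clamping behaviour described above is roughly the following (a sketch, not the actual spikeinterface job-tools code):

    import os

    def clamp_n_jobs(n_jobs):
        # cap the requested worker count at the cores actually available
        n_cores = os.cpu_count() or 1
        return max(1, min(n_jobs, n_cores))

    print(clamp_n_jobs(2))   # 2 on a typical 4-core runner
    print(clamp_n_jobs(64))  # capped at the machine's core count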

alejoe91 merged commit ad3e924 into SpikeInterface:main on Jul 15, 2024. 15 checks passed.