Fix nn pca_metric computation and update tests #3138
Conversation
for metric_name in res1.columns:
    if metric_name != "nn_unit_id":
        assert not np.all(np.isnan(res1[metric_name].values))
I added a non-NaN test here.
for k in res1.columns:
    mask = ~np.isnan(res1[k].values)
    if np.any(mask):
        assert np.array_equal(res1[k].values[mask], res2[k].values[mask])
Deleted these tests, since the test above exercises these functions more rigorously, via compute_pc_metrics.
@@ -0,0 +1,37 @@
import pytest
Added this file to share fixtures across test files. See: https://docs.pytest.org/en/6.2.x/fixture.html#conftest-py-sharing-fixtures-across-multiple-files
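For anyone unfamiliar with the conftest mechanism, here is a minimal sketch of how it works; the fixture name `small_data` is a hypothetical placeholder, not the project's actual fixture (which is e.g. small_sorting_analyzer):

```python
# conftest.py -- minimal sketch of pytest's shared-fixture mechanism.
# `small_data` is a hypothetical fixture for illustration only.
import pytest


@pytest.fixture(scope="session")
def small_data():
    # Built once per test session; any test module in this directory
    # can request it by name as a test argument, with no import needed.
    return list(range(10))
```

Any test in the same directory can then declare `def test_something(small_data): ...` and pytest injects the fixture automatically.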
Also, in general on Windows we are seeing super variable test times, and quality metrics is one of the modules that can be especially variable. In #3173, one of the Windows actions took 64 min. I'm wondering if there is some sort of Windows-specific thing in the tests that is causing this. Since you've been digging around in these metrics, has anything jumped out as a slowdown for testing?
Goodness, those times shouldn't be that long. Especially because, before this gets merged, it isn't even calculating the slow metrics... Could it be something to do with
We changed the way n_jobs works to limit it to the number of cores of the machine. I think some of the runners have 4 cores, so we could think about bumping some up to 3. But yeah, I'm not sure; I need to dig into it more. We are also seeing crazy slowdowns in sorting components sometimes, which is what you saw locally for sc2, right? Or was it the opposite, where locally it was fast and in CI it was slow?
Fixes a bug where nn_isolation and nn_noise_overlap were not being calculated.

In the compute_pc_metrics function, which calls the pca_metrics_one_unit function, the PCA computations were surrounded by try/except statements. This meant the computations were failing silently while the tests were passing.

The nn_isolation and nn_noise_overlap computations don't fit easily into the quality-metrics parallelisation, because they require waveforms. Hence they had to be moved so that they are calculated separately, without parallelisation. It would be a good idea to speed them up soon!

The tests involving the nn metrics were taking a long time (5+ minutes on my machine), so I swapped in the small_sorting_analyzer, which sped things up. I also played a little with the small_sorting_analyzer to make sure all quality metrics had non-NaN and non-trivial results, for both the PCA and non-PCA quality metrics.