can't use classical (1d) scores with joint_test (label_test and one statistical test) #19
Comments
Try setting |
Your suggestion keeps the code from running into the error! However, the resulting scores are not numbers:
|
The judge function returns a score object (or for multiple tests a score collection object). Printing the object invokes its |
Actually I think printing a score object itself should just print the |
|
Of course, you are right! I forgot about this, which maps the single-score-level score call through the whole ScoreMatrix. |
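The exchange above concerns getting plain numbers out of a ScoreMatrix by applying the single-score-level call to every cell. A minimal sketch of that idea, assuming a sciunit-style setup where a ScoreMatrix behaves like a pandas DataFrame of Score objects whose numeric value lives in a `.score` attribute (the `Score` class here is a simplified stand-in, not the actual library class):

```python
import pandas as pd

# Simplified stand-in for a sciunit-style Score object: the numeric value
# lives in .score, while printing the object shows a descriptive repr.
class Score:
    def __init__(self, value):
        self.score = value
    def __repr__(self):
        return f"Score({self.score})"

# A ScoreMatrix behaves like a DataFrame of Score objects (tests x models);
# mapping the .score access over every cell yields plain numbers.
matrix = pd.DataFrame({"model_A": [Score(0.1), Score(0.7)]},
                      index=["rate_test", "cv_test"])
numeric = matrix.applymap(lambda s: s.score)
print(numeric)
```

This is the "map the single-score-level score call through the whole ScoreMatrix" step in miniature: one elementwise map turns a matrix of score objects into a matrix of floats.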
I am trying to use the kl_divergence score to compare distributions from two sets of spiking data from two simulators (NEST and PyNN). Since the model I am investigating has multiple distinct populations, I want to compare the distributions of these populations separately. For this, I am using the joint_test example together with a label_test that checks that spiketrains are from the same population, and a conventional test (e.g. firing rate, CV ISI, ...). However, this is currently not possible, even though the joint_test only produces a single score and not multiple ones, while using wasserstein_distance works. A first attempt at rectifying this was adding code that ensures that the layer_prediction and layer_observation are 1d (see below), but this resulted in the same error, which I append at the end. Here is a reduced code example (sorry, still quite long), provided that one has some spiking data at hand that can be separated according to their annotation:
Error thrown: