event handling: fix a bug and clean house #2421

Open
wants to merge 6 commits into master

Conversation

@boeschf (Contributor) commented Oct 16, 2024:

  • fix the event stream when more than one cell in a cell group has the same synapse (see the ordering sketch after this list)
    • events were previously not guaranteed to be sorted by time
    • to simplify, also sort with respect to mechanism index (previously only required for the GPU backend)
    • add a pertinent test
  • while cleaning up, overhauled the event-related files and data structures
    • removed dead code
    • made event handling less generic (that generality was not used anywhere)
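
To make the invariant concrete, here is a minimal, self-contained sketch (the `ev` struct and the comparator are made up for illustration and are not Arbor's actual types): events coming from cells that share a synapse can arrive interleaved, and the stream must end up ordered by time first and mechanism index second.

// Sketch only: illustrates the (time, mech_index) ordering the fix restores.
#include <algorithm>
#include <cassert>
#include <tuple>
#include <vector>

struct ev {
    double   time;        // delivery time
    unsigned mech_index;  // per-mechanism instance index
    float    weight;
};

int main() {
    // Two cells sharing a synapse can produce events that arrive interleaved
    // and therefore unsorted in time.
    std::vector<ev> evs = {
        {0.50, 1, 1.f}, {0.25, 0, 1.f}, {0.25, 1, 1.f}, {0.50, 0, 1.f}};

    auto by_time_then_index = [](const ev& a, const ev& b) {
        return std::tie(a.time, a.mech_index) < std::tie(b.time, b.mech_index);
    };

    // Restore the required ordering: primary key time, secondary key mech_index.
    std::sort(evs.begin(), evs.end(), by_time_then_index);
    assert(std::is_sorted(evs.begin(), evs.end(), by_time_then_index));
}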

@thorstenhater (Contributor) left a comment:
A few minor requests, but overall this is a great simplification! A question that didn't fit into the PR itself: the threaded spike sorting as on GPU is gone now? Why?

*
* Time values must be well ordered with respect to `operator>`.
*/

template <typename Event>
struct default_event_time {
Contributor:

No criticism, just curiosity: Why choose this over a simple function?
Also, potentially we know that time must be of type arb_time_type, no?
Finally, might a concept help clean up things more?
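
A sketch of what a concept-based alternative might look like, purely illustrative (the names `deliverable_event`, `event_time`, `append_events`, and the local `arb_time_type` alias are assumptions for the sketch, not Arbor's actual interface), assuming time is always of type arb_time_type:

#include <concepts>

using arb_time_type = double;  // assumed alias, local to this sketch

// An event type qualifies if a free function event_time(e) yields its delivery time.
template <typename E>
concept deliverable_event = requires(const E& e) {
    { event_time(e) } -> std::convertible_to<arb_time_type>;
};

// Constrained interface instead of a customization struct; illustration only.
template <deliverable_event Event>
void append_events(const Event* begin, const Event* end);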

v.spike_counter_.clear();
v.spike_counter_.resize(steps.size(), 0);
v.spikes_.clear();
v.spikes_.reserve(v.ev_data_.capacity());
Contributor:

We are using capacity here, that seems unusual.

Contributor Author:

This is intentional, as ev_data_ has been cleared before. I should probably add a comment. Otherwise, I can reorder the operations so that v.clear() is called afterwards.

Contributor:

I think that would be confusing, just make a note.
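
A possible form of that note, sketched against the quoted snippet above (names follow the snippet; this is not the merged wording):

v.spike_counter_.clear();
v.spike_counter_.resize(steps.size(), 0);
v.spikes_.clear();
// ev_data_ has already been cleared at this point, so its size() is 0;
// capacity() still reflects the previous number of events and serves as the
// reservation hint for the spikes buffer.
v.spikes_.reserve(v.ev_data_.capacity());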

auto first = stream.spikes_.begin();
auto last = stream.spikes_.end();
auto pivot = std::prev(last, 1);
std::rotate(std::upper_bound(first, pivot, *pivot, [](auto const& l, auto const& r) noexcept {
Contributor:

Since #2401 this should be simply < with all the same semantics and performance, no?

Contributor Author:

yes
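
For reference, a sketch of the simplified insertion step the reviewer suggests (names follow the quoted snippet above, and it relies on the defaulted comparison suggested further down):

auto first = stream.spikes_.begin();
auto last  = stream.spikes_.end();
auto pivot = std::prev(last, 1);
// [first, pivot) is already sorted; move the newly appended element at pivot
// into its sorted position, using the type's own operator< instead of a lambda.
std::rotate(std::upper_bound(first, pivot, *pivot), pivot, last);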

arb_size_type step = 0;
time_type time = 0;
cell_local_size_type mech_index = 0;
float weight = 0;
Contributor:

Suggested change
-    float weight = 0;
+    float weight = 0;
+    auto operator<=>(const spike_data&) const = default;

Contributor Author:

Absolutely, thanks for pointing this out.
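
With the suggestion applied, the struct would look roughly like this (a sketch reusing the member names and type aliases quoted above). Since members are declared in this order, the defaulted operator<=> compares lexicographically by (step, time, mech_index, weight), which gives the time-then-mech_index ordering within a step:

struct spike_data {
    arb_size_type step = 0;
    time_type time = 0;
    cell_local_size_type mech_index = 0;
    float weight = 0;
    // Defaulted three-way comparison: lexicographic over the members above,
    // so operator< orders by step, then time, then mech_index, then weight.
    auto operator<=>(const spike_data&) const = default;
};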

arbor/backends/event_stream_base.hpp (outdated review thread, resolved)
@boeschf (Contributor Author) commented Oct 18, 2024:

> A few minor requests, but overall this is a great simplification! A question that didn't fit into the PR itself: the threaded spike sorting as on GPU is gone now? Why?

Since we need sorting in any case, because the events are not in order w.r.t. time under certain conditions (when more than one cell in a cell group has the same synapse), I just rolled the sorting w.r.t. mech_index into the same insertion sort. This is not strictly necessary on CPU, but it doesn't change the semantics. The GPU implementation previously did this additional sorting with the mentioned thread-parallel implementation.

@thorstenhater (Contributor) commented:

Understood!

@thorstenhater (Contributor) left a comment:

When you're satisfied with the performance impact, we can merge.
🎉
