Some additions to generate.py after #1948 #1970
Conversation
    pos = 0
-   for block_index in range(start_block_index, end_block_index + 1):
+   for block_index in range(first_block_index, last_block_index + 1):
It was very strange that this version is faster than my vectorized one. I was really surprised by this. That said, I realized that I had some bad variable names, which I have corrected here.
I think we could be even faster if we used the "out" argument of rng.standard_normal, but then the code logic between the 2 modes could not be shared. So let's keep this.
For testing, the default mode should always be "tile_pregenerated".
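The allocation saving being discussed can be sketched as follows. This is a minimal illustration of NumPy's `out` argument, not code from the PR; the array shapes are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
num_samples, num_channels = 30_000, 4

# Standard call: allocates a fresh array on every invocation
noise = rng.standard_normal(size=(num_samples, num_channels), dtype=np.float32)

# With the `out` argument the samples are written into a preallocated
# buffer, so repeated calls reuse the same memory instead of allocating
buffer = np.empty((num_samples, num_channels), dtype=np.float32)
rng.standard_normal(out=buffer, dtype=np.float32)
```

The trade-off mentioned above is that threading such a buffer through both generation modes would couple their code paths.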
When I profiled, it showed that np.concatenate was the culprit of it being so slow. I will check what you say; I remember that I thought about it.
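The loop in the diff above avoids `np.concatenate` by copying slices of a pregenerated noise block into a preallocated output buffer. Here is a hedged, self-contained sketch of that idea; the function name and sizes are illustrative, not the actual code in generate.py:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
block_size, num_channels = 1000, 4
# One pregenerated noise block that is tiled over the whole recording
noise_block = rng.standard_normal(size=(block_size, num_channels))

def fill_with_tiled_noise(start_frame, end_frame):
    """Fill a preallocated buffer by copying slices of the pregenerated
    block, instead of collecting pieces and calling np.concatenate."""
    num_samples = end_frame - start_frame
    traces = np.empty((num_samples, num_channels), dtype=noise_block.dtype)
    first_block_index = start_frame // block_size
    last_block_index = (end_frame - 1) // block_size
    pos = 0
    for block_index in range(first_block_index, last_block_index + 1):
        # Partial slices only at the first and last blocks of the request
        block_start = start_frame % block_size if block_index == first_block_index else 0
        block_stop = ((end_frame - 1) % block_size) + 1 if block_index == last_block_index else block_size
        num_frames = block_stop - block_start
        traces[pos:pos + num_frames] = noise_block[block_start:block_stop]
        pos += num_frames
    return traces
```

A single write per block into `traces` replaces the list-of-chunks plus one big concatenation, which is where the profiler pointed.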
OK, the error here is not related to this PR as it is already on main in #1971.
Thank you very much.
My plan was to do this in 2 steps: first keep "legacy" by default, then switch to lazy. I am OK with lazy now.
Of course, this avoids one malloc! Really cool, thank you!
If we do this, I would still like to have 5 in generate_ground_truth_recording, but I think this is already the case.
Not sure we always have to mimic Neuropixels; sometimes it is good to change the number for testing.
Great. I think this makes sense.
Yeah, I agree with you. It is just a better default than the one I had before, which I chose because of numerology (e.g. 1024 looks nice)!
Some additions after #1948.

- `generate_recording`: We say that we will deprecate legacy in the next release, but we don't have a deprecation warning. The next release is 0.99, so when this is released it should start throwing a deprecation warning, and then we can remove it in 0.100. I added the deprecation warning. Plus, if we are going to deprecate, we should make lazy the default in main so that the code we start suggesting people use from main remains valid. Otherwise, we would need to keep an argument with only one possible value (mode="lazy").
- `rng.standard_normal`: this generated an expensive computation when I profiled (got a 30% increase in performance for a one-hour recording with "on_the_fly").
- `generate_recording_by_size`: changed the default to be the same as SpikeGLX. I remember that at some point it caused me some problems in testing for it to have such an inconvenient value (1024), which I think I set because it would play nicely with a round number of seconds at the beginning, but it does not make a lot of sense in retrospect.
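The deprecation warning described in the first bullet could be emitted roughly like this. This is a hypothetical sketch, not the actual spikeinterface signature; the version numbers come from the discussion above:

```python
import warnings

def generate_recording(mode="lazy", **kwargs):
    # Sketch: "lazy" is now the default, and "legacy" warns until it
    # is removed in 0.100 as planned above
    if mode == "legacy":
        warnings.warn(
            "generate_recording() with mode='legacy' is deprecated and will "
            "be removed in version 0.100. Use mode='lazy' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
    ...  # actual recording construction elided
```

Using `stacklevel=2` makes the warning point at the caller's line rather than at the library internals, which is the usual convention for API deprecations.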