
Fix overflow of Plexon in numpy 2.0 #1613

Merged: 9 commits into NeuralEnsemble:master from fix_overflow_2.0, Dec 19, 2024

Conversation

h-mayorquin (Contributor)

This should fix the errors in #1612.

At least for Plexon.

h-mayorquin marked this pull request as ready for review on December 13, 2024 21:38
h-mayorquin (Contributor, Author)

@zm711 OK, this fixes the Plexon error with the current docs build, but then the error moves on to some other IO.

I think this can be merged after #1612 and then BlackRock can be fixed separately.

zm711 (Contributor) commented on Dec 13, 2024

@h-mayorquin there is a merge conflict from #1612.

h-mayorquin (Contributor, Author)

Solved.

zm711 (Contributor) commented on Dec 13, 2024

Cool. I'll double check this weekend :)

zm711 (Contributor) left a comment:

One strategy question @h-mayorquin :)

@@ -209,6 +209,10 @@ def _parse_header(self):
for index, pos in enumerate(positions):
    bl_header = data[pos : pos + 16].view(DataBlockHeader)[0]

    # To avoid overflow errors when doing arithmetic operations on numpy scalars
    np_scalar_to_python_scalar = lambda x: x.item() if isinstance(x, np.generic) else x
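
For context, a minimal sketch of the failure mode this conversion guards against, assuming NumPy >= 2.0 (NEP 50) promotion rules; the field names, dtypes, and values below are illustrative stand-ins, not the actual Plexon header layout:

```python
import numpy as np

# Illustrative stand-ins for header fields that come back as fixed-width
# numpy scalars after data[pos : pos + 16].view(DataBlockHeader)[0].
upper = np.uint16(1)      # e.g. the upper bytes of a timestamp
lower = np.uint32(1000)   # e.g. the lower 32 bits of a timestamp

try:
    # Under NEP 50 the Python int 2**32 must fit the scalar's dtype, so
    # combining the fields directly errors instead of silently promoting.
    ts = upper * 2**32 + lower
except OverflowError as exc:
    print(exc)  # e.g. "Python integer 4294967296 out of bounds for uint16"

# Converting the numpy scalars to plain Python ints first restores exact,
# arbitrary-precision arithmetic.
np_scalar_to_python_scalar = lambda x: x.item() if isinstance(x, np.generic) else x
ts = np_scalar_to_python_scalar(upper) * 2**32 + np_scalar_to_python_scalar(lower)
print(ts)  # 4294968296
```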
zm711 (Contributor):

Is there a reason not to define this lambda near the top so it is reusable, or even make it a true private function? How do you feel about the same lambda being generated on every iteration of the for loop? It should be relatively low cost, but it seems a bit wasteful. We could define the function once and reuse it in both for loops, no?

h-mayorquin (Contributor, Author):

Keeping the definition close to its use so people can quickly see what it is for. This takes nanoseconds:

[screenshot: timing of the lambda definition]

Extracting the data from Plexon takes minutes, so it does not factor in.
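
The screenshot above presumably showed a measurement along these lines; a quick sanity check one could run (exact numbers will vary by machine and Python version):

```python
import timeit

# Cost of (re)creating the lambda on each loop iteration, per definition.
per_call = timeit.timeit(
    "np_scalar_to_python_scalar = lambda x: x.item() if isinstance(x, np.generic) else x",
    setup="import numpy as np",
    number=1_000_000,
) / 1_000_000
print(f"{per_call * 1e9:.0f} ns per definition")  # on the order of tens of nanoseconds
```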

zm711 (Contributor):

I figured you'd have checked. Fair enough. :)

Comment on lines 262 to 269
channel_headers = slowChannelHeaders[chan_index]

# To avoid overflow errors when doing arithmetic operations on numpy scalars
np_scalar_to_python_scalar = lambda x: x.item() if isinstance(x, np.generic) else x
channel_headers = {key: np_scalar_to_python_scalar(channel_headers[key]) for key in channel_headers.dtype.names}

name = channel_headers["Name"].decode("utf8")
chan_id = channel_headers["Channel"]
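
To make this hunk concrete, a small self-contained sketch of what the dict comprehension does; the structured dtype here is a made-up stand-in, not the real slowChannelHeaders layout:

```python
import numpy as np

# Hypothetical header layout, only for illustration.
SlowChannelHeader = np.dtype([("Name", "S32"), ("Channel", "int32"), ("ADFreq", "int32")])
slowChannelHeaders = np.zeros(2, dtype=SlowChannelHeader)
slowChannelHeaders[0] = (b"AD01", 1, 1000)

channel_headers = slowChannelHeaders[0]      # a structured numpy record
print(type(channel_headers["Channel"]))      # <class 'numpy.int32'>

# The comprehension from the diff converts every field to a plain Python
# object, so later arithmetic uses Python's unbounded integers.
np_scalar_to_python_scalar = lambda x: x.item() if isinstance(x, np.generic) else x
channel_headers = {
    key: np_scalar_to_python_scalar(channel_headers[key])
    for key in channel_headers.dtype.names
}
print(channel_headers)  # {'Name': b'AD01', 'Channel': 1, 'ADFreq': 1000}
```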
zm711 (Contributor):

Last question: below you called it dsp_channel_headers, which was really nice. Any interest in changing this one to slow_channel_headers, just so we make sure we never accidentally overwrite a variable? I think the explicit name is cleaner, and since we are already fixing up the variable naming it might be worth the push.

h-mayorquin (Contributor, Author):

Yeah, sure.

zm711 merged commit b6a721c into NeuralEnsemble:master on Dec 19, 2024
3 checks passed
h-mayorquin deleted the fix_overflow_2.0 branch on December 20, 2024 16:36