messing around with alternatives #41

Open
amueller opened this issue Aug 2, 2019 · 0 comments

amueller commented Aug 2, 2019

Hey.
I was just playing around with this and was trying to see if there's a way to implement this efficiently with standard libs.

My usual way to do things like this is with the scipy.sparse.coo_matrix constructor.

import numpy as np
import scipy.sparse as sp

def bincount2d(x, y, bins):
    # x and y act as row/column (i.e. bin) indices; duplicate (row, col)
    # pairs are summed when the matrix is converted to dense or CSR.
    # np.int is deprecated in recent NumPy, hence the explicit np.int64.
    return sp.coo_matrix((np.ones(x.shape[0]), (x, y)), shape=(bins, bins), dtype=np.int64)

If the data were scaled so that casting it to ints put each value into the right bin, this would work.
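Concretely, that scaling step would look something like the sketch below. This assumes uniform bins over a known range; the helper name and its default range arguments are mine, not something from fast_histogram or scipy.

import numpy as np

def to_bin_indices(a, bins, lo=0.0, hi=1.0):
    # map values in [lo, hi) to integer bin indices 0 .. bins - 1; the clip
    # catches anything sitting exactly on (or slightly past) the upper edge
    idx = ((a - lo) / (hi - lo) * bins).astype(np.intp)
    return np.clip(idx, 0, bins - 1)

Feeding those indices to bincount2d and calling .toarray() on the result then gives the dense (bins, bins) count matrix.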

import numpy as np
x = np.random.random(10_000_000)
y = np.random.random(10_000_000)

from fast_histogram import histogram2d
%timeit _ = histogram2d(x, y, range=[[0, 1], [0, 1]], bins=30)

36.8 ms ± 4.14 ms per loop

xx = (x * 30)
yy = (y * 30)

%timeit bincount2d(xx, yy, bins=30)

153 ms ± 4.04 ms per loop

So your code is "only" about 4x faster than this, which would still make the coo_matrix approach roughly a 4x speedup over numpy's own histogram2d.
Unfortunately I cheated and didn't include shifting/flooring `xx` so that the data actually lines up with the bins. I don't think it's possible to make this work without copying the data at least once, which is why I'm giving up on this route.
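For the record, the uncheated version would look roughly like this (using the to_bin_indices helper sketched above; the integer index arrays it allocates are exactly the extra copy I mean, and the sanity check is my addition rather than part of the benchmark):

ix = to_bin_indices(x, 30)   # this allocation is the unavoidable copy
iy = to_bin_indices(y, 30)
dense = bincount2d(ix, iy, bins=30).toarray()

# compare against numpy's own binning; a value landing exactly on an
# interior bin edge could in principle differ, but that is essentially
# impossible with random floats
ref, _, _ = np.histogram2d(x, y, bins=30, range=[[0, 1], [0, 1]])
assert np.array_equal(dense, ref)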

Thought it might be of interest, though ...
