Avoiding looping over trials for 1d convolutions #34
BalzaniEdoardo started this conversation in Ideas
Replies: 1 comment 1 reply
-
I think this is something that jax.vmap can do for us in one line.
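A minimal sketch of the vmap idea, assuming the trials are first padded or stacked to a common length (vmap needs uniform shapes, so the ragged case would require padding; variable names here are illustrative):

```python
import numpy as np
import jax
import jax.numpy as jnp

# three equal-length "trials" stacked along a leading batch axis
trials = jnp.asarray(np.random.default_rng(0).uniform(size=(3, 12)))
filt = jnp.asarray(np.random.default_rng(1).uniform(size=5))

# vectorize the 1d convolution over the trial axis in one line
batched_conv = jax.vmap(lambda x: jnp.convolve(x, filt, mode='valid'))
out = batched_conv(trials)  # shape (3, 8): one valid-mode convolution per trial
```

For trials of genuinely different lengths, one would pad to the maximum length and crop each output row afterward.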
On Wed, Aug 2, 2023, 6:07 PM Edoardo Balzani wrote:
Just dropping this idea for the sake of discussion. Right now the convolution is performed in a loop, one trial at a time, if the trials have different lengths. The code below implements a way of computing the convolution without loops, at the cost of more operations and decreased readability of the code.
Should we keep this solution in mind?
@gviejo @ahwillia @billbrod
```python
import numpy as np

# initialize 3 vectors with different sizes and a filter; here each vector represents a trial
np.random.seed(4)
b = [np.random.uniform(size=12), np.random.uniform(size=13), np.random.uniform(size=15)]
ws = 5
filt = np.random.uniform(size=ws)

# compute the convolution by: 1) stacking the vectors interleaved with a nan in a single array,
# 2) running a single convolution on it, and 3) chunking the result back into the different "trials"

# 1) interleave and stack
output = np.hstack([i for sub in zip(b[:-1], [[np.nan]] * len(b[:-1])) for i in sub] + [b[-1]])

# 2) convolve the nan-padded array
res_conv = np.convolve(output, filt, mode='valid')

# 3) chunk
# calculate the indices of the nans - equivalent to np.where(np.isnan(output))
get_trial_len = lambda lst: np.array([len(x) for x in lst])
get_index_nan = lambda lst: np.cumsum(get_trial_len(lst[:-1])) + np.arange(len(lst) - 1)
index_nan_init = get_index_nan(b)

# calculate the indices for chunking
crop_idx_start = np.hstack(([0], index_nan_init + 1))
crop_idx_end = np.hstack((index_nan_init - ws + 1, len(res_conv)))

# chunk the array into a list of per-trial results
result = [res_conv[i:j] for i, j in zip(crop_idx_start, crop_idx_end)]

# compare with looping numpy.convolve: perform the convolution in a loop instead
result2 = [np.convolve(x, filt, mode='valid') for x in b]

# assert that the results are the same
assert all(np.all(x == y) for x, y in zip(result, result2))
```
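For reuse, the trick above can be wrapped in a small helper. This is a sketch; `convolve_trials` is a hypothetical name, not part of any existing API:

```python
import numpy as np

def convolve_trials(trials, filt):
    """Valid-mode convolution of each trial via a single NaN-padded pass.

    Interleaves one NaN between consecutive trials, convolves once, then
    crops out the windows that touched a NaN separator.
    """
    ws = len(filt)
    # interleave a single NaN between consecutive trials and stack
    padded = np.hstack([np.hstack((t, [np.nan])) for t in trials[:-1]] + [trials[-1]])
    conv = np.convolve(padded, filt, mode='valid')
    # positions of the NaN separators in the padded array
    lens = np.array([len(t) for t in trials[:-1]])
    nan_idx = np.cumsum(lens) + np.arange(len(trials) - 1)
    # per-trial start/end indices into the single convolution output
    # (the cast keeps slicing valid even for a single trial, where nan_idx is empty)
    starts = np.hstack(([0], nan_idx + 1)).astype(int)
    ends = np.hstack((nan_idx - ws + 1, [len(conv)])).astype(int)
    return [conv[i:j] for i, j in zip(starts, ends)]
```

One NaN separator is enough because every window that straddles a trial boundary contains the NaN, and those positions are exactly the ones the cropping discards.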