Memory Error in group_binfile_parcellation #6
What is the resolution of your data? Did you use a gray matter mask? How many voxels are in the mask?
On Feb 14, 2017, at 9:47 AM, mehrshadg wrote:
Hey,
I am using this package to parcellate 60 fMRI datasets. When I run the group_binfile_parcellation script, I get a MemoryError. My computer has 16 GB of RAM. I read the code, and the line in which you are calculating
W = W + csc_matrix((ones(len(sparse_i)), (sparse_i, sparse_j)), (n_voxels, n_voxels), dtype=double)
raises the MemoryError after about 4 datasets are processed. So I changed it to
W += csc_matrix((ones(len(sparse_i)), (sparse_i, sparse_j)), (n_voxels, n_voxels), dtype=double)
to prevent numpy from creating another array, but the error occurred again after 7 datasets.
Is there a way to work around this issue, perhaps by optimizing the code? I am not familiar with Python and its optimization techniques.
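(For reference, a minimal sketch, not taken from the package, of one way to keep the peak memory of this accumulation down: build each per-subject matrix with a 32-bit dtype instead of double before summing it into the running total. The function name and the subject_edges argument are hypothetical.)

import numpy as np
from scipy.sparse import coo_matrix, csc_matrix

def accumulate_coincidence(subject_edges, n_voxels):
    # Running count of how often each voxel pair is co-assigned across subjects.
    W = csc_matrix((n_voxels, n_voxels), dtype=np.float32)
    for sparse_i, sparse_j in subject_edges:
        data = np.ones(len(sparse_i), dtype=np.float32)
        # coo_matrix sums duplicate (i, j) entries when converted to CSC;
        # float32 roughly halves the memory of each per-subject matrix.
        W = W + coo_matrix((data, (sparse_i, sparse_j)),
                           shape=(n_voxels, n_voxels)).tocsc()
    return W

(As far as I know, SciPy sparse matrices do not support true in-place addition, so W += ... still allocates a new matrix, which would match the observation above that the change only delayed the error.)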
Yes, I used a gray matter mask. I created an average binary gray matter mask and then standardized it to the MNI 152 template. Each of my functional datasets is also standardized to MNI 152. The functional TR is 2.2 s. The mask contains 181676 non-zero and 720953 zero voxels. I used (nibabel.load(gm_mask_standard).get_data().flatten() > 0).sum() to calculate the total number of non-zero voxels.
What is your voxel size?
181,676 is very large for your gray matter mask. In the paper, which used 4 mm isotropic voxels, there were about 18,500 voxels. I'm guessing yours may be at 2 mm isotropic, which is likely a higher resolution than what you acquired the data at. This upsampling does not provide any new information and only serves to make the problem more computationally intensive.
I would use a larger voxel size or a computer with significantly more RAM.
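(As a rough illustration of this suggestion, not part of the package, one way to bring the mask onto a coarser grid is to resample it with nearest-neighbour interpolation. The filenames and the use of nilearn here are assumptions.)

import numpy as np
import nibabel as nib
from nilearn.image import resample_img

mask_img = nib.load('gm_mask_2mm.nii.gz')           # placeholder filename
target_affine = np.diag([4.0, 4.0, 4.0])            # 4 mm isotropic grid
mask_4mm = resample_img(mask_img, target_affine=target_affine,
                        interpolation='nearest')     # nearest keeps the mask binary
nib.save(mask_4mm, 'gm_mask_4mm.nii.gz')
print(int((mask_4mm.get_fdata() > 0).sum()))         # voxels left in the coarser mask

(At 4 mm the voxel count should land in the same ballpark as the ~18,500 mentioned above, which makes the n_voxels x n_voxels coincidence matrix far cheaper to build; the functional data would need to be resampled to the same grid.)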
The original structural and functional voxel sizes are 1 mm and 3 mm isotropic, but when I created the structural mask I standardized both my mask and my functional data to MNI 152 with 2 mm isotropic voxels. So, from what I understand, you are telling me to downsample my mask to 3 mm voxels and standardize both the data and the mask with 3 mm voxels?