Update subtraction ale script and data files #4

Open · wants to merge 6 commits into base: master
Changes from 5 commits
1 change: 1 addition & 0 deletions .gitignore
@@ -7,3 +7,4 @@ results/**
*Icon*
testing/**
results_27-08-2020/**
subtractions/**
30 changes: 29 additions & 1 deletion README.md
@@ -17,7 +17,7 @@ usage: nimare-ales.py [-h] [--iters ITERS] [--cores CORES]
Read in text file(s) and

positional arguments:
in_file Sleuth-style text files with coordinates to be meta-analyzed
in_file Sleuth-style text file(s) with coordinates to be meta-analyzed
(separate files for MNI, Talairach).
out_dir Absolute or relative path to directory where output (figures
and results) will be saved.
@@ -55,11 +55,39 @@ optional arguments:
Default is 'z'
--verbose If selected, script will narrate its progress.
```
5. Compare patterns of convergent activation with `nimare-ale-subtraction.py`
```
usage: nimare-ale-subtraction.py [-h] [-1 DSET1 [DSET1 ...]]
[-2 DSET2 [DSET2 ...]] [--out_dir OUT_DIR]
[--iters ITERS]

Read in text or dataset files and perform an activation likelihood estimation
subtraction analysis to compare patterns of convergent activation between
meta-analytic datasets.

optional arguments:
-h, --help show this help message and exit
-1 DSET1 [DSET1 ...], --dset1 DSET1 [DSET1 ...]
Sleuth-style text file(s) with coordinates of first
dataset (separate files for MNI, Talairach) OR a
gzipped, pickled file (`.pkl.gz`) containing both MNI
and Talairach coordinates.
-2 DSET2 [DSET2 ...], --dset2 DSET2 [DSET2 ...]
Sleuth-style text file(s) with coordinates of second
dataset (separate files for MNI, Talairach) OR a
gzipped, pickled file (`.pkl.gz`) containing both MNI
and Talairach coordinates.
--out_dir OUT_DIR Absolute or relative path to directory where output
(figures and results) will be saved.
--iters ITERS The number of iterations the FWE corrector should run.
```
Overall, you can use the code included here to run a meta-analysis and make figures with three commands. Once you've prepared your Sleuth-style coordinate text files, navigate to the folder in which you've saved the `code` folder and run the following commands in a command line (e.g., Terminal on macOS):
```
bash code/setup.sh
python code/nimare-ales.py /path/to/sleuth_file-mni.txt /path/to/sleuth_file-tal.txt /path/to/output-directory
python code/make-figs.py /path/to/output-directory/results /path/to/output-directory/figures
# optionally
python code/nimare-ale-subtraction.py -1 /path/to/dataset1_file [/path/to/dataset1_file2 ...] -2 /path/to/dataset2_file [/path/to/dataset2_file2 ...] --out_dir /path/to/output-directory
```
All other arguments are optional; without them, you'll still run a perfectly good ALE meta-analysis. Make sure you replace all the `path/to/...` placeholders with the paths to your text files and to your output directory, respectively.

20 changes: 13 additions & 7 deletions code/make-figs.py
@@ -22,14 +22,19 @@
\'z\' - axial, \'ortho\' - three cuts are performed in orthogonal
directions, \'tiled\' - three cuts are performed
and arranged in a 2x2 grid. Default is \'z\'''')
parser.add_argument('--subtraction', action='store_true',
help='If selected, script will use contrasting colormaps.')
parser.add_argument('--verbose', action='store_true',
help='If selected, script will narrate its progress.')
args = parser.parse_args()

if args.cmaps:
cmaps = args.cmaps
else:
cmaps = ['PuRd', 'BuPu','GnBu', 'BuGn', 'OrRd']
if args.subtraction:
cmaps = ['PuOr', 'RdBu_r', 'PRGn_r', 'PiYG', 'BrBG', 'RdGy', ]
else:
cmaps = ['PuRd', 'BuPu','GnBu', 'BuGn', 'OrRd']

if args.orient:
orientation = args.orient
@@ -44,7 +49,7 @@
map_dir = args.map_dir
out_dir = args.out_dir

z_maps = glob('{0}/*z*voxel*.nii.gz'.format(map_dir))
z_maps = glob('{0}/*z*.nii.gz'.format(map_dir))

#make the surface + slices plots!
fsaverage = datasets.fetch_surf_fsaverage()
@@ -69,29 +74,30 @@
else:
cuts = None

thresh = 1.5

for i in np.arange(0, len(z_maps)):
basename = z_maps[i].split('/')[-1][:-7]
g = plot_stat_map(z_maps[i], colorbar=False, threshold=1.5,
g = plot_stat_map(z_maps[i], colorbar=True, threshold=thresh,
display_mode='z', cut_coords=cuts,
cmap=cmaps[i], draw_cross=False)
g.savefig('{0}/{1}-slices.png'.format(out_dir,basename), dpi=300)
r_texture = surface.vol_to_surf(z_maps[i], fsaverage.pial_right)
l_texture = surface.vol_to_surf(z_maps[i], fsaverage.pial_left)
h = plot_surf_stat_map(fsaverage.pial_right, r_texture,
colorbar=False, cmap=cmaps[i], threshold=1.5,
colorbar=False, cmap=cmaps[i], threshold=thresh,
bg_map=fsaverage.sulc_right, view='medial')
h.savefig('{0}/{1}-surf-RL.png'.format(out_dir,basename), dpi=300)
h = plot_surf_stat_map(fsaverage.pial_right, r_texture,
colorbar=False, cmap=cmaps[i], threshold=1.5,
colorbar=False, cmap=cmaps[i], threshold=thresh,
bg_map=fsaverage.sulc_right, view='lateral')
h.savefig('{0}/{1}-surf-RM.png'.format(out_dir,basename), dpi=300)
h = plot_surf_stat_map(fsaverage.pial_left, l_texture,
colorbar=False, cmap=cmaps[i], threshold=1.5,
colorbar=False, cmap=cmaps[i], threshold=thresh,
bg_map=fsaverage.sulc_left, view='medial')
h.savefig('{0}/{1}-surf-LM.png'.format(out_dir,basename), dpi=300)
h = plot_surf_stat_map(fsaverage.pial_left, l_texture,
colorbar=False, cmap=cmaps[i], threshold=1.5,
colorbar=False, cmap=cmaps[i], threshold=thresh,
bg_map=fsaverage.sulc_left, view='lateral')
h.savefig('{0}/{1}-surf-LL.png'.format(out_dir,basename), dpi=300)
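A side note on the filename handling in this file: `z_maps[i].split('/')[-1][:-7]` assumes Unix path separators and a `.nii.gz` suffix. An equivalent, slightly more portable sketch (the function name is hypothetical, not part of the PR):

```python
import os

def map_basename(path):
    # Strip the directory and the 7-character '.nii.gz' double extension,
    # mirroring the z_maps[i].split('/')[-1][:-7] idiom in make-figs.py.
    return os.path.basename(path)[:-7]

print(map_basename('results/my_ale_z_corr-FWE.nii.gz'))  # my_ale_z_corr-FWE
```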

31 changes: 23 additions & 8 deletions code/nimare-ale-subtraction.py
@@ -7,6 +7,7 @@
from nilearn.plotting import plot_stat_map, plot_surf_stat_map
from nilearn.surface import vol_to_surf
from nilearn.datasets import fetch_surf_fsaverage
from nimare.dataset import Dataset

print('NiMARE version:', nim.__version__)
print('Nibabel version:', nib.__version__)
@@ -22,11 +23,11 @@
help='The number of iterations the FWE corrector should run.')
args = parser.parse_args()

sleuth1 = args.dset1
print('dset1:', sleuth1)
sleuth2 = args.dset2
print('dset2:', sleuth2)
basename = sleuth1[0].split('/')[-1][:-4] + '+' + sleuth2[0].split('/')[-1][:-4]
dset1 = args.dset1
print('dset1:', dset1)
dset2 = args.dset2
print('dset2:', dset2)
basename = dset1[0].split('/')[-1][:-4] + '+' + dset2[0].split('/')[-1][:-4]
print(basename)
out_dir = args.out_dir
today = date.today().strftime('%d_%m_%Y')
@@ -40,11 +41,25 @@
else:
n_iters = 10000

print(sleuth1, '\n', sleuth2, '\n', basename, '\n', out_dir, '\n', today, '\nniters =', n_iters)
print(dset1, '\n', dset2, '\n', basename, '\n', out_dir, '\n', today, '\nniters =', n_iters)

dset1 = nim.io.convert_sleuth_to_dataset(sleuth1)
dset2 = nim.io.convert_sleuth_to_dataset(sleuth2)
if any('.txt' in string for string in dset1):
print('Converting Sleuth coordinate file to NiMARE dataset...')
dset1 = nim.io.convert_sleuth_to_dataset(dset1)

elif any('.pkl' in string for string in dset1):
print('Converting pickled dataset file to NiMARE dataset...')
dset1 = Dataset.load(dset1[0])

if any('.txt' in string for string in dset2):
print('Converting Sleuth coordinate file to NiMARE dataset...')
dset2 = nim.io.convert_sleuth_to_dataset(dset2)

elif any('.pkl' in string for string in dset2):
print('Converting pickled dataset file to NiMARE dataset...')
dset2 = Dataset.load(dset2[0])

print('Starting subtraction analysis...')
meta = nim.meta.ale.ALESubtraction(n_iters=n_iters)
result = meta.fit(dset1, dset2)
print(result.maps)
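The extension dispatch added in this diff (the paired `.txt`/`.pkl` branches for dset1 and dset2) could be factored into a single helper. A minimal sketch of just the dispatch decision, with the NiMARE loading calls left out (`classify_inputs` is hypothetical, not in the PR):

```python
def classify_inputs(paths):
    """Hypothetical helper mirroring the two if/elif blocks in
    nimare-ale-subtraction.py: decide whether a --dset1/--dset2 argument
    list names Sleuth text files or a pickled NiMARE dataset."""
    if any('.txt' in p for p in paths):
        return 'sleuth'
    if any('.pkl' in p for p in paths):
        return 'pickle'
    raise ValueError(f'Unrecognized input files: {paths}')

print(classify_inputs(['Self_Pure_MNI.txt', 'Self_Pure_Talairach.txt']))  # sleuth
print(classify_inputs(['Self_all.pkl.gz']))  # pickle
```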
4 changes: 2 additions & 2 deletions code/setup.sh
@@ -1,4 +1,4 @@
pip install argparse
pip install nilearn
pip install nimare
pip install nibabel
pip install nimare==0.0.3
Member:
Is there a particular reason to pin to 0.0.3?

Member Author:
It should maybe be `>= 0.0.3`; I think there's a change to the Sleuth-to-dataset conversion function in there.

Member:
We're on 0.0.9rc2 (I think 0.0.9 will be the version for the paper), and there have been improvements to the subtraction analysis method, along with various fixes to a variety of things. We could look through the release history to see if there is anything especially relevant for these analyses.

pip install nibabel==3.1.1
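If, per the review thread, the intent of the pin is a minimum version rather than an exact one, a requirements-style range could express that. The upper bounds below are illustrative assumptions, not something tested in this PR:

```
nimare>=0.0.3,<0.1
nibabel>=3.1.1,<4
```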
32 changes: 32 additions & 0 deletions code/subset-nimare-ds.py
@@ -0,0 +1,32 @@
import os
import nimare as nim
from nimare.dataset import Dataset
from datetime import date

today = date.today().strftime('%d_%m_%Y')

constructs = ['Affiliation', 'Others', 'Self', 'Soc_Comm']
root_dir = '/Users/kbottenh/Dropbox/Projects/metas/ro-flux'

all_dat = [f'{root_dir}/data/ALL_Talairach.txt', f'{root_dir}/data/ALL_MNI.txt']

all_dset = nim.io.convert_sleuth_to_dataset(all_dat)

for construct in constructs:
print(construct)
con_dat = [os.path.join(root_dir, 'data', f'{construct}_Pure_Talairach.txt'),
os.path.join(root_dir, 'data', f'{construct}_Pure_MNI.txt')]

con_dset = nim.io.convert_sleuth_to_dataset(con_dat)
con_dset.save(f'{root_dir}/data/{construct}_all.pkl.gz')

non_construct_ids = list(set(all_dset.ids) - set(con_dset.ids))
non_construct_dset = all_dset.slice(non_construct_ids)

non_construct_dset.save(f'{root_dir}/data/Not_{construct}_all.pkl.gz')

#non_construct_ids = list(set(tal_dset.ids) - set(tal_construct.ids))
#tal_non_construct = mni_dset.slice(non_construct_ids)

#tal_non_construct.save(f'{root_dir}/data/Not_{construct}_Talairach.pkl', compress=False)
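The complement step in the loop above (`set(all_dset.ids) - set(con_dset.ids)`) can be sanity-checked in isolation with plain sets (the ids below are illustrative, not real study ids):

```python
all_ids = {'study-01', 'study-02', 'study-03', 'study-04'}
construct_ids = {'study-02', 'study-04'}

# Mirrors: non_construct_ids = list(set(all_dset.ids) - set(con_dset.ids))
non_construct_ids = sorted(set(all_ids) - set(construct_ids))
print(non_construct_ids)  # ['study-01', 'study-03']
```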
