From 6986e90351bca4071ea61387bee1f52f1125aceb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 31 Aug 2023 14:35:10 +0200 Subject: [PATCH 01/35] [BUG] inside unit_tests workflow --- .github/workflows/unit_tests.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/unit_tests.yml b/.github/workflows/unit_tests.yml index 20f20ea3..d0097882 100644 --- a/.github/workflows/unit_tests.yml +++ b/.github/workflows/unit_tests.yml @@ -34,7 +34,7 @@ jobs: - name: Checkout repository uses: actions/checkout@v3 - - name: Load configuration for self-hosted runner + - name: Load configuration for self-hosted runner run: cp /home/neuro/local_testing_config.toml narps_open/utils/configuration/testing_config.toml - name: Install dependencies From 671595011cc613e419a5f5b51559fc39cc489b80 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 22 Sep 2023 16:05:41 +0200 Subject: [PATCH 02/35] [DOC] fix some broken links --- docs/ci-cd.md | 2 +- docs/data.md | 2 +- docs/pipelines.md | 4 ++-- docs/running.md | 2 +- docs/testing.md | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/ci-cd.md b/docs/ci-cd.md index c292eed1..ad3f33bc 100644 --- a/docs/ci-cd.md +++ b/docs/ci-cd.md @@ -35,7 +35,7 @@ For now, the following workflows are set up: | Name / File | What does it do ? | When is it launched ? | Where does it run ? | How can I see the results ? | | ----------- | ----------- | ----------- | ----------- | ----------- | | [code_quality](/.github/workflows/code_quality.yml) | A static analysis of the python code (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request if there are changes on `.py` files. | On GitHub servers. | Outputs (logs of pylint) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. 
| -| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](codespell-project/codespell: check code for common misspellings). | For every push or pull_request to the `maint` branch. | On GitHub servers. | Outputs (logs of codespell) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | +| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](https://github.com/codespell-project/codespell). | For every push or pull_request to the `maint` branch. | On GitHub servers. | Outputs (logs of codespell) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | | [pipeline_tests](/.github/workflows/pipelines.yml) | Runs all the tests for changed pipelines. | For every push or pull_request, if a pipeline file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | | [test_changes](/.github/workflows/test_changes.yml) | It runs all the changed tests for the project. | For every push or pull_request, if a test file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | | [unit_testing](/.github/workflows/unit_testing.yml) | It runs all the unit tests for the project (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request, if a file changed inside `narps_open/`, or a file related to test execution. | On GitHub servers. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. 
| diff --git a/docs/data.md b/docs/data.md index 1e6b4fc3..edf5a757 100644 --- a/docs/data.md +++ b/docs/data.md @@ -2,7 +2,7 @@ The datasets used for the project can be downloaded using one of the two options below. -The path to these datasets must conform with the information located in the configuration file you plan to use (cf. [documentation about configuration](docs/configuration.md)). By default, these paths are in the repository: +The path to these datasets must conform with the information located in the configuration file you plan to use (cf. [documentation about configuration](/docs/configuration.md)). By default, these paths are in the repository: * `data/original/`: original data from NARPS * `data/results/`: results from NARPS teams diff --git a/docs/pipelines.md b/docs/pipelines.md index fb7d2afc..db60c831 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -126,6 +126,6 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin ## Test your pipeline -First have a look at the [testing topic of the documentation](./docs/testing.md). It explains how testing works for inside project and how you should write the tests related to your pipeline. +First have a look at the [testing topic of the documentation](/docs/testing.md). It explains how testing works for inside project and how you should write the tests related to your pipeline. -Feel free to have a look at [tests/pipelines/test_team_2T6S.py](./tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives a good example. +Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives a good example. 
diff --git a/docs/running.md b/docs/running.md index 6344c042..6bc15647 100644 --- a/docs/running.md +++ b/docs/running.md @@ -61,4 +61,4 @@ python narps_open/runner.py -t 2T6S -r 4 -f python narps_open/runner.py -t 2T6S -r 4 -f -c # Check the output files without launching the runner ``` -In this usecase, the paths where to store the outputs and to the dataset are picked by the runner from the [configuration](docs/configuration.md). +In this usecase, the paths where to store the outputs and to the dataset are picked by the runner from the [configuration](/docs/configuration.md). diff --git a/docs/testing.md b/docs/testing.md index 2bd96584..5294ea9b 100644 --- a/docs/testing.md +++ b/docs/testing.md @@ -55,7 +55,7 @@ Use pytest [markers](https://docs.pytest.org/en/7.1.x/example/markers.html) to i | Type of test | marker | Description | | ----------- | ----------- | ----------- | | unit tests | `unit_test` | Unitary test a method/function | -| pipeline tests | `pieline_test` | Compute a whole pipeline and check its outputs are close enough with the team's results | +| pipeline tests | `pipeline_test` | Compute a whole pipeline and check its outputs are close enough with the team's results | ## Save time by downsampling data From 57b8c86d289f79c0a29c96b0c5ec1b1d9b0f3a5f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 22 Sep 2023 16:52:22 +0200 Subject: [PATCH 03/35] [DOC] adding template for pipeline testing --- docs/ci-cd.md | 4 +- docs/pipelines.md | 7 +- tests/pipelines/templates/test_team_XXXX.py | 101 ++++++++++++++++++++ 3 files changed, 108 insertions(+), 4 deletions(-) create mode 100644 tests/pipelines/templates/test_team_XXXX.py diff --git a/docs/ci-cd.md b/docs/ci-cd.md index ad3f33bc..92175866 100644 --- a/docs/ci-cd.md +++ b/docs/ci-cd.md @@ -35,10 +35,10 @@ For now, the following workflows are set up: | Name / File | What does it do ? | When is it launched ? | Where does it run ? | How can I see the results ? 
| | ----------- | ----------- | ----------- | ----------- | ----------- | | [code_quality](/.github/workflows/code_quality.yml) | A static analysis of the python code (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request if there are changes on `.py` files. | On GitHub servers. | Outputs (logs of pylint) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | -| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](https://github.com/codespell-project/codespell). | For every push or pull_request to the `maint` branch. | On GitHub servers. | Outputs (logs of codespell) are stored as [downloadable artifacts](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts) during 15 days after the push. | +| [codespell](/.github/workflows/codespell.yml) | A static analysis of the text files for commonly made typos using [codespell](https://github.com/codespell-project/codespell). | For every push or pull_request to the `main` branch. | On GitHub servers. | Typos are displayed in the workflow summary. | | [pipeline_tests](/.github/workflows/pipelines.yml) | Runs all the tests for changed pipelines. | For every push or pull_request, if a pipeline file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | | [test_changes](/.github/workflows/test_changes.yml) | It runs all the changed tests for the project. | For every push or pull_request, if a test file changed. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. 
| -| [unit_testing](/.github/workflows/unit_testing.yml) | It runs all the unit tests for the project (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request, if a file changed inside `narps_open/`, or a file related to test execution. | On GitHub servers. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | +| [unit_testing](/.github/workflows/unit_testing.yml) | It runs all the unit tests for the project (see the [testing](/docs/testing.md) topic of the documentation for more information). | For every push or pull_request, if a file changed inside `narps_open/`, or a file related to test execution. | On Empenn runners. | Outputs (logs of pytest) are stored as downloadable artifacts during 15 days after the push. | ### Cache diff --git a/docs/pipelines.md b/docs/pipelines.md index db60c831..414533d4 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -126,6 +126,9 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin ## Test your pipeline -First have a look at the [testing topic of the documentation](/docs/testing.md). It explains how testing works for inside project and how you should write the tests related to your pipeline. +First have a look at the [testing page of the documentation](/docs/testing.md). It explains how testing works for the project and how you should write the tests related to your pipeline. -Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives a good example. +All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) inside the `tests/pipelines/` directory, replacing `XXXX` with the team id. 
Then, follow the tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document as well. + +> [!NOTE] +> Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives an example. diff --git a/tests/pipelines/templates/test_team_XXXX.py b/tests/pipelines/templates/test_team_XXXX.py new file mode 100644 index 00000000..9946a070 --- /dev/null +++ b/tests/pipelines/templates/test_team_XXXX.py @@ -0,0 +1,101 @@ +#!/usr/bin/python +# coding: utf-8 + +""" This template can be use to test a pipeline. + + - Replace all occurrences of XXXX by the actual id of the team. + - All lines starting with [INFO], are meant to help you during the reproduction, these can be removed + eventually. + - Also remove lines starting with [TODO], once you did what they suggested. + - Remove this docstring once you are done with coding the tests. +""" + +""" Tests of the 'narps_open.pipelines.team_XXXX' module. + +Launch this test with PyTest + +Usage: +====== + pytest -q test_team_XXXX.py + pytest -q test_team_XXXX.py -k +""" + +# [INFO] About these imports : +# [INFO] - pytest.helpers allows to use the helpers registered in tests/conftest.py +# [INFO] - pytest.mark allows to categorize tests as unitary or pipeline tests +from pytest import helpers, mark + +from nipype import Workflow + +# [INFO] Of course, import the class you want to test, here the Pipeline class for the team XXXX +from narps_open.pipelines.team_XXXX import PipelineTeamXXXX + +# [INFO] All tests should be contained in the following class, in order to sort them. +class TestPipelinesTeamXXXX: + """ A class that contains all the unit tests for the PipelineTeamXXXX class.""" + + # [TODO] Write one or several unit_test (and mark them as such) + # [TODO] ideally for each method of the class you test. 
+ + # [INFO] Here is one example for the __init__() method + @staticmethod + @mark.unit_test + def test_create(): + """ Test the creation of a PipelineTeamXXXX object """ + + pipeline = PipelineTeamXXXX() + assert pipeline.fwhm == 8.0 + assert pipeline.team_id == 'XXXX' + + # [INFO] Here is one example for the methods returning workflows + @staticmethod + @mark.unit_test + def test_workflows(): + """ Test the workflows of a PipelineTeamXXXX object """ + + pipeline = PipelineTeamXXXX() + assert pipeline.get_preprocessing() is None + assert pipeline.get_run_level_analysis() is None + assert isinstance(pipeline.get_subject_level_analysis(), Workflow) + group_level = pipeline.get_group_level_analysis() + + assert len(group_level) == 3 + for sub_workflow in group_level: + assert isinstance(sub_workflow, Workflow) + + # [INFO] Here is one example for the methods returning outputs + @staticmethod + @mark.unit_test + def test_outputs(): + """ Test the expected outputs of a PipelineTeamXXXX object """ + pipeline = PipelineTeamXXXX() + + # 1 - 1 subject outputs + pipeline.subject_list = ['001'] + assert len(pipeline.get_preprocessing_outputs()) == 0 + assert len(pipeline.get_run_level_outputs()) == 0 + assert len(pipeline.get_subject_level_outputs()) == 7 + assert len(pipeline.get_group_level_outputs()) == 63 + assert len(pipeline.get_hypotheses_outputs()) == 18 + + # 2 - 4 subjects outputs + pipeline.subject_list = ['001', '002', '003', '004'] + assert len(pipeline.get_preprocessing_outputs()) == 0 + assert len(pipeline.get_run_level_outputs()) == 0 + assert len(pipeline.get_subject_level_outputs()) == 28 + assert len(pipeline.get_group_level_outputs()) == 63 + assert len(pipeline.get_hypotheses_outputs()) == 18 + + # [TODO] Feel free to add other methods, e.g. 
to test the custom node functions of the pipeline + + # [TODO] Write one pipeline_test (and mark it as such) + + # [INFO] The pipeline_test will most likely be exactly written this way : + @staticmethod + @mark.pipeline_test + def test_execution(): + """ Test the execution of a PipelineTeamXXXX and compare results """ + + # [INFO] We use the `test_pipeline_evaluation` helper which is responsible for running the + # [INFO] pipeline, iterating over subjects and comparing output with expected results. + helpers.test_pipeline_evaluation('XXXX') From 2c891c27ec628634fe575950a20d16cf23a8c3d6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 22 Sep 2023 17:10:20 +0200 Subject: [PATCH 04/35] [DOC] adding template for pipeline testing --- docs/pipelines.md | 6 ++++-- tests/conftest.py | 5 +++++ tests/pipelines/templates/test_team_XXXX.py | 7 ++++--- 3 files changed, 13 insertions(+), 5 deletions(-) diff --git a/docs/pipelines.md b/docs/pipelines.md index 414533d4..226204c5 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -5,6 +5,7 @@ Here are a few principles you should know before creating a pipeline. Further in Please apply these principles in the following order. ## Create a file containing the pipeline + The pipeline must be contained in a single file named `narps_open/pipelines/team_.py`. ## Inherit from `Pipeline` @@ -89,7 +90,8 @@ def get_group_level_outputs(self): """ Return the names of the files the group level analysis is supposed to generate. """ ``` -:warning: Do not declare the method if no files are generated by the corresponding step. For example, if no preprocessing was done by the team, the `get_preprocessing_outputs` method must not be implemented. +> [!WARNING] +> Do not declare the method if no files are generated by the corresponding step. For example, if no preprocessing was done by the team, the `get_preprocessing_outputs` method must not be implemented. 
You should use other pipeline attributes to generate the lists of outputs dynamically. E.g.: @@ -128,7 +130,7 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin First have a look at the [testing page of the documentation](/docs/testing.md). It explains how testing works for the project and how you should write the tests related to your pipeline. -All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) inside the `tests/pipelines/` directory, replacing `XXXX` with the team id. Then, follow the tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document as well. +All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) inside the `tests/pipelines/` directory, replacing `XXXX` with the team id. Then, follow the instructions and tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document as well. > [!NOTE] > Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives an example. 
diff --git a/tests/conftest.py b/tests/conftest.py index e1530e48..a30315af 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -18,6 +18,11 @@ from narps_open.utils.configuration import Configuration from narps_open.data.results import ResultsCollection +# A list of test files to be ignored +collect_ignore = [ + 'tests/pipelines/templates/test_team_XXXX.py' # test template + ] + # Init configuration, to ensure it is in testing mode Configuration(config_type='testing') diff --git a/tests/pipelines/templates/test_team_XXXX.py b/tests/pipelines/templates/test_team_XXXX.py index 9946a070..0c3683fd 100644 --- a/tests/pipelines/templates/test_team_XXXX.py +++ b/tests/pipelines/templates/test_team_XXXX.py @@ -4,8 +4,8 @@ """ This template can be use to test a pipeline. - Replace all occurrences of XXXX by the actual id of the team. - - All lines starting with [INFO], are meant to help you during the reproduction, these can be removed - eventually. + - All lines starting with [INFO], are meant to help you during the reproduction, + these can be removed eventually. - Also remove lines starting with [TODO], once you did what they suggested. - Remove this docstring once you are done with coding the tests. """ @@ -25,6 +25,7 @@ # [INFO] - pytest.mark allows to categorize tests as unitary or pipeline tests from pytest import helpers, mark +# [INFO] Only for type testing from nipype import Workflow # [INFO] Of course, import the class you want to test, here the Pipeline class for the team XXXX @@ -36,7 +37,7 @@ class TestPipelinesTeamXXXX: # [TODO] Write one or several unit_test (and mark them as such) # [TODO] ideally for each method of the class you test. 
- + # [INFO] Here is one example for the __init__() method @staticmethod @mark.unit_test From 552e18cb6c3fd203dddbb4ba4020ac245238ceca Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Mon, 25 Sep 2023 11:36:32 +0200 Subject: [PATCH 05/35] About implemented_pipelines --- docs/pipelines.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/docs/pipelines.md b/docs/pipelines.md index 226204c5..ed233579 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -126,6 +126,22 @@ As explained before, all pipeline inherit from the `narps_open.pipelines.Pipelin * `fwhm` : full width at half maximum for the smoothing kernel (in mm) : * `tr` : repetition time of the fMRI acquisition (equals 1.0s) +## Set your pipeline as implemented + +Inside `narps_open/pipelines/__init__.py`, set the pipeline as implemented. I.e.: if the pipeline you reproduce is 2T6S, update the line : + +```python + '2T6S': None, +``` + +with : + +```python + '2T6S': 'PipelineTeam2T6S', +``` + +inside the `implemented_pipelines` dictionary. + ## Test your pipeline First have a look at the [testing page of the documentation](/docs/testing.md). It explains how testing works for the project and how you should write the tests related to your pipeline. From b6f21f490158f9eb2793c73882ceff2ba5e415eb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Mon, 25 Sep 2023 13:57:58 +0200 Subject: [PATCH 06/35] Deal with test template --- docs/pipelines.md | 2 +- .../pipelines/templates/{test_team_XXXX.py => template_test.py} | 0 2 files changed, 1 insertion(+), 1 deletion(-) rename tests/pipelines/templates/{test_team_XXXX.py => template_test.py} (100%) diff --git a/docs/pipelines.md b/docs/pipelines.md index ed233579..d59f9a9e 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -146,7 +146,7 @@ inside the `implemented_pipelines` dictionary. First have a look at the [testing page of the documentation](/docs/testing.md). 
It explains how testing works for the project and how you should write the tests related to your pipeline. -All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/test_team_XXXX.py](/tests/pipelines/templates/test_team_XXXX.py) inside the `tests/pipelines/` directory, replacing `XXXX` with the team id. Then, follow the instructions and tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document as well. +All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/template_test.py](/tests/pipelines/templates/template_test.py) inside the `tests/pipelines/` directory, renaming it accordingly. Then, follow the instructions and tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document. > [!NOTE] > Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives an example. diff --git a/tests/pipelines/templates/test_team_XXXX.py b/tests/pipelines/templates/template_test.py similarity index 100% rename from tests/pipelines/templates/test_team_XXXX.py rename to tests/pipelines/templates/template_test.py From 0436fe40900c89c6e655d813a5338ac3928914a2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 13 Oct 2023 12:01:58 +0200 Subject: [PATCH 07/35] [DOC] new readme for the doc --- docs/README.md | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/docs/README.md b/docs/README.md index 8c4fd662..f2c9a77e 100644 --- a/docs/README.md +++ b/docs/README.md @@ -2,13 +2,16 @@ :mega: This is the starting point for the documentation of the NARPS open pipelines project. 
-Here are the available topics : - -* :runner: [running](/docs/running.md) tells you how to run pipelines in NARPS open pipelines -* :brain: [data](/docs/data.md) contains instructions to handle the data needed by the project -* :hammer_and_wrench: [environment](/docs/environment.md) contains instructions to handle the software environment needed by the project -* :goggles: [description](/docs/description.md) tells you how to get convenient descriptions of the pipelines, as written by the teams involved in NARPS. -* :microscope: [testing](/docs/testing.md) details the testing features of the project, i.e.: how is the code tested ? -* :package: [ci-cd](/docs/ci-cd.md) contains the information on how continuous integration and delivery (knowned as CI/CD) is set up. -* :writing_hand: [pipeline](/docs/pipelines.md) tells you all you need to know in order to write pipelines -* :vertical_traffic_light: [status](/docs/status.md) contains the information on how to get the work progress status for a pipeline. +## Use the project +* :brain: [data](/docs/data.md) - handle the data needed by the project +* :hammer_and_wrench: [environment](/docs/environment.md) - handle the software environment needed by the project +* :rocket: [running](/docs/running.md) - launch pipelines in NARPS open pipelines + +## Contribute to the code +* :goggles: [description](/docs/description.md) - conveniently access descriptions of the pipelines, as written by the teams involved in NARPS. +* :writing_hand: [pipeline](/docs/pipelines.md) - how to write pipelines. + +## Main +* :vertical_traffic_light: [status](/docs/status.md) - work progress status for a pipeline. +* :microscope: [testing](/docs/testing.md) - testing features of the project, i.e.: how is the code tested ? +* :package: [ci-cd](/docs/ci-cd.md) - how continuous integration and delivery (knowned as CI/CD) is set up. 
From bfcf3dda05a8546b10f19f92badcc8d879fe52b9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 13 Oct 2023 13:21:07 +0200 Subject: [PATCH 08/35] Changes in README.md --- README.md | 2 +- docs/README.md | 10 +++++----- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 20125d83..5a7505a0 100644 --- a/README.md +++ b/README.md @@ -43,7 +43,7 @@ We also created a [shared spreadsheet](https://docs.google.com/spreadsheets/d/1F - :snake: :package: `narps_open/` contains the Python package with all the pipelines logic. - :brain: `data/` contains data that is used by the pipelines, as well as the (intermediate or final) results data. Instructions to download data are available in [INSTALL.md](/INSTALL.md#data-download-instructions). -- :blue_book: `docs/` contains the documentation for the project. Start browsing it with the entry point [docs/README.md](/docs/README.md) +- :blue_book: `docs/` contains the documentation for the project. Start browsing it [here](/docs/README.md) ! - :orange_book: `examples/` contains notebooks examples to launch of the reproduced pipelines. - :microscope: `tests/` contains the tests of the narps_open package. diff --git a/docs/README.md b/docs/README.md index f2c9a77e..55112428 100644 --- a/docs/README.md +++ b/docs/README.md @@ -3,15 +3,15 @@ :mega: This is the starting point for the documentation of the NARPS open pipelines project. 
## Use the project -* :brain: [data](/docs/data.md) - handle the data needed by the project -* :hammer_and_wrench: [environment](/docs/environment.md) - handle the software environment needed by the project -* :rocket: [running](/docs/running.md) - launch pipelines in NARPS open pipelines +* :brain: [data](/docs/data.md) - handle the needed data +* :hammer_and_wrench: [environment](/docs/environment.md) - handle the software environment +* :rocket: [running](/docs/running.md) - launch pipelines ## Contribute to the code * :goggles: [description](/docs/description.md) - conveniently access descriptions of the pipelines, as written by the teams involved in NARPS. * :writing_hand: [pipeline](/docs/pipelines.md) - how to write pipelines. +* :microscope: [testing](/docs/testing.md) - testing features of the project, i.e.: how is the code tested ? -## Main +## For maintainers * :vertical_traffic_light: [status](/docs/status.md) - work progress status for a pipeline. -* :microscope: [testing](/docs/testing.md) - testing features of the project, i.e.: how is the code tested ? * :package: [ci-cd](/docs/ci-cd.md) - how continuous integration and delivery (knowned as CI/CD) is set up. From d212e1d8aa521c5103ddeb7de5bd5426b3e25092 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 13 Oct 2023 14:06:14 +0200 Subject: [PATCH 09/35] [DOC] slight changes to docs/README.md --- docs/README.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/README.md b/docs/README.md index 55112428..32d33986 100644 --- a/docs/README.md +++ b/docs/README.md @@ -3,15 +3,15 @@ :mega: This is the starting point for the documentation of the NARPS open pipelines project. 
## Use the project -* :brain: [data](/docs/data.md) - handle the needed data -* :hammer_and_wrench: [environment](/docs/environment.md) - handle the software environment -* :rocket: [running](/docs/running.md) - launch pipelines +* :brain: [data](/docs/data.md) - Handle the needed data. +* :hammer_and_wrench: [environment](/docs/environment.md) - Handle the software environment. +* :rocket: [running](/docs/running.md) - Launch pipelines ! ## Contribute to the code -* :goggles: [description](/docs/description.md) - conveniently access descriptions of the pipelines, as written by the teams involved in NARPS. -* :writing_hand: [pipeline](/docs/pipelines.md) - how to write pipelines. -* :microscope: [testing](/docs/testing.md) - testing features of the project, i.e.: how is the code tested ? +* :goggles: [description](/docs/description.md) - Conveniently access descriptions of the pipelines, as written by the teams involved in NARPS. +* :writing_hand: [pipelines](/docs/pipelines.md) - How to write pipelines. +* :microscope: [testing](/docs/testing.md) - How to test the code. ## For maintainers -* :vertical_traffic_light: [status](/docs/status.md) - work progress status for a pipeline. -* :package: [ci-cd](/docs/ci-cd.md) - how continuous integration and delivery (knowned as CI/CD) is set up. +* :vertical_traffic_light: [status](/docs/status.md) - Work progress status for pipelines. +* :package: [ci-cd](/docs/ci-cd.md) - Continuous Integration and Delivery (a.k.a. CI/CD). 
From 29870d5243ddfc31b9c88283912536ef3e47c02d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 30 Nov 2023 15:30:40 +0100 Subject: [PATCH 10/35] Add links to past events --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index e008bf53..ab933bc5 100644 --- a/README.md +++ b/README.md @@ -50,8 +50,8 @@ This project is supported by Région Bretagne (Boost MIND) and by Inria (Explora This project is developed in the Empenn team by Boris Clenet, Elodie Germani, Jeremy Lefort-Besnard and Camille Maumet with contributions by Rémi Gau. In addition, this project was presented and received contributions during the following events: - - OHBM Brainhack 2022 (June 2022): Elodie Germani, Arshitha Basavaraj, Trang Cao, Rémi Gau, Anna Menacher, Camille Maumet. - - e-ReproNim FENS NENS Cluster Brainhack (June 2023) : Liz Bushby, Boris Clénet, Michael Dayan, Aimee Westbrook. + - [OHBM Brainhack 2022](https://ohbm.github.io/hackathon2022/) (June 2022): Elodie Germani, Arshitha Basavaraj, Trang Cao, Rémi Gau, Anna Menacher, Camille Maumet. + - [e-ReproNim FENS NENS Cluster Brainhack](https://repro.school/2023-e-repronim-brainhack/) (June 2023) : Liz Bushby, Boris Clénet, Michael Dayan, Aimee Westbrook. - [OHBM Brainhack 2023](https://ohbm.github.io/hackathon2023/) (July 2023): Arshitha Basavaraj, Boris Clénet, Rémi Gau, Élodie Germani, Yaroslav Halchenko, Camille Maumet, Paul Taylor. 
- [ORIGAMI lab](https://neurodatascience.github.io/) hackathon (September 2023): - [Brainhack Marseille 2023](https://brainhack-marseille.github.io/) (December 2023): From e4f369dc6963a98696bbdad3331bc1b0e04d0179 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 30 Nov 2023 15:51:49 +0100 Subject: [PATCH 11/35] Changes in readme.md --- README.md | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index ab933bc5..7172e25b 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# The NARPS Open Pipelines project +# NARPS Open Pipelines

@@ -23,16 +23,17 @@ We base our reproductions on the [original descriptions provided by the teams](h
 
 ## Contributing
 
-NARPS open pipelines uses [nipype](https://nipype.readthedocs.io/en/latest/index.html) as a workflow manager and provides a series of templates and examples to help reproduce the different teams’ analyses.
+There are many ways you can contribute 🤗 :wave: Any help is welcome !
 
-There are many ways you can contribute 🤗 :wave: Any help is welcome ! Follow the guidelines in [CONTRIBUTING.md](/CONTRIBUTING.md) if you wish to get involved !
+NARPS Open Pipelines uses [nipype](https://nipype.readthedocs.io/en/latest/index.html) as a workflow manager and provides a series of templates and examples to help reproduce the different teams’ analyses. Nevertheless, knowing Python or Nipype is not required to take part in the project.
 
-### Installation
+Follow the guidelines in [CONTRIBUTING.md](/CONTRIBUTING.md) if you wish to get involved !
 
-To get the pipelines running, please follow the installation steps in [INSTALL.md](/INSTALL.md)
+## Using the codebase
 
-## Getting started
-If you are interested in using the codebase to run the pipelines, see the [user documentation (work-in-progress)].
+To get the pipelines running, please follow the installation steps in [INSTALL.md](/INSTALL.md).
+
+If you are interested in using the codebase, see the user documentation in [docs](/docs/) (work-in-progress).
 
 ## References

From 142f89ca6a761f2ee2c472a600d60d139b08bdac Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Boris=20Cl=C3=A9net?=
Date: Thu, 30 Nov 2023 17:19:44 +0100
Subject: [PATCH 12/35] fMRI trail

---
 CONTRIBUTING.md | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 7429acdd..30253929 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -23,11 +23,19 @@ Feel free to have a look to the following pipelines, these are examples :
 | team_id | softwares | fmriprep used ?
| pipeline file | | --- | --- | --- | --- | | 2T6S | SPM | Yes | [/narps_open/pipelines/team_2T6S.py](/narps_open/pipelines/team_2T6S.py) | -| X19V | FSL | Yes | [/narps_open/pipelines/team_X19V.py](/narps_open/pipelines/team_2T6S.py) | +| X19V | FSL | Yes | [/narps_open/pipelines/team_X19V.py](/narps_open/pipelines/team_X19V.py) | ## 👩‍🎤 fMRI software trail -... +From the description provided by the team you chose, perform the analysis on the associated software to get as many log / configuration files / as possible from the execution. + +Complementary hints on the process would definitely be , to description + +Especially these files contain valuable information about model desing: +* for FSL pipelines, `design.fsf` setup files coming from FEAT. +* for SPM pipelines, + +spm matlabbatch ## Find or propose an issue :clipboard: Issues are very important for this project. If you want to contribute, you can either **comment an existing issue** or **proposing a new issue**. From 7b7fb8922facf50fea6780c3c753f09bee23ef32 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 6 Dec 2023 11:05:48 +0100 Subject: [PATCH 13/35] Adding trail description in contribution guide --- CONTRIBUTING.md | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 30253929..1cffca90 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -27,15 +27,11 @@ Feel free to have a look to the following pipelines, these are examples : ## 👩‍🎤 fMRI software trail -From the description provided by the team you chose, perform the analysis on the associated software to get as many log / configuration files / as possible from the execution. +From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. 
Complementary hints on the process would definitely be welcome, to enrich the description. -Complementary hints on the process would definitely be , to description - -Especially these files contain valuable information about model desing: -* for FSL pipelines, `design.fsf` setup files coming from FEAT. -* for SPM pipelines, - -spm matlabbatch +Especially these files contain valuable information about model design: +* for FSL pipelines, `design.fsf` setup files coming from FEAT ; +* for SPM pipelines, matlabbatch files. ## Find or propose an issue :clipboard: Issues are very important for this project. If you want to contribute, you can either **comment an existing issue** or **proposing a new issue**. From 23b93f602037231138e22d9a070ec97579e05ae6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 6 Dec 2023 13:16:36 +0100 Subject: [PATCH 14/35] Separate trails in contribution guide --- CONTRIBUTING.md | 97 +++++++++++++++++++++++-------------------------- 1 file changed, 45 insertions(+), 52 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 1cffca90..5ebe8af3 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,78 +1,71 @@ # How to contribute to NARPS Open Pipelines ? For the reproductions, we are especially looking for contributors with the following profiles: - - 👩‍🎤 SPM, FSL, AFNI or nistats has no secrets for you? You know this fMRI analysis software by heart 💓. Please help us by reproducing the corresponding NARPS pipelines. 👣 after step 1, follow the fMRI expert trail. - - 🧑‍🎤 You are a nipype guru? 👣 after step 1, follow the nipype expert trail. + - `🧠 fMRI soft` SPM, FSL, AFNI or nistats has no secrets for you ; you know one of these fMRI analysis tools by :heart:. + - `🐍 Python` You are a Python guru, willing to use [Nipype](https://nipype.readthedocs.io/en/latest/). -# Step 1: Choose a pipeline to reproduce :keyboard: -:thinking: Not sure which pipeline to start with ? 
🚦The [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) provides the progress status for each pipeline. You can pick any pipeline that is in red (not started). +In the following, read the instruction sections where the badge corresponding to your profile appears. -Need more information to make a decision? The `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md) provides easy access to all the info we have on each pipeline. +## 1 - Choose a pipeline +`🧠 fMRI soft` `🐍 Python` -When you are ready, [start an issue](https://github.com/Inria-Empenn/narps_open_pipelines/issues/new/choose) and choose **Pipeline reproduction**! +Not sure which pipeline to start with :thinking:? The [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) provides the progress status for each pipeline. You can pick any pipeline that is not fully reproduced, i.e.: not started :red_circle: or in progress :orange_circle: . -# Step 2: Reproduction +> [!NOTE] +> Need more information to make a decision? The `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md) provides easy access to all the info we have on each pipeline. -## 🧑‍🎤 NiPype trail +## 2 - Interact using issues +`🧠 fMRI soft` `🐍 Python` -We created templates with modifications to make and holes to fill to create a pipeline. You can find them in [`narps_open/pipelines/templates`](/narps_open/pipelines/templates). +Browse [issues](https://github.com/Inria-Empenn/narps_open_pipelines/issues/) before starting a new one. If the pipeline is :orange_circle: the associated issues are listed on the [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status). -If you feel it could be better explained, do not hesitate to suggest modifications for the templates. 
+You can either: +* comment on an existing issue with details or your findings about the pipeline; +* [start an issue](https://github.com/Inria-Empenn/narps_open_pipelines/issues/new/choose) and choose **Pipeline reproduction**. -Feel free to have a look to the following pipelines, these are examples : -| team_id | softwares | fmriprep used ? | pipeline file | -| --- | --- | --- | --- | -| 2T6S | SPM | Yes | [/narps_open/pipelines/team_2T6S.py](/narps_open/pipelines/team_2T6S.py) | -| X19V | FSL | Yes | [/narps_open/pipelines/team_X19V.py](/narps_open/pipelines/team_X19V.py) | +> [!WARNING] +> As soon as the issue is marked as `🏁 status: ready for dev` you can proceed to the next step. -## 👩‍🎤 fMRI software trail +## 3 - Use pull requests +`🧠 fMRI soft` `🐍 Python` -From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints on the process would definitely be welcome, to enrich the description. +1. If needed, [fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository; +2. create a separate branch for the issue you're working on (do not make changes to the default branch of your fork). +3. push your work to the branch as soon as possible; +4. visit [this page](https://github.com/Inria-Empenn/narps_open_pipelines/pulls) to start a draft pull request. -Especially these files contain valuable information about model design: -* for FSL pipelines, `design.fsf` setup files coming from FEAT ; -* for SPM pipelines, matlabbatch files. - -## Find or propose an issue :clipboard: -Issues are very important for this project. If you want to contribute, you can either **comment an existing issue** or **proposing a new issue**. 
- -### Answering an existing issue :label: -To answer an existing issue, make a new comment with the following information: - - Your name and/or github username - - The step you want to contribute to - - The approximate time needed +> [!WARNING] +> Make sure you create a **Draft Pull Request** as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork), and please stick to the description of the pull request template as much as possible. -### Proposing a new issue :bulb: -In order to start a new issue, click [here](https://github.com/Inria-Empenn/narps_open_pipelines/issues/new/choose) and choose the type of issue you want: - - **Feature request** if you aim at improving the project with your ideas ; - - **Bug report** if you encounter a problem or identified a bug ; - - **Classic issue** to ask question, give feedbacks... +## 4 - Reproduction -Some issues are (probably) already open, please browse them before starting a new one. If your issue was already reported, you may want complete it with details or other circumstances in which a problem appear. +Continue writing your work and push it to the branch. Make sure you perform all the items of the pull request checklist. -## Pull Requests :inbox_tray: -Pull requests are the best way to get your ideas into this repository and to solve the problems as fast as possible. +### Translate the pipeline description into code +`🐍 Python` -### Make A Branch :deciduous_tree: -Create a separate branch for each issue you're working on. Do not make changes to the default branch (e.g. master, develop) of your fork. +From the description provided by the team you chose, write Nipype workflows that match the steps performed by the teams (preprocessing, run level analysis, subject level analysis, group level analysis). -### Push Your Code :outbox_tray: -Push your code as soon as possible. 
+We created templates with modifications to make and holes to fill to help you with that. Find them in [`narps_open/pipelines/templates`](/narps_open/pipelines/templates). -### Create the Pull Request (PR) :inbox_tray: -Once you pushed your first lines of code to the branch in your fork, visit [this page](https://github.com/Inria-Empenn/narps_open_pipelines/pulls) to start creating a PR for the NARPS Open Pipelines project. +> [!TIP] +> Have a look to the already reproduced pipelines, as examples : +> | team_id | softwares | fmriprep used ? | pipeline file | +> | --- | --- | --- | --- | +> | Q6O0 | SPM | Yes | [/narps_open/pipelines/team_Q6O0.py](/narps_open/pipelines/team_Q6O0.py) | -:warning: Please create a **Draft Pull Request** as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork), and please stick to the PR description template as much as possible. +### Run the pipeline and produce evidences +`🧠 fMRI soft` -Continue writing your code and push to the same branch. Make sure you perform all the items of the PR checklist. +From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints and commend on the process would definitely be welcome, to enrich the description (e.g.: relevant parameters not written in the description, etc.). -### Request Review :disguised_face: -Once your PR is ready, you may add a reviewer to your PR, as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/requesting-a-pull-request-review) in the GitHub documentation. - -Please turn your Draft Pull Request into a "regular" Pull Request, by clicking **Ready for review** in the Pull Request page. 
+Especially these files contain valuable information about model design: +* for FSL pipelines, `design.fsf` setup files coming from FEAT ; +* for SPM pipelines, `matlabbatch` files. -**:wave: Thank you in advance for contributing to the project!** +### Request Review +`🧠 fMRI soft` `🐍 Python` -## Additional resources +Once your work is ready, you may add a reviewer to your PR, as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/requesting-a-pull-request-review). Please turn your draft pull request into a *regular* pull request, by clicking **Ready for review** in the pull request page. - - git and Gitub: general guidelines can be found [here](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) in the GitHub documentation. +**:wave: Thank you for contributing to the project!** From 6f3dd73f227c8f72db54e31873b3336d6ebe69ef Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 6 Dec 2023 14:53:14 +0100 Subject: [PATCH 15/35] [TEST] Solving pytest issues with template test --- CONTRIBUTING.md | 21 ++++++++++----------- pytest.ini | 2 +- tests/conftest.py | 5 ----- 3 files changed, 11 insertions(+), 17 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 5ebe8af3..98b31c06 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -17,7 +17,7 @@ Not sure which pipeline to start with :thinking:? The [pipeline dashboard](https ## 2 - Interact using issues `🧠 fMRI soft` `🐍 Python` -Browse [issues](https://github.com/Inria-Empenn/narps_open_pipelines/issues/) before starting a new one. If the pipeline is :orange_circle: the associated issues are listed on the [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status). +Browse [issues](https://github.com/Inria-Empenn/narps_open_pipelines/issues/) before starting a new one. 
If the pipeline is :orange_circle:, the associated issues are listed on the [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status). You can either: * comment on an existing issue with details or your findings about the pipeline; @@ -27,9 +27,9 @@ You can either: > As soon as the issue is marked as `🏁 status: ready for dev` you can proceed to the next step. ## 3 - Use pull requests -`🧠 fMRI soft` `🐍 Python` +`🐍 Python` -1. If needed, [fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository; +1. [Fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository; 2. create a separate branch for the issue you're working on (do not make changes to the default branch of your fork). 3. push your work to the branch as soon as possible; 4. visit [this page](https://github.com/Inria-Empenn/narps_open_pipelines/pulls) to start a draft pull request. @@ -37,13 +37,13 @@ You can either: > [!WARNING] > Make sure you create a **Draft Pull Request** as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork), and please stick to the description of the pull request template as much as possible. -## 4 - Reproduction - -Continue writing your work and push it to the branch. Make sure you perform all the items of the pull request checklist. +## 4 - Reproduce pipeline ### Translate the pipeline description into code `🐍 Python` +Write your code and push it to the branch. Make sure you perform all the items of the pull request checklist. + From the description provided by the team you chose, write Nipype workflows that match the steps performed by the teams (preprocessing, run level analysis, subject level analysis, group level analysis). We created templates with modifications to make and holes to fill to help you with that. 
Find them in [`narps_open/pipelines/templates`](/narps_open/pipelines/templates).
 
 > [!TIP]
 > Have a look to the already reproduced pipelines, as examples :
 > | team_id | softwares | fmriprep used ? | pipeline file |
 > | --- | --- | --- | --- |
 > | Q6O0 | SPM | Yes | [/narps_open/pipelines/team_Q6O0.py](/narps_open/pipelines/team_Q6O0.py) |
+Once your work is ready, you may request a review of your pull request, as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/requesting-a-pull-request-review). Please turn your draft pull request into a *regular* pull request, by clicking **Ready for review** in the pull request page.
+
 ### Run the pipeline and produce evidences
 `🧠 fMRI soft`
 
-From the description provided by the team you chose, perform the analysis on the associated software to get as many metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints and commend on the process would definitely be welcome, to enrich the description (e.g.: relevant parameters not written in the description, etc.).
+From the description provided by the team you chose, perform the analysis on the associated software to get as much metadata (log, configuration files, and other relevant files for reproducibility) as possible from the execution. Complementary hints and comments on the process would definitely be welcome, to enrich the description (e.g.: relevant parameters not written in the description, etc.).
 
 Especially these files contain valuable information about model design:
 * for FSL pipelines, `design.fsf` setup files coming from FEAT ;
 * for SPM pipelines, `matlabbatch` files.
-### Request Review -`🧠 fMRI soft` `🐍 Python` - -Once your work is ready, you may add a reviewer to your PR, as described [here](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/requesting-a-pull-request-review). Please turn your draft pull request into a *regular* pull request, by clicking **Ready for review** in the pull request page. +You can attach these files as comments on the pipeline reproduction issue. **:wave: Thank you for contributing to the project!** diff --git a/pytest.ini b/pytest.ini index 14522dc7..f949712a 100644 --- a/pytest.ini +++ b/pytest.ini @@ -1,5 +1,5 @@ [pytest] -addopts = --strict-markers +addopts = --strict-markers --ignore=tests/pipelines/templates/ testpaths = tests markers = diff --git a/tests/conftest.py b/tests/conftest.py index d6488236..7c57c1f9 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -18,11 +18,6 @@ from narps_open.utils.configuration import Configuration from narps_open.data.results import ResultsCollection -# A list of test files to be ignored -collect_ignore = [ - 'tests/pipelines/templates/test_team_XXXX.py' # test template - ] - # Init configuration, to ensure it is in testing mode Configuration(config_type='testing') From 58035011b8efd1f67b34d47f4c788d1c9c8d1282 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Tue, 9 Jan 2024 17:11:17 +0100 Subject: [PATCH 16/35] Changing docker image in use --- INSTALL.md | 10 +++++----- docs/environment.md | 10 +++++----- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/INSTALL.md b/INSTALL.md index b6142cc0..41758b47 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -40,18 +40,18 @@ datalad get data/original/ds001734/derivatives/fmriprep/sub-00[1-4] -J 12 ## 4 - Set up the environment -[Install Docker](https://docs.docker.com/engine/install/) then pull the Docker image : +[Install Docker](https://docs.docker.com/engine/install/) then pull the nipype Docker image : 
```bash
-docker pull elodiegermani/open_pipeline:latest
+docker pull nipype/nipype
```
 
 Once it's done you can check the image is available on your system :
 
 ```bash
 docker images
-   REPOSITORY                              TAG       IMAGE ID       CREATED         SIZE
-   docker.io/elodiegermani/open_pipeline   latest    0f3c74d28406   9 months ago    22.7 GB
+   REPOSITORY                TAG       IMAGE ID       CREATED         SIZE
+   docker.io/nipype/nipype   latest    0f3c74d28406   9 months ago    22.7 GB
 ```
 
 > [!NOTE]
@@ -63,7 +63,7 @@
 Start a Docker container from the Docker image :
 
 ```bash
 # Replace PATH_TO_THE_REPOSITORY in the following command (e.g.: with /home/user/dev/narps_open_pipelines/)
-docker run -it -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ elodiegermani/open_pipeline
+docker run -it -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ nipype/nipype
```
 
 Install NARPS Open Pipelines inside the container :
diff --git a/docs/environment.md b/docs/environment.md
index edab9b4d..00442421 100644
--- a/docs/environment.md
+++ b/docs/environment.md
@@ -2,12 +2,12 @@
 ## The Docker container :whale:
 
-The NARPS Open Pipelines project is build upon several dependencies, such as [Nipype](https://nipype.readthedocs.io/en/latest/) but also the original software packages used by the pipelines (SPM, FSL, AFNI...). Therefore, we created a Docker container based on [Neurodocker](https://github.com/ReproNim/neurodocker) that contains software dependencies.
+The NARPS Open Pipelines project is built upon several dependencies, such as [Nipype](https://nipype.readthedocs.io/en/latest/) but also the original software packages used by the pipelines (SPM, FSL, AFNI...). Therefore, we recommend using the [`nipype/nipype` Docker image](https://hub.docker.com/r/nipype/nipype/) that contains all the required software dependencies.
-The simplest way to start the container using the command below : +The simplest way to start the container is by using the command below : ```bash -docker run -it elodiegermani/open_pipeline +docker run -it nipype/nipype ``` From this command line, you need to add volumes to be able to link with your local files (code repository). @@ -16,7 +16,7 @@ From this command line, you need to add volumes to be able to link with your loc # Replace PATH_TO_THE_REPOSITORY in the following command (e.g.: with /home/user/dev/narps_open_pipelines/) docker run -it \ -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ \ - elodiegermani/open_pipeline + nipype/nipype ``` ## Use Jupyter with the container @@ -27,7 +27,7 @@ If you wish to use [Jupyter](https://jupyter.org/) to run the code, a port forwa docker run -it \ -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ \ -p 8888:8888 \ - elodiegermani/open_pipeline + nipype/nipype ``` Then, from inside the container : From e2976853a5513cde83432b90534d8dfe575000e6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 10 Jan 2024 11:03:25 +0100 Subject: [PATCH 17/35] FSL template correction --- .../pipelines/templates/template_fsl.py | 536 +++++++----------- 1 file changed, 191 insertions(+), 345 deletions(-) diff --git a/narps_open/pipelines/templates/template_fsl.py b/narps_open/pipelines/templates/template_fsl.py index 92ac3deb..68b8ca77 100644 --- a/narps_open/pipelines/templates/template_fsl.py +++ b/narps_open/pipelines/templates/template_fsl.py @@ -41,7 +41,15 @@ def __init__(self): # [INFO] Remove the init method completely if unused # [TODO] Init the attributes of the pipeline, if any other than the ones defined # in the pipeline class - pass + + # [INFO] You may for example define the contrasts that will be analyzed + # in the run level analysis. 
Each contrast is in the form : + # [Name, Stat, [list of condition names], [weights on those conditions]] + self.run_level_contrasts = [ + ['trial', 'T', ['trial', 'trialxgain^1', 'trialxloss^1'], [1, 0, 0]], + ['effect_of_gain', 'T', ['trial', 'trialxgain^1', 'trialxloss^1'], [0, 1, 0]], + ['effect_of_loss', 'T', ['trial', 'trialxgain^1', 'trialxloss^1'], [0, 0, 1]] + ] def get_preprocessing(self): """ Return a Nipype workflow describing the prerpocessing part of the pipeline """ @@ -58,7 +66,7 @@ def get_preprocessing(self): ('run_id', self.run_list), ] - # Templates to select files node + # SelectFiles node - to select necessary files file_templates = { 'anat': join( 'sub-{subject_id}', 'anat', 'sub-{subject_id}_T1w.nii.gz' @@ -73,18 +81,12 @@ def get_preprocessing(self): 'sub-{subject_id}', 'fmap', 'sub-{subject_id}_phasediff.nii.gz' ) } - - # SelectFiles node - to select necessary files - select_files = Node( - SelectFiles(file_templates, base_directory = self.directories.dataset_dir), - name='select_files' - ) + select_files = Node(SelectFiles(file_templates), name='select_files') + select_files.inputs.base_directory = self.directories.dataset_dir # DataSink Node - store the wanted results in the wanted repository - data_sink = Node( - DataSink(base_directory = self.directories.output_dir), - name='data_sink', - ) + data_sink = Node(DataSink(), name='data_sink') + data_sink.inputs.base_directory = self.directories.output_dir # [INFO] The following part has to be modified with nodes of the pipeline @@ -109,132 +111,151 @@ def get_preprocessing(self): # [TODO] Add the connections the workflow needs # [INFO] Input and output names can be found on NiPype documentation - preprocessing.connect( - [ - ( - info_source, - select_files, - [('subject_id', 'subject_id'), ('run_id', 'run_id')], - ), - ( - select_files, - node_name, - [('func', 'node_input_name')], - ), - ( - node_name, - data_sink, - [('node_output_name', 'preprocessing.@sym_link')], - ), - ] - ) + 
preprocessing.connect([ + (info_source, select_files, [('subject_id', 'subject_id'), ('run_id', 'run_id')]), + (select_files, node_name, [('func', 'node_input_name')]), + (node_name, data_sink, [('node_output_name', 'preprocessing.@sym_link')]) + ]) # [INFO] Here we simply return the created workflow return preprocessing - # [INFO] There was no run level analysis for the pipelines using FSL - def get_run_level_analysis(self): - """ Return a Nipype workflow describing the run level analysis part of the pipeline """ - return None - - # [INFO] This function is used in the subject level analysis pipelines using FSL + # [INFO] This function is used in the run level analysis, in order to + # extract trial information from the event files # [TODO] Adapt this example to your specific pipeline - def get_session_infos(event_file: str): + def get_subject_information(event_file: str): """ - Create Bunchs for specifyModel. + Extract information from an event file, to setup the model. Parameters : - - event_file : file corresponding to the run and the subject to analyze + - event_file : str, event file corresponding to the run and the subject to analyze Returns : - - subject_info : list of Bunch for 1st level analysis. + - subject_info : list of Bunch containing event information """ - - condition_names = ['trial', 'gain', 'loss'] - - onset = {} - duration = {} - amplitude = {} - - # Creates dictionary items with empty lists for each condition. 
- for condition in condition_names: - onset.update({condition: []}) - duration.update({condition: []}) - amplitude.update({condition: []}) - + # [INFO] nipype requires to import all dependancies from inside the methods that are + # later used in Function nodes + from nipype.interfaces.base import Bunch + + condition_names = ['event', 'gain', 'loss', 'response'] + onsets = {} + durations = {} + amplitudes = {} + + # Create dictionary items with empty lists + for condition in condition_names: + onsets.update({condition : []}) + durations.update({condition : []}) + amplitudes.update({condition : []}) + + # Parse information in the event_file with open(event_file, 'rt') as file: next(file) # skip the header for line in file: info = line.strip().split() - # Creates list with onsets, duration and loss/gain for amplitude (FSL) - for condition in condition_names: - if condition == 'gain': - onset[condition].append(float(info[0])) - duration[condition].append(float(info[4])) - amplitude[condition].append(float(info[2])) - elif condition == 'loss': - onset[condition].append(float(info[0])) - duration[condition].append(float(info[4])) - amplitude[condition].append(float(info[3])) - elif condition == 'trial': - onset[condition].append(float(info[0])) - duration[condition].append(float(info[4])) - amplitude[condition].append(float(1)) - - subject_info = [] - subject_info.append( + onsets['event'].append(float(info[0])) + durations['event'].append(float(info[1])) + amplitudes['event'].append(1.0) + onsets['gain'].append(float(info[0])) + durations['gain'].append(float(info[1])) + amplitudes['gain'].append(float(info[2])) + onsets['loss'].append(float(info[0])) + durations['loss'].append(float(info[1])) + amplitudes['loss'].append(float(info[3])) + onsets['response'].append(float(info[0])) + durations['response'].append(float(info[1])) + if 'accept' in info[5]: + amplitudes['response'].append(1.0) + elif 'reject' in info[5]: + amplitudes['response'].append(-1.0) + else: + 
amplitudes['response'].append(0.0) + + return [ Bunch( conditions = condition_names, - onsets = [onset[k] for k in condition_names], - durations = [duration[k] for k in condition_names], - amplitudes = [amplitude[k] for k in condition_names], + onsets = [onsets[k] for k in condition_names], + durations = [durations[k] for k in condition_names], + amplitudes = [amplitudes[k] for k in condition_names], regressor_names = None, - regressors = None, - ) - ) + regressors = None) + ] - return subject_info + def get_run_level_analysis(self): + """ Return a Nipype workflow describing the run level analysis part of the pipeline """ - # [INFO] This function creates the contrasts that will be analyzed in the first level analysis - # [TODO] Adapt this example to your specific pipeline - def get_contrasts(): - """ - Create the list of tuples that represents contrasts. - Each contrast is in the form : - (Name,Stat,[list of condition names],[weights on those conditions]) + # [INFO] The following part stays the same for all pipelines + # [TODO] Modify the templates dictionary to select the files + # that are relevant for your analysis only. 
- Returns: - - contrasts: list of tuples, list of contrasts to analyze - """ - # List of condition names - conditions = ['trial', 'trialxgain^1', 'trialxloss^1'] + # IdentityInterface node - allows to iterate over subjects and runs + information_source = Node(IdentityInterface( + fields = ['subject_id', 'run_id']), + name = 'information_source') + information_source.iterables = [ + ('run_id', self.run_list), + ('subject_id', self.subject_list), + ] - # Create contrasts - trial = ('trial', 'T', conditions, [1, 0, 0]) - effect_gain = ('effect_of_gain', 'T', conditions, [0, 1, 0]) - effect_loss = ('effect_of_loss', 'T', conditions, [0, 0, 1]) + # SelectFiles node - to select necessary files + templates = { + # Functional MRI - computed by preprocessing + 'func' : join(self.directories.output_dir, 'preprocessing', + '_run_id_{run_id}_subject_id_{subject_id}', + 'sub-{subject_id}_task-MGT_run-{run_id}_bold_brain_mcf_st_smooth_flirt_wtsimt.nii.gz' + ), + # Event file - from the original dataset + 'event' : join('sub-{subject_id}', 'func', + 'sub-{subject_id}_task-MGT_run-{run_id}_events.tsv' + ), + # Motion parameters - computed by preprocessing's motion_correction Node + 'motion' : join(self.directories.output_dir, 'preprocessing', + '_run_id_{run_id}_subject_id_{subject_id}', + 'sub-{subject_id}_task-MGT_run-{run_id}_bold_brain_mcf.nii.gz.par', + ) + } + select_files = Node(SelectFiles(templates), name = 'select_files') + select_files.inputs.base_directory = self.directories.dataset_dir - # Contrast list - return [trial, effect_gain, effect_loss] + # DataSink Node - store the wanted results in the wanted directory + data_sink = Node(DataSink(), name = 'data_sink') + data_sink.inputs.base_directory = self.directories.output_dir + + # [TODO] Continue adding nodes to the run level analysis part of the pipeline + + # [INFO] The following part defines the nipype workflow and the connections between nodes + run_level_analysis = Workflow( + base_dir = 
self.directories.working_dir,
+            name = 'run_level_analysis'
+        )
+
+        # [TODO] Add the connections the workflow needs
+        # [INFO] Input and output names can be found on NiPype documentation
+        run_level_analysis.connect([
+            (information_source, select_files, [('subject_id', 'subject_id'), ('run_id', 'run_id')])
+            # [TODO] Add other connections here
+        ])
+
+        # [INFO] Here we simply return the created workflow
+        return run_level_analysis
 
     def get_subject_level_analysis(self):
         """ Return a Nipype workflow describing the subject level analysis part of the pipeline """
 
         # [INFO] The following part stays the same for all pipelines
 
+        # [TODO] Define a self.contrast_list in the __init__() method. It will allow to iterate
+        # on contrasts computed in the run level analysis
+
         # Infosource Node - To iterate on subjects
-        info_source = Node(
-            IdentityInterface(
-                fields = ['subject_id', 'dataset_dir', 'results_dir', 'working_dir', 'run_list'],
-                dataset_dir = self.directories.dataset_dir,
-                results_dir = self.directories.results_dir,
-                working_dir = self.directories.working_dir,
-                run_list = self.run_list
-            ),
-            name='info_source',
-        )
-        info_source.iterables = [('subject_id', self.subject_list)]
+        info_source = Node(IdentityInterface(
+            fields = ['subject_id', 'contrast_id']),
+            name='info_source')
+        info_source.iterables = [
+            ('subject_id', self.subject_list),
+            ('contrast_id', self.contrast_list)
+        ]
 
         # Templates to select files node
         # [TODO] Change the name of the files depending on the filenames of results of preprocessing
@@ -254,16 +275,12 @@ def get_subject_level_analysis(self):
         }
 
         # SelectFiles node - to select necessary files
-        select_files = Node(
-            SelectFiles(templates, base_directory = self.directories.dataset_dir),
-            name = 'select_files'
-        )
+        select_files = Node(SelectFiles(templates), name = 'select_files')
+        select_files.inputs.base_directory = self.directories.dataset_dir
 
         # DataSink Node - store the wanted results in the wanted repository
-        data_sink = Node(
-            
DataSink(base_directory = self.directories.output_dir), - name = 'data_sink' - ) + data_sink = Node(DataSink(), name = 'data_sink') + data_sink.inputs.base_directory = self.directories.output_dir # [INFO] This is the node executing the get_subject_infos_spm function # Subject Infos node - get subject specific condition information @@ -277,17 +294,6 @@ def get_subject_level_analysis(self): ) subject_infos.inputs.runs = self.run_list - # [INFO] This is the node executing the get_contrasts function - # Contrasts node - to get contrasts - contrasts = Node( - Function( - input_names = ['subject_id'], - output_names = ['contrasts'], - function = self.get_contrasts, - ), - name = 'contrasts', - ) - # [INFO] The following part has to be modified with nodes of the pipeline # [TODO] For each node, replace 'node_name' by an explicit name, and use it for both: @@ -311,165 +317,67 @@ def get_subject_level_analysis(self): # [TODO] Add the connections the workflow needs # [INFO] Input and output names can be found on NiPype documentation subject_level_analysis.connect([ - ( - info_source, - select_files, - [('subject_id', 'subject_id')] - ), - ( - info_source, - contrasts, - [('subject_id', 'subject_id')] - ), - ( - select_files, - subject_infos, - [('event', 'event_files')] - ), - ( - select_files, - node_name, - [('func', 'node_input_name')] - ), - ( - node_name, data_sink, - [('node_output_name', 'preprocess.@sym_link')] - ), + (info_source, select_files, [('subject_id', 'subject_id')]), + (info_source, contrasts, [('subject_id', 'subject_id')]), + (select_files, subject_infos, [('event', 'event_files')]), + (select_files, node_name, [('func', 'node_input_name')]), + (node_name, data_sink, [('node_output_name', 'preprocess.@sym_link')]) ]) # [INFO] Here we simply return the created workflow return subject_level_analysis - # [INFO] This function returns the list of ids and files of each group of participants - # to do analyses for both groups, and one between the two groups. 
- def get_subgroups_contrasts( - copes, varcopes, subject_list: list, participants_file: str - ): + # [INFO] This function creates the dictionary of regressors used in FSL Nipype pipelines + def get_one_sample_t_test_regressors(subject_list: list) -> dict: """ - This function return the file list containing only the files - belonging to subject in the wanted group. + Create dictionary of regressors for one sample t-test group analysis. - Parameters : - - copes: original file list selected by select_files node - - varcopes: original file list selected by select_files node - - subject_ids: list of subject IDs that are analyzed - - participants_file: file containing participants characteristics + Parameters: + - subject_list: ids of subject in the group for which to do the analysis - Returns : - - copes_equal_indifference : a subset of copes corresponding to subjects - in the equalIndifference group - - copes_equal_range : a subset of copes corresponding to subjects - in the equalRange group - - copes_global : a list of all copes - - varcopes_equal_indifference : a subset of varcopes corresponding to subjects - in the equalIndifference group - - varcopes_equal_range : a subset of varcopes corresponding to subjects - in the equalRange group - - equal_indifference_id : a list of subject ids in the equalIndifference group - - equal_range_id : a list of subject ids in the equalRange group - - varcopes_global : a list of all varcopes + Returns: + - dict containing named lists of regressors. 
""" - equal_range_id = [] - equal_indifference_id = [] - - # Reading file containing participants IDs and groups - with open(participants_file, 'rt') as file: - next(file) # skip the header - - for line in file: - info = line.strip().split() - - # Checking for each participant if its ID was selected - # and separate people depending on their group - if info[0][-3:] in subject_list and info[1] == 'equalIndifference': - equal_indifference_id.append(info[0][-3:]) - elif info[0][-3:] in subject_list and info[1] == 'equalRange': - equal_range_id.append(info[0][-3:]) - - copes_equal_indifference = [] - copes_equal_range = [] - copes_global = [] - varcopes_equal_indifference = [] - varcopes_equal_range = [] - varcopes_global = [] - - # Checking for each selected file if the corresponding participant was selected - # and add the file to the list corresponding to its group - for cope, varcope in zip(copes, varcopes): - sub_id = cope.split('/') - if sub_id[-2][-3:] in equal_indifference_id: - copes_equal_indifference.append(cope) - elif sub_id[-2][-3:] in equal_range_id: - copes_equal_range.append(cope) - if sub_id[-2][-3:] in subject_list: - copes_global.append(cope) - - sub_id = varcope.split('/') - if sub_id[-2][-3:] in equal_indifference_id: - varcopes_equal_indifference.append(varcope) - elif sub_id[-2][-3:] in equal_range_id: - varcopes_equal_range.append(varcope) - if sub_id[-2][-3:] in subject_list: - varcopes_global.append(varcope) - - return copes_equal_indifference, copes_equal_range, - varcopes_equal_indifference, varcopes_equal_range, - equal_indifference_id, equal_range_id, - copes_global, varcopes_global - + return dict(group_mean = [1 for _ in subject_list]) # [INFO] This function creates the dictionary of regressors used in FSL Nipype pipelines - def get_regressors( - equal_range_id: list, - equal_indifference_id: list, - method: str, + def get_two_sample_t_test_regressors( + equal_range_ids: list, + equal_indifference_ids: list, subject_list: list, - ) -> 
dict: + ) -> dict: """ - Create dictionary of regressors for group analysis. + Create dictionary of regressors for two sample t-test group analysis. Parameters: - - equal_range_id: ids of subjects in equal range group - - equal_indifference_id: ids of subjects in equal indifference group - - method: one of "equalRange", "equalIndifference" or "groupComp" + - equal_range_ids: ids of subjects in equal range group + - equal_indifference_ids: ids of subjects in equal indifference group - subject_list: ids of subject for which to do the analysis Returns: - - regressors: regressors used to distinguish groups in FSL group analysis + - regressors, dict: containing named lists of regressors. + - groups, list: group identifiers to distinguish groups in FSL analysis. """ - # For one sample t-test, creates a dictionary - # with a list of the size of the number of participants - if method == 'equalRange': - regressors = dict(group_mean = [1 for i in range(len(equal_range_id))]) - elif method == 'equalIndifference': - regressors = dict(group_mean = [1 for i in range(len(equal_indifference_id))]) - - # For two sample t-test, creates 2 lists: - # - one for equal range group, - # - one for equal indifference group - # Each list contains n_sub values with 0 and 1 depending on the group of the participant - # For equalRange_reg list --> participants with a 1 are in the equal range group - elif method == 'groupComp': - equalRange_reg = [ - 1 for i in range(len(equal_range_id) + len(equal_indifference_id)) - ] - equalIndifference_reg = [ - 0 for i in range(len(equal_range_id) + len(equal_indifference_id)) - ] - for index, subject_id in enumerate(subject_list): - if subject_id in equal_indifference_id: - equalIndifference_reg[index] = 1 - equalRange_reg[index] = 0 + # Create 2 lists containing n_sub values which are + # * 1 if the participant is on the group + # * 0 otherwise + equal_range_regressors = [1 if i in equal_range_ids else 0 for i in subject_list] + 
equal_indifference_regressors = [ + 1 if i in equal_indifference_ids else 0 for i in subject_list + ] - regressors = dict( - equalRange = equalRange_reg, - equalIndifference = equalIndifference_reg - ) + # Create regressors output : a dict with the two list + regressors = dict( + equalRange = equal_range_regressors, + equalIndifference = equal_indifference_regressors + ) - return regressors + # Create groups outputs : a list with 1 for equalRange subjects and 2 for equalIndifference + groups = [1 if i == 1 else 2 for i in equal_range_regressors] + return regressors, groups def get_group_level_analysis(self): """ Return all workflows for the group level analysis. @@ -495,19 +403,18 @@ def get_group_level_analysis_sub_workflow(self, method): # Infosource node - iterate over the list of contrasts generated # by the subject level analysis - info_source = Node( + information_source = Node( IdentityInterface( - fields = ['contrast_id', 'subjects'], - subjects = self.subject_list + fields = ['contrast_id'] ), - name = 'info_source', + name = 'information_source', ) - info_source.iterables = [('contrast_id', self.contrast_list)] + information_source.iterables = [('contrast_id', self.contrast_list)] # Templates to select files node # [TODO] Change the name of the files depending on the filenames # of results of first level analysis - template = { + templates = { 'cope' : join( self.directories.results_dir, 'subject_level_analysis', @@ -520,54 +427,13 @@ def get_group_level_analysis_sub_workflow(self, method): self.directories.dataset_dir, 'participants.tsv') } - select_files = Node( - SelectFiles( - templates, - base_directory = self.directories.results_dir, - force_list = True - ), - name = 'select_files', - ) + select_files = Node(SelectFiles(templates), name = 'select_files') + select_files.inputs.base_directory = self.directories.results_dir, + select_files.inputs.force_list = True # Datasink node - to save important files - data_sink = Node( - DataSink(base_directory = 
self.directories.output_dir), - name = 'data_sink', - ) - - contrasts = Node( - Function( - input_names=['copes', 'varcopes', 'subject_ids', 'participants_file'], - output_names=[ - 'copes_equalIndifference', - 'copes_equalRange', - 'varcopes_equalIndifference', - 'varcopes_equalRange', - 'equalIndifference_id', - 'equalRange_id', - 'copes_global', - 'varcopes_global' - ], - function = self.get_subgroups_contrasts, - ), - name = 'subgroups_contrasts', - ) - - regs = Node( - Function( - input_names = [ - 'equalRange_id', - 'equalIndifference_id', - 'method', - 'subject_list', - ], - output_names = ['regressors'], - function = self.get_regressors, - ), - name = 'regs', - ) - regs.inputs.method = method - regs.inputs.subject_list = subject_list + data_sink = Node(DataSink(), name = 'data_sink') + data_sink.inputs.base_directory = self.directories.output_dir # [INFO] The following part has to be modified with nodes of the pipeline @@ -591,41 +457,21 @@ def get_group_level_analysis_sub_workflow(self, method): base_dir = self.directories.working_dir, name = f'group_level_analysis_{method}_nsub_{nb_subjects}' ) - group_level_analysis.connect( - [ - ( - info_source, - select_files, - [('contrast_id', 'contrast_id')], - ), - ( - info_source, - subgroups_contrasts, - [('subject_list', 'subject_ids')], - ), - ( - select_files, - subgroups_contrasts, - [ + group_level_analysis.connect([ + (info_source, select_files, [('contrast_id', 'contrast_id')]), + (info_source, subgroups_contrasts, [('subject_list', 'subject_ids')]), + (select_files, subgroups_contrasts,[ ('cope', 'copes'), ('varcope', 'varcopes'), ('participants', 'participants_file'), - ], - ), - ( - select_files, - node_name[('func', 'node_input_name')], - ), - ( - node_variable, - datasink_groupanalysis, - [('node_output_name', 'preprocess.@sym_link')], - ), - ] - ) # Complete with other links between nodes - - # [INFO] Here we define the contrasts used for the group level analysis, depending on the - # method used. 
+ ]), + (select_files, node_name[('func', 'node_input_name')]), + (node_variable, datasink_groupanalysis, + [('node_output_name', 'preprocess.@sym_link')]) + ]) # Complete with other links between nodes + + # [INFO] You can add conditional sections of code to shape the workflow depending + # on the method passed as parameter if method in ('equalRange', 'equalIndifference'): contrasts = [('Group', 'T', ['mean'], [1]), ('Group', 'T', ['mean'], [-1])] From 9213dc40d56191364be525bc0fe02212f0f4b710 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 10 Jan 2024 11:16:30 +0100 Subject: [PATCH 18/35] [DOC] writing test files --- docs/pipelines.md | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/docs/pipelines.md b/docs/pipelines.md index d59f9a9e..cdf64615 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -149,4 +149,13 @@ First have a look at the [testing page of the documentation](/docs/testing.md). All tests must be contained in a single file named `tests/pipelines/test_team_.py`. You can start by copy-pasting the template file : [tests/pipelines/templates/template_test.py](/tests/pipelines/templates/template_test.py) inside the `tests/pipelines/` directory, renaming it accordingly. Then, follow the instructions and tips inside the template and don't forget to replace `XXXX` with the actual team id, inside the document. > [!NOTE] -> Feel free to have a look at [tests/pipelines/test_team_2T6S.py](/tests/pipelines/test_team_2T6S.py), which is the file containing all the automatic tests for the 2T6S pipeline : it gives an example. +> Feel free to have a look at [tests/pipelines/test_team_C88N.py](/tests/pipelines/test_team_C88N.py), which is the file containing all the automatic tests for the C88N pipeline : it gives an example. + +Your test file must contain the following methods: +* one testing the execution of the pipeline and comparing the results with the original ones. 
This can be simply achieved with this line : +```python +helpers.test_pipeline_evaluation('C88N') +``` +* one for each method you declared in the pipeline class (e.g.: testing the `get_subject_information` method) +* one testing the instantiation of your pipeline (see `test_create` in `test_team_C88N.py`) +* one testing the `get_*_outputs` methods of your pipeline (see `test_outputs` in `test_team_C88N.py`) From 7d5fa82471ccedef23ae3d8e1d6af12ffb944f57 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 10 Jan 2024 11:21:26 +0100 Subject: [PATCH 19/35] Codespell --- narps_open/pipelines/templates/template_fsl.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/narps_open/pipelines/templates/template_fsl.py b/narps_open/pipelines/templates/template_fsl.py index 68b8ca77..c0f53a72 100644 --- a/narps_open/pipelines/templates/template_fsl.py +++ b/narps_open/pipelines/templates/template_fsl.py @@ -133,7 +133,7 @@ def get_subject_information(event_file: str): Returns : - subject_info : list of Bunch containing event information """ - # [INFO] nipype requires to import all dependancies from inside the methods that are + # [INFO] nipype requires to import all dependencies from inside the methods that are # later used in Function nodes from nipype.interfaces.base import Bunch From 11736e6898de6db1e15360859dc3a8ffd37677bc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 10 Jan 2024 17:21:20 +0100 Subject: [PATCH 20/35] First step in writing documentation about NARPS --- README.md | 6 ++++-- docs/narps.md | 43 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 47 insertions(+), 2 deletions(-) create mode 100644 docs/narps.md diff --git a/README.md b/README.md index 7172e25b..b18c8ff3 100644 --- a/README.md +++ b/README.md @@ -17,9 +17,11 @@ **The goal of the NARPS Open Pipelines project is to create a codebase reproducing the 70 pipelines of the NARPS study (Botvinik-Nezer et al., 2020) and share this 
as an open resource for the community**. -We base our reproductions on the [original descriptions provided by the teams](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx) and test the quality of the reproductions by comparing our results with the original results published on NeuroVault. +We base our reproductions on the original descriptions provided by the teams and test the quality of the reproductions by comparing our results with the original results published on NeuroVault. -:vertical_traffic_light: See [the pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) to view our current progress at a glance. +Find more information about the NARPS study [here](docs/narps.md). + +:vertical_traffic_light: See [the pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) to view our current progress at a glance ! ## Contributing diff --git a/docs/narps.md b/docs/narps.md new file mode 100644 index 00000000..93999fed --- /dev/null +++ b/docs/narps.md @@ -0,0 +1,43 @@ +# More information about the NARPS study + +This page aims at summaryzing the NARPS study (Botvinik-Nezer et al., 2020) for future contributors of NARPS Open Pipelines. + +## Context + +The global context / problem : analytical variability + +https://www.narps.info/ + +## The main idea behind the study + +Asking 70 teams to analyse the same dataset + +https://github.com/poldrack/narps +https://zenodo.org/records/3709273#.Y2jVkCPMIz4 + +## The data + +A dataset of task-fMRI with 108 participants, 4 runs for each. 
Each run + + +Raw and preprocessed fMRI data of two versions of the mixed gambles task, from the Neuroimaging Analysis Replication and Prediction Study +https://openneuro.org/datasets/ds001734/versions/1.0.5 + + + +## The outputs + +textual descriptions + team results +](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx) + +Data submitted by all participants in the Neuroimaging Analysis Replication and Prediction Study, along with results from prediction markets and metadata for analysis pipelines. +https://zenodo.org/records/3528329#.Y7_H1bTMKBT + +## Useful resources + +[Botvinik-Nezer, R. et al. (2020), ‘Variability in the analysis of a single neuroimaging dataset by many teams’, Nature.](https://www.nature.com/articles/s41586-020-2314-9) +https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7771346/ + + +Data ! +https://www.nature.com/articles/s41597-019-0113-7 \ No newline at end of file From c5fd548ff3c14061f51c541213d6265e95ddddb5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 11 Jan 2024 11:26:22 +0100 Subject: [PATCH 21/35] [DOC] completing doc about narps --- docs/narps.md | 62 ++++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 49 insertions(+), 13 deletions(-) diff --git a/docs/narps.md b/docs/narps.md index 93999fed..89112f66 100644 --- a/docs/narps.md +++ b/docs/narps.md @@ -1,29 +1,62 @@ # More information about the NARPS study -This page aims at summaryzing the NARPS study (Botvinik-Nezer et al., 2020) for future contributors of NARPS Open Pipelines. +This page aims at summarizing the NARPS study (Botvinik-Nezer et al., 2020) for the future contributors of NARPS Open Pipelines. 
## Context The global context / problem : analytical variability -https://www.narps.info/ + ## The main idea behind the study Asking 70 teams to analyse the same dataset -https://github.com/poldrack/narps -https://zenodo.org/records/3709273#.Y2jVkCPMIz4 - ## The data -A dataset of task-fMRI with 108 participants, 4 runs for each. Each run +NARPS is based on a dataset of task-fMRI with 108 participants, 4 runs for each. +### Raw functional volumes -Raw and preprocessed fMRI data of two versions of the mixed gambles task, from the Neuroimaging Analysis Replication and Prediction Study -https://openneuro.org/datasets/ds001734/versions/1.0.5 +> Each run consists of 64 trials performed during an fMRI scanning run lasting 453 seconds and comprising 453 volumes (given the repetition time of one second). + +For each participant, the associated data (4D volumes) is : +`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-01_bold.nii.gz` +`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-02_bold.nii.gz` +`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-03_bold.nii.gz` +`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-04_bold.nii.gz` + +### Event data + +> On each trial, participants were presented with a mixed gamble entailing an equal 50% chance of gaining one amount of money or losing another amount. + +For each participant, the associated data (events files) is : +`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-01_events.tsv` +`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-02_events.tsv` +`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-03_events.tsv` +`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-04_events.tsv` + +This file contains the onsets, response time, and response of the participant, as well as the amount of money proposed (gain and loss) for each trial. 
+> The pre-processed data included in this dataset were preprocessed using fMRIprep version 1.1.4, which is based on Nipype 1.1.1 +For each participant, the associated data (preprocessed volumes, confounds, brain masks, ...) is under : +`data/original/ds001734/derivatives/fmriprep/sub-*/func/` + +### Task-related data + +The associated data (task and dataset description) is : +`data/original/ds001734/T1w.json` +`data/original/ds001734/task-MGT_bold.json` +`data/original/ds001734/task-MGT_sbref.json` +`data/original/ds001734/dataset_description.json` + +Furthermore, the participants were assigned to a condition, either *equalRange* or *equalIndifference* : + +> Possible gains ranged between 10 and 40 ILS (in increments of 2 ILS) in the equal indifference condition or 5–20 ILS (in increments of 1 ILS) in the equal range condition, while possible losses ranged from 5–20 ILS (in increments of 1 ILS) in both conditions. + +The repartition is stored in : +`data/original/ds001734/participants.tsv` ## The outputs @@ -35,9 +68,12 @@ https://zenodo.org/records/3528329#.Y7_H1bTMKBT ## Useful resources -[Botvinik-Nezer, R. et al. (2020), ‘Variability in the analysis of a single neuroimaging dataset by many teams’, Nature.](https://www.nature.com/articles/s41586-020-2314-9) -https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7771346/ - +* The website dedicated to the study - [www.narps.info](https://www.narps.info/) +* The article - [Botvinik-Nezer, R. et al. (2020), 'Variability in the analysis of a single neuroimaging dataset by many teams', Nature](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7771346/) +* The associated data article - [Botvinik-Nezer, R. et al. (2019), 'fMRI data of mixed gambles from the Neuroimaging Analysis Replication and Prediction Study', Scientific Data](https://www.nature.com/articles/s41597-019-0113-7) +* The GitHub page for the NARPS repository. 
This code was used to generate the published results - [github.com/poldrack/narps](https://github.com/poldrack/narps) +* The snapshot of this code base [on Zenodo](https://zenodo.org/records/3709273#.Y2jVkCPMIz4) +* https://openneuro.org/datasets/ds001734/versions/1.0.5 -Data ! -https://www.nature.com/articles/s41597-019-0113-7 \ No newline at end of file +Raw and preprocessed fMRI data of two versions of the mixed gambles task, from the Neuroimaging Analysis Replication and Prediction Study +https://openneuro.org/datasets/ds001734/versions/1.0.5 \ No newline at end of file From 6c9d7a36246de26e91ec0e08dd43ce27a6c37593 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 11 Jan 2024 11:43:05 +0100 Subject: [PATCH 22/35] [DOC] completing doc about narps --- docs/narps.md | 66 +++++++++++++++++++++++++++++---------------------- 1 file changed, 38 insertions(+), 28 deletions(-) diff --git a/docs/narps.md b/docs/narps.md index 89112f66..5249d5ef 100644 --- a/docs/narps.md +++ b/docs/narps.md @@ -4,67 +4,79 @@ This page aims at summarizing the NARPS study (Botvinik-Nezer et al., 2020) for ## Context -The global context / problem : analytical variability +> In the Neuroimaging Analysis Replication and Prediction Study (NARPS), we aim to provide the first scientific evidence on the variability of results across analysis teams in neuroscience. - - -## The main idea behind the study - -Asking 70 teams to analyse the same dataset +From this starting point, 70 teams were asked to analyse the same dataset, providing their methods and results to be later analysed and compared. ## The data NARPS is based on a dataset of task-fMRI with 108 participants, 4 runs for each. +> For each participant, the dataset includes an anatomical (T1 weighted) scan and fMRI as well as behavioral data from four runs of the task. The dataset is shared through OpenNeuro and is formatted according to the Brain Imaging Data Structure (BIDS) standard. 
Data pre-processed with fMRIprep and quality control reports are also publicly shared. + ### Raw functional volumes > Each run consists of 64 trials performed during an fMRI scanning run lasting 453 seconds and comprising 453 volumes (given the repetition time of one second). For each participant, the associated data (4D volumes) is : -`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-01_bold.nii.gz` -`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-02_bold.nii.gz` -`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-03_bold.nii.gz` -`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-04_bold.nii.gz` +* `data/original/ds001734/sub-*/func/sub-*_task-MGT_run-01_bold.nii.gz` +* `data/original/ds001734/sub-*/func/sub-*_task-MGT_run-02_bold.nii.gz` +* `data/original/ds001734/sub-*/func/sub-*_task-MGT_run-03_bold.nii.gz` +* `data/original/ds001734/sub-*/func/sub-*_task-MGT_run-04_bold.nii.gz` ### Event data > On each trial, participants were presented with a mixed gamble entailing an equal 50% chance of gaining one amount of money or losing another amount. For each participant, the associated data (events files) is : -`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-01_events.tsv` -`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-02_events.tsv` -`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-03_events.tsv` -`data/original/ds001734/sub-*/func/sub-*_task-MGT_run-04_events.tsv` +* `data/original/ds001734/sub-*/func/sub-*_task-MGT_run-01_events.tsv` +* `data/original/ds001734/sub-*/func/sub-*_task-MGT_run-02_events.tsv` +* `data/original/ds001734/sub-*/func/sub-*_task-MGT_run-03_events.tsv` +* `data/original/ds001734/sub-*/func/sub-*_task-MGT_run-04_events.tsv` This file contains the onsets, response time, and response of the participant, as well as the amount of money proposed (gain and loss) for each trial. 
+### Preprocessed data + > The pre-processed data included in this dataset were preprocessed using fMRIprep version 1.1.4, which is based on Nipype 1.1.1 For each participant, the associated data (preprocessed volumes, confounds, brain masks, ...) is under : -`data/original/ds001734/derivatives/fmriprep/sub-*/func/` +* `data/original/ds001734/derivatives/fmriprep/sub-*/func/` ### Task-related data The associated data (task and dataset description) is : -`data/original/ds001734/T1w.json` -`data/original/ds001734/task-MGT_bold.json` -`data/original/ds001734/task-MGT_sbref.json` -`data/original/ds001734/dataset_description.json` +* `data/original/ds001734/T1w.json` +* `data/original/ds001734/task-MGT_bold.json` +* `data/original/ds001734/task-MGT_sbref.json` +* `data/original/ds001734/dataset_description.json` + +> [!TIP] +> The `narps_open.data.task` module helps parsing this data. Furthermore, the participants were assigned to a condition, either *equalRange* or *equalIndifference* : > Possible gains ranged between 10 and 40 ILS (in increments of 2 ILS) in the equal indifference condition or 5–20 ILS (in increments of 1 ILS) in the equal range condition, while possible losses ranged from 5–20 ILS (in increments of 1 ILS) in both conditions. The repartition is stored in : -`data/original/ds001734/participants.tsv` +* `data/original/ds001734/participants.tsv` + +> [!TIP] +> The `narps_open.data.participants` module helps parsing this data. ## The outputs -textual descriptions + team results -](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx) +Each of the team participating in NARPS had to provide a COBIDS-compliant *textual description* of its analysis of the dataset, as well as the results from it. 
+ +All the descriptions are gathered in the [analysis_pipelines_for_analysis.xlsx](https://github.com/poldrack/narps/blob/1.0.1/ImageAnalyses/metadata_files/analysis_pipelines_for_analysis.xlsx) in the NARPS repository on GitHub. + +> [!TIP] +> We developed a tool to easily parse this file, see [docs/description.md](docs/description.md) for more details on how to use it. -Data submitted by all participants in the Neuroimaging Analysis Replication and Prediction Study, along with results from prediction markets and metadata for analysis pipelines. -https://zenodo.org/records/3528329#.Y7_H1bTMKBT +Results data submitted by all the teams is available [on Zenodo](https://zenodo.org/records/3528329#.Y7_H1bTMKBT) + +> [!TIP] +> This data is included in the NARPS Open Pipelines repository under [data/results](data/results), and we developed a tool to access it easily : see the dedicated section in [docs/data.md](docs/data.md#results-from-narps-teams) for more details. ## Useful resources @@ -73,7 +85,5 @@ https://zenodo.org/records/3528329#.Y7_H1bTMKBT * The associated data article - [Botvinik-Nezer, R. et al. (2019), 'fMRI data of mixed gambles from the Neuroimaging Analysis Replication and Prediction Study', Scientific Data](https://www.nature.com/articles/s41597-019-0113-7) * The GitHub page for the NARPS repository. 
This code was used to generate the published results - [github.com/poldrack/narps](https://github.com/poldrack/narps) * The snapshot of this code base [on Zenodo](https://zenodo.org/records/3709273#.Y2jVkCPMIz4) -* https://openneuro.org/datasets/ds001734/versions/1.0.5 - -Raw and preprocessed fMRI data of two versions of the mixed gambles task, from the Neuroimaging Analysis Replication and Prediction Study -https://openneuro.org/datasets/ds001734/versions/1.0.5 \ No newline at end of file +* The dataset [on OpenNeuro](https://openneuro.org/datasets/ds001734/versions/1.0.5) +* Results data submitted by all the teams [on Zenodo](https://zenodo.org/records/3528329#.Y7_H1bTMKBT) From a016bdd964c8f3434a583ff22f0c56c07f557913 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 11 Jan 2024 14:02:01 +0100 Subject: [PATCH 23/35] [DOC] completing doc about narps --- docs/narps.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/docs/narps.md b/docs/narps.md index 5249d5ef..7d53ec4f 100644 --- a/docs/narps.md +++ b/docs/narps.md @@ -1,12 +1,14 @@ # More information about the NARPS study -This page aims at summarizing the NARPS study (Botvinik-Nezer et al., 2020) for the future contributors of NARPS Open Pipelines. +This page aims at summarizing the NARPS study [(Botvinik-Nezer et al., 2020)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7771346/) for the future contributors of NARPS Open Pipelines. + +In the following, the citations come from the associated data article : [Botvinik-Nezer, R. et al. (2019), 'fMRI data of mixed gambles from the Neuroimaging Analysis Replication and Prediction Study', Scientific Data](https://www.nature.com/articles/s41597-019-0113-7). ## Context > In the Neuroimaging Analysis Replication and Prediction Study (NARPS), we aim to provide the first scientific evidence on the variability of results across analysis teams in neuroscience. 
-From this starting point, 70 teams were asked to analyse the same dataset, providing their methods and results to be later analysed and compared. +From this starting point, 70 teams were asked to analyse the same dataset, providing their methods and results to be later analysed and compared. Nine hypotheses were to be tested, and a binary decision had to be reported for each one, stating whether it was significantly supported based on a whole-brain analysis. ## The data @@ -43,6 +45,8 @@ This file contains the onsets, response time, and response of the participant, a For each participant, the associated data (preprocessed volumes, confounds, brain masks, ...) is under : * `data/original/ds001734/derivatives/fmriprep/sub-*/func/` +Some teams chose to use this pre-processed data directly as inputs for the statistical analysis, while others performed their own pre-processing. + ### Task-related data The associated data (task and dataset description) is : From a73f3c3346eb589b43ebdaa756367d8fb7e45b72 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 11 Jan 2024 14:53:15 +0100 Subject: [PATCH 24/35] [DATALAD] change results url --- .gitmodules | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.gitmodules b/.gitmodules index d3eaeb2f..364a3345 100644 --- a/.gitmodules +++ b/.gitmodules @@ -5,6 +5,6 @@ datalad-url = https://github.com/OpenNeuroDatasets/ds001734.git [submodule "data/results"] path = data/results - url = https://gin.g-node.org/RemiGau/neurovault_narps_open_pipeline.git - datalad-url = https://gin.g-node.org/RemiGau/neurovault_narps_open_pipeline.git + url = https://gin.g-node.org/RemiGau/neurovault_narps_open_pipeline + datalad-url = https://gin.g-node.org/RemiGau/neurovault_narps_open_pipeline datalad-id = b7b70790-7b0c-40d3-976f-c7dd49df3b86 From 68ea23cebd8d60d8bdf242bbbad8eefde49d3e12 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 11 Jan 2024 17:13:54 +0100 Subject: [PATCH 25/35]
[DOC] reference to the github project for reproduction mgmt --- CONTRIBUTING.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 98b31c06..dcc8bd6f 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -9,7 +9,7 @@ In the following, read the instruction sections where the badge corresponding to ## 1 - Choose a pipeline `🧠 fMRI soft` `🐍 Python` -Not sure which pipeline to start with :thinking:? The [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) provides the progress status for each pipeline. You can pick any pipeline that is not fully reproduced, i.e.: not started :red_circle: or in progress :orange_circle: . +Not sure which pipeline to start with :thinking:? The [pipeline dashboard](https://github.com/Inria-Empenn/narps_open_pipelines/wiki/pipeline_status) provides the progress status for each pipeline. You can pick a pipeline that is not fully reproduced, i.e.: not started :red_circle: or in progress :orange_circle: . Also have a look at the [pipeline reproduction management page](https://github.com/orgs/Inria-Empenn/projects/1/views/1) to get in touch with contributors working on the same pipeline. > [!NOTE] > Need more information to make a decision? The `narps_open.utils.description` module of the project, as described [in the documentation](/docs/description.md) provides easy access to all the info we have on each pipeline.
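The next patch restricts the runner's `--team` option to implemented pipelines via argparse's `choices` parameter. A minimal sketch of that mechanism, using a stand-in `get_implemented_pipelines()` with a few example team IDs (not the full list):

```python
from argparse import ArgumentParser

# Stand-in for narps_open.pipelines.get_implemented_pipelines():
# the real function returns the IDs of all implemented pipelines.
def get_implemented_pipelines():
    return ['2T6S', 'C88N', 'U26C']

parser = ArgumentParser(description='Run the pipelines from NARPS.')
parser.add_argument('-t', '--team', type=str, required=True,
    help='the team ID', choices=get_implemented_pipelines())

# A valid team ID parses normally
print(parser.parse_args(['-t', '2T6S']).team)  # → 2T6S

# An unknown team ID makes argparse exit with an error listing the valid choices
try:
    parser.parse_args(['-t', 'XXXX'])
except SystemExit:
    print('rejected')  # → rejected
```

With `choices`, invalid team IDs are rejected before any pipeline code runs, and `--help` lists the valid values.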
From f4c0c24d941666eb4827a55cb0b2c2e60fdfacf8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Fri, 12 Jan 2024 15:34:51 +0100 Subject: [PATCH 26/35] [DOC] adding team id choices for narps open runner --- narps_open/runner.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/narps_open/runner.py b/narps_open/runner.py index 0776c4aa..195bd732 100644 --- a/narps_open/runner.py +++ b/narps_open/runner.py @@ -17,6 +17,7 @@ get_participants_subset ) from narps_open.utils.configuration import Configuration +from narps_open.pipelines import get_implemented_pipelines class PipelineRunner(): """ A class that allows to run a NARPS pipeline. """ @@ -157,7 +158,7 @@ def get_missing_group_level_outputs(self): # Parse arguments parser = ArgumentParser(description='Run the pipelines from NARPS.') parser.add_argument('-t', '--team', type=str, required=True, - help='the team ID') + help='the team ID', choices=get_implemented_pipelines()) subjects = parser.add_mutually_exclusive_group(required=True) subjects.add_argument('-s', '--subjects', nargs='+', type=str, action='extend', help='a list of subjects to be selected') From 6ebd536451fbb11548dcbb2a103678a34fbe90f0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Mon, 15 Jan 2024 15:25:56 +0100 Subject: [PATCH 27/35] [DOC] list of available team ids in command tools documentation --- README.md | 10 ++++----- narps_open/data/description/__main__.py | 3 ++- narps_open/data/results/__main__.py | 3 +-- narps_open/pipelines/__main__.py | 30 +++++++++++++++++++++++++ narps_open/tester.py | 4 +++- 5 files changed, 41 insertions(+), 9 deletions(-) create mode 100644 narps_open/pipelines/__main__.py diff --git a/README.md b/README.md index b18c8ff3..a1eee7a6 100644 --- a/README.md +++ b/README.md @@ -50,11 +50,11 @@ This project is supported by Région Bretagne (Boost MIND) and by Inria (Explora ## Credits -This project is developed in the Empenn team by Boris Clenet, Elodie Germani, 
Jeremy Lefort-Besnard and Camille Maumet with contributions by Rémi Gau. +This project is developed in the Empenn team by Boris Clénet, Elodie Germani, Jeremy Lefort-Besnard and Camille Maumet with contributions by Rémi Gau. In addition, this project was presented and received contributions during the following events: - - [OHBM Brainhack 2022](https://ohbm.github.io/hackathon2022/) (June 2022): Elodie Germani, Arshitha Basavaraj, Trang Cao, Rémi Gau, Anna Menacher, Camille Maumet. - - [e-ReproNim FENS NENS Cluster Brainhack](https://repro.school/2023-e-repronim-brainhack/) (June 2023) : Liz Bushby, Boris Clénet, Michael Dayan, Aimee Westbrook. - - [OHBM Brainhack 2023](https://ohbm.github.io/hackathon2023/) (July 2023): Arshitha Basavaraj, Boris Clénet, Rémi Gau, Élodie Germani, Yaroslav Halchenko, Camille Maumet, Paul Taylor. - - [ORIGAMI lab](https://neurodatascience.github.io/) hackathon (September 2023): - [Brainhack Marseille 2023](https://brainhack-marseille.github.io/) (December 2023): + - [ORIGAMI lab](https://neurodatascience.github.io/) hackathon (September 2023): + - [OHBM Brainhack 2023](https://ohbm.github.io/hackathon2023/) (July 2023): Arshitha Basavaraj, Boris Clénet, Rémi Gau, Élodie Germani, Yaroslav Halchenko, Camille Maumet, Paul Taylor. + - [e-ReproNim FENS NENS Cluster Brainhack](https://repro.school/2023-e-repronim-brainhack/) (June 2023) : Liz Bushby, Boris Clénet, Michael Dayan, Aimee Westbrook. + - [OHBM Brainhack 2022](https://ohbm.github.io/hackathon2022/) (June 2022): Elodie Germani, Arshitha Basavaraj, Trang Cao, Rémi Gau, Anna Menacher, Camille Maumet. 
diff --git a/narps_open/data/description/__main__.py b/narps_open/data/description/__main__.py index b6c9ead3..a7eaa84b 100644 --- a/narps_open/data/description/__main__.py +++ b/narps_open/data/description/__main__.py @@ -7,6 +7,7 @@ from json import dumps from narps_open.data.description import TeamDescription +from narps_open.pipelines import implemented_pipelines def main(): """ Entry-point for the command line tool narps_description """ @@ -14,7 +15,7 @@ def main(): # Parse arguments parser = ArgumentParser(description='Get description of a NARPS pipeline.') parser.add_argument('-t', '--team', type=str, required=True, - help='the team ID') + help='the team ID', choices=implemented_pipelines.keys()) parser.add_argument('-d', '--dictionary', type=str, required=False, choices=[ 'general', diff --git a/narps_open/data/results/__main__.py b/narps_open/data/results/__main__.py index 88111b87..64bbd34f 100644 --- a/narps_open/data/results/__main__.py +++ b/narps_open/data/results/__main__.py @@ -8,7 +8,6 @@ from narps_open.data.results import ResultsCollectionFactory from narps_open.pipelines import implemented_pipelines - def main(): """ Entry-point for the command line tool narps_results """ @@ -16,7 +15,7 @@ def main(): parser = ArgumentParser(description='Get Neurovault collection of results from NARPS teams.') group = parser.add_mutually_exclusive_group(required = True) group.add_argument('-t', '--teams', nargs='+', type=str, action='extend', - help='a list of team IDs') + help='a list of team IDs', choices=implemented_pipelines.keys()) group.add_argument('-a', '--all', action='store_true', help='download results from all teams') parser.add_argument('-r', '--rectify', action='store_true', default = False, required = False, help='rectify the results') diff --git a/narps_open/pipelines/__main__.py b/narps_open/pipelines/__main__.py new file mode 100644 index 00000000..60fd5c76 --- /dev/null +++ b/narps_open/pipelines/__main__.py @@ -0,0 +1,30 @@ +#!/usr/bin/python 
+# coding: utf-8 + +""" Provide a command-line interface for the package narps_open.pipelines """ + +from argparse import ArgumentParser + +from narps_open.pipelines import get_implemented_pipelines + +def main(): + """ Entry-point for the command line tool narps_open_pipeline """ + + # Parse arguments + parser = ArgumentParser(description='Get information about NARPS Open Pipelines.') + parser.add_argument('-v', '--verbose', action='store_true', + help='verbose mode') + arguments = parser.parse_args() + + # Print header + print('NARPS Open Pipelines') + + # Print general information about NARPS Open Pipelines + print('A codebase reproducing the 70 pipelines of the NARPS study (Botvinik-Nezer et al., 2020) shared as an open resource for the community.') + + # Print pipelines + implemented_pipelines = get_implemented_pipelines() + print(f'There are currently {len(implemented_pipelines)} implemented pipelines: {implemented_pipelines}') + +if __name__ == '__main__': + main() diff --git a/narps_open/tester.py b/narps_open/tester.py index 1a2cf284..c3b48ecd 100644 --- a/narps_open/tester.py +++ b/narps_open/tester.py @@ -8,13 +8,15 @@ import pytest +from narps_open.pipelines import get_implemented_pipelines + def main(): """ Entry-point for the command line tool narps_open_tester """ # Parse arguments parser = ArgumentParser(description='Test the pipelines from NARPS.') parser.add_argument('-t', '--team', type=str, required=True, - help='the team ID') + help='the team ID', choices=get_implemented_pipelines()) arguments = parser.parse_args() sys.exit(pytest.main([ From ecce46aeedbb03678c31e978a32d636e159c3e2f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Tue, 16 Jan 2024 11:06:42 +0100 Subject: [PATCH 28/35] [DOC] configuration info inside INSTALL.md --- INSTALL.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/INSTALL.md b/INSTALL.md index 9a429f00..a3f6ec21 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -66,6 +66,22 @@ Start a Docker
container from the Docker image : docker run -it -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ nipype/nipype ``` +Optionally edit the configuration file `narps_open/utils/default_config.toml` so that the refered paths match the ones inside the container. E.g.: if using the previous command line, the `directories` part of the configuration file should be : +```toml +# default_config.toml +# ... + +[directories] +dataset = "/home/neuro/code/data/original/ds001734/" +reproduced_results = "/home/neuro/code/data/reproduced/" +narps_results = "/home/neuro/code/data/results/" + +# ... +``` + +> [!NOTE] +> Further information about configuration files can be found on the page [docs/configuration.md](docs/configuration.md). + Install NARPS Open Pipelines inside the container : ```bash From 5b46057c213ef513e8e21e2555a6971b03f88292 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Tue, 16 Jan 2024 11:19:01 +0100 Subject: [PATCH 29/35] [DOC] configuration info inside INSTALL.md --- INSTALL.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/INSTALL.md b/INSTALL.md index a3f6ec21..630f96e6 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -66,7 +66,7 @@ Start a Docker container from the Docker image : docker run -it -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ nipype/nipype ``` -Optionally edit the configuration file `narps_open/utils/default_config.toml` so that the refered paths match the ones inside the container. E.g.: if using the previous command line, the `directories` part of the configuration file should be : +Optionally edit the configuration file `narps_open/utils/default_config.toml` so that the referred paths match the ones inside the container. E.g.: if using the previous command line, the `directories` part of the configuration file should be : ```toml # default_config.toml # ... 
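The `[directories]` snippet added to INSTALL.md above is standard TOML, so it can be checked by parsing it with the project's TOML dependency. A minimal sketch (tomli is the library listed in setup.py; Python ≥ 3.11 ships the equivalent tomllib in the standard library):

```python
try:
    import tomllib  # standard library on Python >= 3.11
except ImportError:
    import tomli as tomllib  # the dependency listed in setup.py

# The [directories] table from the INSTALL.md example above
config_text = """
[directories]
dataset = "/home/neuro/code/data/original/ds001734/"
reproduced_results = "/home/neuro/code/data/reproduced/"
narps_results = "/home/neuro/code/data/results/"
"""

config = tomllib.loads(config_text)
print(config['directories']['dataset'])  # → /home/neuro/code/data/original/ds001734/
```

Parsing the edited file this way is a quick sanity check that the paths were entered as valid TOML strings.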
From b8c1fda43595785b7778752a389701f9e1e63304 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 24 Jan 2024 10:17:38 +0100 Subject: [PATCH 30/35] NARPS Exclusion comments --- .../analysis_pipelines_comments.tsv | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/narps_open/data/description/analysis_pipelines_comments.tsv b/narps_open/data/description/analysis_pipelines_comments.tsv index 5ebf2405..fd858ebc 100644 --- a/narps_open/data/description/analysis_pipelines_comments.tsv +++ b/narps_open/data/description/analysis_pipelines_comments.tsv @@ -5,7 +5,7 @@ O21U No N/A 3 U26C No N/A 4 Link to shared analysis code : https://github.com/gladomat/narps 43FJ No N/A 2 C88N No N/A 3 -4TQ6 Yes Resampled image offset and too large compared to template. 3 +4TQ6 No Resampled image offset and too large compared to template. 3 T54A No N/A 3 2T6S No N/A 3 L7J7 No N/A 3 @@ -17,17 +17,17 @@ O6R6 No N/A 3 C22U No N/A 1 Custom Matlab script for white matter PCA confounds 3PQ2 No N/A 2 UK24 No N/A 2 -4SZ2 Yes Resampled image offset from template brain. 3 +4SZ2 No Resampled image offset from template brain. 3 9T8E No N/A 3 94GU No N/A 1 Multiple software dependencies : SPM + ART + TAPAS + Matlab. I52Y No N/A 2 5G9K Yes Values in the unthresholded images are not z / t stats 3 -2T7P Yes Missing thresholded images. 2 Link to shared analysis code : https://osf.io/3b57r +2T7P No Missing thresholded images. 2 Link to shared analysis code : https://osf.io/3b57r UI76 No N/A 3 B5I6 No N/A 3 -V55J Yes Bad histogram : very small values. 2 +V55J No Bad histogram : very small values. 2 X19V No N/A 3 -0C7Q Yes Appears to be a p-value distribution, with slight excursions below and above zero. 2 +0C7Q No Appears to be a p-value distribution, with slight excursions below and above zero. 
2 R5K7 No N/A 2 0I4U No N/A 2 3C6G No N/A 2 @@ -37,20 +37,20 @@ O03M No N/A 3 80GC No N/A 3 J7F9 No N/A 3 R7D1 No N/A 3 Link to shared analysis code : https://github.com/IMTAltiStudiLucca/NARPS_R7D1 -Q58J Yes Bad histogram : bimodal, zero-inflated with a second distribution centered around 5. 3 Link to shared analysis code : https://github.com/amrka/NARPS_Q58J -L3V8 Yes Rejected due to large amount of missing brain in center. 2 +Q58J No Bad histogram : bimodal, zero-inflated with a second distribution centered around 5. 3 Link to shared analysis code : https://github.com/amrka/NARPS_Q58J +L3V8 No Rejected due to large amount of missing brain in center. 2 SM54 No N/A 3 1KB2 No N/A 2 -0H5E Yes Rejected due to large amount of missing brain in center. 2 -P5F3 Yes Rejected due to large amounts of missing data across brain. 2 +0H5E No Rejected due to large amount of missing brain in center. 2 +P5F3 No Rejected due to large amounts of missing data across brain. 2 Q6O0 No N/A 3 R42Q No N/A 2 Uses fMRIflows, a custom software based on NiPype. Code available here : https://github.com/ilkayisik/narps_R42Q L9G5 No N/A 2 DC61 No N/A 3 -E3B6 Yes Bad histogram : very long tail, with substantial inflation at a value just below zero. 4 Link to shared analysis code : doi.org/10.5281/zenodo.3518407 +E3B6 No Bad histogram : very long tail, with substantial inflation at a value just below zero. 4 Link to shared analysis code : doi.org/10.5281/zenodo.3518407 16IN Yes Values in the unthresholded images are not z / t stats 2 Multiple software dependencies : matlab + SPM + FSL + R + TExPosition + neuroim. Link to shared analysis code : https://github.com/jennyrieck/NARPS 46CD No N/A 1 -6FH5 Yes Missing much of the central brain. 2 +6FH5 No Missing much of the central brain. 2 K9P0 No N/A 3 9U7M No N/A 2 VG39 Yes Performed small volume corrected instead of whole-brain analysis 3 @@ -64,8 +64,8 @@ AO86 No N/A 2 L1A8 Yes Not in MNI standard space. 
2 IZ20 No N/A 1 3TR7 No N/A 3 -98BT Yes Rejected due to very bad normalization. 2 +98BT No Rejected due to very bad normalization. 2 XU70 No N/A 1 Uses custom software : FSL + 4drealign 0ED6 No N/A 2 -I07H Yes Bad histogram : bimodal, with second distribution centered around 2.5. 2 +I07H No Bad histogram : bimodal, with second distribution centered around 2.5. 2 1P0Y No N/A 2 From d59c3298e3a96215f938bf80c57c5fd9779e82d1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 14 Feb 2024 14:47:06 +0100 Subject: [PATCH 31/35] Empenn hackathon names --- INSTALL.md | 5 +---- README.md | 3 ++- 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/INSTALL.md b/INSTALL.md index 6af924c1..8574b4e1 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -66,11 +66,8 @@ Start a Docker container from the Docker image : docker run -it -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ nipype/nipype ``` -<<<<<<< HEAD -Optionally edit the configuration file `narps_open/utils/default_config.toml` so that the referred paths match the ones inside the container. E.g.: if using the previous command line, the `directories` part of the configuration file should be : -======= Optionally edit the configuration file `narps_open/utils/configuration/default_config.toml` so that the referred paths match the ones inside the container. E.g.: if using the previous command line, the `directories` part of the configuration file should be : ->>>>>>> main + ```toml # default_config.toml # ... diff --git a/README.md b/README.md index a1eee7a6..8c3bcf57 100644 --- a/README.md +++ b/README.md @@ -53,7 +53,8 @@ This project is supported by Région Bretagne (Boost MIND) and by Inria (Explora This project is developed in the Empenn team by Boris Clénet, Elodie Germani, Jeremy Lefort-Besnard and Camille Maumet with contributions by Rémi Gau. 
In addition, this project was presented and received contributions during the following events: - - [Brainhack Marseille 2023](https://brainhack-marseille.github.io/) (December 2023): + - Empenn team hackathon (February 2024): Mathieu Acher, Élise Bannier, Boris Clénet, Isabelle Corouge, Malo Gaubert, Élodie Germani, Gauthier Le Bartz Lyan, Jérémy Lefort-Besnard, Camille Maumet, Youenn Merel, Alexandre Pron. + - [Brainhack Marseille 2023](https://brainhack-marseille.github.io/) (December 2023) - [ORIGAMI lab](https://neurodatascience.github.io/) hackathon (September 2023): - [OHBM Brainhack 2023](https://ohbm.github.io/hackathon2023/) (July 2023): Arshitha Basavaraj, Boris Clénet, Rémi Gau, Élodie Germani, Yaroslav Halchenko, Camille Maumet, Paul Taylor. - [e-ReproNim FENS NENS Cluster Brainhack](https://repro.school/2023-e-repronim-brainhack/) (June 2023) : Liz Bushby, Boris Clénet, Michael Dayan, Aimee Westbrook. From 9f561361602fdf97a7c78b8a1a340943d99e5289 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 14 Feb 2024 15:22:53 +0100 Subject: [PATCH 32/35] Data documentation (datalad get recursive --- README.md | 2 +- docs/data.md | 7 ++++++- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 8c3bcf57..3eae7353 100644 --- a/README.md +++ b/README.md @@ -53,7 +53,7 @@ This project is supported by Région Bretagne (Boost MIND) and by Inria (Explora This project is developed in the Empenn team by Boris Clénet, Elodie Germani, Jeremy Lefort-Besnard and Camille Maumet with contributions by Rémi Gau. In addition, this project was presented and received contributions during the following events: - - Empenn team hackathon (February 2024): Mathieu Acher, Élise Bannier, Boris Clénet, Isabelle Corouge, Malo Gaubert, Élodie Germani, Gauthier Le Bartz Lyan, Jérémy Lefort-Besnard, Camille Maumet, Youenn Merel, Alexandre Pron. 
+ - [Empenn team](https://team.inria.fr/empenn/) hackathon (February 2024): Mathieu Acher, Élise Bannier, Boris Clénet, Isabelle Corouge, Malo Gaubert, Élodie Germani, Gauthier Le Bartz Lyan, Jérémy Lefort-Besnard, Camille Maumet, Youenn Merel, Alexandre Pron. - [Brainhack Marseille 2023](https://brainhack-marseille.github.io/) (December 2023) - [ORIGAMI lab](https://neurodatascience.github.io/) hackathon (September 2023): - [OHBM Brainhack 2023](https://ohbm.github.io/hackathon2023/) (July 2023): Arshitha Basavaraj, Boris Clénet, Rémi Gau, Élodie Germani, Yaroslav Halchenko, Camille Maumet, Paul Taylor. diff --git a/docs/data.md b/docs/data.md index 3a68b32e..77013750 100644 --- a/docs/data.md +++ b/docs/data.md @@ -20,10 +20,15 @@ Tips for people using M1 MacBooks: `git-annex` is not yet available for M1 MacBo The `datalad install` command only downloaded the metadata associated with the dataset ; to download the actual files run the following command: +> [! WARNING] +> The following command lines will download **all** the data, which represents around : +> * 3.2GB for `data/results/` +> * 550GB for `data/original/` + ```bash # To get all the data cd data/ -datalad get ./* +datalad get --recursive ./* ``` If you only want parts of the data, replace the `./*` by the paths to the desired files. 
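The testing configuration pairs each value of `correlation_thresholds` with a subject count from [20, 40, 60, 80, 108]. A hypothetical helper (not project code) showing how such a lookup can work, using the threshold values from the patch below:

```python
# Threshold values paired with the subject counts [20, 40, 60, 80, 108];
# threshold_for is a hypothetical helper, not part of narps_open.
correlation_thresholds = [0.30, 0.70, 0.78, 0.85, 0.93]
subject_counts = [20, 40, 60, 80, 108]

def threshold_for(nb_subjects):
    """ Return the expected correlation threshold for a given subject count. """
    for count, threshold in zip(subject_counts, correlation_thresholds):
        if nb_subjects <= count:
            return threshold
    return correlation_thresholds[-1]

print(threshold_for(40))   # → 0.7
print(threshold_for(108))  # → 0.93
```

The thresholds grow with the number of subjects: the more subjects enter the group-level analysis, the closer the reproduced maps are expected to correlate with the original team results.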
From d6eef33ea75013eac345341e1fe5efb4242e2f91 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Wed, 14 Feb 2024 15:34:46 +0100 Subject: [PATCH 33/35] Adjusting correlation thresholds inside testing configuration, after U26C results --- narps_open/utils/configuration/testing_config.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/narps_open/utils/configuration/testing_config.toml b/narps_open/utils/configuration/testing_config.toml index b5374183..86ba77b8 100644 --- a/narps_open/utils/configuration/testing_config.toml +++ b/narps_open/utils/configuration/testing_config.toml @@ -23,4 +23,4 @@ neurovault_naming = true # true if results files are saved using the neurovault [testing.pipelines] nb_subjects_per_group = 4 # Compute first level analyses by subgroups of N subjects, to avoid lacking of disk and memory -correlation_thresholds = [0.30, 0.70, 0.79, 0.85, 0.93] # Correlation between reproduced hypotheses files and results, respectively for [20, 40, 60, 80, 108] subjects. +correlation_thresholds = [0.30, 0.70, 0.78, 0.85, 0.93] # Correlation between reproduced hypotheses files and results, respectively for [20, 40, 60, 80, 108] subjects. From af709fc14824a283ef6f78e1b47d15e12fd8ca3d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Thu, 15 Feb 2024 15:13:54 +0100 Subject: [PATCH 34/35] Update dataset size --- docs/data.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/data.md b/docs/data.md index 77013750..b619bf13 100644 --- a/docs/data.md +++ b/docs/data.md @@ -22,8 +22,8 @@ The `datalad install` command only downloaded the metadata associated with the d > [! 
WARNING] > The following command lines will download **all** the data, which represents around : -> * 3.2GB for `data/results/` -> * 550GB for `data/original/` +> * 3 GB for `data/results/` +> * 880 GB for `data/original/` ```bash # To get all the data From 83765ebe5dc9b0a527bccdb5c5f17349e8917dc7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Boris=20Cl=C3=A9net?= Date: Mon, 19 Feb 2024 11:30:03 +0100 Subject: [PATCH 35/35] Freeze versions --- INSTALL.md | 6 +++--- docs/environment.md | 10 +++++----- setup.py | 18 +++++++++--------- 3 files changed, 17 insertions(+), 17 deletions(-) diff --git a/INSTALL.md b/INSTALL.md index 8574b4e1..18de2747 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -43,7 +43,7 @@ datalad get data/original/ds001734/derivatives/fmriprep/sub-00[1-4] -J 12 [Install Docker](https://docs.docker.com/engine/install/) then pull the nipype Docker image : ```bash -docker pull nipype/nipype +docker pull nipype/nipype:py38 ``` Once it's done you can check the image is available on your system : @@ -51,7 +51,7 @@ Once it's done you can check the image is available on your system : ```bash docker images REPOSITORY TAG IMAGE ID CREATED SIZE - docker.io/nipype/nipype latest 0f3c74d28406 9 months ago 22.7 GB + docker.io/nipype/nipype py38 0f3c74d28406 9 months ago 22.7 GB ``` > [!NOTE] @@ -63,7 +63,7 @@ Start a Docker container from the Docker image : ```bash # Replace PATH_TO_THE_REPOSITORY in the following command (e.g.: with /home/user/dev/narps_open_pipelines/) -docker run -it -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ nipype/nipype +docker run -it -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ nipype/nipype:py38 ``` Optionally edit the configuration file `narps_open/utils/configuration/default_config.toml` so that the referred paths match the ones inside the container. 
E.g.: if using the previous command line, the `directories` part of the configuration file should be : diff --git a/docs/environment.md b/docs/environment.md index 00442421..a345e94f 100644 --- a/docs/environment.md +++ b/docs/environment.md @@ -2,12 +2,12 @@ ## The Docker container :whale: -The NARPS Open Pipelines project is build upon several dependencies, such as [Nipype](https://nipype.readthedocs.io/en/latest/) but also the original software packages used by the pipelines (SPM, FSL, AFNI...). Therefore we recommend to use the [`nipype/nipype` Docker image](https://hub.docker.com/r/nipype/nipype/) that contains all the required software dependencies. +The NARPS Open Pipelines project is built upon several dependencies, such as [Nipype](https://nipype.readthedocs.io/en/latest/) but also the original software packages used by the pipelines (SPM, FSL, AFNI...). Therefore we recommend using the [`nipype/nipype:py38` Docker image](https://hub.docker.com/r/nipype/nipype/) that contains all the required software dependencies. The simplest way to start the container is by using the command below : ```bash -docker run -it nipype/nipype +docker run -it nipype/nipype:py38 ``` From this command line, you need to add volumes to be able to link with your local files (code repository).
@@ -16,7 +16,7 @@ From this command line, you need to add volumes to be able to link with your loc # Replace PATH_TO_THE_REPOSITORY in the following command (e.g.: with /home/user/dev/narps_open_pipelines/) docker run -it \ -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ \ - nipype/nipype + nipype/nipype:py38 ``` ## Use Jupyter with the container @@ -27,7 +27,7 @@ If you wish to use [Jupyter](https://jupyter.org/) to run the code, a port forwa docker run -it \ -v PATH_TO_THE_REPOSITORY:/home/neuro/code/ \ -p 8888:8888 \ - nipype/nipype + nipype/nipype:py38 ``` Then, from inside the container : @@ -81,7 +81,7 @@ To use SPM inside the container, use this command at the beginning of your scrip ```python from nipype.interfaces import spm -matlab_cmd = '/opt/spm12-r7771/run_spm12.sh /opt/matlabmcr-2010a/v713/ script' +matlab_cmd = '/opt/spm12-r7219/run_spm12.sh /opt/matlabmcr-2010a/v713/ script' spm.SPMCommand.set_mlab_paths(matlab_cmd=matlab_cmd, use_mcr=True) ``` diff --git a/setup.py b/setup.py index 185c8418..ec22904f 100644 --- a/setup.py +++ b/setup.py @@ -18,18 +18,18 @@ 'tomli>=2.0.1,<2.1', 'networkx>=2.0,<3.0', # a workaround to nipype's bug (issue 3530) 'nilearn>=0.10.0,<0.11', - 'nipype', - 'pandas' + 'nipype>=1.8.6,<1.9', + 'pandas>=1.5.2,<1.6' ] extras_require = { 'tests': [ - 'pathvalidate', - 'pylint', - 'pytest', - 'pytest-cov', - 'pytest-helpers-namespace', - 'pytest-mock', - 'checksumdir' + 'pathvalidate>=3.2.0,<3.3', + 'pylint>=3.0.3,<3.1', + 'pytest>=7.2.0,<7.3', + 'pytest-cov>=2.10.1,<2.11', + 'pytest-helpers-namespace>=2021.12.29,<2021.13', + 'pytest-mock>=3.12.0,<3.13', + 'checksumdir>=1.2.0,<1.3' ] }