Hed library example addition #332

Merged (2 commits) on Aug 31, 2022
1 change: 1 addition & 0 deletions README.md
@@ -130,6 +130,7 @@ bids-examples (in alphabetical order).
| eeg_ds003654s_hed | @VisLab | Shows usage of Hierarchical Event Descriptor (HED) in events files | https://openneuro.org/datasets/ds003645 |
| eeg_ds003654s_hed_inheritance | @VisLab | HED annotation with multiple inherited sidecars | https://openneuro.org/datasets/ds003645 |
| eeg_ds003654s_hed_longform | @VisLab | HED annotation using tags in long form. | https://openneuro.org/datasets/ds003645 |
| eeg_ds003654s_hed_library | @VisLab | HED annotation using HED library vocabularies (schema). | https://openneuro.org/datasets/ds003645 |
| eeg_cbm | @cpernet | Rest EEG. European Data Format (.edf) | |

### iEEG datasets
6 changes: 6 additions & 0 deletions eeg_ds003654s_hed_library/CHANGES
@@ -0,0 +1,6 @@
1.0.0 2021-05-11
- First release
Revision history for Face Recognition experiment by Wakeman-Henson

version 1.0 - April 2021
- Initial release of EEG data in this experiment for HED education purposes
24 changes: 24 additions & 0 deletions eeg_ds003654s_hed_library/README
@@ -0,0 +1,24 @@
**Introduction:**
This dataset consists of the MEEG (sMRI+MEG+EEG) portion of the multi-subject, multi-modal face processing dataset (ds000117). This dataset was originally acquired and shared by Daniel Wakeman and Richard Henson (https://pubmed.ncbi.nlm.nih.gov/25977808/). The data has been repackaged in EEGLAB format and has undergone minimal preprocessing as well as reorganization and annotation of the dataset events.

**Overview of the experiment:**
Eighteen participants completed two recording sessions spaced three months apart – one session recorded fMRI and the other simultaneously recorded MEG and EEG data. During each session, participants performed the same simple perceptual task, responding to presented photographs of famous, unfamiliar, and scrambled faces by pressing one of two keyboard keys to indicate a subjective yes or no decision as to the relative spatial symmetry of the viewed face. Famous faces were feature-matched to unfamiliar faces; half the faces were female. The two sessions (MEEG, fMRI) had different organizations of event timing and presentation because of technological requirements of the respective imaging modalities. Each individual face was presented twice during the session. For half of the presented faces, the second presentation followed immediately after the first. For the other half, the second presentation was delayed by 5-15 face presentations.

**Preprocessing:**
The preprocessing, which was performed using the `wh_extracteeg_BIDS.m` located in the code directory, includes the following steps:
* Ignore MRI data except for sMRI
* Extract EEG channels out of the MEG/EEG fif data
* Add fiducials
* Rename EOG and EKG channels
* Extract events from event channel
* Remove spurious events 5, 6, 7, 13, 14, 15, 17, 18 and 19
* Remove spurious event 24 for subject 3 run 4
* Rename events taking into account button assigned to each subject
* Correct event latencies (events have a shift of 34 ms)
* Resample data to 250 Hz (this step is performed because this dataset is used in a tutorial for EEGLAB and needs to be lightweight)
* Remove event fields `urevent` and `duration`
* Save as EEGLAB .set format
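
The event-stream cleanup steps above (dropping spurious codes, correcting the 34 ms latency shift, removing `urevent` and `duration`) can be sketched as follows. This is a minimal Python illustration with hypothetical event records, not the repository's actual `wh_extracteeg_BIDS.m` MATLAB code; the sign of the latency correction is an assumption.

```python
# Illustrative sketch of the event cleanup described above, using plain
# Python dicts in place of the EEGLAB event structure.

SPURIOUS = {5, 6, 7, 13, 14, 15, 17, 18, 19}
SHIFT_MS = 34  # latency correction from the README (direction assumed)

def clean_events(events):
    """Drop spurious event codes, correct latencies, strip unused fields."""
    cleaned = []
    for ev in events:
        if ev["code"] in SPURIOUS:
            continue
        ev = dict(ev, latency_ms=ev["latency_ms"] + SHIFT_MS)
        # Mirrors removing the `urevent` and `duration` event fields
        ev.pop("urevent", None)
        ev.pop("duration", None)
        cleaned.append(ev)
    return cleaned

events = [
    {"code": 5, "latency_ms": 1000},               # spurious, removed
    {"code": 1, "latency_ms": 2000, "urevent": 7}, # kept, shifted to 2034
]
print(clean_events(events))
```

The per-subject renaming and resampling steps are omitted here; they depend on EEGLAB internals that the sketch does not model.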

**Data curators:**
Ramon Martinez, Dung Truong, Scott Makeig, Arnaud Delorme (UCSD, La Jolla, CA, USA), Kay Robbins (UTSA, San Antonio, TX, USA)

24 changes: 24 additions & 0 deletions eeg_ds003654s_hed_library/dataset_description.json
@@ -0,0 +1,24 @@
{
"Name": "Face processing MEEG dataset with HED annotation",
"BIDSVersion": "1.8.4",
"HEDVersion": ["8.0.0", "sc:score_0.0.1", "test:testlib_1.0.2"],
"License": "CC0",
"Authors": [
"Daniel G. Wakeman",
"Richard N Henson",
"Dung Truong (curation)",
"Kay Robbins (curation)",
"Scott Makeig (curation)",
"Arno Delorme (curation)"
],
"ReferencesAndLinks": [
"Wakeman, D., Henson, R. (2015). A multi-subject, multi-modal human neuroimaging dataset. Sci Data 2, 150001. https://doi.org/10.1038/sdata.2015.1",
"Robbins, K., Truong, D., Appelhoff, S., Delorme, A., & Makeig, S. (2021). Capturing the nature of events and event context using Hierarchical Event Descriptors (HED). In press for NeuroImage Special Issue Practice in MEEG. NeuroImage 245 (2021) 118766. Online: https://www.sciencedirect.com/science/article/pii/S1053811921010387.",
"Robbins, K., Truong, D., Jones, A., Callanan, I., & Makeig, S. (2021). Building FAIR functionality: Annotating events in time series data using Hierarchical Event Descriptors (HED). Neuroinformatics Special Issue Building the NeuroCommons. Neuroinformatics https://doi.org/10.1007/s12021-021-09537-4. Online: https://link.springer.com/article/10.1007/s12021-021-09537-4."
],
"Funding": [
"Experiment was supported by the UK Medical Research Council (MC_A060_5PR10) and Elekta Ltd.",
"Curation was supported by: Army Research Laboratory W911NF-10-2-0022, NIH R01 EB023297-03, NIH R01 NS047293-l4, and NIH R24 MH120037-01."
],
"DatasetDOI": "10.18112/openneuro.ds003645.v1.0.0"
}
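
The `HEDVersion` field above is what makes this a library-schema example: entries like `"sc:score_0.0.1"` pair a namespace prefix (`sc:`) with a library schema name and version, while a bare `"8.0.0"` selects the standard HED schema. A minimal sketch of splitting such entries (the helper name is illustrative, not part of any HED tool's API):

```python
# Hedged sketch: split HEDVersion entries such as "sc:score_0.0.1"
# into (prefix, schema_name, version). "" as prefix means the
# standard (unprefixed) schema.

def parse_hed_version(entry):
    prefix, _, schema = entry.rpartition(":")
    name, _, version = schema.rpartition("_")
    if not name:  # bare version string such as "8.0.0"
        name, version = "standard", schema
    return prefix, name, version

versions = ["8.0.0", "sc:score_0.0.1", "test:testlib_1.0.2"]
print([parse_hed_version(v) for v in versions])
```

Tags annotated with a prefixed schema must carry the same prefix in the events files (e.g. `sc:` for SCORE vocabulary tags), which is the convention this example dataset demonstrates.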
17 changes: 17 additions & 0 deletions eeg_ds003654s_hed_library/participants.json
@@ -0,0 +1,17 @@
{
"participant_id": {
"LongName": "Participant identifier",
"Description": "Unique subject identifier"
},
"gender": {
"Description": "Sex of the subject",
"Levels": {
"M": "male",
"F": "female"
}
},
"age": {
"Description": "Age of the subject",
"Units": "years"
}
}
3 changes: 3 additions & 0 deletions eeg_ds003654s_hed_library/participants.tsv
@@ -0,0 +1,3 @@
participant_id age gender
sub-002 31 M
sub-003 25 M
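
The `participants.tsv` rows above can be checked against the categorical `Levels` declared in `participants.json` with a few lines of standard-library Python. This is a minimal consistency sketch, not a BIDS validator:

```python
import csv
import io

# The two files from this diff, inlined for a self-contained check
participants_tsv = (
    "participant_id\tage\tgender\n"
    "sub-002\t31\tM\n"
    "sub-003\t25\tM\n"
)
allowed_gender = {"M", "F"}  # Levels from participants.json

rows = list(csv.DictReader(io.StringIO(participants_tsv), delimiter="\t"))
assert all(r["gender"] in allowed_gender for r in rows)
assert all(r["age"].isdigit() for r in rows)  # Units: years
print(len(rows))
```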
155 empty files added (the EEG data and recording files in this example are committed without content).