From 8d7825f74bf4199afac88abcd8f8ce0f4bbeba96 Mon Sep 17 00:00:00 2001
From: Fatemeh Taheriyan
Date: Mon, 13 May 2024 09:55:30 +0200
Subject: [PATCH] Deployed d5ea5b0 with MkDocs version: 1.6.0

---
 03_cogdataset/index.html | 27 +++++++++++++++++++++++----
 index.html               |  2 +-
 search/search_index.json |  2 +-
 3 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/03_cogdataset/index.html b/03_cogdataset/index.html
index 881aeee..8f70840 100644
--- a/03_cogdataset/index.html
+++ b/03_cogdataset/index.html
@@ -80,9 +80,11 @@
@@ -309,7 +311,7 @@

 Eye Tracking & Behavioral Data
 • Stimulus and jitter duration
 • Subject's responses
-M-EEG data
+M-EEG Data
 • Unprocessed/Raw release format

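The eye-tracking and behavioral files touched by this hunk are plain text (ASC/CSV, per the file type glossary later in this patch). A minimal reading sketch, with hypothetical file names, assuming the standard EyeLink ASC layout:

```python
# Sketch: load the two release formats for eye-tracking/behavioral data.
# File names are hypothetical placeholders.
import pandas as pd

# Behavioral data: plain CSV (blocks, events, trials, durations, responses).
behavior = pd.read_csv("sub-XX_behavior.csv")

# EyeLink ASC: text export of the EyeLink EDF; sample lines start with a
# numeric timestamp followed by x, y and pupil size; parsed eye events
# start with EFIX/ESACC/EBLINK.
samples, events = [], []
with open("sub-XX_eyetracking.asc") as f:
    for line in f:
        parts = line.split()
        if not parts:
            continue
        if parts[0].isdigit():                          # sample: time x y pupil
            samples.append(parts[:4])
        elif parts[0] in ("EFIX", "ESACC", "EBLINK"):   # parsed eye events
            events.append(parts)
```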
      @@ -334,7 +336,7 @@

 M-EEG data
 • Covariance matrices: Statistical information about the relationships between sensor measurements at different time points or frequencies.
 • Anatomical MRI data: High-resolution structural images of the subject's brain, used for source localization and spatial alignment.
-iEEG data
+iEEG/ECoG Data
 • Unprocessed/Raw release format

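The FIF contents listed in the context above are readable with MNE-Python. A minimal inspection sketch; the file path is a hypothetical placeholder and the trigger channel name is an assumption (verify against the released files):

```python
# Sketch: inspect a released M-EEG FIF file with MNE-Python.
import mne

raw = mne.io.read_raw_fif("sub-XX_task-dur_meg.fif", preload=False)
print(raw.info)                    # sensors, sampling rate, filter settings
print(raw.info["dig"])             # digitized head points / EEG positions

# Trigger channel name "STI101" is an assumption (common on MEGIN systems).
events = mne.find_events(raw, stim_channel="STI101")
```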
        @@ -351,6 +353,23 @@

 iEEG data
 European Data Format files used for storing and exchanging time-series biological and physiological data.
 EDF files are designed to accommodate data from multiple channels, allowing researchers to store and manage data collected simultaneously from different sensors or electrodes. The format supports both raw signal data and associated metadata, including information about sampling rates, units of measurement, patient demographics, and recording conditions.

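As a hedged illustration of the EDF description above, a few lines with MNE-Python (the file name is a hypothetical placeholder):

```python
# Sketch: open a released iEEG EDF file and inspect its metadata.
import mne

raw = mne.io.read_raw_edf("sub-XX_task-dur_ieeg.edf", preload=False)
print(raw.info["sfreq"])        # sampling rate
print(raw.ch_names[:10])        # channel labels
data, times = raw[:5, :1000]    # first 5 channels, first 1000 samples
```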
+MR/CT Data
+• Unprocessed/Raw release format
+  • Filetype: DICOM/NIFTI
+• BIDS Format
+  • Filetype: DICOM/NIFTI
+DICOM is a standard format utilized for storing CT (Computed Tomography) scans and MRI (Magnetic Resonance Imaging) data. These files encompass not only the image data but also essential metadata, including imaging parameters.
+NIFTI serves as another format employed for a subset of subjects where our standard procedure encountered challenges. With the exception of the MR and CT scans for 12 subjects within the iEEG data, all other datasets of similar nature are stored in DICOM format. Further details regarding these 12 problematic datasets are available in this section. NIFTI files encapsulate image data alongside metadata concerning spatial orientation, voxel dimensions, and additional imaging parameters.
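As a sketch of how the two added formats are typically opened, using the common readers pydicom and nibabel (paths are hypothetical placeholders, and attribute availability varies by series):

```python
# Sketch: load the two anatomical image formats named above.
import pydicom
import nibabel as nib

ds = pydicom.dcmread("ct/slice_001.dcm")   # one DICOM slice plus its metadata
print(ds.Modality, ds.PixelSpacing)        # e.g. 'CT' and in-plane resolution

img = nib.load("sub-XX_T1w.nii.gz")        # NIfTI volume
print(img.shape, img.header.get_zooms())   # matrix size and voxel dimensions
data = img.get_fdata()                     # image array
```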

 Data Acquisition
 The Cogitate dataset encompasses three distinct neuroimaging modalities, along with synchronized eye-tracking and behavioral data linked to each of these modalities. Here we detail the acquisition protocol for each modality in the corresponding data release: M-EEG, iEEG.
 Stimuli

diff --git a/index.html b/index.html
index e66add8..9cb250b 100644
--- a/index.html
+++ b/index.html
@@ -202,5 +202,5 @@

 Welcome to the Cogit
diff --git a/search/search_index.json b/search/search_index.json
index 75e0b8e..e1d8615 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome to the Cogitate Data Release Documentation Version Author(s) 1.1 Taheriyan, F., Das, D., Kahraman, K., Sripad, P., Brown, T., Oh, H., Melloni, L., Bonacchi, N. Date Editor(s) 13.05.2024 Taheriyan, F., Das, D., Kahraman, K., Sripad, P., Brown, T., Oh, H., Melloni, L., Bonacchi, N.","title":"Welcome to the Cogitate Data Release Documentation"},{"location":"#welcome-to-the-cogitate-data-release-documentation","text":"Version Author(s) 1.1 Taheriyan, F., Das, D., Kahraman, K., Sripad, P., Brown, T., Oh, H., Melloni, L., Bonacchi, N. Date Editor(s) 13.05.2024 Taheriyan, F., Das, D., Kahraman, K., Sripad, P., Brown, T., Oh, H., Melloni, L., Bonacchi, N.","title":"Welcome to the Cogitate Data Release Documentation"},{"location":"01_intro/","text":"Introduction This document provides guidance and detailed information on the datasets released by COGITATE, how to access them, the directory structure, and a description of the various types of data acquired for each modality. Updates This is the second version ( V1.1 ) of the COGITATE data release document. New updates or any changes to the previous versions will be announced here, and as a versioned document here as well. In V1.1 , the raw/unprocessed and BIDS (Brain Imaging Data Structure) format of iEEG (intracranial electroencephalography) data, collected by the Cogitate Consortium for Experiment 1, are released. Future Releases Here are the items that will be released soon: Experiment 1 BIDS format of the M-EEG data (batch 2) Unprocessed/raw format of all M-EEG data (batch 1 and batch 2) Unprocessed/raw and BIDS format of fMRI data Attention: M-EEG, MEEG, M/EEG, MEG/EEG or MEG might be used interchangeably throughout this document or the names of data folders, but all of them refer to the same data. This also applies to iEEG and ECoG (Electrocorticography).","title":"Introduction"},{"location":"01_intro/#introduction","text":"This document provides guidance and detailed information on the datasets released by COGITATE, how to access them, the directory structure, and a description of the various types of data acquired for each modality.","title":"Introduction"},{"location":"01_intro/#updates","text":"This is the second version ( V1.1 ) of the COGITATE data release document. New updates or any changes to the previous versions will be announced here, and as a versioned document here as well. In V1.1 , the raw/unprocessed and BIDS (Brain Imaging Data Structure) format of iEEG (intracranial electroencephalography) data, collected by the Cogitate Consortium for Experiment 1, are released.","title":"Updates"},{"location":"01_intro/#future-releases","text":"Here are the items that will be released soon:","title":"Future Releases"},{"location":"01_intro/#experiment-1","text":"BIDS format of the M-EEG data (batch 2) Unprocessed/raw format of all M-EEG data (batch 1 and batch 2) Unprocessed/raw and BIDS format of fMRI data Attention: M-EEG, MEEG, M/EEG, MEG/EEG or MEG might be used interchangeably throughout this document or the names of data folders, but all of them refer to the same data.
This also applies to iEEG and ECoG (Electrocorticography).","title":"Experiment 1"},{"location":"02_overview/","text":"Overview of COGITATE What are the mechanisms that give rise to consciousness? This question has been the focus of extensive research, leading to the development of several prominent theories, including Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT). Critically, however, the focus so far has been on testing each theory independently, gathering evidence for/against them separately, leaving open a crucial question: which theory has higher explanatory power when tested against each other directly? COGITATE is a pioneering Open Science adversarial collaboration to bridge this gap and evaluate GNWT and IIT through two studies, named Experiment 1 (EXP1) and Experiment 2 (EXP2). In these experiments, multimodal empirical tests are conducted on human volunteers, combining magneto-electroencephalography (M-EEG), functional magnetic resonance imaging (fMRI) and invasive intracortical recordings (iEEG) along with behavioral and eye tracking measurements. The reason for this approach is to maximize the sensitivity and specificity to the tests of each hypothesis, while accounting for trade-offs between temporal and spatial specificity inherent to the currently available methods in human neuroscience. Goals The aim of the COGITATE project is to accelerate research on consciousness and establish a groundbreaking model for scientific practices in cognitive neuroscience at large, by demonstrating the impact of team-based adversary research and open data to address some of the major riddles in the field, much like established practices in other fields of inquiry such as physics and genomics. Furthermore, the resulting products of this research include a large and unique multimodal database, high-end analysis tools, and a new paradigm for probing consciousness in naturalistic settings. All experimental procedures, multimodal datasets, and analysis tools developed in this project will be made openly available to the public. These products will propel further discoveries in the field of consciousness, and in cognitive neuroscience in general, which will exceed and outlast the direct outputs of the proposed studies. Experiments The COGITATE consortium performed two experiments: In Experiment 1 (EXP1), two sets of clearly visible task relevant and irrelevant stimuli were shown to the subjects with different durations. The goal was to test the effects of maintenance of a percept in consciousness and task relevance and contradictory predictions regarding the involvement of prefrontal and posterior, category selective cortical areas in consciousness. Specifically, the main questions were: How is the persistence of a stimulus in consciousness reflected in cortical hemodynamic and electrophysiological activity, i.e., are the neural responses phasic or sustained throughout a conscious experience? Do activity patterns in prefrontal areas relate to visual consciousness per se or to its consequences, i.e., task-related processes? In Experiment 2 (EXP2), a novel paradigm was developed to test the key predictions of GNWT and IIT while overcoming a major obstacle in the field: creating more naturalistic conditions of invisibility that do not degrade the physical input. To achieve this goal, an engaging video game was used with the help of which salient stimuli were presented for relatively long durations in the background. 
Sometimes the stimuli were not consciously seen due to attentional engagement by the game. This approach allowed us to uniquely study neural activity elicited by seen or unseen stimuli under naturalistic conditions so that the stimuli can either be task relevant or task irrelevant. Experiment 1: Conscious Perception Objective The primary aim of this experiment was to investigate neural activity in response to stimuli that are consciously perceived. It was designed to manipulate two key factors: Relevance of the Stimulus to the Task: This factor was categorized into three levels\u2014Task-relevant target, Task-relevant non-target, and Task-irrelevant stimulus. Stimulus Duration: The stimuli were presented for durations of 500 ms, 1000 ms, and 1500 ms. This design framework allowed us to test several key hypotheses, including: Disentangling consciousness-related activations from task-related activations. Identifying brain regions that convey information about the content of consciousness. Examining the persistence of the content of consciousness over time. Design This experiment followed a 3x3x4x2 factorial design, with the following items: Relevance of Stimulus to the Task (3) - Task-relevant target - Task-relevant non-target - Task-irrelevant stimulus Stimulus Duration (3) - 500 ms - 1000 ms - 1500 ms Stimulus Category (4) - Faces - Objects - Letters - False-fonts (meaningless symbols) Stimulus Orientation (2) - Side view (right or left view) - Front view Sample Size The sample sizes were determined based on common practices in the literature, resulting in a total of 122 subjects for fMRI, 102 for M-EEG, and 38 for iEEG. All subjects met specific criteria, including age and health conditions, to ensure data quality. Task Description A sequence of images including faces, objects, letters or meaningless symbols (\u2018false fonts\u2019) with front or side (left or right) view was presented to the subjects. At the beginning of each sequence, the target images were presented and subjects were asked to memorize and remember them during the sequence. Subjects were instructed to press a button with their index finger when they saw targets (in either front or side views) as quickly and accurately as possible. The duration of each sequence was approximately 2 minutes. The next sequence started when the subjects pressed the space key. Here is an example of the task: For a comprehensive summary and further details about the experiments, please refer to the following supplementary resources: PLOS One Methods paper (COGITATE Main Scientific Paper 1 (MSP-1)): Melloni L, Mudrik L, Pitts M, Bendtz K, Ferrante O, et al. (2023) An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLOS ONE 18(2): e0268577. https://doi.org/10.1371/journal.pone.0268577 COGITATE Preregistration, v4 EXP 1 Demo video Experiment 2: Video Game Engagement Currently in preparation. It will be released soon! Task Code and Stimuli repositories The task code and stimuli used for EXP1 and for all modalities are available in the cogitate-experiment-code repository.","title":"Overview of COGITATE"},{"location":"02_overview/#overview-of-cogitate","text":"What are the mechanisms that give rise to consciousness? This question has been the focus of extensive research, leading to the development of several prominent theories, including Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT).
Critically, however, the focus so far has been on testing each theory independently, gathering evidence for/against them separately, leaving open a crucial question: which theory has higher explanatory power when tested against each other directly? COGITATE is a pioneering Open Science adversarial collaboration to bridge this gap and evaluate GNWT and IIT through two studies, named Experiment 1 (EXP1) and Experiment 2 (EXP2). In these experiments, multimodal empirical tests are conducted on human volunteers, combining magneto-electroencephalography (M-EEG), functional magnetic resonance imaging (fMRI) and invasive intracortical recordings (iEEG) along with behavioral and eye tracking measurements. The reason for this approach is to maximize the sensitivity and specificity of the tests of each hypothesis, while accounting for trade-offs between temporal and spatial specificity inherent to the currently available methods in human neuroscience.","title":"Overview of COGITATE"},{"location":"02_overview/#goals","text":"The aim of the COGITATE project is to accelerate research on consciousness and establish a groundbreaking model for scientific practices in cognitive neuroscience at large, by demonstrating the impact of team-based adversarial research and open data to address some of the major riddles in the field, much like established practices in other fields of inquiry such as physics and genomics. Furthermore, the resulting products of this research include a large and unique multimodal database, high-end analysis tools, and a new paradigm for probing consciousness in naturalistic settings. All experimental procedures, multimodal datasets, and analysis tools developed in this project will be made openly available to the public. These products will propel further discoveries in the field of consciousness, and in cognitive neuroscience in general, which will exceed and outlast the direct outputs of the proposed studies.","title":"Goals"},{"location":"02_overview/#experiments","text":"The COGITATE consortium performed two experiments: In Experiment 1 (EXP1), two sets of clearly visible task relevant and irrelevant stimuli were shown to the subjects with different durations. The goal was to test the effects of maintenance of a percept in consciousness and task relevance, and the contradictory predictions regarding the involvement of prefrontal and posterior, category selective cortical areas in consciousness. Specifically, the main questions were: How is the persistence of a stimulus in consciousness reflected in cortical hemodynamic and electrophysiological activity, i.e., are the neural responses phasic or sustained throughout a conscious experience? Do activity patterns in prefrontal areas relate to visual consciousness per se or to its consequences, i.e., task-related processes? In Experiment 2 (EXP2), a novel paradigm was developed to test the key predictions of GNWT and IIT while overcoming a major obstacle in the field: creating more naturalistic conditions of invisibility that do not degrade the physical input. To achieve this goal, an engaging video game was used with the help of which salient stimuli were presented for relatively long durations in the background. Sometimes the stimuli were not consciously seen due to attentional engagement by the game.
This approach allowed us to uniquely study neural activity elicited by seen or unseen stimuli under naturalistic conditions so that the stimuli can either be task relevant or task irrelevant.","title":"Experiments"},{"location":"02_overview/#experiment-1-conscious-perception","text":"","title":"Experiment 1: Conscious Perception"},{"location":"02_overview/#objective","text":"The primary aim of this experiment was to investigate neural activity in response to stimuli that are consciously perceived. It was designed to manipulate two key factors: Relevance of the Stimulus to the Task: This factor was categorized into three levels\u2014Task-relevant target, Task-relevant non-target, and Task-irrelevant stimulus. Stimulus Duration: The stimuli were presented for durations of 500 ms, 1000 ms, and 1500 ms This design framework allowed us to test several key hypotheses, including: Disentangling consciousness-related activations from task-related activations. Identifying brain regions that convey information about the content of consciousness. Examining the persistence of the content of consciousness over time.","title":"Objective"},{"location":"02_overview/#design","text":"This experiment followed a 3x3x4x2 factorial design, with the following items: Relevance of Stimulus to the Task (3) - Task-relevant target - Task-relevant non-target - Task-irrelevant stimulus Stimulus Duration (3) - 500 ms - 1000 ms - 1500 ms Stimulus Category (4) - Faces - Objects - Letters - False-fonts (meaningless symbols) Stimulus Orientation (2) - Side view (right or left view) - Front view","title":"Design"},{"location":"02_overview/#sample-size","text":"The sample sizes were determined based on common practices in the literature, resulting in a total of 122 subjects for fMRI, 102 for M-EEG, and 38 for iEEG. All subjects met specific criteria, including age and health conditions, to ensure data quality.","title":"Sample Size"},{"location":"02_overview/#task-description","text":"A sequence of images including faces, objects, letters or meaningless symbols (\u2018false fonts\u2019) with front or side (left or right) view were presented to the subjects. At the beginning of each sequence, the target images were presented and subjects were asked to memorize and remember them during the sequence. Subjects were instructed to press any buttons with their index finger when they saw targets (in either front or side views) as quickly and accurately as possible. The duration of each sequence was approximately 2 minutes. The next sequence started when the subjects pressed the space key. Here is an example of the tasks: For a comprehensive summary of more details about the experiments, please refer to the following supplementary resources: PLOS One Methods paper (COGITATE Main Scientific Paper 1 (MSP-1)): Melloni L, Mudrik L, Pitts M, Bendtz K, Ferrante O, et al. (2023) An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLOS ONE 18(2): e0268577. https://doi.org/10.1371/journal.pone.0268577 COGITATE Preregistration, v4 EXP 1 Demo video","title":"Task Description"},{"location":"02_overview/#experiment-2-video-game-engagement","text":"Currently in preparation. 
It will be released soon!","title":"Experiment 2: Video Game Engagement"},{"location":"02_overview/#task-code-and-stimuli-repositories","text":"The task code and stimuli used for EXP1 and for all modalities are available in the cogitate-experiment-code repository.","title":"Task Code and Stimuli repositories"},{"location":"03_cogdataset/","text":"COGITATE Dataset The COGITATE dataset is a comprehensive collection of multimodal neuroimaging data, encompassing a total of 262 subjects. COGITATE employs three distinct neuroimaging techniques: fMRI, M-EEG, and iEEG/ECoG. The MEG modality comprised 102 healthy subjects, also above the age of 18, with no known psychiatric or neurological issues. These participants were sourced from the Centre for Human Brain Health at the University of Birmingham (Birmingham, United Kingdom) and the Center for MRI Research of Peking University (Beijing, China). Similarly, the fMRI modality included 122 healthy volunteers, all of whom were above the age of 18 and predominantly right-handed. These participants had no known history of psychiatric or neurological disorders and were recruited from the Yale Magnetic Resonance Research Center (New Haven, CT, United States) and the Donders Centre for Cognitive Neuroimaging (Nijmegen, Netherlands). In contrast, the iEEG modality involved a more specialized cohort of 38 patients diagnosed with pharmaco-resistant focal epilepsy. These participants ranged in age from 10 to 65 years, had an IQ above 70, and met specific health criteria. They were recruited from multiple medical centers specializing in epilepsy treatment, including the Comprehensive Epilepsy Center at New York University (New York, NY, United States), Brigham and Women\u2019s Hospital, Boston Children\u2019s Hospital (Boston, MA, United States), and the University of Wisconsin School of Medicine and Public Health (Madison, WI, United States). Age histograms across modalities Sex proportions across modalities Handedness proportions across modalities Demography of Subjects You can find the profile of participants for all modalities at subjects_demography . Here is a brief explanation about the information collected from the subjects. Demographic Information for M-EEG The below items are included in the subjects\u2019 demography for M-EEG modality: Participant_ID (participant identifier), sex (biological sex of participant), age (age of participant at the time of testing), handedness (right, left or ambidextrous), included in MSP (whether the data was used for the experiments or not), phase* (determining in which stage the data is, phase 2/phaseII (optimization) or phase 3/phaseIII (replication)), QC** status (passed/not), if Not (in QC status) QC rejection reason, weight (weight of participant at the time of study in pounds), height (height of participant at the time of study in inches), primary and secondary language, race (ethnicity of the participant), education, compensation (whether the subject was paid or not), colorblind (determining whether the subject can distinguish the colors and perceiving them correctly or not), visual correction (none or using any glasses or contact lenses), eye dominance (which eye is dominant), eye chart results (the outcome of a visual acuity test performed using the eye chart) and dioptry (visual acuity of the participant in Diopters). * Phase: COGITATE project has three phases. In phase 1 , all data were acquired by theory neutral teams. 
To ensure replicability of the results, the entire dataset was split into two halves, each with an equal mixture of data from each of the labs for each modality. In phase 2 , after evaluating data quality, the first half of the data were used for developing analysis tools (optimization of methods). The purpose of phase 2 was to define the best analysis practices and to agree upon them, in consultation with expert advisors. In phase 3 , the replication phase, the second half of the data were analyzed using the agreed-upon procedures and protocols, thereby allowing an in-house replication of the results obtained in phase 2. **QC (quality control): A number of items were checked for all the data of each modality, which are elaborated in the section Quality Check and Exclusion Criteria . Demographic Information for fMRI All of the items are similar to the M-EEG modality. Demographic Information for iEEG In addition to the properties mentioned for the M-EEG modality, the parameters below were also provided for this modality: Electrode scheme (the scheme used for implanting the electrodes, Stereo, Subdural grid & Strips), number of implanted electrodes, implant hemisphere (brain hemisphere where the electrodes were implanted, right, left, both or bilateral), IQ (score and name of the test used for assessment (FSIQ, WISC, VCI, POI, WMI, PSI, AMI, VMI)), WADA (intracarotid sodium amobarbital, a test that determines which side of the subject\u2019s brain controls language and memory functions), seizure type (classification of seizure type), age of onset (age at which the first symptoms of seizure appeared), auditory normal hearing (indicator of whether the participant had normal hearing capabilities, yes or no), epilepsy seizure classification (categorization of epilepsy as per standard seizure classification), epilepsy seizure aura (description of any sensory or perceptual symptoms before a seizure occurred), epilepsy seizure semiology (signs and symptoms exhibited during epileptic seizures), epilepsy seizure frequency (frequency of seizures experienced by participant), epilepsy post ictal semiology (symptoms and signs after an epileptic seizure), epilepsy trigger (identified factors or circumstances that increased the likelihood of experiencing a seizure), epilepsy duration uncontrolled (the duration that seizures had not been successfully managed or medically controlled), epilepsy seizure onset zone (brain region identified as the initial site of seizure activity), epilepsy resection (details of any surgical resection performed for seizure control), epilepsy language lateralization (determination of the dominant hemisphere for language function), epilepsy past surgical history (record of any previous surgeries related to the treatment of epilepsy), epilepsy past medical history (medical history relevant to epilepsy diagnosis and treatment), epilepsy family history (presence of seizure or epilepsy disorders in family members), other neurological disorders (any other diagnosed neurological disorders besides epilepsy), epilepsy MRI findings (summary of MRI findings relevant to epilepsy diagnosis), epilepsy pathology findings (pathological findings from tissue analysis post-surgery or biopsy). Quality Check Data from all modalities were checked at three levels. The first level checks tested whether the datasets contained all expected files respecting their naming conventions, and that all personal information had been removed. The second level checks tested subjects\u2019 performance with respect to behavior.
For Experiment 1 , subjects were excluded if their hit rate was lower than 80% or their false alarm (FA) rate was higher than 20% for M-EEG and fMRI; for iEEG, a more relaxed criterion of 70% Hits and 30% FAs was used. Two M-EEG subjects were excluded due to low hit rates and one iEEG patient was excluded due to high FAs. The third level checks assessed the quality of the neural data. Exclusion Criteria The generic exclusion criteria used across Experiment 1 and Experiment 2 included: (a) insufficient number of trials in each of the experimental conditions (<30 for M-EEG or <20 for fMRI), due to excessive muscular artifacts, movement, noisy recording, or subjects deciding to stop the experiments. If a given analysis showed that a good enough signal could be obtained with fewer trials, these numbers were amended; and (b) low performance in the attention tasks. In Experiment 1 , this translates into: <80% Hits, >20% FAs for fMRI and M-EEG subjects; <70% Hits, >30% FAs for iEEG patients. In addition, data was excluded from analysis if it did not pass any of the predefined data quality checks. Description of COGITATE Data Although our data collection had a specific purpose, the data we gathered holds potential value for a range of diverse inquiries. Consequently, the COGITATE consortium has chosen to openly share all raw data collected (including the data that did not pass the quality criteria), to facilitate its utilization for various research endeavors and promote data reusability. We have made available two primary formats for the data acquired during the experimental phase of the COGITATE project, specifically Experiment 1 : Unprocessed/Raw Data BIDS Format 1. Unprocessed/Raw Data The unprocessed data format closely resembles the original acquired data, having undergone minimal processing to ensure compliance with GDPR (General Data Protection Regulation)/ HIPAA (Health Insurance Portability & Accountability Act) anonymity standards. 2. BIDS Format BIDS format, widely adopted in cognitive neuroscience, enhances data reusability. To facilitate others in leveraging our data, we have released it in BIDS format. File Type Glossary Here are the various file formats used for each modality of the COGITATE dataset along with a short description of them. Eye Tracking & Behavioral Data Unprocessed/Raw release format - Filetype: ASC/CSV BIDS Format - Filetype: ASC/CSV The two eye trackers used within COGITATE are: 1. EyeLink eye tracker 2. Tobii eye tracker 1) EyeLink eye tracker : Most of the sites used this eye tracker, which produces data in the EDF format (EyeLink Data Format). This data was immediately converted to ASCII text files using the converter provided by EyeLink. These are the ASC files that we used in our data. 2) Tobii eye tracker : The other eye tracker was the Tobii eye tracker used by New York University Langone for ECOG data. This eye tracker produces data in the form of CSV files. The files generated by eye tracking systems contain information about eye movement and gaze behavior. They typically store a time-stamped sequence of gaze data points and include information such as: Timestamps: The exact time at which each gaze data point was recorded. Gaze Coordinates: The x and y coordinates on the screen where the person's gaze is directed. Pupil Diameter: The size of the person's pupil, which can provide insights into changes in visual processing or cognitive load. Fixations: Periods of stable gaze where the person is looking at a specific point without significant movement.
Saccades: Rapid eye movements between fixations, indicating shifts in attention. Blinks: Instances when the person's eyes are closed, which can be important for data cleaning and analysis. Behavioral data is available in CSV format and it provides the following information: Blocks Events Trials Stimulus and jitter duration Subject's responses M-EEG data Unprocessed/Raw release format Filetype: FIF BIDS Format Filetype: FIF File Format for the Input and Output of MEG and EEG data FIF files contain various types of information related to neuroimaging data, including: Raw sensor data: MEG and EEG measurements recorded from sensors placed on the scalp or near the head. Event information: Time-stamped triggers or markers indicating the timing of events, such as stimulus presentations or subject responses. Sensor locations and orientations: Information about the physical positions and orientations of sensors used in the measurements. Head geometry: Information about the shape and structure of the subject's head, which is crucial for accurate source localization. Covariance matrices: Statistical information about the relationships between sensor measurements at different time points or frequencies. Anatomical MRI data: High-resolution structural images of the subject's brain, used for source localization and spatial alignment. iEEG data Unprocessed/Raw release format Filetype: EDF BIDS Format Filetype: EDF European Data Format files used for storing and exchanging time-series biological and physiological data EDF files are designed to accommodate data from multiple channels, allowing researchers to store and manage data collected simultaneously from different sensors or electrodes. The format supports both raw signal data and associated metadata, including information about sampling rates, units of measurement, patient demographics, and recording conditions. Data Acquisition The Cogitate dataset encompasses three distinct neuroimaging modalities, along with synchronized eye-tracking and behavioral data linked to each of these modalities. Here we detail the acquisition protocol for each modality in the corresponding data release: M-EEG, iEEG Stimuli Stimuli belonged to four categories that naturally fell into two groups that were clearly distinct from each other: pictures (20 faces and 20 objects) and symbols (20 letters and 20 false-fonts). Face stimuli were created using the FaceGen Modeler 3.1 program and object stimuli were taken from the Object Databank (Tarr, 1996). Faces and objects were grey-scaled (RGB: 125, 125, 125), and manipulated to have similar size and equal luminance using the SHINE toolbox (Willenbockel et al., 2010). Equal proportions of male and female faces were presented. They all had hair and belonged to different ethnicities (e.g., Caucasian, Asian, African, American) to facilitate face individuation. The orientation of the stimuli was manipulated, such that half of the stimuli from each category had a side view and the other half a front view. All letter stimuli and false fonts were generated with MAXON CINEMA 4D Studio (RC-R20) 20.059 on macOS 10.14, appearing in gray (RGB: 125, 125, 125). Three views were rendered for each font set (real font, false/pseudo font) at 0\u00b0, 30\u00b0 and -30\u00b0 horizontal viewing angle with the following settings: Extrusion depth 9.79% of character height, camera distance 5.65 times character height and 18\u00b0 above the center of the letter (High Angle), with a simulated focal length of 135 mm (35 mm equiv.).
All stimuli were presented on a rectangular aperture at an average visual angle of 6° by 6°. Procedure Stimuli were presented sequentially, all supra-threshold, with half being task-relevant and the other half task-irrelevant. Only one stimulus was shown on the screen at any given time. To define task relevance, subjects were instructed to detect two targets from different categories, regardless of their orientation. This online reporting enabled an explicit assessment of subjects\u2019 performance, engaging report-related areas for later analysis. Each block began with notification of the two target stimuli, either pictorial (faces and objects) or symbolic (letters and false fonts), creating a clear distinction between relevant and irrelevant stimuli. At the start of each block, specific target stimuli were revealed with instructions such as \u201cdetect face A and object B\u201d or \"detect letter C and false-font D.\" Targets did not repeat across blocks. Each run included two blocks of the Face/Object task and two blocks of the Letter/False-font task, with the order counterbalanced across runs. Subjects were instructed to maintain central fixation throughout each trial. Gaze was monitored online through an eye tracker, with repeated calibrations ensuring good quality data. Each block comprised stimuli from all four categories, with each stimulus displayed for 500, 1000, or 1500 ms, followed by a blank interval, ensuring a consistent trial duration of 2000 ms. Within each block, three trial types were presented: i) Task Relevant Targets, consisting of the specific stimuli participants were tasked with detecting; ii) Task Relevant Non-Targets, encompassing stimuli from relevant categories that were not designated targets; and iii) Task Irrelevant Stimuli, comprising stimuli from the remaining categories. M-EEG Data Acquisition M-EEG recordings were acquired at the Centre for Human Brain Health (CHBH) of University of Birmingham in the United Kingdom, and at the Center for MRI Research of Peking University (PKU) in China. Hardware Both centers had a 306-channel, whole-head TRIUX MEG system from MEGIN (York Instruments; formerly Elekta). The MEG system comprised 204 planar gradiometers and 102 magnetometers in a helmet-shaped array. Simultaneous EEG was recorded using an integrated EEG system and a 64-channel electrode cap. The MEG system was equipped with a zero boil-off Helium recycling system and the noise-resilient ARMOR sensors and placed in a shielded room (2 layers of mu-metal and 1 layer of aluminum). To reduce environmental noise, the integrated active shielding system was used at PKU. In order to cover the brain more homogeneously, the MEG gantry was positioned at 68 degrees. Location of Electrodes and ECG/EOG Measurements The location of the fiducials, the positions of the 64 EEG electrodes and the participant\u2019s head shape were recorded using a 3-D digitizer system (Polhemus Isotrak). A set of bipolar electrodes were placed on the subject\u2019s chest (upper left and upper right chest position) to record the cardiac signal (ECG). Two sets of bipolar electrodes were placed around the eyes (two located at the outer canthi of the right and left eyes and two above and below the center of the right eye) to record eye movements and blinks (EOG). Ground and reference electrodes were placed on the back of the neck and on the right cheek, respectively. The impedance of all of the electrodes was checked to be below 10 kOhm.
Standard Locations of EOG and ECG electrodes Head Position Indicator (HPI) Coils The participant\u2019s head position inside the MEG system was measured at the beginning and at the end of each run using four head position indicator (HPI) coils placed on the EEG cap. Specifically, the HPI coils were placed next to the left and right mastoids and on the left and right forehead. Their location relative to anatomical landmarks was digitized with a Polhemus Isotrak System. During the measurement, high frequency (>200 Hz) signals were produced by those coils and the localization of these signals was used to estimate the head position in the sensor space. To avoid the potential artifacts produced by the non-linear interaction between the signals generated by these coils, head position measurement was performed only during resting periods (as opposed to continuously). Standard locations of HPI coils. Coil Numbers: 1. Blue, 2. White, 3. Red, 4. Black, 5. Yellow Anatomical MRI Data Acquisition For each subject, a high resolution T1-weighted MRI volume (3T Siemens MRI Prisma scanner) was acquired. At CHBH, a 32-channel coil with a resolution of 1 x 1 x 1 mm, TR/TE = 2000/2.03ms; TI = 880 ms; 8\u00b0 flip angle, 208 sagittal slices and field of view (FOV): 256 x 256 matrix was acquired for source localization with individual realistic head modeling. At PKU, a 64-channel coil with a resolution of 0.5 x 0.5 x 1 mm, TR/TE = 2530/2.98ms; TI = 1100 ms; 7\u00b0 flip angle, 192 sagittal slices; FOV: 448 \u00d7 512 matrix was used. To avoid possible interference of body magnetization on the MEG recording, all MRI scans were acquired at least one week before the MEG session, or at any time afterwards. The FreeSurfer standard template was used (fsaverage) for participants lacking an anatomical scan (N=5). Behavioral Data Acquisition The task was executed using Matlab (PKU: R2018b; UB: R2019b) with Psychtoolbox v.3 (Pelli, 1997) on a custom PC at UB and a Dell XPS desktop PC at PKU. Visual stimuli were presented on a screen placed in front of the subjects with a PROPixx DLP LED projector (VPixx Technologies Inc.) at a resolution of 1920 x 1080 pixels and a refresh rate of 120 Hz. The distance between the subject\u2019s eyes and the screen was different at each site (CHBH: 119 cm, PKU: 85 cm) to achieve the same FOV of 36.6 x 21.2 degrees. Subjects responded with an 8-button response box (Millikey LH-8). Eye Tracking Data Acquisition Eye movements were monitored and recorded from both eyes (binocular eye-tracking) using the MEG-compatible EyeLink 1000 Plus eye-tracker (SR Research Ltd., Ottawa, Canada). Nine-point calibration was performed at the beginning of the experiment, and recalibrated if necessary at the beginning of each block/word. Pupil size and corneal reflection data were collected at a sampling rate of 1000 Hz. Behavioral Data Code Scheme Stimuli are coded as a 4-digit number. 1st digit = stimulus type (1 = face; 2 = object; 3 = letter; 4 = false font) 2nd digit = stimulus orientation (1 = center; 2 = left; 3 = right) 3rd & 4th digits = stimulus id (1...20; for faces 1...10 is male, 11...20 is female) e.g., \"1219\" = 1 is face, 2 is left orientation and 19 is a female stimulus #19 Eye Tracker and MEG Code Scheme The channel name that contains the eye tracker data in the FIF file is as follows: MISC1 (X), MISC2 (Y), and MISC3 (pupil) Defining some terms Trial : Stimulus presentation followed by a fixation (the two add up to 2 sec) followed by a jitter of 200 msec to 2000 ms. 
Mini block : presentation of 34 to 38 stimuli, in the beginning of which the target stimuli were presented. Block : composed of 4 mini blocks. At the end of each block, there was a break. Break : Pause between 2 blocks Successive trigger scheme The triggers were sent successively. The first trigger represented the stimulus type, followed by orientation, stimulus duration, and task relevance, all interspaced by 50 ms. Additionally, a trigger was sent upon key press. 1st Trigger (on Stimulus Onset): Stimulus Type 1 to 20: faces - 1 to 10 males, - 11 to 20 females 21 to 40: objects 41 to 60: letters 61 to 80: falses 2nd Trigger (2 Frames after Stimulus Onset): Stimulus Orientation 101: Center 102: Left 103: Right 3rd Trigger (4 Frames after Stimulus Onset): Stimulus Duration 151: 500 msec 152: 1000 msec 153: 1500 msec 4th Trigger (6 Frames after Stimulus Onset): Stimulus Task Relevance 201: Task relevant target 202: Task relevant non target 203: Task irrelevant 5th Trigger (8 Frames after Stimulus Onset): Trial ID Triggers 111-148: Trial number Response Trigger 255: Following button press. Stimulus Presentation End 96: Offset of stimulus presentation (onset of blank) 97: Offset of blank (onset of jitter period) Note that both these are fixations, they are just divided into blank and jitter. General Triggers to Mark Experiment Progression 86: Onset of experiment 81: Onset of recording 83: Offset of recording Miniblock ID Triggers 161-200: Miniblock ID trigger Zeroes 0: Zeros were sent between the successive triggers to reset the LPT, see below. These were also sent to the eye tracker but did not mean anything and they can safely be ignored. How The LPT Triggers Were Sent The LPT port of the computer was used for sending the triggers and it was done by using the sendTrig function. This function sets the port in a specific state (whatever trigger we want to send) and logs the trigger afterwards, noting if it is sent and what time the command for sending it is executed. For each trigger that is being sent, the port is being reset after a frame to 0. In the beginning of the experiment, a few triggers were sent to mark experiment onset and onset of recording. Then, a mini block was initiated. The participant was presented with the target screen and required to press the spacebar to proceed. When the participant pressed the space button, the miniblock ID was sent. Only once the miniblock trigger was sent the fixation appeared. This means that there was a small delay between key press and fixation onset. Following the first fixation, a jitter started, which was also logged. Then, the first stimulus was displayed. Upon the presentation of the stimulus, the successive triggers were initiated. The first trigger occurred directly after the onset of the stimulus, indicating the stimulus ID (1-80). Then, after 2 frames, the orientation trigger (101-103) was sent, followed by the duration trigger (151 to 153) at 4 frames, the task demand trigger (201-203) at 6 frames, and finally, the trial ID trigger (111 to 148) at 8 frames. Empty Room Recording Prior to each experiment, MEG signals from the empty room were recorded for 3-minutes. Resting-State (rM-EEG) The resting-state data for each participant was also recorded for 5-minutes and the subjects were asked to keep their eyes open and fixated on a point presented at the center of the screen. M-EEG signals were sampled at a rate of 1 kHz and band-pass filtered between 0.01 and 330 Hz prior to sampling. 
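The two code schemes above lend themselves to small decoding helpers. A minimal sketch with MNE-Python, assuming the stim channel is named "STI101" (typical for MEGIN systems; verify against the released files) and using a hypothetical file name:

```python
# Sketch: decode the 4-digit behavioral codes and the successive LPT
# trigger scheme described above.
import mne

raw = mne.io.read_raw_fif("sub-XX_task-dur_meg.fif")
events = mne.find_events(raw, stim_channel="STI101", shortest_event=1)

def decode_stim_id(code4):
    """Decode a 4-digit behavioral stimulus code such as '1219'."""
    stim_type = ("face", "object", "letter", "false font")[int(code4[0]) - 1]
    orientation = ("center", "left", "right")[int(code4[1]) - 1]
    return stim_type, orientation, int(code4[2:])   # ('face', 'left', 19)

def label_trigger(code):
    """Map one successive-trigger code to its meaning, per the scheme above."""
    if 1 <= code <= 80:                    # 1st trigger: stimulus type (20 IDs each)
        return ("face", "object", "letter", "false font")[(code - 1) // 20]
    if 101 <= code <= 103:                 # 2nd trigger: orientation
        return ("center", "left", "right")[code - 101]
    if 151 <= code <= 153:                 # 3rd trigger: duration
        return ("500ms", "1000ms", "1500ms")[code - 151]
    if 201 <= code <= 203:                 # 4th trigger: task relevance
        return ("target", "relevant non-target", "irrelevant")[code - 201]
    if 111 <= code <= 148:                 # 5th trigger: trial ID
        return f"trial_{code - 110}"
    return {255: "response", 96: "stimulus offset", 97: "jitter onset",
            86: "experiment onset", 81: "recording onset",
            83: "recording offset"}.get(code, "miniblock/other")

for sample, _, code in events[:15]:
    print(sample, code, label_trigger(code))
```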
Task (tM-EEG) Following the empty room and rM-EEG recordings, subjects were asked to complete the task defined in the Procedure section. tM-EEG consisted of 10 runs, with 4 blocks each. During each block, a total of 34-38 trials was presented, with 32 non-targets (8 of each category) and 2-6 targets (number chosen randomly). Rest breaks between runs and blocks were included. Random jitter was added at the end of each trial (mean inter-trial interval of 0.4 s jittered 0.2-2.0 s, truncated exponential distribution) to avoid periodic presentation of the stimuli. Task: Experiment 1; Runs: 10; Blocks: 4; Trials per block: 34-38; Total trials: 1440. Full Structure of Session The complete standard procedure of an M-EEG session is available in the MEG Standard Operating Procedure . Inclusion Criteria The items below were assessed for the subjects before the data was acquired: Age range: 18 to 35 (since over the age of 35 subjects might have a hard time maintaining central focus) Handedness: right Hearing problems: no Hearing aid: no Vision problems: no, or corrected-to-normal with soft lenses No MRI in the last week MRI compatible: no metal, medical implants, etc. No claustrophobia. Note: dental implants are allowed (particularly for non-magnetic materials) unless they generate large artifacts in the MEG signals, and this will be checked prior to MEG recording. No known history of psychiatric or neurological disorders, e.g., Not have been formally diagnosed with attention deficit (hyperactivity) disorder (AD(H)D). Not have been formally diagnosed with autism spectrum disorder (ASD) Not suffer from epilepsy Quality Check and Exclusion Criteria For M-EEG, the first stage of the third-level checks focused on system-related and external noise generators. It was tested using the signal spectra in the empty room recording, the resting state session, and the experiment itself for all sensors. Any sensor and/or specific frequency revealing extensive noise upon visual inspection was flagged to document potential problems. Ultimately, this did not lead to any exclusions. Next, all experimental data blocks were visually inspected for abnormalities in spectra (peaks not explainable by physiology), and in ICA components, and checked for extremely noisy (based on the score of differences between the original and Maxwell-filtered data > 7) and flat sensors. The latter step was performed in a collaboration between the data monitoring team and members of the centers where data was acquired to check whether any potential changes in preprocessing for particular subjects were needed. Finally, we tested if all experimental cells (i.e. task-relevant non-targets and task-irrelevant stimuli for each one of the four categories) had enough trials (N=30). iEEG Data Acquisition iEEG recordings were obtained from patients with pharmacologically resistant epilepsy undergoing invasive electrophysiological monitoring at the Comprehensive Epilepsy Center at New York University (NYU) Langone Health Center, Brigham and Women\u2019s Hospital, Children\u2019s Hospital Boston (Harvard Medical School), and University of Wisconsin School of Medicine and Public Health (WU).
Hardware Brain activity was recorded with a combination of intracranial subdural platinum-iridium electrodes embedded in SILASTIC sheets (2.3 mm diameter contacts, Ad-Tech Medical Instrument and PMT Corporation) and/or depth stereo-electroencephalographic platinum-iridium electrodes (PMT Corporation; 0.8-mm diameter, 2.0-mm length cylinders; separated from adjacent contacts by 1.5 to 2.43 mm), or Behnke-Fried depth stereo-electroencephalographic platinum-iridium electrodes (Ad-Tech Medical, BF08R-SP21X-0C2, 1.28 mm in diameter, 1.57 mm in length, 3 to 5.5 mm spacing). The decision to implant, electrode targeting, and the duration of invasive monitoring were solely determined on clinical grounds and without reference to this or any other study. Electrodes were arranged as grid arrays (either 8 \u00d7 8 with 10 mm center-to-center spacing, 8 x 16 contacts with 3 mm spacing, or hybrid macro/micro 8 x 8 contacts with 10 mm spacing and 64 integrated microcontacts with 5 mm spacing), linear strips (1 \u00d7 8/12 contacts), depth electrodes (1 \u00d7 8/12 contacts), or a combination thereof. Subdural electrodes covered extensive portions of lateral and medial frontal, parietal, occipital, and temporal cortex of the left and/or right hemisphere. Recordings from grid, strip and depth electrode arrays were done using a Natus Quantum amplifier (Pleasanton, CA) or a Neuralynx Atlas amplifier (Bozeman, MT). A total of 4057 electrodes (892 grids, 346 strips, 2819 depths) were implanted across 32 patients with drug-resistant focal epilepsy undergoing clinically motivated invasive monitoring. 3512 electrodes (780 grids, 307 strips, 2425 depths) that were unaffected by epileptic activity, artifacts, or electrical noise were used in subsequent analyses. To determine the electrode localization for each patient, a postoperative CT (computed tomography) scan and a pre-operative T1 MRI were acquired and co-registered. Recordings were obtained continuously during the patients\u2019 stay in the hospital. All data was stored with stimulus and timing markers permitting offline synchronization. Anatomical MRI Data Acquisition Before the participants underwent surgery and electrode implantation, T1-weighted MR data were acquired from them. At NYU, imaging was performed using the Siemens Biograph mMR scanner. At Harvard, the imaging sequence utilized was MPRAGE (magnetization-prepared rapid gradient-echo), with a Siemens Skyra 3T scanner. At WU, imaging was conducted using the GE MEDICAL SYSTEMS SIGNA Artist scanner. The rationale behind acquiring MR scans was the spatial resolution they offer for brain tissue visualization. CT Data Acquisition Following surgery, post-operative CT scans were obtained from the subjects to assist in localizing the electrodes on specific brain tissue. At NYU, scans were performed using a Siemens SOMATOM Force scanner. At Harvard, imaging was conducted using the Medtronic O-arm MVS O2, manufactured by Medtronic. At WU, scans were acquired utilizing the GE MEDICAL SYSTEMS Optima CT660 scanner. Please note: MR and CT data were collected for the subjects at Brigham and Women\u2019s Hospital and Children\u2019s Hospital Boston. However, due to the data protection policies, they are not included in the COGITATE Data Release.
Behavioral Data Acquisition The task was implemented using Matlab (Harvard: R2020b; NYU: R2020a; WU: R2021a), Psychtoolbox v.3 (Pelli, 1997), and run on a Dell Precision 5540 laptop, with a 15.6\" Ultrasharp screen (screen size 345 x 195 mm2; resolution 1920 x 1080) at NYU and Harvard and on a Dell D29M PC with an Acer V196WL 19\" LED LCD monitor (screen size 406.4 x 254 mm2; resolution 1440 x 900) at WU. The distance between the subject\u2019s eyes and the screen was nominally 80 cm, but the actual distance was measured for each subject before the start of recording to ensure that the size of the stimulus was 6 x 6 degrees of visual angle. Participants responded using an 8-button response box (Millikey LH-8; response hand(s) varied based on the setting in the patient\u2019s room). Eye Tracking Data Acquisition At Harvard and Wisconsin, an EyeLink 1000 Plus camera was used to collect eye-tracking data, and a thirteen-point calibration was performed several times during the experiment. The calibration was performed at the beginning of the experiment and repeated between blocks if necessary to meet precision requirements. At NYU, eye-tracking data was collected throughout the duration of the experiment using a Tobii-4C eye-tracker. A nine-point calibration was performed several times during the experiment. Pupil size and corneal reflection data were collected at a sampling rate of 500 Hz at Harvard and Wisconsin and at a sampling rate of 90 Hz at NYU. The EyeLink system recorded monocular data, while the Tobii system recorded binocular data. For the former cases, only one eye was recorded, as determined by ocular dominance. The experiment was not influenced by the eye-tracking recording. Behavioral Data Code Scheme The behavioral code scheme is similar to the M-EEG modality, which is explained in this section . Eye Tracker Data Code The eye tracker code scheme for the iEEG modality follows a similar structure to that described for M-EEG data. You can find detailed explanations here . iEEG Code Scheme Photodiode Trigger Scheme For ECOG patients, the type of port utilized by the M-EEG team (LPT) was incompatible with our recording system. Consequently, a photodiode was employed. A photodiode is an electronic device that records changes in luminance and converts them into voltage. An example of a photodiode In the experimental code, it was ensured that when a new event occurred on the screen (such as stimulus onset or stimulus offset), a white flash appeared in the bottom right corner. The photodiode device was positioned atop the flashing square and connected to the amplifier recording the iEEG channel signals. This additional channel facilitated the identification of event onsets in our task. This type of recording only allows binary signals (the photodiode is either on or off). However, specific events were encoded with varying numbers of subsequent pulses. Stimulus Presentation Onset The flashing square was flashed only once at the onset of each new stimulus. Stimulus Presentation Offset The flashing square was flashed only once at the offset of each stimulus. Start of the Inter-Trial Interval The flashing square was flashed only once at the beginning of the inter-trial interval. The inter-trial interval was initiated 2 seconds after stimulus onset and persisted for a random duration (following a truncated exponential distribution between 0.2 and 2 seconds, with a mean of 0.4 seconds). Block Start The start of an experimental block was marked by sending 4 consecutive pulses.
Block End The end of an experimental block was marked by sending 2 consecutive pulses. Experiment Start and End The beginning and end of the experiment were marked by sending 3 consecutive pulses. Schematic representation of the photodiode channel Log File Alignment The photodiode channel solely indicates when a specific event occurred in the iEEG signals, lacking any information about the nature of the event (unlike an LPT trigger in MEG). To identify specific events in the signal, the timing information from the log file was combined with that from the photodiode. The log file contains a description of each presented event along with a corresponding time stamp from the experimental computer. The photodiode channel recorded time stamps for each event, indicating when it occurred according to the acquisition computer clock. The goal was to align the log file and the photodiode to associate each event in the photodiode signal with the corresponding event description in the log file. This step was crucial since misalignment could lead to incorrect event descriptions in the iEEG signal, compromising the entire analysis. The procedure relies on the fact that both the log file and the photodiode had timestamps. These timestamps were recorded on different clocks. Unfortunately, computer clocks tended to drift away from one another, and these drifts accumulated to be quite significant over extended periods of time (they could be several seconds apart after 1 hour). Therefore, the timestamps of the photodiode and the log file could not be used interchangeably. However, over short periods of time, these drifts were negligible. What this meant was that the interval between two successive timestamps in the log file should be quite consistent with the intervals between two successive events in the photodiode. This provided us with the most thorough check possible: if the events in the log file and in the photodiode were aligned, then there should be only tiny differences between the differences between successive events in both. Here is a step-by-step description of the alignment procedure. Extract the Photodiode Timestamps The timestamps from the photodiode triggers were extracted as the first step. As illustrated in the figure Schematic representation of the photodiode channel , a square pulse was generated for each event during the recording. The onset of each of these pulses was sought. To achieve this, a threshold was initially established, below which the photodiode was considered to be in the off state and above which it was considered to be on (based on visual inspection of the data, which was facilitated by the clean nature of photodiode signals). Subsequently, the signal was binarized using this threshold (signal_bin = signal > threshold), resulting in a signal consisting only of ones and zeros. Next, the discrete difference of the binary signal was computed (y(i + 1) = y(i + 1) - y(i)). This operation produced a \u201c1\u201d when the photodiode transitioned from off to on (onset) and a \u201c-1\u201d when it transitioned from on to off (offset). Since only the onset was of interest, the timestamps of the ones were extracted, representing the timestamps of the photodiode. Verify Event Count Alignment The first step in aligning the photodiode events and the log files was to check if the number of events in each matched. If they did not match, then there was a problem. 
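The extraction and counting steps above, together with the interval check described next, reduce to a few lines of array arithmetic. A minimal NumPy sketch, with the file names, threshold choice, and sampling rate as assumptions:

```python
# Sketch: photodiode onset extraction and log-file alignment check,
# following the procedure described above.
import numpy as np

signal = np.load("photodiode_channel.npy")   # hypothetical: photodiode trace
threshold = 0.5 * signal.max()               # in practice chosen by visual inspection

binary = (signal > threshold).astype(int)    # 1 = photodiode on, 0 = off
transitions = np.diff(binary)                # +1 at onsets, -1 at offsets
onsets = np.where(transitions == 1)[0] + 1   # sample indices of pulse onsets

# Event-count check, then interval comparison: clock drift is negligible
# over short spans, so successive-event intervals should nearly match.
log_times = np.loadtxt("log_timestamps.txt") # hypothetical: log-file times (s)
assert len(onsets) == len(log_times), "event counts differ; check both sources"

sfreq = 1024.0                               # assumed iEEG sampling rate
drift = np.abs(np.diff(onsets) / sfreq - np.diff(log_times))
print(drift.max())                           # should stay near zero if aligned
```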
Aligning the Two Signals To verify the alignment of the two signals, the discrete difference of the photodiode and log file timestamps was computed, providing the interval between successive events for each signal. The resulting arrays were then plotted atop each other. Misalignment between the two sources of timing information could be easily detected, as the two traces would not overlap. Perfect overlap between the two was necessary to consider the signals aligned. Additionally, the difference between the two signals was computed to ensure minimal deviation. Integrating Information Once the two signals were properly aligned, the log file events could be used as descriptors of the events marked at the timestamps from the photodiode. Illustration of the alignment pipeline The alignment procedure allowed the information from the log file to be utilized in generating well-described events in the iEEG data. The events were encoded as \u201c/\u201d separated strings and stored in an events.tsv table. An extensive description of each event type and its levels follows: Task Name: Dur Task Description: Description of the experimental factors and their levels with the correct syntax to access them from the MNE epochs object. Note that factor and level names are case-sensitive. We describe the syntax for each condition separately. However, note that you can fetch a combination of factors from the epochs using a forward slash. For example, if you wish to fetch the face target trials, you can combine both conditions like so: epochs['face/Relevant target'] will fetch all face target trials. In addition, the epochs objects are equipped with metadata, where the name of the column is the name of the factor, and the level follows the same nomenclature as below. This can be used for more sophisticated trial filtering and retrieval (a code sketch illustrating this syntax follows the factor list below). Experimental Design: Event Type: Marks the different events occurring within a trial. Factor Type: Categorical Factor 1: Name: stimulus onset Description: Marks the onset of the visual stimuli. With epochs['stimulus onset'], extract all epochs time-locked to the visual stimulus onset. Factor 2: Name: stimulus offset Description: Marks the offset of the visual stimuli. With epochs['stimulus offset'], extract all epochs time-locked to the offset of the visual stimulus. Factor 3: Name: jitter onset Description: Marks the beginning of the inter-trial jitter. All trials lasted 2 sec., with an added jitter of 400ms on average. With epochs['jitter onset'], extract all epochs time-locked to the beginning of the jitter period (2 sec. after stimulus onset). Block: Marks the experimental blocks. Factor Type: Discrete Factor 1: Name: block_* Description: Experimental blocks. Our experiment consisted of 5 blocks, in between which participants were allowed to take a break. With epochs['block_1'], extract all epochs of the first experimental block. Miniblock: Marks the experimental miniblocks. Factor Type: Discrete Factor 1: Name: miniblock_* Description: Experimental miniblocks. Each experimental block consisted of 4 miniblocks. At the beginning of each miniblock, the two target stimuli were presented to the participant, which the participant had to remember to be able to detect in the stream of stimuli. With epochs['miniblock_1'], extract all epochs of the first experimental miniblock. Category: Category of the visual stimuli. Factor Type: Categorical Factor 1: Name: face Description: Identifies face trials. With epochs['face'], extract all epochs in which a face was presented.
Factor 2: Name: object Description: Identifies object trials. With epochs['object'], extract all epochs in which an object was presented. Factor 3: Name: letter Description: Identifies letter trials. With epochs['letter'], extract all epochs in which a letter was presented. Factor 4: Name: false Description: Identifies false font trials (i.e., symbols). With epochs['false'], extract all epochs in which a false font was presented. Identity: Identity of the visual stimuli. Factor Type: Categorical Factor 1: Name: face_* Description: Identifies the identity of face trials. With epochs['face_*'], extract all epochs in which that specific face was presented. IDs run from 1-20; IDs 1-9 carry a leading 0 (e.g., face_01). Factor 2: Name: object_* Description: Identifies the identity of object trials. With epochs['object_*'], extract all epochs in which that specific object was presented. IDs run from 1-20; IDs 1-9 carry a leading 0. Factor 3: Name: letter_* Description: Identifies the identity of letter trials. With epochs['letter_*'], extract all epochs in which that specific letter was presented. IDs run from 1-20; IDs 1-9 carry a leading 0. Factor 4: Name: false_* Description: Identifies the identity of false font trials (i.e., symbols). With epochs['false_*'], extract all epochs in which that specific false font was presented. IDs run from 1-20; IDs 1-9 carry a leading 0. Orientation: Orientation of the displayed stimuli. Factor Type: Categorical Factor 1: Name: Center Description: Identifies stimuli presented in the center orientation. With epochs['Center'], extract all epochs in which a stimulus was presented in the center orientation. Factor 2: Name: Left Description: Identifies stimuli presented in the Left orientation. With epochs['Left'], extract all epochs in which a stimulus was presented in the Left orientation. Factor 3: Name: Right Description: Identifies stimuli presented in the Right orientation. With epochs['Right'], extract all epochs in which a stimulus was presented in the Right orientation. Duration: Duration a visual stimulus was presented for. Factor Type: Categorical Factor 1: Name: 500ms Description: Identifies stimuli presented for 500ms. With epochs['500ms'], extract all epochs in which the stimulus was displayed for 500ms. Factor 2: Name: 1000ms Description: Identifies stimuli presented for 1000ms. With epochs['1000ms'], extract all epochs in which the stimulus was displayed for 1000ms. Factor 3: Name: 1500ms Description: Identifies stimuli presented for 1500ms. With epochs['1500ms'], extract all epochs in which the stimulus was displayed for 1500ms. Task Relevance: Task relevance of a given trial. Factor Type: Categorical Factor 1: Name: Relevant target Description: Identifies target stimuli. Target stimuli are presented at the beginning of each miniblock, and participants must detect them among the sequence of presented stimuli by pressing a button. With epochs['Relevant target'], extract all target trials. Factor 2: Name: Relevant non-target Description: Identifies task-relevant non-target stimuli. We considered task-relevant stimuli that were of the same category as the target but of a different identity. With epochs['Relevant non-target'], extract all task-relevant non-target trials. Factor 3: Name: Irrelevant Description: Identifies task-irrelevant non-target stimuli. We considered task-irrelevant stimuli that were of a different category than the target. With epochs['Irrelevant'], extract all task-irrelevant non-target trials. Response: Rated response of the participants.
Factor Type: Categorical Factor 1: Name: Hit Description: Participants correctly identified a target by pressing a button. With epochs['Hit'], extract all target trials for which the participants pressed a key. Factor 2: Name: CorrRej Description: Participants correctly rejected a non-target stimulus and did not press any button. With epochs['CorrRej'], extract all non-target trials for which the participants did not press a key. Factor 3: Name: Miss Description: Participants failed to press a button when a target stimulus was presented. With epochs['Miss'], extract all target trials in which participants failed to press a button. Factor 4: Name: FA Description: Participants mistakenly pressed a button when a non-target stimulus was presented. With epochs['FA'], extract all non-target trials in which participants pressed a button. Factor 5: Name: n.a. Description: For the events stimulus offset and jitter onset, the response is set to n.a. as the response relates to the visual stimulus, not to the other events. This should not be used to access the data.
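Putting the selection syntax above into practice, a minimal MNE-Python sketch might look as follows. The file name is a hypothetical placeholder, and the metadata column names are assumed to match the factor names listed above:

```python
import mne

# Hypothetical epochs file name for one subject; not an actual release path.
epochs = mne.read_epochs("sub-XXXX_task-Dur-epo.fif")

faces = epochs["face"]                         # all face trials
face_targets = epochs["face/Relevant target"]  # combine factors with "/"
long_trials = epochs["1500ms"]                 # 1500 ms presentations

# Equivalent selection through the metadata dataframe; the column names are
# assumed to match the factor names listed above.
mask = ((epochs.metadata["Category"] == "object")
        & (epochs.metadata["Task Relevance"] == "Irrelevant"))
irrelevant_objects = epochs[mask]
```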
Surface Reconstruction and Electrode Localization Subject-specific pial surfaces were automatically reconstructed based on a pre-implant T1 weighted MR image using the Freesurfer image analysis suite (\u2018recon-all\u2019, Dale et al., 1999). Post-implant CT images were co-registered with the pre-implant MR images using FLIRT (Jenkinson and Smith, 2001), as implemented in FSL (Smith et al., 2004). For NYU patients, we used a semi-automatic approach to generating electrode labels. For manual cases, co-registered MR and CT slices were examined using FSLView (Smith et al., 2004). For grids, we localized three corner electrodes, and the remaining electrode coordinates were then automatically interpolated along the shared plane using the known inter-electrode distances. Strip and depth electrodes were localized manually when they did not follow straight trajectories. When depth electrodes were in a straight line, the first and last electrodes were localized manually, and the electrodes in between were automatically interpolated and labeled based on known inter-electrode distances and the serial labeling convention. For WU patients, electrodes were localized manually using the SubNuclear toolbox . Electrode locations were further refined within the space of the pre-operative MRI using three-dimensional non-linear thin-plate spline warping (Rohr et al., 2001), which corrected for post-operative shift and distortion. The warping was constrained with manually selected points through the brain, which were visually aligned with landmarks in the pre-implantation MRI and post-implantation CT. For Harvard subjects, individual contacts from depth electrodes were labeled manually from the CT image using the BioImage Suite \u2019s Electrode Editor tool (legacy version 3.5; Joshi, et al., 2011). The coordinates in CT image-space were converted to coordinates within the patient\u2019s segmented MRI brain-space using the iELVis toolbox (yangWangElecPjct; Yang, Wang, et al., 2012; Groppe et al., 2017). For all sites, the electrode spatial coordinates were transformed from the individual patient space into the standard space of the Montreal Neurological Institute (MNI-152) template for plotting purposes. At NYU, this transformation was performed using the DARTEL algorithm (Ashburner, 2007) implemented in SPM8 (Wellcome Department of Imaging Neuroscience, London, United Kingdom). At Harvard, this transformation was performed using the iELVis toolkit . At WU, the transformation was performed with the SubNuclear toolbox using the img2imgcoord utility. Finger Localizer Task In the Finger Localizer task, participants were presented with four circles, one of which was filled with a specific color, serving as a cue for participants to press the corresponding colored button on the response box. The filled state of the circle persisted for the duration of the response time, followed by an additional delay of 200 milliseconds. The Inter-Trial Intervals (ITIs) were uniformly distributed, with a mean of 0.55 seconds and a range from 0.400 to 0.700 seconds. The experimental protocol comprised 80 trials, distributed equally among the four colors, with 20 trials per color, and the sequence of trials was randomized. This task aimed to identify brain regions responsible for motor control, particularly those governing finger movements, and to pinpoint electrodes selectively activated by specific motor responses, such as button presses. An illustration depicting a trial in which the participant is required to press the blue button Please note: Although participants completed this task concurrently with Experiment 1 , we did not utilize the data in the analysis, as it was primarily acquired for use in Experiment 2 . Consequently, the data pertaining to the Finger Localizer task is not included in this version of our data release. Task (tiEEG) Participants proceeded to Experiment 1 either before or after completing the Finger Localizer task . tiEEG consisted of 5 runs of 4 blocks each, with 34-38 trials per block (32 non-targets, 8 of each category, and 2-6 targets), each trial lasting approximately 2.4 s, for a total of 720 trials. Rest breaks between runs and blocks were included. Random jitter was added at the end of each trial (mean inter-trial interval of 0.4 s, jittered 0.2-2.0 s, truncated exponential distribution) to avoid periodic presentation of the stimuli. Additional information about the task can be found here . Task: Experiment 1; Runs: 5; Blocks: 4 per run; Trials: 34-38 per block; Total trials: 720. Full Structure of Session The complete standard procedure of an iEEG session is available in the iEEG Standard Operating Procedure . Inclusion Criteria For the iEEG studies, subjects were 10-65 years old, were able to provide informed consent, had an IQ > 70, were fluent in English, with self-reported normal hearing, normal or corrected-to-normal vision, and cognitive and language abilities within or above the normal range in formal neuropsychological testing performed before surgery. They must not have had an electrographic seizure within 3 hours prior to testing. Quality Check A comprehensive quality assessment was conducted on the iEEG data. The data underwent manual annotation by epileptologists, excluding channels within the epileptic onset zone, as well as those exhibiting artifacts or showing complete flatness due to electrode contact issues. Channel rejection was independently performed by both the data monitoring and iEEG teams, with results compared to ensure consistency. Additionally, electrode reconstruction was verified to align with subjects' CT scans. Finally, we inspected for significant disturbances in the spectra. Exclusion Criteria Subjects who were unable to complete a sufficient number of trials due to excessive muscular artifacts, movement, noisy recordings, or a decision by the subject to terminate the experiment were excluded.
Subjects who exhibited a low performance in the attention task were also excluded \u2013 this translates to <70% Hits and >30% FAs. In addition, data was also excluded if it failed any of the pre-defined data quality checks. Deviations from Data Curation Procedure Our approach to defacing MR/CT DICOM images involved utilizing the built-in face masking tool from XNAT. However, for 12 iEEG subjects, we encountered difficulties in executing this step. This was primarily due to variations in the DICOM images, which could include cropped images, aliasing artifacts, broken headers, missing slices, non-equidistant slices within a series, and other issues. Below is the list of subjects where the execution of the XNAT facemasking tool failed: Subject_ID: CF103, CF104, CF112, CF113, CF116, CF117, CF120, CF121, CF122, CF124, CF125, CF126 To address this issue, we implemented a slightly different workflow that allowed us to successfully deface the MR/CT images of these 12 subjects. Unlike the XNAT facemasking tool, however, this workflow could not regenerate defaced DICOM images post-defacement; instead, it generated defaced NIFTI images as its primary output. For the current version of the data release, we have decided to share only the defaced NIFTI images for these subjects. Details about this workflow are provided below: Anonymization: MR/CT DICOM images underwent anonymization to remove the subject\u2019s Protected Health Information (PHI). NIFTI Conversion: Anonymized DICOM images were then converted to the NIFTI image format using the dcm2niix package (version: 1.0.20220505) (Li et al., 2016). Defacing of NIFTI: Defacing of the NIFTI images was performed using the PyDeface package (version: 2.0.2) (Gulban et al., 2022). Verification: This step involved checking the quality of the defaced NIFTI images using 2D/3D image plots to compare before and after the defacing stage. Alternative workflow for defacing 12 challenging MR/CT DICOM Images Miscellaneous: In the MR data for subject CF103, one DICOM slice was inadvertently dropped during the conversion process from DICOM to NIFTI format. However, the resulting NIFTI file remains functional and usable.","title":"COGITATE Dataset"},{"location":"03_cogdataset/#cogitate-dataset","text":"The COGITATE dataset is a comprehensive collection of multimodal neuroimaging data, encompassing a total of 262 subjects. COGITATE employs three distinct neuroimaging techniques: fMRI, M-EEG, and iEEG/ECoG. The M-EEG modality comprised 102 healthy subjects above the age of 18, with no known psychiatric or neurological issues. These participants were sourced from the Centre for Human Brain Health at the University of Birmingham (Birmingham, United Kingdom) and the Center for MRI Research of Peking University (Beijing, China). Similarly, the fMRI modality included 122 healthy volunteers, all of whom were above the age of 18 and predominantly right-handed. These participants had no known history of psychiatric or neurological disorders and were recruited from the Yale Magnetic Resonance Research Center (New Haven, CT, United States) and the Donders Centre for Cognitive Neuroimaging (Nijmegen, Netherlands). In contrast, the iEEG modality involved a more specialized cohort of 38 patients diagnosed with pharmaco-resistant focal epilepsy. These participants ranged in age from 10 to 65 years, had an IQ above 70, and met specific health criteria.
They were recruited from multiple medical centers specializing in epilepsy treatment, including the Comprehensive Epilepsy Center at New York University (New York, NY, United States), Brigham and Women\u2019s Hospital, Boston Children\u2019s Hospital (Boston, MA, United States), and the University of Wisconsin School of Medicine and Public Health (Madison, WI, United States). Age histograms across modalities Sex proportions across modalities Handedness proportions across modalities","title":"COGITATE Dataset"},{"location":"03_cogdataset/#demography-of-subjects","text":"You can find the profile of participants for all modalities at subjects_demography . Here is a brief explanation of the information collected from the subjects.","title":"Demography of Subjects"},{"location":"03_cogdataset/#demographic-information-for-m-eeg","text":"The following items are included in the subjects\u2019 demography for the M-EEG modality: Participant_ID (participant identifier), sex (biological sex of participant), age (age of participant at the time of testing), handedness (right, left or ambidextrous), included in MSP (whether the data was used for the experiments or not), phase* (determining in which stage the data is, phase 2/phaseII (optimization) or phase 3/phaseIII (replication)), QC** status (passed/not), if Not (in QC status) QC rejection reason, weight (weight of participant at the time of study in pounds), height (height of participant at the time of study in inches), primary and secondary language, race (ethnicity of the participant), education, compensation (whether the subject was paid or not), colorblind (whether the subject can distinguish colors and perceive them correctly or not), visual correction (none or using any glasses or contact lenses), eye dominance (which eye is dominant), eye chart results (the outcome of a visual acuity test performed using the eye chart) and dioptry (visual acuity of the participant in Diopters). * Phase: The COGITATE project has three phases. In phase 1 , all data were acquired by theory neutral teams. To ensure replicability of the results, the entire dataset was split into two halves, each with an equal mixture of data from each of the labs for each modality. In phase 2 , after evaluating data quality, the first half of the data were used for developing analysis tools (optimization of methods). The purpose of phase 2 was to define best analysis practices and to agree upon them, in consultation with expert advisors. In phase 3 , the replication phase, the second half of the data were analyzed using the agreed-upon protocols, thereby allowing an in-house replication of the results obtained in phase 2.
** QC (quality control): A number of items were checked for all the data of each modality; these are elaborated in the Quality Check and Exclusion Criteria section.","title":"Demographic Information for M-EEG"},{"location":"03_cogdataset/#demographic-information-for-fmri","text":"All of the items are similar to the M-EEG modality.","title":"Demographic Information for fMRI"},{"location":"03_cogdataset/#demographic-information-for-ieeg","text":"In addition to the properties mentioned for the M-EEG modality, the following parameters were also provided for this modality: Electrode scheme (the scheme used for implanting the electrodes, Stereo, Subdural grid & Strips), number of implanted electrodes, implant hemisphere (brain hemisphere where the electrodes were implanted, right, left, both or bilateral), IQ (score and name of the test used for assessment (FSIQ, WISC, VCI, POI, WMI, PSI, AMI, VMI)), WADA (intracarotid sodium amobarbital, a test that determines which side of the subject\u2019s brain controls language and memory functions), seizure type (classification of seizure type), age of onset (age at which the first symptoms of seizure appeared), auditory normal hearing (indicator of whether the participant had normal hearing capabilities, yes or no), epilepsy seizure classification (categorization of epilepsy as per standard seizure classification), epilepsy seizure aura (description of any sensory or perceptual symptoms before a seizure occurred), epilepsy seizure semiology (signs and symptoms exhibited during epileptic seizures), epilepsy seizure frequency (frequency of seizures experienced by participant), epilepsy post ictal semiology (symptoms and signs after an epileptic seizure), epilepsy trigger (identified factors or circumstances that increased the likelihood of experiencing a seizure), epilepsy duration uncontrolled (the duration that seizures had not been successfully managed or medically controlled), epilepsy seizure onset zone (brain region identified as the initial site of seizure activity), epilepsy resection (details of any surgical resection performed for seizure control), epilepsy language lateralization (determination of the dominant hemisphere for language function), epilepsy past surgical history (record of any previous surgeries related to the treatment of epilepsy), epilepsy past medical history (medical history relevant to epilepsy diagnosis and treatment), epilepsy family history (presence of seizure or epilepsy disorders in family members), other neurological disorders (any other diagnosed neurological disorders besides epilepsy), epilepsy MRI findings (summary of MRI findings relevant to epilepsy diagnosis), epilepsy pathology findings (pathological findings from tissue analysis post-surgery or biopsy).","title":"Demographic Information for iEEG"},{"location":"03_cogdataset/#quality-check","text":"Data from all modalities were checked at three levels. The first level checks tested whether the datasets contained all expected files following the naming conventions, and whether all personal information had been removed. The second level checks tested subjects\u2019 performance with respect to behavior. For Experiment 1 , subjects were excluded if their hit rate was lower than 80% or their false alarm (FA) rate was higher than 20% for M-EEG and fMRI; for iEEG, a more relaxed criterion of 70% Hits and 30% FAs was used. Two M-EEG subjects were excluded due to low hit rates and one iEEG patient was excluded due to high FAs.
The third level checks assessed the quality of the neural data.","title":"Quality Check"},{"location":"03_cogdataset/#exclusion-criteria","text":"The generic exclusion criteria used across Experiment 1 and Experiment 2 included: (a) insufficient number of trials in each of the experimental conditions (<30 for M-EEG or <20 for fMRI), due to excessive muscular artifacts, movement, noisy recording, or subjects deciding to stop the experiments. If a given analysis showed that a good enough signal could be obtained with fewer trials, these numbers were amended; and (b) low performance in the attention tasks. In Experiment 1 , this translates into: <80% Hits, >20% FAs for fMRI and M-EEG subjects; <70% Hits, >30% FAs for iEEG patients. In addition, data was excluded from analysis if it failed any of the predefined data quality checks.","title":"Exclusion Criteria"},{"location":"03_cogdataset/#description-of-cogitate-data","text":"Although our data collection had a specific purpose, the data we gathered holds potential value for a range of diverse inquiries. Consequently, the COGITATE consortium has chosen to openly share all raw data collected (including the data that did not pass the quality criteria), to facilitate its utilization for various research endeavors and promote data reusability. We have made available two primary formats for the data acquired during the experimental phase of the COGITATE project, specifically Experiment 1 : Unprocessed/Raw Data BIDS Format","title":"Description of COGITATE Data"},{"location":"03_cogdataset/#1-unprocessedraw-data","text":"The unprocessed data format closely resembles the original acquired data, having undergone minimal processing to ensure compliance with GDPR (General Data Protection Regulation)/ HIPAA (Health Insurance Portability & Accountability Act) anonymity standards.","title":"1. Unprocessed/Raw Data"},{"location":"03_cogdataset/#2-bids-format","text":"BIDS format, widely adopted in cognitive neuroscience, enhances data reusability. To facilitate others in leveraging our data, we have released it in BIDS format.","title":"2. BIDS Format"},{"location":"03_cogdataset/#file-type-glossary","text":"Here are the various file formats used for each modality of the COGITATE dataset, along with a short description of each.","title":"File Type Glossary"},{"location":"03_cogdataset/#eye-tracking-behavioral-data","text":"Unprocessed/Raw release format - Filetype: ASC/CSV BIDS Format - Filetype: ASC/CSV The two eye trackers used within COGITATE are: 1. EyeLink eye tracker 2. Tobii eye tracker 1) EyeLink eye tracker : Most of the sites used this eye tracker, which produces data in the EyeLink Data Format (EDF). This data was immediately converted to ASCII text files using the converter provided by EyeLink. These ASC files are the ones used in our data release. 2) Tobii eye tracker : The other eye tracker was the Tobii eye tracker used by New York University Langone for ECOG data. This eye tracker produces data in the form of CSV files. These files, generated by the eye-tracking systems, contain information about eye movement and gaze behavior. They typically store a time-stamped sequence of gaze data points and include information such as: Timestamps: The exact time at which each gaze data point was recorded. Gaze Coordinates: The x and y coordinates on the screen where the person's gaze is directed. Pupil Diameter: The size of the person's pupil, which can provide insights into changes in visual processing or cognitive load.
Fixations: Periods of stable gaze where the person is looking at a specific point without significant movement. Saccades: Rapid eye movements between fixations, indicating shifts in attention. Blinks: Instances when the person's eyes are closed, which can be important for data cleaning and analysis. Behavioral data is available in CSV format and provides the following information: Blocks Events Trials Stimulus and jitter duration Subject's responses","title":"Eye Tracking & Behavioral Data"},{"location":"03_cogdataset/#m-eeg-data","text":"Unprocessed/Raw release format Filetype: FIF BIDS Format Filetype: FIF File Format for the Input and Output of MEG and EEG data FIF files contain various types of information related to neuroimaging data, including: Raw sensor data: MEG and EEG measurements recorded from sensors placed on the scalp or near the head. Event information: Time-stamped triggers or markers indicating the timing of events, such as stimulus presentations or subject responses. Sensor locations and orientations: Information about the physical positions and orientations of sensors used in the measurements. Head geometry: Information about the shape and structure of the subject's head, which is crucial for accurate source localization. Covariance matrices: Statistical information about the relationships between sensor measurements at different time points or frequencies. Anatomical MRI data: High-resolution structural images of the subject's brain, used for source localization and spatial alignment.","title":"M-EEG Data"},{"location":"03_cogdataset/#ieeg-data","text":"Unprocessed/Raw release format Filetype: EDF BIDS Format Filetype: EDF European Data Format files are used for storing and exchanging time-series biological and physiological data. EDF files are designed to accommodate data from multiple channels, allowing researchers to store and manage data collected simultaneously from different sensors or electrodes. The format supports both raw signal data and associated metadata, including information about sampling rates, units of measurement, patient demographics, and recording conditions.","title":"iEEG/ECoG Data"},{"location":"03_cogdataset/#data-acquisition","text":"The Cogitate dataset encompasses three distinct neuroimaging modalities, along with synchronized eye-tracking and behavioral data linked to each of these modalities. Here we detail the acquisition protocol for each modality in the corresponding data release: M-EEG, iEEG","title":"Data Acquisition"},{"location":"03_cogdataset/#stimuli","text":"Stimuli belonged to four categories that naturally fell into two groups that were clearly distinct from each other: pictures (20 faces and 20 objects) and symbols (20 letters and 20 false-fonts). Face stimuli were created using the FaceGen Modeler 3.1 program and object stimuli were taken from the Object Databank (Tarr, 1996). Faces and objects were grey-scaled (RGB: 125, 125, 125), and manipulated to have similar size and equal luminance using the SHINE toolbox (Willenbockel et al., 2010). Equal proportions of male and female faces were presented. They all had hair and belonged to different ethnicities (e.g., Caucasian, Asian, African American) to facilitate face individuation. The orientation of the stimuli was manipulated, such that half of the stimuli from each category had a side view and the other half a front view. All letter stimuli and false fonts were generated with MAXON CINEMA 4D Studio (RC-R20) 20.059 on macOS 10.14, appearing in gray (RGB: 125, 125, 125).
Three views were rendered for each font set (real font, false/pseudo font) at 0\u00b0, 30\u00b0 and -30\u00b0 horizontal viewing angle with the following settings: Extrusion depth 9.79% of character height, camera distance 5.65 times character height and 18\u00b0 above the center of the letter (High Angle), with a simulated focal length of 135 mm (35 mm equiv.). All stimuli were presented on a rectangular aperture at an average visual angle of 6\u00b0 by 6\u00b0.","title":"Stimuli"},{"location":"03_cogdataset/#procedure","text":"Stimuli were presented sequentially, all supra-threshold, with half being task-relevant and the other half task-irrelevant. Only one stimulus was shown on the screen at any given time. To define task relevance, subjects were instructed to detect two targets from different categories, regardless of their orientation. This online reporting enabled an explicit assessment of subjects\u2019 performance, engaging report-related areas for later analysis. Each block began with notification of the two target stimuli, either pictorial (faces and objects) or symbolic (letters and false fonts), creating a clear distinction between relevant and irrelevant stimuli. At the start of each block, specific target stimuli were revealed with instructions such as \u201cdetect face A and object B\u201d or \u201cdetect letter C and false-font D\u201d. Targets did not repeat across blocks. Each run included two blocks of the Face/Object task and two blocks of the Letter/False-font task, with the order counterbalanced across runs. Subjects were instructed to maintain central fixation throughout each trial. Gaze was monitored online through an eye tracker, with repeated calibrations ensuring good quality data. Each block comprised stimuli from all four categories, with each stimulus displayed for 500, 1000, or 1500 ms, followed by a blank interval, ensuring a consistent trial duration of 2000 ms. Within each block, three trial types were presented: i) Task Relevant Targets, consisting of the specific stimuli participants were tasked with detecting; ii) Task Relevant Non-Targets, encompassing stimuli from relevant categories that were not designated targets; and iii) Task Irrelevant Stimuli, comprising stimuli from the remaining categories.
In order to cover the brain more homogeneously, the MEG gantry was positioned at 68 degrees.","title":"Hardware"},{"location":"03_cogdataset/#location-of-electrodes-and-ecgeog-measurements","text":"The location of the fiducials, the positions of the 64 EEG electrodes and the participant\u2019s head shape were recorded using a 3-D digitizer system (Polhemus Isotrak). A set of bipolar electrodes was placed on the subject\u2019s chest (upper left and upper right chest position) to record the cardiac signal (ECG). Two sets of bipolar electrodes were placed around the eyes (two located at the outer canthi of the right and left eyes and two above and below the center of the right eye) to record eye movements and blinks (EOG). Ground and reference electrodes were placed on the back of the neck and on the right cheek, respectively. The impedance of all of the electrodes was checked to be below 10 kOhm. Standard Locations of EOG and ECG electrodes","title":"Location of Electrodes and ECG/EOG Measurements"},{"location":"03_cogdataset/#head-position-indicator-hpi-coils","text":"The participant\u2019s head position inside the MEG system was measured at the beginning and at the end of each run using four head position indicator (HPI) coils placed on the EEG cap. Specifically, the HPI coils were placed next to the left and right mastoids and on the left and right forehead. Their location relative to anatomical landmarks was digitized with a Polhemus Isotrak System. During the measurement, high frequency (>200 Hz) signals were produced by those coils and the localization of these signals was used to estimate the head position in the sensor space. To avoid the potential artifacts produced by the non-linear interaction between the signals generated by these coils, head position measurement was performed only during resting periods (as opposed to continuously). Standard locations of HPI coils. Coil Numbers: 1. Blue, 2. White, 3. Red, 4. Black, 5. Yellow","title":"Head Position Indicator (HPI) Coils"},{"location":"03_cogdataset/#anatomical-mri-data-acquisition","text":"For each subject, a high resolution T1-weighted MRI volume (3T Siemens MRI Prisma scanner) was acquired. At CHBH, scans were acquired with a 32-channel coil at a resolution of 1 x 1 x 1 mm (TR/TE = 2000/2.03 ms; TI = 880 ms; 8\u00b0 flip angle; 208 sagittal slices; field of view (FOV): 256 x 256 matrix) for source localization with individual realistic head modeling. At PKU, a 64-channel coil was used, with a resolution of 0.5 x 0.5 x 1 mm (TR/TE = 2530/2.98 ms; TI = 1100 ms; 7\u00b0 flip angle; 192 sagittal slices; FOV: 448 \u00d7 512 matrix). To avoid possible interference of body magnetization on the MEG recording, all MRI scans were acquired at least one week before the MEG session, or at any time afterwards. The FreeSurfer standard template (fsaverage) was used for participants lacking an anatomical scan (N=5).","title":"Anatomical MRI Data Acquisition"},{"location":"03_cogdataset/#behavioral-data-acquisition","text":"The task was executed using Matlab (PKU: R2018b; UB: R2019b) with Psychtoolbox v.3 (Pelli, 1997) on a custom PC at UB and a Dell XPS desktop PC at PKU. Visual stimuli were presented on a screen placed in front of the subjects with a PROPixx DLP LED projector (VPixx Technologies Inc.) at a resolution of 1920 x 1080 pixels and a refresh rate of 120 Hz. The distance between the subject\u2019s eyes and the screen differed between sites (CHBH: 119 cm, PKU: 85 cm) so as to achieve the same FOV of 36.6 x 21.2 degrees. Subjects responded with an 8-button response box (Millikey LH-8).
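The viewing distance at each site was set so that the projected image spanned the same field of view. As a quick illustration of the underlying geometry (the image widths below are back-computed assumptions, not protocol values), the visual angle follows from the image size and viewing distance:

```python
import math

def visual_angle_deg(size_mm, distance_mm):
    # Angle subtended by an extent of size_mm viewed from distance_mm.
    return 2 * math.degrees(math.atan((size_mm / 2) / distance_mm))

# Back-computed, assumed image widths that reproduce the reported 36.6 deg:
print(round(visual_angle_deg(787, 1190), 1))  # ~36.6 deg at CHBH (119 cm)
print(round(visual_angle_deg(562, 850), 1))   # ~36.6 deg at PKU (85 cm)
```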
","title":"Behavioral Data Acquisition"},{"location":"03_cogdataset/#eye-tracking-data-acquisition","text":"Eye movements were monitored and recorded from both eyes (binocular eye-tracking) using the MEG-compatible EyeLink 1000 Plus eye-tracker (SR Research Ltd., Ottawa, Canada). A nine-point calibration was performed at the beginning of the experiment and repeated if necessary at the beginning of each block. Pupil size and corneal reflection data were collected at a sampling rate of 1000 Hz.","title":"Eye Tracking Data Acquisition"},{"location":"03_cogdataset/#behavioral-data-code-scheme","text":"Stimuli are coded as a 4-digit number (see the parsing sketch below). 1st digit = stimulus type (1 = face; 2 = object; 3 = letter; 4 = false font) 2nd digit = stimulus orientation (1 = center; 2 = left; 3 = right) 3rd & 4th digits = stimulus id (1...20; for faces 1...10 is male, 11...20 is female) e.g., \"1219\" = 1 is face, 2 is left orientation and 19 is a female stimulus #19","title":"Behavioral Data Code Scheme"},{"location":"03_cogdataset/#eye-tracker-and-meg-code-scheme","text":"The channels that contain the eye tracker data in the FIF files are as follows: MISC1 (X), MISC2 (Y), and MISC3 (pupil)","title":"Eye Tracker and MEG Code Scheme"},{"location":"03_cogdataset/#defining-some-terms","text":"Trial : Stimulus presentation followed by a fixation (the two add up to 2 sec), followed by a jitter of 200 ms to 2000 ms. Mini block : Presentation of 34 to 38 stimuli, at the beginning of which the target stimuli were presented. Block : Composed of 4 mini blocks. At the end of each block, there was a break. Break : Pause between 2 blocks","title":"Defining some terms"},{"location":"03_cogdataset/#successive-trigger-scheme","text":"The triggers were sent successively. The first trigger represented the stimulus type, followed by orientation, stimulus duration, and task relevance, all interspaced by 50 ms. Additionally, a trigger was sent upon key press.
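As an illustration, the 4-digit behavioral code described above can be unpacked as follows (a hypothetical helper, not part of the released code):

```python
STIM_TYPE = {1: "face", 2: "object", 3: "letter", 4: "false font"}
ORIENTATION = {1: "center", 2: "left", 3: "right"}

def parse_stimulus_code(code):
    digits = f"{int(code):04d}"
    stim_type = STIM_TYPE[int(digits[0])]      # 1st digit: stimulus type
    orientation = ORIENTATION[int(digits[1])]  # 2nd digit: orientation
    stim_id = int(digits[2:])                  # 3rd & 4th digits: id 1..20
    sex = None
    if stim_type == "face":                    # faces: 1-10 male, 11-20 female
        sex = "male" if stim_id <= 10 else "female"
    return stim_type, orientation, stim_id, sex

print(parse_stimulus_code("1219"))  # ('face', 'left', 19, 'female')
```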
","title":"Successive trigger scheme"},{"location":"03_cogdataset/#1st-trigger-on-stimulus-onset-stimulus-type","text":"1 to 20: faces - 1 to 10 males, - 11 to 20 females 21 to 40: objects 41 to 60: letters 61 to 80: false fonts","title":"1st Trigger (on Stimulus Onset): Stimulus Type"},{"location":"03_cogdataset/#2nd-trigger-2-frames-after-stimulus-onset-stimulus-orientation","text":"101: Center 102: Left 103: Right","title":"2nd Trigger (2 Frames after Stimulus Onset): Stimulus Orientation"},{"location":"03_cogdataset/#3rd-trigger-4-frames-after-stimulus-onset-stimulus-duration","text":"151: 500 msec 152: 1000 msec 153: 1500 msec","title":"3rd Trigger (4 Frames after Stimulus Onset): Stimulus Duration"},{"location":"03_cogdataset/#4th-trigger-6-frames-after-stimulus-onset-stimulus-task-relevance","text":"201: Task relevant target 202: Task relevant non target 203: Task irrelevant","title":"4th Trigger (6 Frames after Stimulus Onset): Stimulus Task Relevance"},{"location":"03_cogdataset/#5th-trigger-8-frames-after-stimulus-onset-trial-id-triggers","text":"111-148: Trial number","title":"5th Trigger (8 Frames after Stimulus Onset): Trial ID Triggers"},{"location":"03_cogdataset/#response-trigger","text":"255: Following button press.","title":"Response Trigger"},{"location":"03_cogdataset/#stimulus-presentation-end","text":"96: Offset of stimulus presentation (onset of blank) 97: Offset of blank (onset of jitter period) Note that both of these are fixation periods; they are simply divided into blank and jitter.","title":"Stimulus Presentation End"},{"location":"03_cogdataset/#general-triggers-to-mark-experiment-progression","text":"86: Onset of experiment 81: Onset of recording 83: Offset of recording","title":"General Triggers to Mark Experiment Progression"},{"location":"03_cogdataset/#miniblock-id-triggers","text":"161-200: Miniblock ID trigger","title":"Miniblock ID Triggers"},{"location":"03_cogdataset/#zeroes","text":"0: Zeros were sent between the successive triggers to reset the LPT, see below. These were also sent to the eye tracker but did not mean anything and can safely be ignored.","title":"Zeroes"},{"location":"03_cogdataset/#how-the-lpt-triggers-were-sent","text":"The triggers were sent via the computer\u2019s LPT port using the sendTrig function. This function sets the port to a specific state (whatever trigger we want to send) and logs the trigger afterwards, noting whether it was sent and at what time the command for sending it was executed. After each trigger was sent, the port was reset to 0 after one frame. At the beginning of the experiment, a few triggers were sent to mark experiment onset and onset of recording. Then, a mini block was initiated. The participant was presented with the target screen and required to press the spacebar to proceed. When the participant pressed the space button, the miniblock ID was sent. Only once the miniblock trigger was sent did the fixation appear. This means that there was a small delay between key press and fixation onset. Following the first fixation, a jitter started, which was also logged. Then, the first stimulus was displayed. Upon the presentation of the stimulus, the successive triggers were initiated. The first trigger occurred directly after the onset of the stimulus, indicating the stimulus ID (1-80). Then, after 2 frames, the orientation trigger (101-103) was sent, followed by the duration trigger (151 to 153) at 4 frames, the task demand trigger (201-203) at 6 frames, and finally, the trial ID trigger (111 to 148) at 8 frames.
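Because the trigger codes occupy disjoint ranges, the successive triggers can be regrouped into trials when reading a raw recording. A sketch in MNE-Python follows; the file path and the stimulus channel name are assumptions:

```python
import mne

# Hypothetical file path; "STI101" is a typical TRIUX trigger channel name
# and is an assumption here.
raw = mne.io.read_raw_fif("sub-XXXX_task-Dur_raw.fif")
events = mne.find_events(raw, stim_channel="STI101", shortest_event=1)

trials = []
for sample, _, code in events:
    if 1 <= code <= 80:            # stimulus onset: opens a new trial
        trials.append({"onset_sample": sample, "stimulus": code})
    elif 101 <= code <= 103:       # orientation (2 frames after onset)
        trials[-1]["orientation"] = code
    elif 151 <= code <= 153:       # planned duration (4 frames after onset)
        trials[-1]["duration"] = code
    elif 201 <= code <= 203:       # task relevance (6 frames after onset)
        trials[-1]["relevance"] = code
    elif 111 <= code <= 148:       # trial ID (8 frames after onset)
        trials[-1]["trial_id"] = code
    # Miniblock (161-200), response (255), offsets (96/97), and experiment
    # markers (81/83/86) are omitted here for brevity.
```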
","title":"How The LPT Triggers Were Sent"},{"location":"03_cogdataset/#empty-room-recording","text":"Prior to each experiment, MEG signals from the empty room were recorded for 3 minutes.","title":"Empty Room Recording"},{"location":"03_cogdataset/#resting-state-rm-eeg","text":"The resting-state data for each participant was also recorded, for 5 minutes, and the subjects were asked to keep their eyes open and fixated on a point presented at the center of the screen. M-EEG signals were sampled at a rate of 1 kHz and band-pass filtered between 0.01 and 330 Hz prior to sampling.","title":"Resting-State (rM-EEG)"},{"location":"03_cogdataset/#task-tm-eeg","text":"Following the empty room and rM-EEG recordings, subjects were asked to complete the task defined in the Procedure section. tM-EEG consisted of 10 runs, with 4 blocks each. During each block, 34-38 trials were presented: 32 non-targets (8 of each category) and 2-6 targets (number chosen randomly). Rest breaks between runs and blocks were included. Random jitter was added at the end of each trial (mean inter-trial interval of 0.4 s, jittered 0.2-2.0 s, truncated exponential distribution) to avoid periodic presentation of the stimuli. Task: Experiment 1; Runs: 10; Blocks: 4 per run; Trials: 34-38 per block; Total trials: 1440.","title":"Task (tM-EEG)"},{"location":"03_cogdataset/#full-structure-of-session","text":"The complete standard procedure of an M-EEG session is available in the MEG Standard Operating Procedure .","title":"Full Structure of Session"},{"location":"03_cogdataset/#inclusion-criteria","text":"The items below were assessed for the subjects before the data was acquired: Age range: 18 to 35 (since over the age of 35 subjects might have a hard time maintaining central focus) Handedness: right Hearing problems: no Hearing aid: no Vision problems: no, or corrected-to-normal with soft lenses No MRI in the last week MRI compatible: no metal, medical implants, etc. No claustrophobia. Note: dental implants are allowed (particularly non-magnetic materials) unless they generate large impacts on the MEG signals, and this is checked prior to MEG recording. No known history of psychiatric or neurological disorders, e.g.: Not have been formally diagnosed with attention deficit (hyperactivity) disorder (AD(H)D). Not have been formally diagnosed with autism spectrum disorder (ASD) Not suffer from epilepsy","title":"Inclusion Criteria"},{"location":"03_cogdataset/#quality-check-and-exclusion-criteria","text":"For M-EEG, the first stage of the third-level checks focused on system-related and external noise generators. It was tested using the signal spectra in the empty room recording, the resting state session, and the experiment itself for all sensors. Any sensor and/or specific frequency revealing extensive noise upon visual inspection was flagged to document potential problems. Ultimately, this did not lead to any exclusions. Next, all experimental data blocks were visually inspected for abnormalities in spectra (peaks not explainable by physiology) and in ICA components, and checked for extremely noisy (based on the score of differences between the original and Maxwell-filtered data > 7) and flat sensors. The latter step was performed in a collaboration between the data monitoring team and members of the centers where data was acquired, to check whether any potential changes in preprocessing for particular subjects were needed. Finally, we tested whether all experimental cells (i.e., task-relevant non-targets and task-irrelevant stimuli for each one of the four categories) had enough trials (N=30).
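For reference, MNE-Python exposes this kind of noisy/flat-sensor detection via Maxwell-filter scoring, with a default cutoff of 7 that matches the score threshold mentioned above. A minimal sketch, assuming a hypothetical file path (the released pipeline may differ in details such as fine-calibration and cross-talk files):

```python
import mne
from mne.preprocessing import find_bad_channels_maxwell

# Hypothetical file path; not an actual release path.
raw = mne.io.read_raw_fif("sub-XXXX_task-Dur_raw.fif")
raw.load_data()

# Channels whose difference score vs. the Maxwell-filtered data exceeds 7
# are flagged as noisy; flat channels are detected separately.
noisy_chs, flat_chs = find_bad_channels_maxwell(raw, limit=7.0)
print("noisy:", noisy_chs)
print("flat:", flat_chs)
```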
","title":"Quality Check and Exclusion Criteria"},{"location":"03_cogdataset/#ieeg-data-acquisition","text":"iEEG recordings were obtained from patients with pharmacologically resistant epilepsy undergoing invasive electrophysiological monitoring at the Comprehensive Epilepsy Center at New York University (NYU) Langone Health Center, Brigham and Women\u2019s Hospital, Children\u2019s Hospital Boston (Harvard Medical School), and the University of Wisconsin School of Medicine and Public Health (WU).","title":"iEEG Data Acquisition"},{"location":"03_cogdataset/#hardware_1","text":"Brain activity was recorded with a combination of intracranial subdural platinum-iridium electrodes embedded in SILASTIC sheets (2.3 mm diameter contacts, Ad-Tech Medical Instrument and PMT Corporation) and/or depth stereo-electroencephalographic platinum-iridium electrodes (PMT Corporation; 0.8-mm diameter, 2.0-mm length cylinders; separated from adjacent contacts by 1.5 to 2.43 mm), or Behnke-Fried depth stereo-electroencephalographic platinum-iridium electrodes (Ad-Tech Medical, BF08R-SP21X-0C2, 1.28 mm in diameter, 1.57 mm in length, 3 to 5.5 mm spacing). The decision to implant, the electrode targeting, and the duration of invasive monitoring were solely determined on clinical grounds and without reference to this or any other study. Electrodes were arranged as grid arrays (either 8 \u00d7 8 with 10 mm center-to-center spacing, 8 x 16 contacts with 3 mm spacing, or hybrid macro/micro 8 x 8 contacts with 10 mm spacing and 64 integrated microcontacts with 5 mm spacing), linear strips (1 \u00d7 8/12 contacts), depth electrodes (1 \u00d7 8/12 contacts), or a combination thereof. Subdural electrodes covered extensive portions of lateral and medial frontal, parietal, occipital, and temporal cortex of the left and/or right hemisphere. Recordings from grid, strip and depth electrode arrays were done using a Natus Quantum amplifier (Pleasanton, CA) or a Neuralynx Atlas amplifier (Bozeman, MT). A total of 4057 electrodes (892 grids, 346 strips, 2819 depths) were implanted across 32 patients with drug-resistant focal epilepsy undergoing clinically motivated invasive monitoring. 3512 electrodes (780 grids, 307 strips, 2425 depths) that were unaffected by epileptic activity, artifacts, or electrical noise were used in subsequent analyses. To determine the electrode localization for each patient, a postoperative CT (computed tomography) scan and a pre-operative T1 MRI were acquired and co-registered. Recordings were obtained continuously during the patients\u2019 stay in the hospital. All data was stored with stimulus and timing markers permitting offline synchronization.
At WU, imaging was conducted using the GE MEDICAL SYSTEMS SIGNA Artist scanner. MR scans were acquired because of the high spatial resolution they offer for visualizing brain tissue.","title":"Anatomical MRI Data Acquisition"},{"location":"03_cogdataset/#ct-data-acquisition","text":"Following surgery, post-operative CT scans were obtained from the subjects to assist in localizing the electrodes on specific brain tissue. At NYU, scans were performed using a Siemens SOMATOM Force scanner. At Harvard, imaging was conducted using the Medtronic O-arm MVS O2, manufactured by Medtronic. At WU, scans were acquired utilizing the GE MEDICAL SYSTEMS Optima CT660 scanner. Please note: MR and CT data were collected for the subjects at Brigham and Women\u2019s Hospital and Children\u2019s Hospital Boston. However, due to the data protection policies, they are not included in the COGITATE Data Release.","title":"CT Data Acquisition"},{"location":"03_cogdataset/#behavioral-data-acquisition_1","text":"The task was implemented using Matlab (Harvard: R2020b; NYU: R2020a, WU: R2021a), Psychtoolbox v.3 (Pelli, 1997), and run on a Dell Precision 5540 laptop, with a 15.6\" Ultrasharp screen (screen size 345 x 195 mm2; resolution 1920 x 1080) at NYU and Harvard and on a Dell D29M PC with an Acer V196WL 19\" LED LCD monitor (screen size 406.4 x 254 mm2; resolution 1440 x 990) at WU. The nominal distance between the subject\u2019s eyes and the screen was 80 cm; the actual distance was measured for each subject before the start of recording to ensure that the stimulus subtended 6 x 6 degrees of visual angle. Participants responded using an 8-button response box (Millikey LH-8; response hand(s) varied based on the setting in the patient\u2019s room).","title":"Behavioral Data Acquisition"},{"location":"03_cogdataset/#eye-tracking-data-acquisition_1","text":"At Harvard and Wisconsin, an EyeLink 1000 Plus camera was used to collect eye-tracking data, and a thirteen-point calibration was performed several times during the experiment. The calibration was performed at the beginning of the experiment and repeated in between blocks whenever necessary to meet precision requirements. At NYU, eye-tracking data was collected throughout the duration of the experiment using a Tobii-4C eye-tracker. A nine-point calibration was performed several times during the experiment. Pupil size and corneal reflection data were collected at a sampling rate of 500 Hz at Harvard and Wisconsin and at a sampling rate of 90 Hz at NYU. The EyeLink system recorded monocular data, while the Tobii system recorded binocular data. In the former case, the recorded eye was chosen based on ocular dominance. The eye-tracking recording did not interfere with the experiment.","title":"Eye Tracking Data Acquisition"},{"location":"03_cogdataset/#behavioral-data-code-scheme_1","text":"The behavioral code scheme is similar to that of the M-EEG modality, which is explained in this section .","title":"Behavioral Data Code Scheme"},{"location":"03_cogdataset/#eye-tracker-data-code","text":"The eye tracker code scheme for the iEEG modality follows a similar structure to that described for M-EEG data. You can find detailed explanations here .","title":"Eye Tracker Data Code"},{"location":"03_cogdataset/#ieeg-code-scheme","text":"","title":"iEEG Code Scheme"},{"location":"03_cogdataset/#photodiode-trigger-scheme","text":"For ECOG patients, the type of port utilized by the M-EEG team (LPT) was incompatible with our recording system. Consequently, a photodiode was employed.
A photodiode is an electronic device that records changes in luminance and converts them into voltage. An example of a photodiode In the experimental code, it was ensured that whenever a new event occurred on the screen (such as stimulus onset or stimulus offset), a white flash appeared in the bottom right corner. The photodiode device was positioned atop the flashing square and connected to the amplifier recording the iEEG channel signals. This additional channel facilitated the identification of event onsets in our task. This type of recording only allows binary signals (the photodiode is either on or off), so specific events were encoded with varying numbers of successive pulses. Stimulus Presentation Onset The square was flashed only once at the onset of each new stimulus. Stimulus Presentation Offset The square was flashed only once at the offset of each stimulus. Start of the Inter-Trial Interval The square was flashed only once at the beginning of the inter-trial interval. The inter-trial interval was initiated 2 seconds after stimulus onset and persisted for a random duration (following a truncated exponential distribution between 0.2 and 2 seconds, with a mean of 0.4 seconds). Block Start The start of an experimental block was marked by sending 4 consecutive pulses. Block End The end of an experimental block was marked by sending 2 consecutive pulses. Experiment Start and End The beginning and end of the experiment were marked by sending 3 consecutive pulses. Schematic representation of the photodiode channel","title":"Photodiode Trigger Scheme"},{"location":"03_cogdataset/#log-file-alignment","text":"The photodiode channel solely indicates when a specific event occurred in the iEEG signals, lacking any information about the nature of the event (unlike an LPT trigger in MEG). To identify specific events in the signal, the timing information from the log file was combined with that from the photodiode. The log file contains a description of each presented event along with a corresponding time stamp from the experimental computer. The photodiode channel recorded time stamps for each event, indicating when it occurred according to the acquisition computer clock. The goal was to align the log file and the photodiode to associate each event in the photodiode signal with the corresponding event description in the log file. This step was crucial since misalignment could lead to incorrect event descriptions in the iEEG signal, compromising the entire analysis. The procedure relies on the fact that both the log file and the photodiode had timestamps. These timestamps were recorded on different clocks. Unfortunately, computer clocks tend to drift away from one another, and these drifts accumulate to become quite significant over extended periods of time (they could be several seconds apart after 1 hour). Therefore, the timestamps of the photodiode and the log file could not be used interchangeably. However, over short periods of time, these drifts were negligible. This means that the interval between two successive timestamps in the log file should be quite consistent with the interval between the two corresponding events in the photodiode. This provided us with the most thorough check possible: if the events in the log file and in the photodiode were aligned, then the intervals between successive events should differ only minimally between the two. Here is a step-by-step description of the alignment procedure.
Extract the Photodiode Timestamps The timestamps from the photodiode triggers were extracted as the first step. As illustrated in the figure Schematic representation of the photodiode channel , a square pulse was generated for each event during the recording. The onset of each of these pulses was sought. To achieve this, a threshold was initially established, below which the photodiode was considered to be in the off state and above which it was considered to be on (based on visual inspection of the data, which was facilitated by the clean nature of photodiode signals). Subsequently, the signal was binarized using this threshold (signal_bin = signal > threshold), resulting in a signal consisting only of ones and zeros. Next, the discrete difference of the binary signal was computed (y(i + 1) = y(i + 1) - y(i)). This operation produced a \u201c1\u201d when the photodiode transitioned from off to on (onset) and a \u201c-1\u201d when it transitioned from on to off (offset). Since only the onsets were of interest, the timestamps of the ones were extracted, representing the timestamps of the photodiode. Verify Event Count Alignment The first check in aligning the photodiode events and the log files was whether the number of events in each matched. If they did not match, this indicated a problem. Aligning the Two Signals To verify the alignment of the two signals, the discrete difference of the photodiode and log file timestamps was computed, providing the interval between successive events for each signal. The resulting arrays were then plotted atop each other. Misalignment between the two sources of timing information could be easily detected, as the two traces would not overlap. Perfect overlap between the two was necessary to consider the signals aligned. Additionally, the difference between the two signals was computed to ensure minimal deviation. Integrating Information Once the two signals were properly aligned, the log file events could be used as descriptors of the events marked at the timestamps from the photodiode. Illustration of the alignment pipeline The alignment procedure allowed the information from the log file to be utilized in generating well-described events in the iEEG data. The events were encoded as \u201c/\u201d separated strings and stored in an events.tsv table. An extensive description of each event type and its levels follows: Task Name: Dur Task Description: Description of the experimental factors and their levels with the correct syntax to access them from the MNE epochs object. Note that factor and level names are case-sensitive. We describe the syntax for each condition separately. However, note that you can fetch a combination of factors from the epochs using a forward slash. For example, if you wish to fetch the face target trials, you can combine both conditions like so: epochs['face/Relevant target'] will fetch all face target trials. In addition, the epochs objects are equipped with metadata, where the name of the column is the name of the factor, and the level follows the same nomenclature as below. This can be used for more sophisticated trial filtering and retrieval. Experimental Design: Event Type: Marks the different events occurring within a trial. Factor Type: Categorical Factor 1: Name: stimulus onset Description: Marks the onset of the visual stimuli. With epochs['stimulus onset'], extract all epochs time-locked to the visual stimulus onset. Factor 2: Name: stimulus offset Description: Marks the offset of the visual stimuli.
With epochs['stimulus offset'], extract all epochs time-locked to the offset of the visual stimulus. Factor 3: Name: jitter onset Description: Marks the beginning of the inter-trial jitter. All trials lasted 2 sec., with an added jitter of 400ms on average. With epochs['jitter onset'], extract all epochs time-locked to the beginning of the jitter period (2 sec. after stimulus onset). Block: Marks the experimental blocks. Factor Type: Discrete Factor 1: Name: block_* Description: Experimental blocks. Our experiment consisted of 5 blocks, in between which participants were allowed to take a break. With epochs['block_1'], extract all epochs of the first experimental block. Miniblock: Marks the experimental miniblocks. Factor Type: Discrete Factor 1: Name: miniblock_* Description: Experimental miniblocks. Each experimental block consisted of 4 miniblocks. At the beginning of each miniblock, the two target stimuli were presented to the participant, which the participant had to remember to be able to detect in the stream of stimuli. With epochs['miniblock_1'], extract all epochs of the first experimental miniblock. Category: Category of the visual stimuli. Factor Type: Categorical Factor 1: Name: face Description: Identifies face trials. With epochs['face'], extract all epochs in which a face was presented. Factor 2: Name: object Description: Identifies object trials. With epochs['object'], extract all epochs in which an object was presented. Factor 3: Name: letter Description: Identifies letter trials. With epochs['letter'], extract all epochs in which a letter was presented. Factor 4: Name: false Description: Identifies false font trials (i.e., symbols). With epochs['false'], extract all epochs in which a false font was presented. Identity: Identity of the visual stimuli. Factor Type: Categorical Factor 1: Name: face_* Description: Identifies the identity of face trials. With epochs['face_*'], extract all epochs in which that specific face was presented. Identities run from 1-9, with a leading zero (e.g., face_01). Factor 2: Name: object_* Description: Identifies the identity of object trials. With epochs['object_*'], extract all epochs in which that specific object was presented. Identities run from 1-9, with a leading zero (e.g., object_01). Factor 3: Name: letter_* Description: Identifies the identity of letter trials. With epochs['letter_*'], extract all epochs in which that specific letter was presented. Identities run from 1-9, with a leading zero (e.g., letter_01). Factor 4: Name: false_* Description: Identifies the identity of false font trials (i.e., symbols). With epochs['false_*'], extract all epochs in which that specific false font was presented. Identities run from 1-9, with a leading zero (e.g., false_01). Orientation: Orientation of the displayed stimuli. Factor Type: Categorical Factor 1: Name: Center Description: Identifies stimuli presented in the center orientation. With epochs['Center'], extract all epochs in which a stimulus was presented in the center orientation. Factor 2: Name: Left Description: Identifies stimuli presented in the Left orientation. With epochs['Left'], extract all epochs in which a stimulus was presented in the Left orientation. Factor 3: Name: Right Description: Identifies stimuli presented in the Right orientation. With epochs['Right'], extract all epochs in which a stimulus was presented in the Right orientation. Duration: Duration a visual stimulus was presented for. Factor Type: Categorical Factor 1: Name: 500ms Description: Identifies stimuli presented for 500ms. With epochs['500ms'], extract all epochs in which the stimulus was displayed for 500ms.
Factor 2: Name: 1000ms Description: Identifies stimuli presented for 1000ms. With epochs['1000ms'], extract all epochs in which the stimulus was displayed for 1000ms. Factor 3: Name: 1500ms Description: Identifies stimuli presented for 1500ms. With epochs['1500ms'], extract all epochs in which the stimulus was displayed for 1500ms. Task Relevance: Task relevance of a given trial. Factor Type: Categorical Factor 1: Name: Relevant target Description: Identifies target stimuli. Target stimuli are presented at the beginning of each miniblock, and participants must detect them among the sequence of presented stimuli by pressing a button. With epochs['Relevant target'], extract all target trials. Factor 2: Name: Relevant non-target Description: Identifies task-relevant non-target stimuli. We considered task-relevant stimuli that were of the same category as the target but of a different identity. With epochs['Relevant non-target'], extract all task-relevant non-target trials. Factor 3: Name: Irrelevant Description: Identifies task-irrelevant non-target stimuli. We considered task-irrelevant stimuli that were of a different category than the target. With epochs['Irrelevant'], extract all task-irrelevant non-target trials. Response: Rated response of the participants. Factor Type: Categorical Factor 1: Name: Hit Description: Participants correctly identified a target by pressing a button. With epochs['Hit'], extract all target trials for which the participants pressed a key. Factor 2: Name: CorrRej Description: Participants correctly rejected a non-target stimulus and did not press any button. With epochs['CorrRej'], extract all non-target trials for which the participants did not press a key. Factor 3: Name: Miss Description: Participants failed to press a button when a target stimulus was presented. With epochs['Miss'], extract all target trials in which participants failed to press a button. Factor 4: Name: FA Description: Participants mistakenly pressed a button when a non-target stimulus was presented. With epochs['FA'], extract all non-target trials in which participants pressed a button. Factor 5: Name: n.a. Description: For the events stimulus offset and jitter onset, the response is set to n.a. as the response relates to the visual stimulus, not to the other events. This should not be used to access the data.","title":"Log File Alignment"},{"location":"03_cogdataset/#surface-reconstruction-and-electrode-localization","text":"Subject-specific pial surfaces were automatically reconstructed based on a pre-implant T1-weighted MR image using the Freesurfer image analysis suite (\u2018recon-all\u2019, Dale et al., 1999). Post-implant CT images were co-registered with the pre-implant MR images using FLIRT (Jenkinson and Smith, 2001), as implemented in FSL (Smith et al., 2004). For NYU patients, we used a semi-automatic approach to generating electrode labels. For manual cases, co-registered MR and CT slices were examined using FSLView (Smith et al., 2004). For grids, we localized three corner electrodes, and the coordinates of the remaining electrodes were then automatically interpolated along the shared plane using the known inter-electrode distances. Strip and depth electrodes were localized manually when they did not follow straight trajectories. When depth electrodes were in a straight line, the first and last electrodes were localized manually, and electrodes in between were automatically interpolated and labeled based on known inter-electrode distances and serial labeling convention.
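To illustrate the grid interpolation step used for these grids, here is a minimal sketch (a hypothetical function and grid dimensions, assuming a planar grid with uniform spacing; not the actual pipeline code):

```python
import numpy as np

def interpolate_grid(origin, row_end, col_end, n_rows, n_cols):
    """Interpolate an n_rows x n_cols electrode grid from three manually
    localized corner electrodes (3D coordinates in scanner space)."""
    origin, row_end, col_end = map(np.asarray, (origin, row_end, col_end))
    row_step = (row_end - origin) / (n_rows - 1)   # spacing along one grid axis
    col_step = (col_end - origin) / (n_cols - 1)   # spacing along the other axis
    return np.array([[origin + i * row_step + j * col_step
                      for j in range(n_cols)] for i in range(n_rows)])

# Example: an 8 x 8 grid with 10 mm inter-electrode distance along both axes.
coords = interpolate_grid([0, 0, 0], [70, 0, 0], [0, 70, 0], 8, 8)
```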
For WU patients, electrodes were localized manually using the SubNuclear toolbox . Electrode locations were further refined within the space of the pre-operative MRI using three-dimensional non-linear thin-plate spline warping (Rohr et al., 2001), which corrected for post-operative shift and distortion. The warping was constrained with manually selected points through the brain, which were visually aligned with landmarks in the pre-implantation MRI and post-implantation CT. For Harvard subjects, individual contacts from depth electrodes were labeled manually from the CT image using the BioImage Suite \u2019s Electrode Editor tool (legacy version 3.5; Joshi et al., 2011). The coordinates in CT image-space were converted to coordinates within the patient\u2019s segmented MRI brain-space using the iELVis toolbox (yangWangElecPjct; Yang, Wang, et al., 2012; Groppe et al., 2017). For all sites, the electrode spatial coordinates were transformed from the individual patient space into the standard space of the Montreal Neurological Institute (MNI-152) template for plotting purposes. At NYU, this transformation was performed using the DARTEL algorithm (Ashburner, 2007) implemented in SPM8 (Wellcome Department of Imaging Neuroscience, London, United Kingdom). At Harvard, this transformation was performed using the iELVis toolkit . At WU, the transformation was performed with the SubNuclear toolbox using the img2imgcoord utility.","title":"Surface Reconstruction and Electrode Localization"},{"location":"03_cogdataset/#finger-localizer-task","text":"In the Finger Localizer task, participants were presented with four circles, one of which was filled with a specific color, serving as a cue for participants to press the corresponding colored button on the response box. The filled state of the circle persisted for the duration of the response time, followed by an additional delay of 200 milliseconds. The Inter-Trial Intervals (ITIs) were uniformly distributed, with a mean of 0.55 seconds and a range from 0.400 to 0.700 seconds. The experimental protocol comprised 80 trials, distributed equally among the four colors, with 20 trials per color, and the sequence of trials was randomized. This task aimed to identify brain regions responsible for motor control, particularly those governing finger movements, and to pinpoint electrodes selectively activated by specific motor responses, such as button presses. An illustration depicting a trial in which the participant is required to press the blue button. Please note: Although participants completed this task concurrently with Experiment 1 , we did not utilize the data in the analysis, as it was primarily acquired for use in Experiment 2 . Consequently, the data pertaining to the Finger Localizer task is not included in this version of our data release.","title":"Finger Localizer Task"},{"location":"03_cogdataset/#task-tieeg","text":"Participants proceeded to Experiment 1 either before or after completing the Finger Localizer task . tiEEG consisted of 5 runs of 4 blocks each, with 34-38 trials per block (32 non-targets, 8 of each category, plus 2-6 targets), each trial lasting approximately 2.4 s, for a total of 720 trials. Rest breaks between runs and blocks were included. Random jitter was added at the end of each trial (mean inter-trial interval of 0.4 s, jittered 0.2-2.0 s, truncated exponential distribution) to avoid periodic presentation of the stimuli. Additional information about the task can be found here .
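These numbers are internally consistent, as the quick check below illustrates (plain Python, for illustration only); the table that follows summarizes the same task structure:

```python
runs, blocks_per_run = 5, 4
blocks = runs * blocks_per_run               # 20 blocks in total
non_targets = 32                             # 8 stimuli from each of 4 categories
min_targets, max_targets = 2, 6

print(blocks * (non_targets + min_targets))  # 680 -> lower bound on total trials
print(blocks * (non_targets + max_targets))  # 760 -> upper bound on total trials
print(720 / blocks)                          # 36.0 -> average trials per block
```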
Task: Experiment 1 | Runs: 5 | Blocks per run: 4 | Trials per block: 34-38 | Total trials: 720","title":"Task (tiEEG)"},{"location":"03_cogdataset/#full-structure-of-session_1","text":"The complete standard procedure of an iEEG session is available in the iEEG Standard Operating Procedure .","title":"Full Structure of Session"},{"location":"03_cogdataset/#inclusion-criteria_1","text":"For the iEEG studies, subjects were 10-65 years old, able to provide informed consent, had IQ > 70, were fluent in English, with self-reported normal hearing, normal or corrected-to-normal vision, and cognitive and language abilities within or above the normal range in formal neuropsychological testing performed before surgery. They must not have had an electrographic seizure within 3 hours prior to testing.","title":"Inclusion Criteria"},{"location":"03_cogdataset/#quality-check_1","text":"A comprehensive quality assessment was conducted on the iEEG data. The data underwent manual annotation by epileptologists, excluding channels within the epileptic onset zone, as well as those exhibiting artifacts or showing complete flatness due to electrode contact issues. Channel rejection was independently performed by both the data monitoring and iEEG teams, with results compared to ensure consistency. Additionally, electrode reconstruction was verified to align with subjects' CT scans. Finally, we inspected for significant disturbances in the spectra.","title":"Quality Check"},{"location":"03_cogdataset/#exclusion-criteria_1","text":"Subjects who were unable to complete a sufficient number of trials due to excessive muscular artifacts, movement, noisy recordings, or a decision by the subject to terminate the experiment were excluded. Subjects who exhibited low performance in the attention task were also excluded \u2013 this translates to <70% Hits and >30% FAs. In addition, data were also excluded if they did not pass any of the pre-defined data quality checks.","title":"Exclusion Criteria"},{"location":"03_cogdataset/#deviations-from-data-curation-procedure","text":"Our approach to defacing MR/CT DICOM images involved utilizing the built-in face masking tool from XNAT. However, for 12 iEEG subjects, we encountered difficulties in executing this step. This was primarily due to variations in the DICOM images, which could include cropped images, aliasing artifacts, broken headers, missing slices, non-equidistant slices within a series, and other issues. Below is the list of subjects where the execution of the XNAT facemasking tool failed: Subject_ID: CF103, CF104, CF112, CF113, CF116, CF117, CF120, CF121, CF122, CF124, CF125, CF126 To address this issue, we implemented a slightly different workflow that allowed us to successfully deface the MR/CT images of these 12 subjects. Unlike the XNAT facemasking tool, however, this workflow could not regenerate defaced DICOM images; instead, it generated defaced NIFTI images as its primary output. For the current version of the data release, we have decided to share only the defaced NIFTI images for these subjects. Details about this workflow are provided below: Anonymization: MR/CT DICOM images underwent anonymization to remove the subject\u2019s Protected Health Information (PHI). NIFTI Conversion: Anonymized DICOM images were then converted to the NIFTI image format using the dcm2niix package (version: 1.0.20220505) (Li et al., 2016).
Defacing of NIFTI: Defacing of the NIFTI images was performed using the PyDeface package (version: 2.0.2) (Gulban et al., 2022). Verification: This step involved checking the quality of the defaced NIFTI images using 2D/3D image plots to compare before and after the defacing stage. Alternative workflow for defacing 12 challenging MR/CT DICOM Images Miscellaneous: In the MR data for subject CF103, one DICOM slice was inadvertently dropped during the conversion process from DICOM to NIFTI format. However, the resulting NIFTI file remains functional and usable.","title":"Deviations from Data Curation Procedure"},{"location":"04_data/","text":"Data curation procedures A detailed explanation of the multiple steps that were taken to prepare the data for public release will be available in Appendix 7 , Data Curation Standard Operating Procedure.","title":"Data curation procedures"},{"location":"04_data/#data-curation-procedurs","text":"A detailed explanation of the multiple steps that were taken to prepare the data for public release will be available in Appendix 7 , Data Curation Standard Operating Procedure.","title":"Data curation procedures"},{"location":"05_access/","text":"Access to COGITATE Data There are two ways of accessing the COGITATE data: \"Live\" Database Release: XNAT (eXtensible Neuroimaging Archive Toolkit) Archival Format: Bundles NOTE: XNAT release not available yet! 1. XNAT This database offers a web interface for navigating the data and an API (Application Programming Interface) for programmatically retrieving specific databases based on user interests. Comprehensive instructions on how to register, access, and query our database are provided below. Step 1: Registration If you are a new user and have not registered yet, you should visit Cogitate_XNAT_registration . Once the registration is done, a verification step, identical to the one described in \u201cCreating an Account\u201d, is needed. If you have already registered, you can skip this step and log in at Cogitate_XNAT . Step 2: Navigating at XNAT After completing the registration step, you can log in with your User and Password. You can see the list of available datasets under the \u201cProjects\u201d tab. Once you click the project\u2019s name, you will see the list of subjects in the farthest left column. In each subject\u2019s folder, the demographic information of that subject and the various sets of data acquired for Experiment 1 are provided. As an example, for a subject with the ID of CA103, the MR session, Eye tracker and MEEG datasets are listed as in the figure below. In the MR session folder, you can view and access the MR scan of the subject along with the related imaging parameters. In the Eye tracker folder, the eye tracking data of different runs and some details related to them, including the recorded eye, sampling frequency, distance to screen and screen size, are available. Under the folder of MEEG, there are some tabs on the top where you can find information regarding the Case Report Form, Exit Questionnaire, experiment checklist form, and data details within the BIDS framework, and at the bottom, you can download different runs of MEG data. Naming Convention on XNAT The data on XNAT is organized into subjects and sessions under a given project. The subjects are identified using the format \u201cCX???\u201d and the sessions follow the format CX???_MODALITY_VISIT_PARADIGMRUN, e.g. CA103_MEEG_1_DurR1 indicates an MEEG measurement for subject ID CA103 during the first visit with the Dur experimental paradigm, run 1 (R1).
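For users who want to parse these session identifiers programmatically, here is a minimal sketch (a hypothetical helper, assuming the PARADIGMRUN part always ends in 'R' followed by a run number; not an official utility):

```python
import re

SESSION_RE = re.compile(
    r"^(?P<subject>C[A-Z]\d{3})_"   # subject ID, e.g. CA103
    r"(?P<modality>[A-Z]+)_"        # modality, e.g. MEEG
    r"(?P<visit>\d+)_"              # visit number
    r"(?P<paradigm>[A-Za-z]+)"      # paradigm name, e.g. Dur
    r"R(?P<run>\d+)$"               # run number, e.g. R1
)

parts = SESSION_RE.match("CA103_MEEG_1_DurR1").groupdict()
print(parts)  # {'subject': 'CA103', 'modality': 'MEEG', 'visit': '1', 'paradigm': 'Dur', 'run': '1'}
```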
2. Bundles This approach involves providing a collection of links to the prepared bundles of the data and accompanying metadata, which are available in zip format. These links grant users the ability to download specific modalities, example datasets, or the complete dataset. Here is a brief explanation of how to access the data bundles: Step 1: Create a Data User Account Step 1: Create a Data User Account Access to the data bundles requires a quick and easy registration process. Provide user information, including name and email address. Read and accept the Terms of Use and GDPR Requirements ( General Data Protection Regulation ). Once you register, you will receive four (4) emails to the email account you registered with. In some cases, checking your junk mail may be necessary. Welcome email : general information Data User Account Verification email: Within the verification email, you must click on the \u2018verify my account\u2019 option to finalize step 1 of creating a data user account in order to gain access to all current and future data releases. Resource Material : A handy email that contains all the important links that serve as reference materials to Cogitate data. Mailing List Subscription: In order to stay up-to-date and informed about news related to COGITATE data releases, you must activate your email subscription (this is in compliance with GDPR requirements). Tip: The registration procedure needed for accessing the data bundles is separate from what is required to access XNAT. Step 2: Login and logout of your Data User account Step 2: Login and logout of your Data User account To log in to your account, go to the Login button on the top right of the page. Enter your email and password used when registering. You should now have access to the Cogitate Data User main page and Data Bundles. To log out of your account, go to the top navigation bar and hover over Data. In the dropdown menu, click on Data User Account. A panel will open on the right side of the screen - click on Account Settings at the bottom of that panel. Then the option to Sign out will appear under your username. Click on Sign out. Tip: The Login button will remain as 'Login' even after signing in to your account. The only way of knowing whether you are logged in is by clicking on Data User Account under the Data heading, or by being able to download data (indicating that you are, in fact, logged in). Step 3: How To Download the Data Login to your account Scroll down and click on the \u201cAccess Data Bundles\u201d Click on the download button next to each dataset Naming Convention for Bundles Raw data bundles follow the below naming convention. The project root directory consists of subdirectories named after the subject's ID, which is of the format \u201cCX???\u201d. The subject directories consist of various subdirectories as described below. Except for the metadata directory, the sessions follow the pattern subject-ID_PARADIGM_MODALITY. If the modality data is paradigm-agnostic (e.g. MR, CT), then the paradigm is left blank. We currently have two paradigms in the data: EXP1, indicating the experiment described here, and FingerLoc for the finger localizer. The session directories further contain individual scans following the format CX???_MODALITY_1_DurR1.EDF.
The metadata subdirectory further consists of various assessments and questionnaires that provide valuable information.","title":"Access to COGITATE Data"},{"location":"05_access/#access-to-cogitate-data","text":"There are two ways of accessing the COGITATE data: \"Live\" Database Release: XNAT (eXtensible Neuroimaging Archive Toolkit) Archival Format: Bundles NOTE: XNAT release not available yet!","title":"Access to COGITATE Data"},{"location":"05_access/#1-xnat","text":"This database offers a web interface for navigating the data and an API (Application Programming Interface) for programmatically retrieving specific databases based on user interests. Comprehensive instructions on how to register, access, and query our database are provided below.","title":"1. XNAT"},{"location":"05_access/#step-1-registration","text":"If you are a new user and have not registered yet, you should visit Cogitate_XNAT_registration . Once the registration is done, a verification step, identical to the one described in \u201cCreating an Account\u201d, is needed. If you have already registered, you can skip this step and log in at Cogitate_XNAT .","title":"Step 1: Registration"},{"location":"05_access/#step-2-navigating-at-xnat","text":"After completing the registration step, you can log in with your User and Password. You can see the list of available datasets under the \u201cProjects\u201d tab. Once you click the project\u2019s name, you will see the list of subjects in the farthest left column. In each subject\u2019s folder, the demographic information of that subject and the various sets of data acquired for Experiment 1 are provided. As an example, for a subject with the ID of CA103, the MR session, Eye tracker and MEEG datasets are listed as in the figure below. In the MR session folder, you can view and access the MR scan of the subject along with the related imaging parameters. In the Eye tracker folder, the eye tracking data of different runs and some details related to them, including the recorded eye, sampling frequency, distance to screen and screen size, are available. Under the folder of MEEG, there are some tabs on the top where you can find information regarding the Case Report Form, Exit Questionnaire, experiment checklist form, and data details within the BIDS framework, and at the bottom, you can download different runs of MEG data.","title":"Step 2: Navigating at XNAT"},{"location":"05_access/#naming-convention-on-xnat","text":"The data on XNAT is organized into subjects and sessions under a given project. The subjects are identified using the format \u201cCX???\u201d and the sessions follow the format CX???_MODALITY_VISIT_PARADIGMRUN, e.g. CA103_MEEG_1_DurR1 indicates an MEEG measurement for subject ID CA103 during the first visit with the Dur experimental paradigm, run 1 (R1).","title":"Naming Convention on XNAT"},{"location":"05_access/#2-bundles","text":"This approach involves providing a collection of links to the prepared bundles of the data and accompanying metadata, which are available in zip format. These links grant users the ability to download specific modalities, example datasets, or the complete dataset. Here is a brief explanation of how to access the data bundles:","title":"2. Bundles"},{"location":"05_access/#step-1-create-a-data-user-account","text":"Step 1: Create a Data User Account Access to the data bundles requires a quick and easy registration process. Provide user information, including name and email address.
Read and accept the Terms of Use and GDPR Requirements ( General Data Protection Regulation ). Once you register, you will receive four (4) emails to the email account you registered with. In some cases, checking your junk mail may be necessary. Welcome email : general information Data User Account Verification email: Within the verification email, you must click on the \u2018verify my account\u2019 option to finalize step 1 of creating a data user account in order to gain access to all current and future data releases. Resource Material : A handy email that contains all the important links that serve as reference materials to Cogitate data. Mailing List Subscription: In order to stay up-to-date and informed about news related to COGITATE data releases, you must activate your email subscription (this is in compliance with GDPR requirements). Tip: The registration procedure needed for accessing the data bundles is separate from what is required to access XNAT.","title":"Step 1: Create a Data User Account"},{"location":"05_access/#step-2-login-and-logout-of-your-data-user-account","text":"Step 2: Login and logout of your Data User account To log in to your account, go to the Login button on the top right of the page. Enter your email and password used when registering. You should now have access to the Cogitate Data User main page and Data Bundles. To log out of your account, go to the top navigation bar and hover over Data. In the dropdown menu, click on Data User Account. A panel will open on the right side of the screen - click on Account Settings at the bottom of that panel. Then the option to Sign out will appear under your username. Click on Sign out. Tip: The Login button will remain as 'Login' even after signing in to your account. The only way of knowing whether you are logged in is by clicking on Data User Account under the Data heading, or by being able to download data (indicating that you are, in fact, logged in). Step 3: How To Download the Data Login to your account Scroll down and click on the \u201cAccess Data Bundles\u201d Click on the download button next to each dataset","title":"Step 2: Login and logout of your Data User account"},{"location":"05_access/#naming-convention-for-bundles","text":"Raw data bundles follow the below naming convention. The project root directory consists of subdirectories named after the subject's ID, which is of the format \u201cCX???\u201d. The subject directories consist of various subdirectories as described below. Except for the metadata directory, the sessions follow the pattern subject-ID_PARADIGM_MODALITY. If the modality data is paradigm-agnostic (e.g. MR, CT), then the paradigm is left blank. We currently have two paradigms in the data: EXP1, indicating the experiment described here, and FingerLoc for the finger localizer. The session directories further contain individual scans following the format CX???_MODALITY_1_DurR1.EDF. The metadata subdirectory further consists of various assessments and questionnaires that provide valuable information.","title":"Naming Convention for Bundles"},{"location":"06_experiment1/","text":"Experiment 1: Directory Structure of Data Bundles Raw Data Raw data files are organized hierarchically: Experiment modality --> Subjects --> data folders The metadata related to each level of the hierarchy is contained in a mandatory folder called 'metadata'. Each data folder follows a naming convention, {subject_context_modality[_modifier]}; the sections of the name are separated by underscores.
This naming convention aims at making it easy to identify the data files that relate to the same moment in time and that were acquired simultaneously. subject -> this refers to the subject ID context -> the task or context. This section is optional and can be empty, e.g. if a subject had a standalone MR scan the context is left blank, resulting in a double underscore, as in the case of the CT scan or MR scan in the above example modality -> the type of data collected The Cogitate consortium collected several types of data/metadata during the experiments: BEH : behavioral events ET : Eye tracking data MR : Magnetic resonance data (anatomical scans) MEEG : Magneto-Electroencephalographic data EXQU : Exit Questionnaire CRF : Case Report Form All metadata related to the subject can be found under the aptly named 'metadata' folder under the subject folder (this refers mainly to the EXQU and CRF files). The remaining metadata for the experiment as well as the demographic information on the subjects can be found in the metadata folder above the subject. This folder includes experiment wide metadata in json format and a csv table with the demographic data of all subjects: devices : A list of devices used to collect the data protocols : a link to the Standard Operating Procedure (SOP) document used for the data collection subjects_demographics : the full set of subjects and their metadata for the specific experiment modality tasks_ taskname : a description of the behavioral task or context with which we named the data bundles. wirings : a pdf file showing how the devices are connected to each other depicting the experimental setup. Raw M-EEG Data Directory Structure COG_MEEG_EXP1_RELEASE/ \u251c\u2500\u2500 metadata/ # Experiment modality level metadata folder \u2502 \u251c\u2500\u2500 devices_MEEG.json # List of devices used to collect the data \u2502 \u251c\u2500\u2500 protocols_MEEG.json # A link to the Standard Operating Procedures (SOP) \u2502 \u251c\u2500\u2500 subjects_demographics_MEEG.json # Demographic information of MEEG subjects \u2502 \u251c\u2500\u2500 tasks_EXP1.json # Description of the 1st Cogitate task \u2502 \u251c\u2500\u2500 tasks_RestinEO.json # Description of the Resting state task \u2502 \u251c\u2500\u2500 tasks_Rnoise.json # Description of the Rnoise task \u2502 \u2514\u2500\u2500 wirings_MEEG.PDF # Wiring diagram of devices_MEEG.json connections \u2514\u2500\u2500 CB036 # Subject folder \u251c\u2500\u2500 metadata/ # Subject level metadata folder \u2502 \u251c\u2500\u2500 CB036_EXP1_CRF.json # Subject Case Report Form (CRF) \u2502 \u2514\u2500\u2500 CB036_EXP1_EXQU.json # Subject Exit Questionnaire responses \u251c\u2500\u2500 CB036_EXP1_BEH/ # Behavioral Events data collected during EXP1 \u251c\u2500\u2500 CB036_EXP1_LPTTriggers/ # Trigger data for synchronization \u251c\u2500\u2500 CB036_EXP1_MEEG/ # MEEG data collected during EXP1 (fif) \u251c\u2500\u2500 CB036_EXP1_ET/ # Eye Tracking data collected during EXP1 (asc) \u251c\u2500\u2500 CB036_RestinEO_MEEG/ # MEEG data collected during RestingEO task (fif) \u251c\u2500\u2500 CB036_RestinEO_ET/ # Eye Tracking data collected during RestingEO task \u251c\u2500\u2500 CB036_Rnoise_MEEG/ # MEEG data collected during Rnoise task (fif) \u2514\u2500\u2500 CB036__MR/ # MR anatomical scan data (fif) Raw iEEG Data Directory Structure COG_ECOG_EXP1_RELEASE/ # Experiment modality top level folder \u251c\u2500\u2500 metadata/ # Experiment modality level metadata folder \u2502 \u251c\u2500\u2500 devices_ECOG.json # List of devices used
to collect the data \u2502 \u251c\u2500\u2500 protocols_ECOG.json # A link to the Standard Operating Procedure (SOP) document used for the data collection \u2502 \u251c\u2500\u2500 subjects_demographics_ECOG.json # Full set of experiment modality subjects with their respective demographic information \u2502 \u251c\u2500\u2500 tasks_EXP1.json # Description of the 1st Cogitate task \u2502 \u251c\u2500\u2500 tasks_FingerLoc.json # Description of the Finger Localizer task \u2502 \u2514\u2500\u2500 wirings_ECOG.pdf # Wiring pdf file showing how the devices described in devices_ECOG.json are connected to each other \u2514\u2500\u2500 CE103 # Subject folder \u251c\u2500\u2500metadata/ # Subject level metadata folder \u2502 \u251c\u2500\u2500 CE103_EXP1_CRF.json # Subject Case Report Form (CRF) \u2502 \u2514\u2500\u2500 CE103_EXP1_EXQU.json # Subject Exit Questionnaire responses \u251c\u2500\u2500 CE103_EXP1_BEH/ # Behavioral Events data collected during EXP1 \u251c\u2500\u2500 CE103_EXP1_ECOG/ # ECOG data files collected during EXP1 \u251c\u2500\u2500 CE103_EXP1_ET/ # Eye Tracking data collected during EXP1 \u251c\u2500\u2500 CE103_FingerLoc_ECOG/ # ECoG data collected during the Finger Localizer task \u251c\u2500\u2500 CE103_FingerLoc_BEH/ # Behavioral event data collected during the Finger Localizer task \u251c\u2500\u2500 CE103__CT/ # CT scan data (no task) \u251c\u2500\u2500 CE103__MR/ # MR anatomical data \u2514\u2500\u2500 CE103__ElecCoords/ # Contains coordinate output files of the MR/CT coregistration and electrode reconstruction pipeline BIDS Format The BIDS (Brain Imaging Data Structure) file structure for M-EEG (Magneto-Electroencephalography) and iEEG (intracranial EEG) data adheres to a standardized format for organizing neuroimaging data. Each file follows a structured naming convention indicating subject, session, task, and data type. Here's a breakdown of the key elements within each modality's data directory structure: - dataset_description.json: Provides general information about the dataset. - participants.json and participants.tsv: Contain demographic information about subjects. - README.md: Offers an overview of the data and BIDS format. - Subject-specific data: Organized under sub-[SubjectID]/. - Session-specific data: Organized under ses-[SessionID]/. - Anatomical and functional data: Stored in appropriate folders (anat/ for anatomical, meg/ for MEG, and ieeg/ for iEEG). - Metadata: Metadata related to subjects and experiments is stored in metadata/ directories. This structured approach ensures clarity and consistency in data organization, facilitating ease of access and analysis for researchers and collaborators.
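Because the release follows BIDS, standard tooling can locate and load individual recordings. Here is a minimal sketch using the open-source MNE-BIDS package (a third-party tool, not part of this release; the root path and entity values are placeholders matching the example subject below):

```python
from mne_bids import BIDSPath, read_raw_bids

# Point at run 1 of the Experiment 1 ("dur") task for one M-EEG subject.
bids_path = BIDSPath(root="COG_MEEG_EXP1_BIDS_RELEASE", subject="CA103",
                     session="1", task="dur", run="01", datatype="meg")
raw = read_raw_bids(bids_path)  # loads the .fif together with its sidecar metadata
print(raw.info["sfreq"])        # e.g. the sampling frequency of the recording
```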
BIDS M-EEG Data Directory Structure COG_MEEG_EXP1_BIDS_RELEASE/ |-- dataset_description.json # General information about BIDS version, type of dataset, Authors, Acknowledgments, Funding, Ethics Approvals, and the link of COGITATE website |-- derivatives # Including metadata and coreg (coregistration) | |-- additional_metadata # Containing all of the metadata | | |-- dataset_description.json # General information about BIDS version, type of dataset | | |-- METADATA # Including metadata including the list of devices, link to COGITATE GitHub repository, types of tasks, stimuli and responses and wiring diagram of MEG data | | | |-- analysis.json # Analysis steps, the order of them and the link of analysis code repository | | | |-- devices_MEEG.json # List of devices used for MEG data acquisition | | | |-- protocols.json # Link of COGITATE wiki and MEG SOP | | | |-- tasks_EXP1_MEEG.json # Description of behavioral task, stimuli and responses | | | |-- tasks_RestinEO_MEEG.json # Description of resting-state task and type of the response | | | |-- tasks_Rnoise_MEEG.json # Description of empty room task | | | `-- wiring_MEEG.pdf # Wiring diagram of MEG | | |-- README.md # Containing an explanation about additional_metadata directory | | |-- sub-CA103 # Subject folder | | | `-- METADATA # Containing Case Report Form, Exit Questionnaire and subject\u2019s demography | | | |-- CA103_CRF.json # Case Report Form | | | |-- CA103_demographics.json # Subject\u2019s demography | | | `-- CA103_EXQU.json # Exit Questionnaire | `-- coreg # The results of the coregistration | |-- dataset_description.json # BIDS version, Data Type, and description of the files of this directory | |-- README.md | |-- sub-CA103 # Subject folder | | `-- ses-1 # Session 1/visit 1 | | `-- meg # MEG folder | | `-- sub-CA103_ses-1_trans.fif # The output of coregistering MEG sensors and head to the anatomical data |-- participants.json # General information about subjects\u2019 demography |-- participants.tsv # Subjects\u2019 demography in tsv format |-- README.md # Overview of MEG data and the BIDS format |-- sub-CA103 # Subject folder | `-- ses-1 # Session 1/visit 1 | |-- anat # Folder of anatomical data | | |-- sub-CA103_ses-1_T1w.json # Anatomical landmark coordinates | | `-- sub-CA103_ses-1_T1w.nii.gz # Anatomical data | |-- meg # Folder of MEG data | | |-- sub-CA103_ses-1_acq-calibration_meg.dat # Calibration data | | |-- sub-CA103_ses-1_acq-crosstalk_meg.fif # Crosstalk data | | |-- sub-CA103_ses-1_coordsystem.json # Including Information about MEG and head coil and coordinate system, units, description and anatomical landmark coordinates | | |-- sub-CA103_ses-1_task-dur_run-01_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during run 1 | | |-- sub-CA103_ses-1_task-dur_run-01_events.json # Description of sample, value and trial type | | |-- sub-CA103_ses-1_task-dur_run-01_events.tsv # Contains information about the events/stimuli presented during Experiment 1, run 1, event\u2019s onset time and duration, type of event, event code (trigger code) and sample | | |-- sub-CA103_ses-1_task-dur_run-01_meg.fif # Contains the raw/unprocessed MEG data during the task of Experiment 1/session 1, run 1 | | |-- sub-CA103_ses-1_task-dur_run-01_meg.json # Contains power line and sampling frequencies, duration of recording, MEG, EOG and ECG and trigger channel counts during run 1 | | |-- sub-CA103_ses-1_task-dur_run-02_channels.tsv | 
| |-- sub-CA103_ses-1_task-dur_run-02_events.json | | |-- sub-CA103_ses-1_task-dur_run-02_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-02_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-02_meg.json | | |-- sub-CA103_ses-1_task-dur_run-03_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-03_events.json | | |-- sub-CA103_ses-1_task-dur_run-03_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-03_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-03_meg.json | | |-- sub-CA103_ses-1_task-dur_run-04_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-04_events.json | | |-- sub-CA103_ses-1_task-dur_run-04_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-04_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-04_meg.json | | |-- sub-CA103_ses-1_task-dur_run-05_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-05_events.json | | |-- sub-CA103_ses-1_task-dur_run-05_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-05_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-05_meg.json | | |-- sub-CA103_ses-1_task-noise_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during noise recording | | |-- sub-CA103_ses-1_task-noise_meg.fif # Contains the raw/unprocessed MEG data during noise recording of Experiment 1/session 1 | | |-- sub-CA103_ses-1_task-noise_meg.json # Contains power line and sampling frequencies, duration of recording, MEG, EOG and ECG and trigger channel counts during noise recording | | |-- sub-CA103_ses-1_task-rest_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during resting-state recording | | |-- sub-CA103_ses-1_task-rest_meg.fif # Contains the raw/unprocessed MEG data during resting-state recording of Experiment 1/session 1 | | `-- sub-CA103_ses-1_task-rest_meg.json # Contains power line and sampling frequencies, duration of recording, MEG, EOG and ECG and trigger channel counts during resting-state recording | `-- sub-CA103_ses-1_scans.tsv # List of MEG data files BIDS iEEG Data Directory Structure COG_ECOG_EXP1_BIDS_RELEASE/ |-- dataset_description.json # General information about BIDS version, type of dataset, Authors, Acknowledgments, Funding, Ethics Approvals, and the link of COGITATE website |-- derivatives # Directory containing derived data | |-- fs # Outputs of FreeSurfer processing | | `-- sub-CF102 # Subject folder | | |-- label # Contains files representing segmented brain regions | | |-- mri # Contains various outputs of the FreeSurfer MRI processing pipeline, such as brain masks, tissue segmentations, and cortical surface reconstructions | | |-- scripts # Contains relevant information related to the execution and status tracking of the FreeSurfer's recon-all pipeline for MRI data processing, including build and status stamps, logs, and environment settings | | |-- stats # statistical data related to various anatomical and morphometric measurements derived from brain segmentation and parcellation processes | | |-- surf # Contains various surface representations of the cerebral cortex, including vertex-wise measurements such as cortical area, curvature, thickness, sulcal depth, and surface normals, for both left and right hemispheres, derived from structural MRI data | | `-- touch # Contains information about completion of various processing steps related to surface generation, segmentation, registration, normalization, and quality control for both left and right hemispheres |-- 
participants.json # Demographic information about participants |-- participants.tsv # Subjects\u2019 demography in tsv format |-- README |-- sub-CF102 # Subject folder | `-- ses-1 # Session 1/visit 1 | |-- ieeg # Folder of iEEG data | | |-- sub-CF102_ses-1_laplace_mapping_ieeg.json # Contains electrode groups and their references for laplace mapping for session 1 | | |-- sub-CF102_ses-1_space-ACPC_coordsystem.json # Contains information about the coordinate system during session 1 | | |-- sub-CF102_ses-1_space-ACPC_electrodes.tsv # Contains spatial coordinates (x, y, z)/locations of electrodes on the subject's brain surface | | |-- sub-CF102_ses-1_task-Dur_channels.tsv # Contains information about the iEEG data channels during task and session 1 including their names, type, units, frequency cutoffs, description, sampling frequency, and status | | |-- sub-CF102_ses-1_task-Dur_events.json # Contains description for \u201csample\u201d, \u201cvalue\u201d, and \u201ctrial_type\u201d | | |-- sub-CF102_ses-1_task-Dur_events.tsv # Contains event-related data during the task and session 1 including onset, duration, trial type, value and sample | | |-- sub-CF102_ses-1_task-Dur_ieeg.eeg # Contains iEEG data during task and session 1 | | |-- sub-CF102_ses-1_task-Dur_ieeg.json # Contains metadata for iEEG recorded during the task and session 1 | | |-- sub-CF102_ses-1_task-Dur_ieeg.vhdr # Contains metadata for iEEG recorded during the task and session 1 | | `-- sub-CF102_ses-1_task-Dur_ieeg.vmrk # A marker file containing annotations or event markers corresponding to the events during the task and session 1 | `-- sub-CF102_ses-1_scans.tsv","title":"Experiment 1: Directory Structure of Data Bundles"},{"location":"06_experiment1/#experiment-1-directory-structure-of-data-bundles","text":"","title":"Experiment 1: Directory Structure of Data Bundles"},{"location":"06_experiment1/#raw-data","text":"Raw data files are organized hierarchically: Experiment modality --> Subjects --> data folders The metadata related to each level of the hierarchy is contained in a mandatory folder called 'metadata'. Each data folder follows a naming convention {subject_context_modality[_modifier]} the section of the names are separated by underscores. This naming convention aims at making it easy to identify the data files that relate to the same moment in time and that were acquired simultaneously. subject -> this refers the the subject ID context -> the task or context. This section is optional and can be empty, e.g. if a subject had a standalone MR scan the context is left blank, resulting in a double underscorlike in the case of the CT scan or MR scan in the above example modality -> or type of data collected The Cogitate consortium collected several types of data/metadata during the experiments: BEH : behavioral events ET : Eye tracking data MR : Magnetic resonance data (anatomical scans) MEEG : Magneto-Electroencephalographic data EXQU : Exit Questionnaire CRF : Case Report Form All metadata related to the subject can be found under the aptly named 'metadata' folder under the subject folder (this refers mainly to the EXQU and CRF files). The remaining metadata for the experiment as well as the demographic information on the subjects can be found in the metadata folder above the subject. 
This folder includes experiment wide metadata in json format and a csv table with the demographic data of all subjects: devices : A list of devices used to collect the data protocols : a link to the Standard Operating Procedure (SOP) document used for the data collection subjects_demographics : the full set of subjects and their metadata for the specific experiment modality tasks_ taskname : a description of the behavioral task or context with which we named the data bundles. wirings : a pdf file showing how the devices are connected to each other depicting the experimental setup.","title":"Raw Data"},{"location":"06_experiment1/#raw-m-eeg-data-directory-structure","text":"COG_MEEG_EXP1_RELEASE/ \u251c\u2500\u2500 metadata/ # Experiment modality level metadata folder \u2502 \u251c\u2500\u2500 devices_MEEG.json # List of devices used to collect the data \u2502 \u251c\u2500\u2500 protocols_MEEG.json # A link to the Standard Operating Procedures (SOP) \u2502 \u251c\u2500\u2500 subjects_demographics_MEEG.json # Demographic information of MEEG subjects \u2502 \u251c\u2500\u2500 tasks_EXP1.json # Description of the 1st Cogitate task \u2502 \u251c\u2500\u2500 tasks_RestinEO.json # Description of the Resting state task \u2502 \u251c\u2500\u2500 tasks_Rnoise.json # Description of the Rnoise task \u2502 \u2514\u2500\u2500 wirings_MEEG.PDF # Wiring diagram of devices_MEEG.json connections \u2514\u2500\u2500 CB036 # Subject folder \u251c\u2500\u2500 metadata/ # Subject level metadata folder \u2502 \u251c\u2500\u2500 CB036_EXP1_CRF.json # Subject Case Report Form (CRF) \u2502 \u2514\u2500\u2500 CB036_EXP1_EXQU.json # Subject Exit Questionnaire responses \u251c\u2500\u2500 CB036_EXP1_BEH/ # Behavioral Events data collected during EXP1 \u251c\u2500\u2500 CB036_EXP1_LPTTriggers/ # Trigger data for synchronization \u251c\u2500\u2500 CB036_EXP1_MEEG/ # MEEG data collected during EXP1 (fif) \u251c\u2500\u2500 CB036_EXP1_ET/ # Eye Tracking data collected during EXP1 (asc) \u251c\u2500\u2500 CB036_RestinEO_MEEG/ # MEEG data collected during RestingEO task (fif) \u251c\u2500\u2500 CB036_RestinEO_ET/ # Eye Tracking data collected during RestingEO task \u251c\u2500\u2500 CB036_Rnoise_MEEG/ # MEEG data collected during Rnoise task (fif) \u2514\u2500\u2500 CB036__MR/ # MR anatomical scan data (fif)","title":"Raw M-EEG Data Directory Structure"},{"location":"06_experiment1/#raw-ieeg-data-directory-structure","text":"COG_ECOG_EXP1_RELEASE/ # Experiment modality top level folder \u251c\u2500\u2500 metadata/ # Experiment modality level metadata folder \u2502 \u251c\u2500\u2500 devices_ECOG.json # List of devices used to collect the data \u2502 \u251c\u2500\u2500 protocols_ECOG.json # A link to the Standard Operating Procedure (SOP) document used for the data collection \u2502 \u251c\u2500\u2500 subjects_demographics_ECOG.json # Full set of experiment modality subjects with their respective demographic information \u2502 \u251c\u2500\u2500 tasks_EXP1.json # Description of the 1st Cogitate task \u2502 \u251c\u2500\u2500 tasks_FingerLoc.json # Description of the Finger Localizer task \u2502 \u2514\u2500\u2500 wirings_ECOG.pdf # Wiring pdf file showing how the devices described in devices_ECOG.json are connected to each other \u2514\u2500\u2500 CE103 # Subject folder \u251c\u2500\u2500metadata/ # Subject level metadata folder \u2502 \u251c\u2500\u2500 CE103_EXP1_CRF.json # Subject Case Report Form (CRF) \u2502 \u2514\u2500\u2500 CE103_EXP1_EXQU.json # Subject Exit Questionnaire responses \u251c\u2500\u2500 CE103_EXP1_BEH/ 
# Behavioral Events data collected during EXP1 \u251c\u2500\u2500 CE103_EXP1_ECOG/ # ECOG data files collected during EXP1 \u251c\u2500\u2500 CE103_EXP1_ET/ # Eye Tracking data collected during EXP1 \u251c\u2500\u2500 CE103_FingerLoc_ECOG/ # ECoG data collected during the Finger Localizer task \u251c\u2500\u2500 CE103_FingerLoc_BEH/ # Behavioral event data collected during the Finger Localizer task \u251c\u2500\u2500 CE103__CT/ # CT scan data (no task) \u251c\u2500\u2500 CE103__MR/ # MR anatomical data \u2514\u2500\u2500 CE103__ElecCoords/ # Contains coordinate output files of the MR/CT coregistration and electrode reconstruction pipeline","title":"Raw iEEG Data Directory Structure"},{"location":"06_experiment1/#bids-format","text":"The BIDS (Brain Imaging Data Structure) file structure for M-EEG (Magneto-Electroencephalography) and iEEG (intracranial EEG) data adheres to a standardized format for organizing neuroimaging data. Each file follows a structured naming convention indicating subject, session, task, and data type. Here's a breakdown of the key elements within each modality's data directory structure: - dataset_description.json: Provides general information about the dataset. - participants.json and participants.tsv: Contain demographic information about subjects. - README.md: Offers an overview of the data and BIDS format. - Subject-specific data: Organized under sub-[SubjectID]/. - Session-specific data: Organized under ses-[SessionID]/. - Anatomical and functional data: Stored in appropriate folders (anat/ for anatomical, meg/ for MEG, and ieeg/ for iEEG). - Metadata: Metadata related to subjects and experiments is stored in metadata/ directories. This structured approach ensures clarity and consistency in data organization, facilitating ease of access and analysis for researchers and collaborators.","title":"BIDS Format"},{"location":"06_experiment1/#bids-m-eeg-data-directory-structure","text":"COG_MEEG_EXP1_BIDS_RELEASE/ |-- dataset_description.json # General information about BIDS version, type of dataset, Authors, Acknowledgments, Funding, Ethics Approvals, and the link of COGITATE website |-- derivatives # Including metadata and coreg (coregistration) | |-- additional_metadata # Containing all of the metadata | | |-- dataset_description.json # General information about BIDS version, type of dataset | | |-- METADATA # Including metadata including the list of devices, link to COGITATE GitHub repository, types of tasks, stimuli and responses and wiring diagram of MEG data | | | |-- analysis.json # Analysis steps, the order of them and the link of analysis code repository | | | |-- devices_MEEG.json # List of devices used for MEG data acquisition | | | |-- protocols.json # Link of COGITATE wiki and MEG SOP | | | |-- tasks_EXP1_MEEG.json # Description of behavioral task, stimuli and responses | | | |-- tasks_RestinEO_MEEG.json # Description of resting-state task and type of the response | | | |-- tasks_Rnoise_MEEG.json # Description of empty room task | | | `-- wiring_MEEG.pdf # Wiring diagram of MEG | | |-- README.md # Containing an explanation about additional_metadata directory | | |-- sub-CA103 # Subject folder | | | `-- METADATA # Containing Case Report Form, Exit Questionnaire and subject\u2019s demography | | | |-- CA103_CRF.json # Case Report Form | | | |-- CA103_demographics.json # Subject\u2019s demography | | | `-- CA103_EXQU.json # Exit Questionnaire | `-- coreg # The results of the coregistration | |-- dataset_description.json # BIDS version, Data Type, and description of the files of
this directory | |-- README.md | |-- sub-CA103 # Subject folder | | `-- ses-1 # Session 1/visit 1 | | `-- meg # MEG folder | | `-- sub-CA103_ses-1_trans.fif # The output of coregistering MEG sensors and head to the anatomical data |-- participants.json # General information about subjects\u2019 demography |-- participants.tsv # Subjects\u2019 demography in tsv format |-- README.md # Overview of MEG data and the BIDS format |-- sub-CA103 # Subject folder | `-- ses-1 # Session 1/visit 1 | |-- anat # Folder of anatomical data | | |-- sub-CA103_ses-1_T1w.json # Anatomical landmark coordinates | | `-- sub-CA103_ses-1_T1w.nii.gz # Anatomical data | |-- meg # Folder of MEG data | | |-- sub-CA103_ses-1_acq-calibration_meg.dat # Calibration data | | |-- sub-CA103_ses-1_acq-crosstalk_meg.fif # Crosstalk data | | |-- sub-CA103_ses-1_coordsystem.json # Including Information about MEG and head coil and coordinate system, units, description and anatomical landmark coordinates | | |-- sub-CA103_ses-1_task-dur_run-01_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during run 1 | | |-- sub-CA103_ses-1_task-dur_run-01_events.json # Description of sample, value and trial type | | |-- sub-CA103_ses-1_task-dur_run-01_events.tsv # Contains information about the events/stimuli presented during Experiment 1, run 1, event\u2019s onset time and duration, type of event, event code (trigger code) and sample | | |-- sub-CA103_ses-1_task-dur_run-01_meg.fif # Contains the raw/unprocessed MEG data during the task of Experiment 1/session 1, run 1 | | |-- sub-CA103_ses-1_task-dur_run-01_meg.json # Contains power line and sampling frequencies, duration of recording, MEG, EOG and ECG and trigger channel counts during run 1 | | |-- sub-CA103_ses-1_task-dur_run-02_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-02_events.json | | |-- sub-CA103_ses-1_task-dur_run-02_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-02_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-02_meg.json | | |-- sub-CA103_ses-1_task-dur_run-03_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-03_events.json | | |-- sub-CA103_ses-1_task-dur_run-03_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-03_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-03_meg.json | | |-- sub-CA103_ses-1_task-dur_run-04_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-04_events.json | | |-- sub-CA103_ses-1_task-dur_run-04_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-04_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-04_meg.json | | |-- sub-CA103_ses-1_task-dur_run-05_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-05_events.json | | |-- sub-CA103_ses-1_task-dur_run-05_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-05_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-05_meg.json | | |-- sub-CA103_ses-1_task-noise_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during noise recording | | |-- sub-CA103_ses-1_task-noise_meg.fif # Contains the raw/unprocessed MEG data during noise recording of Experiment 1/session 1 | | |-- sub-CA103_ses-1_task-noise_meg.json # Contains power line and sampling frequencies, duration of recording, MEG, EOG and ECG and trigger channel counts during noise recording | | |-- sub-CA103_ses-1_task-rest_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the 
recorded data during resting-state recording | | |-- sub-CA103_ses-1_task-rest_meg.fif # Contains the raw/unprocessed MEG data during resting-state recording of Experiment 1/session 1 | | `-- sub-CA103_ses-1_task-rest_meg.json # Contains power line and sampling frequencies, duration of recording, MEG, EOG and ECG and trigger channel counts during resting-state recording | `-- sub-CA103_ses-1_scans.tsv # List of MEG data files","title":"BIDS M-EEG Data Directory Structure"},{"location":"06_experiment1/#bids-ieeg-data-directory-structure","text":"COG_ECOG_EXP1_BIDS_RELEASE/ |-- dataset_description.json # General information about BIDS version, type of dataset, Authors, Acknowledgments, Funding, Ethics Approvals, and the link of COGITATE website |-- derivatives # Directory containing derived data | |-- fs # Outputs of FreeSurfer processing | | `-- sub-CF102 # Subject folder | | |-- label # Contains files representing segmented brain regions | | |-- mri # Contains various outputs of the FreeSurfer MRI processing pipeline, such as brain masks, tissue segmentations, and cortical surface reconstructions | | |-- scripts # Contains relevant information related to the execution and status tracking of the FreeSurfer's recon-all pipeline for MRI data processing, including build and status stamps, logs, and environment settings | | |-- stats # statistical data related to various anatomical and morphometric measurements derived from brain segmentation and parcellation processes | | |-- surf # Contains various surface representations of the cerebral cortex, including vertex-wise measurements such as cortical area, curvature, thickness, sulcal depth, and surface normals, for both left and right hemispheres, derived from structural MRI data | | `-- touch # Contains information about completion of various processing steps related to surface generation, segmentation, registration, normalization, and quality control for both left and right hemispheres |-- participants.json # Demographic information about participants |-- participants.tsv # Subjects\u2019 demography in tsv format |-- README |-- sub-CF102 # Subject folder | `-- ses-1 # Session 1/visit 1 | |-- ieeg # Folder of iEEG data | | |-- sub-CF102_ses-1_laplace_mapping_ieeg.json # Contains electrode groups and their references for laplace mapping for session 1 | | |-- sub-CF102_ses-1_space-ACPC_coordsystem.json # Contains information about the coordinate system during session 1 | | |-- sub-CF102_ses-1_space-ACPC_electrodes.tsv # Contains spatial coordinates (x, y, z)/locations of electrodes on the subject's brain surface | | |-- sub-CF102_ses-1_task-Dur_channels.tsv # Contains information about the iEEG data channels during task and session 1 including their names, type, units, frequency cutoffs, description, sampling frequency, and status | | |-- sub-CF102_ses-1_task-Dur_events.json # Contains description for \u201csample\u201d, \u201cvalue\u201d, and \u201ctrial_type\u201d | | |-- sub-CF102_ses-1_task-Dur_events.tsv # Contains event-related data during the task and session 1 including onset, duration, trial type, value and sample | | |-- sub-CF102_ses-1_task-Dur_ieeg.eeg # Contains iEEG data during task and session 1 | | |-- sub-CF102_ses-1_task-Dur_ieeg.json # Contains metadata for iEEG recorded during the task and session 1 | | |-- sub-CF102_ses-1_task-Dur_ieeg.vhdr # Contains metadata for iEEG recorded during the task and session 1 | | `-- sub-CF102_ses-1_task-Dur_ieeg.vmrk # A marker file containing annotations or event markers corresponding to the 
events during the task and session 1 | `-- sub-CF102_ses-1_scans.tsv","title":"BIDS iEEG Data Directory Structure"},{"location":"07_bug-reports/","text":"Support and Report Bugs There are various ways the COGITATE team can support you as a Data User, depending on the type of help you are seeking. For general questions, email us at: cogitate-support@ae.mpg.de For reporting issues or bugs: Visit github.com/cogitate-consortium/cogitate-data and ensure you are logged into your GitHub account Don\u2019t have a GitHub account? Create one here . Click on Issues Create New Issue Please provide as many details as possible Click Submit new issue. Your issue will be logged with our Data Release team and dealt with in a timely manner.","title":"Support and Report Bugs"},{"location":"07_bug-reports/#support-and-report-bugs","text":"There are various ways the COGITATE team can support you as a Data User, depending on the type of help you are seeking. For general questions, email us at: cogitate-support@ae.mpg.de For reporting issues or bugs: Visit github.com/cogitate-consortium/cogitate-data and ensure you are logged into your GitHub account Don\u2019t have a GitHub account? Create one here . Click on Issues Create New Issue Please provide as many details as possible Click Submit new issue. Your issue will be logged with our Data Release team and dealt with in a timely manner.","title":"Support and Report Bugs"},{"location":"08_links/","text":"Links and Reference Materials COGITATE Website COGITATE Main Scientific Paper COGITATE Preregistration v4 - December 2022 COGITATE GitHub Repositories COGITATE Wiki Subjects Demography YouTube Channel videos: - COGITATE Experiment 1 - HOW TO create a COGITATE Data User account - HOW TO login and logout of your COGITATE Data User account - HOW TO Download a COGITATE Data Bundle XNAT Support Modality Place of Acquisition Number of Subjects Reference Materials M-EEG University of Birmingham, Center for Human Brain Health (CHBH) 102 M-EEG Wiring Diagram M-EEG Experiment 1 Code M-EEG SOP Peking University (PKU) fMRI Donders Center for Cognitive Neuroimaging (DCCN) 122 Yale Magnetic Resonance Research Center (MRRC) ECoG (iEEG) Harvard University at Boston Children\u2019s Hospital 38 iEEG Wiring Diagram iEEG Experiment 1 Code iEEG SOP New York University Langone (NYU) University of Wisconsin","title":"Links and Reference Materials"},{"location":"08_links/#links-and-reference-materials","text":"COGITATE Website COGITATE Main Scientific Paper COGITATE Preregistration v4 - December 2022 COGITATE GitHub Repositories COGITATE Wiki Subjects Demography YouTube Channel videos: - COGITATE Experiment 1 - HOW TO create a COGITATE Data User account - HOW TO login and logout of your COGITATE Data User account - HOW TO Download a COGITATE Data Bundle XNAT Support Modality Place of Acquisition Number of Subjects Reference Materials M-EEG University of Birmingham, Center for Human Brain Health (CHBH) 102 M-EEG Wiring Diagram M-EEG Experiment 1 Code M-EEG SOP Peking University (PKU) fMRI Donders Center for Cognitive Neuroimaging (DCCN) 122 Yale Magnetic Resonance Research Center (MRRC) ECoG (iEEG) Harvard University at Boston Children\u2019s Hospital 38 iEEG Wiring Diagram iEEG Experiment 1 Code iEEG SOP New York University Langone (NYU) University of Wisconsin","title":"Links and Reference Materials"},{"location":"09_appendices/","text":"Appendices Appendix 1. Screening Form M-EEG Screening Form This questionnaire should be filled out by the participant before the experiment.
It asks for some information such as name, weight (kg), email, date of birth and phone number of the subject, followed by several questions, including: Have you participated in a MEG study before? Do you suffer from any medical condition that may be relevant (e.g. epilepsy, diabetes, asthma)? Do you suffer from claustrophobia? Have you been formally diagnosed with attention deficit (hyperactivity) disorder (AD(H)D)? Have you been formally diagnosed with autism spectrum disorder (ASD)? Other information (e.g. spectacle prescription) Appendix 2. Case Report Form This form was for reporting any issues that might have happened during the experiment. After the end of the experiment and saving the data, the operator filled out this form. M-EEG Case Report Form For the M-EEG modality, the following items were asked for different sections of the data acquisition (Data for Empty Room, Resting-state, run 1, \u2026, run 5) in the Case Report Form: Eye tracking not working Eye tracking showing bad patterns Problems in task performance Strong head motion Strong body motion Trigger monitoring Bad MEG sensors Bad EEG sensors Notes: explaining the issue in more detail iEEG Case Report Form The iEEG case report form contained essential details such as the subject's ID, task name, and the distance from the stimulus interface to the subject in centimeters. It also included comprehensive information about the eye-tracking setup, specifying which eye was tracked, the sampling rate, calibration method, pupil shape, and camera lens used. Additionally, trigger information, including the trigger channels utilized and other relevant parameters, was documented. Furthermore, some fields were provided for notes and miscellaneous information. Here are the specific fields listed in the iEEG case report form: Subject ID Task Name Stimulus Interface Computer Name Distance (cm) Practice Run Pre-Task Equipment Disinfected Post-Task Equipment Disinfected Other Notes Screen Brightness Eye Tracking Sticker Glasses or Lenses Trigger Information: Trigger Channel Trigger Reference Channel Trigger Channel Gain Audio Channels Eye Tracker Setup: Eye Tracked Sampling Rate Calibration Type Pupil Camera Lens Appendix 3. Exit Questionnaire This form should be filled out by the participant after the experiment and asks the following questions: How difficult was it to stay focused for the entire duration of the experiment? Did you notice that the stimuli duration varied? When stimuli were presented for a short period of time, were you able to focus on them for as long as they were presented? When stimuli were presented for a medium period of time, were you able to focus on them for as long as they were presented? When stimuli were presented for a long period of time, were you able to focus on them for as long as they were presented? Was it difficult to keep the targets in memory for the entirety of a block? For each stimulus category, how hard was it to recognize them among the other stimuli? [Faces] For each stimulus category, how hard was it to recognize them among the other stimuli? [Object] For each stimulus category, how hard was it to recognize them among the other stimuli? [Letters] For each stimulus category, how hard was it to recognize them among the other stimuli? [Symbols] Were specific stimuli orientations harder to recognize than others? If yes, which ones? What did you think of the block length? Would the task have been easier if the stimulus duration didn't vary so much? Do you have any additional comments to share? Appendix 4.
MEG Standard Operating Procedure Appendix 5. GDPR Requirements Appendix 6. Terms of Use Appendix 7. Data Curation Standard Operating Procedure Appendix 8. iEEG Standard Operating Procedure Appendix 9. Documentation Changes","title":"Appendices"},{"location":"09_appendices/#appendices","text":"","title":"Appendices"},{"location":"09_appendices/#appendix-1-screening-form","text":"","title":"Appendix 1. Screening Form"},{"location":"09_appendices/#m-eeg-screening-form","text":"This questionnaire should be filled out by the participant before the experiment. It asks for some information such as name, weight (kg), email, date of birth and phone number of the subject, followed by several questions, including: Have you participated in a MEG study before? Do you suffer from any medical condition that may be relevant (e.g. epilepsy, diabetes, asthma)? Do you suffer from claustrophobia? Have you been formally diagnosed with attention deficit (hyperactivity) disorder (AD(H)D)? Have you been formally diagnosed with autism spectrum disorder (ASD)? Other information (e.g. spectacle prescription)","title":"M-EEG Screening Form"},{"location":"09_appendices/#appendix-2-case-report-form","text":"This form was for reporting any issues that might have happened during the experiment. After the end of the experiment and saving the data, the operator filled out this form.","title":"Appendix 2. Case Report Form"},{"location":"09_appendices/#m-eeg-case-report-form","text":"For the M-EEG modality, the following items were asked for different sections of the data acquisition (Data for Empty Room, Resting-state, run 1, \u2026, run 5) in the Case Report Form: Eye tracking not working Eye tracking showing bad patterns Problems in task performance Strong head motion Strong body motion Trigger monitoring Bad MEG sensors Bad EEG sensors Notes: explaining the issue in more detail","title":"M-EEG Case Report Form"},{"location":"09_appendices/#ieeg-case-report-form","text":"The iEEG case report form contained essential details such as the subject's ID, task name, and the distance from the stimulus interface to the subject in centimeters. It also included comprehensive information about the eye-tracking setup, specifying which eye was tracked, the sampling rate, calibration method, pupil shape, and camera lens used. Additionally, trigger information, including the trigger channels utilized and other relevant parameters, was documented. Furthermore, some fields were provided for notes and miscellaneous information. Here are the specific fields listed in the iEEG case report form: Subject ID Task Name Stimulus Interface Computer Name Distance (cm) Practice Run Pre-Task Equipment Disinfected Post-Task Equipment Disinfected Other Notes Screen Brightness Eye Tracking Sticker Glasses or Lenses Trigger Information: Trigger Channel Trigger Reference Channel Trigger Channel Gain Audio Channels Eye Tracker Setup: Eye Tracked Sampling Rate Calibration Type Pupil Camera Lens","title":"iEEG Case Report Form"},{"location":"09_appendices/#appendix-3-exit-questionnaire","text":"This form should be filled out by the participant after the experiment and asks the following questions: How difficult was it to stay focused for the entire duration of the experiment? Did you notice that the stimuli duration varied? When stimuli were presented for a short period of time, were you able to focus on them for as long as they were presented? When stimuli were presented for a medium period of time, were you able to focus on them for as long as they were presented?
When stimuli were presented for a long period of time, were you able to focus on them for as long as they were presented? Was it difficult to keep the targets in memory for the entirety of a block? For each stimulus category, how hard was it to recognize them among the other stimuli? [Faces] For each stimulus category, how hard was it to recognize them among the other stimuli? [Object] For each stimulus category, how hard was it to recognize them among the other stimuli? [Letters] For each stimulus category, how hard was it to recognize them among the other stimuli? [Symbols] Were specific stimuli orientations harder to recognize than others? If yes, which ones? What did you think of the block length? Would the task have been easier if the stimulus duration didn't vary so much? Do you have any additional comments to share?","title":"Appendix 3. Exit Questionnaire"},{"location":"09_appendices/#appendix-4-meg-standard-operating-procedure","text":"","title":"Appendix 4. MEG Standard Operating Procedure"},{"location":"09_appendices/#appendix-5-gdpr-requirements","text":"","title":"Appendix 5. GDPR Requirements"},{"location":"09_appendices/#appendix-6-terms-of-use","text":"","title":"Appendix 6. Terms of Use"},{"location":"09_appendices/#appendix-7-data-curation-standard-operating-procedure","text":"","title":"Appendix 7. Data Curation Standard Operating Procedure"},{"location":"09_appendices/#appendix-8-ieeg-standard-operating-procedure","text":"","title":"Appendix 8. iEEG Standard Operating Procedure"},{"location":"09_appendices/#appendix-9-documentation-changes","text":"","title":"Appendix 9. Documentation Changes"},{"location":"10_acknowledgements/","text":"Acknowledgements This project and data are made possible through the support of a grant from Templeton World Charity Foundation . The opinions expressed throughout our documentation and data are those of the authors and do not necessarily reflect the views of Templeton World Charity Foundation. Members of the Cogitate Consortium contributed an incredible amount of time, effort and expertise to design the experiments, collect the data and subsequently curate the data and code for public release. We would like to acknowledge the significant contributions of the following team members: Experimental design: Yoav Roll, Alex Lepauvre, Aya Khalaf, and Katarina Bendtz, Felix Bernoully, Konstantinos Vasileiadis, Aris Semertzidis, Nikos Gregos Data Collection: fMRI Donders (Floris de Lange, David Richter, Sarah Kusch, Monique Smulders) Yale (Hal Blumenfeld, Aya Khalaf, Abdel Sharaf) M-EEG Birmingham (Oscar Ferrante, Ole Jensen, Dorottya Hetenyi, Tara Ghafari) PKU (Ling Liu, Huan Luo, Shujun Yang) iEEG NYU (Sasha Devore, Simon Henin, Stephanie Montenegro, Jay Jeschke, Alia Seedat) Harvard (Gabriel Kreiman, Katarina Bendtz, David Mazumder, Marcelo Armendariz) WI (Urszula Gorska) Data Quality Checks: Rony Hirschhorn (Tel Aviv University) Urszula Gorska (WI) Csaba Kozma (WI) Liad Mudrik (Tel Aviv University) Michael Pitts (Reed College) Data Curation: COGITATE\u2019s Data Release Team: Niccol\u00f2 Bonacchi, Tanya Brown, Kyle Kahraman, Lucia Melloni, Praveen Sripad, Fatemeh Taheriyan, Diptyajit Das Flywheel Cloud Services (XNAT): James Dickson, Angela Farrar, Dan Marcus","title":"Acknowledgements"},{"location":"10_acknowledgements/#acknowledgements","text":"This project and data are made possible through the support of a grant from Templeton World Charity Foundation .
The opinions expressed throughout our documentation and data are those of the authors and do not necessarily reflect the views of Templeton World Charity Foundation. Members of the Cogitate Consortium contributed an incredible amount of time, effort and expertise to design the experiments, collect the data and subsequently curate the data and code for public release. We would like to acknowledge the significant contributions of the following team members: Experimental design: Yoav Roll, Alex Lepauvre, Aya Khalaf, and Katarina Bendtz, Felix Bernoully, Konstantinos Vasileiadis, Aris Semertzidis, Nikos Gregos Data Collection: fMRI Donders (Floris de Lange, David Richter, Sarah Kusch, Monique Smulders) Yale (Hal Blumenfeld, Aya Khalaf, Abdel Sharaf) M-EEG Birmingham (Oscar Ferrante, Ole Jensen, Dorottya Hetenyi, Tara Ghafari) PKU (Ling Liu, Huan Luo, Shujun Yang) iEEG NYU (Sasha Devore, Simon Henin, Stephanie Montenegro, Jay Jeschke, Alia Seedat) Harvard (Gabriel Kreiman, Katarina Bendtz, David Mazumder, Marcelo Armendariz) WI (Urszula Gorska) Data Quality Checks: Rony Hirschhorn (Tel Aviv University) Urszula Gorska (WI) Csaba Kozma (WI) Liad Mudrik (Tel Aviv University) Michael Pitts (Reed College) Data Curation: COGITATE\u2019s Data Release Team: Niccol\u00f2 Bonacchi, Tanya Brown, Kyle Kahraman, Lucia Melloni, Praveen Sripad, Fatemeh Taheriyan, Diptyajit Das Flywheel Cloud Services (XNAT): James Dickson, Angela Farrar, Dan Marcus","title":"Acknowledgements"},{"location":"11_glossary/","text":"Glossary Term Definition Wiring Diagram A diagram that shows how different devices or sensors were connected Standard Operating Procedure (SOP) A document that outlines the steps or procedures to be followed for each modality GDPR (General Data Protection Regulation) A comprehensive data protection and privacy regulation in the European Union (EU) and the European Economic Area (EEA) - It offers instructions on leveraging the data of the users who register to download COGITATE data HIPAA (Health Insurance Portability & Accountability Act) Standards for the protection of sensitive patient health information - HIPAA is used to identify and remove personal identifiers in the curation process BIDS (Brain Imaging Data Structure) A standard format for organizing and describing neuroimaging data - This is one of the formats of the released COGITATE datasets XNAT (Extensible Neuroimaging Archive Toolkit) An open-source software platform designed for managing, sharing, and analyzing neuroimaging and related data in research settings - This is the platform used by the COGITATE project to store the data and facilitate access to it for users","title":"Glossary"},{"location":"11_glossary/#glossary","text":"Term Definition Wiring Diagram A diagram that shows how different devices or sensors were connected Standard Operating Procedure (SOP) A document that outlines the steps or procedures to be followed for each modality GDPR (General Data Protection Regulation) A comprehensive data protection and privacy regulation in the European Union (EU) and the European Economic Area (EEA) - It offers instructions on leveraging the data of the users who register to download COGITATE data HIPAA (Health Insurance Portability & Accountability Act) Standards for the protection of sensitive patient health information - HIPAA is used to identify and remove personal identifiers in the curation process BIDS (Brain Imaging Data Structure) A standard format for organizing and describing neuroimaging data - This is one of the formats of
the released COGITATE datasets XNAT (Extensible Neuroimaging Archive Toolkit) An open-source software platform designed for managing, sharing, and analyzing neuroimaging and related data in research settings - This is the platform used by the COGITATE project to store the data and facilitate access to it for users","title":"Glossary"},{"location":"12_references/","text":"References Tarr, M. J. (1996). The Object Databank. Carnegie Mellon University, Tarr Lab page Databank direct link . Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: the SHINE toolbox. Behavior Research Methods, 42(3), 671-684. https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/index.html https://doi.org/10.1371/journal.pone.0268577 https://osf.io/gm3vd Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial vision, 10(4), 437-442.","title":"References"},{"location":"12_references/#references","text":"Tarr, M. J. (1996). The Object Databank. Carnegie Mellon University, Tarr Lab page Databank direct link . Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: the SHINE toolbox. Behavior Research Methods, 42(3), 671-684. https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/index.html https://doi.org/10.1371/journal.pone.0268577 https://osf.io/gm3vd Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial vision, 10(4), 437-442.","title":"References"},{"location":"13_faq/","text":"FAQ Cogitate M-EEG BIDS FAQs Where are the trans files? The trans.fif files are provided under /derivatives/coreg. They are the results of the coregistration. This information is also included as landmarks in the sidecar json of the NIFTI file. What is under derivatives/additional_metadata? This directory contains some additional metadata collected along with the data. The top level METADATA directory contains some files common to all subjects: They are analysis.json, devices_MEEG.json, protocols.json, tasks_EXP1_MEEG.json, tasks_RestinEO_MEEG.json, tasks_Rnoise_MEEG.json, wiring_MEEG.pdf. Subject level directories contain three files: CXXXX_CRF.json, CXXXX_demographics.json, CXXXX_EXQU.json, which are, respectively, the case report form, subject-specific demographics, and the exit questionnaire. The demographics information is redundant with the information in the participants.tsv files. What does BATCH1 mean? The M-EEG datasets for COGITATE are being released in two batches in order to facilitate the BIOMAG Connectivity Challenge 2024. This means that one half of the data is initially made available to the participants of the challenge followed by the second half a few months later. This is where BATCH1 and BATCH2 come in. I cannot find the EOG channel in the data? The EOG channel for site A can be found in EOG004 and the EOG channels for site B can be found in BIO002 . Typically in our code, we prefer to rename the channels as below. # Fix EOG001 channel name (required for CA only) if 'EOG004' in raw.ch_names: raw.rename_channels({'EOG004': 'EOG001'}) # Fix EOG002 channel name (required for CB only) eog_ch = raw.copy().pick_types(meg=False, eeg=False, eog=True) if len(eog_ch.ch_names) < 2: raw.set_channel_types({'BIO002': 'eog'}) raw.rename_channels({'BIO002': 'EOG002'}) What do the channels MISC1, MISC2 and MISC3 contain?
These channels contain the eye tracker data: MISC1 (X) and MISC2 (Y) contain the gaze coordinates, and MISC3 contains the pupil information. This information is, however, also shared separately in the eye tracking data release.","title":"FAQ"},{"location":"13_faq/#faq","text":"","title":"FAQ"},{"location":"13_faq/#cogitate-m-eeg-bids-faqs","text":"","title":"Cogitate M-EEG BIDS FAQs"},{"location":"13_faq/#where-are-the-trans-files","text":"The trans.fif files are provided under /derivatives/coreg. They are the results of the coregistration. This information is also included as landmarks in the sidecar json of the NIFTI file.","title":"Where are the trans files?"},{"location":"13_faq/#what-is-under-derivativesadditional_metadata","text":"This directory contains some additional metadata collected along with the data. The top level METADATA directory contains some files common to all subjects: They are analysis.json, devices_MEEG.json, protocols.json, tasks_EXP1_MEEG.json, tasks_RestinEO_MEEG.json, tasks_Rnoise_MEEG.json, wiring_MEEG.pdf. Subject level directories contain three files: CXXXX_CRF.json, CXXXX_demographics.json, CXXXX_EXQU.json, which are, respectively, the case report form, subject-specific demographics, and the exit questionnaire. The demographics information is redundant with the information in the participants.tsv files.","title":"What is under derivatives/additional_metadata?"},{"location":"13_faq/#what-does-batch1-mean","text":"The M-EEG datasets for COGITATE are being released in two batches in order to facilitate the BIOMAG Connectivity Challenge 2024. This means that one half of the data is initially made available to the participants of the challenge followed by the second half a few months later. This is where BATCH1 and BATCH2 come in.","title":"What does BATCH1 mean?"},{"location":"13_faq/#i-cannot-find-the-eog-channel-in-the-data","text":"The EOG channel for site A can be found in EOG004 and the EOG channels for site B can be found in BIO002 . Typically in our code, we prefer to rename the channels as below. # Fix EOG001 channel name (required for CA only) if 'EOG004' in raw.ch_names: raw.rename_channels({'EOG004': 'EOG001'}) # Fix EOG002 channel name (required for CB only) eog_ch = raw.copy().pick_types(meg=False, eeg=False, eog=True) if len(eog_ch.ch_names) < 2: raw.set_channel_types({'BIO002': 'eog'}) raw.rename_channels({'BIO002': 'EOG002'})","title":"I cannot find the EOG channel in the data?"},{"location":"13_faq/#what-do-the-channels-misc1-misc2-and-misc3-contain","text":"These channels contain the eye tracker data: MISC1 (X) and MISC2 (Y) contain the gaze coordinates, and MISC3 contains the pupil information. This information is, however, also shared separately in the eye tracking data release.","title":"What do the channels MISC1, MISC2 and MISC3 contain?"}]} \ No newline at end of file +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome to the Cogitate Data Release Documentation Version Author(s) 1.1 Taheriyan, F., Das, D., Kahraman, K., Sripad, P., Brown, T., Oh, H., Melloni, L, Bonacchi, N.
Date Editor(s) 13.05.2024 Taheriyan, F., Das, D., Kahraman, K., Sripad, P., Brown, T., Oh, H., Melloni, L, Bonacchi, N.","title":"Welcome to the Cogitate Data Release Documentation"},{"location":"#welcome-to-the-cogitate-data-release-documentation","text":"Version Author(s) 1.1 Taheriyan, F., Das, D., Kahraman, K., Sripad, P., Brown, T., Oh, H., Melloni, L, Bonacchi, N. Date Editor(s) 13.05.2024 Taheriyan, F., Das, D., Kahraman, K., Sripad, P., Brown, T., Oh, H., Melloni, L, Bonacchi, N.","title":"Welcome to the Cogitate Data Release Documentation"},{"location":"01_intro/","text":"Introduction This document provides guidance and detailed information on the datasets released by COGITATE, how to access them, the directory structure, and a description of the various types of data acquired for each modality. Updates This is the second version ( V1.1 ) of the COGITATE data release document. New updates or any changes to the previous versions will be announced here, and as a versioned document here as well. In V1.1 , the raw/unprocessed and BIDS (Brain Imaging Data Structure) format of iEEG (intracranial electroencephalography) data, collected by the Cogitate Consortium for Experiment 1, are released. Future Releases Here are the items that will be released soon: Experiment 1 BIDS format of the M-EEG data (batch 2) Unprocessed/raw format of all M-EEG data (batch 1 and batch 2) Unprocessed/raw and BIDS format of fMRI data Attention: M-EEG, MEEG, M/EEG, MEG/EEG or MEG might be used interchangeably throughout this document or the name of data folders, but all of them refer to the same data. This also applies to iEEG and ECoG (Electrocorticography).","title":"Introduction"},{"location":"01_intro/#introduction","text":"This document provides guidance and detailed information on the datasets released by COGITATE, how to access them, the directory structure, and a description of the various types of data acquired for each modality.","title":"Introduction"},{"location":"01_intro/#updates","text":"This is the second version ( V1.1 ) of the COGITATE data release document. New updates or any changes to the previous versions will be announced here, and as a versioned document here as well. In V1.1 , the raw/unprocessed and BIDS (Brain Imaging Data Structure) format of iEEG (intracranial electroencephalography) data, collected by the Cogitate Consortium for Experiment 1, are released.","title":"Updates"},{"location":"01_intro/#future-releases","text":"Here are the items that will be released soon:","title":"Future Releases"},{"location":"01_intro/#experiment-1","text":"BIDS format of the M-EEG data (batch 2) Unprocessed/raw format of all M-EEG data (batch 1 and batch 2) Unprocessed/raw and BIDS format of fMRI data Attention: M-EEG, MEEG, M/EEG, MEG/EEG or MEG might be used interchangeably throughout this document or the name of data folders, but all of them refer to the same data. This also applies to iEEG and ECoG (Electrocorticography).","title":"Experiment 1"},{"location":"02_overview/","text":"Overview of COGITATE What are the mechanisms that give rise to consciousness? This question has been the focus of extensive research, leading to the development of several prominent theories, including Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT).
Critically, however, the focus so far has been on testing each theory independently, gathering evidence for/against them separately, leaving open a crucial question: which theory has higher explanatory power when tested against each other directly? COGITATE is a pioneering Open Science adversarial collaboration to bridge this gap and evaluate GNWT and IIT through two studies, named Experiment 1 (EXP1) and Experiment 2 (EXP2). In these experiments, multimodal empirical tests are conducted on human volunteers, combining magneto-electroencephalography (M-EEG), functional magnetic resonance imaging (fMRI) and invasive intracortical recordings (iEEG) along with behavioral and eye tracking measurements. The reason for this approach is to maximize the sensitivity and specificity of the tests of each hypothesis, while accounting for trade-offs between temporal and spatial specificity inherent to the currently available methods in human neuroscience. Goals The aim of the COGITATE project is to accelerate research on consciousness and establish a groundbreaking model for scientific practices in cognitive neuroscience at large, by demonstrating the impact of team-based adversarial research and open data to address some of the major riddles in the field, much like established practices in other fields of inquiry such as physics and genomics. Furthermore, the resulting products of this research include a large and unique multimodal database, high-end analysis tools, and a new paradigm for probing consciousness in naturalistic settings. All experimental procedures, multimodal datasets, and analysis tools developed in this project will be made openly available to the public. These products will propel further discoveries in the field of consciousness, and in cognitive neuroscience in general, which will exceed and outlast the direct outputs of the proposed studies. Experiments The COGITATE consortium performed two experiments: In Experiment 1 (EXP1), two sets of clearly visible task relevant and irrelevant stimuli were shown to the subjects with different durations. The goal was to test the effects of maintenance of a percept in consciousness and task relevance and contradictory predictions regarding the involvement of prefrontal and posterior, category selective cortical areas in consciousness. Specifically, the main questions were: How is the persistence of a stimulus in consciousness reflected in cortical hemodynamic and electrophysiological activity, i.e., are the neural responses phasic or sustained throughout a conscious experience? Do activity patterns in prefrontal areas relate to visual consciousness per se or to its consequences, i.e., task-related processes? In Experiment 2 (EXP2), a novel paradigm was developed to test the key predictions of GNWT and IIT while overcoming a major obstacle in the field: creating more naturalistic conditions of invisibility that do not degrade the physical input. To achieve this goal, an engaging video game was used, with the help of which salient stimuli were presented for relatively long durations in the background. Sometimes the stimuli were not consciously seen due to attentional engagement by the game. This approach allowed us to uniquely study neural activity elicited by seen or unseen stimuli under naturalistic conditions, in which the stimuli could be either task relevant or task irrelevant. Experiment 1: Conscious Perception Objective The primary aim of this experiment was to investigate neural activity in response to stimuli that are consciously perceived.
It was designed to manipulate two key factors: Relevance of the Stimulus to the Task: This factor was categorized into three levels\u2014Task-relevant target, Task-relevant non-target, and Task-irrelevant stimulus. Stimulus Duration: The stimuli were presented for durations of 500 ms, 1000 ms, and 1500 ms. This design framework allowed us to test several key hypotheses, including: Disentangling consciousness-related activations from task-related activations. Identifying brain regions that convey information about the content of consciousness. Examining the persistence of the content of consciousness over time. Design This experiment followed a 3x3x4x2 factorial design, with the following items: Relevance of Stimulus to the Task (3) - Task-relevant target - Task-relevant non-target - Task-irrelevant stimulus Stimulus Duration (3) - 500 ms - 1000 ms - 1500 ms Stimulus Category (4) - Faces - Objects - Letters - False-fonts (meaningless symbols) Stimulus Orientation (2) - Side view (right or left view) - Front view Sample Size The sample sizes were determined based on common practices in the literature, resulting in a total of 122 subjects for fMRI, 102 for M-EEG, and 38 for iEEG. All subjects met specific criteria, including age and health conditions, to ensure data quality. Task Description A sequence of images including faces, objects, letters or meaningless symbols (\u2018false fonts\u2019) with front or side (left or right) view was presented to the subjects. At the beginning of each sequence, the target images were presented and subjects were asked to memorize them and keep them in mind during the sequence. Subjects were instructed to press any button with their index finger when they saw targets (in either front or side views) as quickly and accurately as possible. The duration of each sequence was approximately 2 minutes. The next sequence started when the subjects pressed the space key. Here is an example of the tasks: For a comprehensive summary and more details about the experiments, please refer to the following supplementary resources: PLOS One Methods paper (COGITATE Main Scientific Paper 1 (MSP-1)): Melloni L, Mudrik L, Pitts M, Bendtz K, Ferrante O, et al. (2023) An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLOS ONE 18(2): e0268577. https://doi.org/10.1371/journal.pone.0268577 COGITATE Preregistration, v4 EXP 1 Demo video Experiment 2: Video Game Engagement Currently in preparation. It will be released soon! Task Code and Stimuli repositories The task code and stimuli used for EXP1 and for all modalities are available in the cogitate-experiment-code repository.","title":"Overview of COGITATE"},{"location":"02_overview/#overview-of-cogitate","text":"What are the mechanisms that give rise to consciousness? This question has been the focus of extensive research, leading to the development of several prominent theories, including Global Neuronal Workspace Theory (GNWT) and Integrated Information Theory (IIT). Critically, however, the focus so far has been on testing each theory independently, gathering evidence for/against them separately, leaving open a crucial question: which theory has higher explanatory power when tested against each other directly? COGITATE is a pioneering Open Science adversarial collaboration to bridge this gap and evaluate GNWT and IIT through two studies, named Experiment 1 (EXP1) and Experiment 2 (EXP2).
In these experiments, multimodal empirical tests are conducted on human volunteers, combining magneto-electroencephalography (M-EEG), functional magnetic resonance imaging (fMRI) and invasive intracortical recordings (iEEG) along with behavioral and eye tracking measurements. The reason for this approach is to maximize the sensitivity and specificity of the tests of each hypothesis, while accounting for trade-offs between temporal and spatial specificity inherent to the currently available methods in human neuroscience.","title":"Overview of COGITATE"},{"location":"02_overview/#goals","text":"The aim of the COGITATE project is to accelerate research on consciousness and establish a groundbreaking model for scientific practices in cognitive neuroscience at large, by demonstrating the impact of team-based adversarial research and open data to address some of the major riddles in the field, much like established practices in other fields of inquiry such as physics and genomics. Furthermore, the resulting products of this research include a large and unique multimodal database, high-end analysis tools, and a new paradigm for probing consciousness in naturalistic settings. All experimental procedures, multimodal datasets, and analysis tools developed in this project will be made openly available to the public. These products will propel further discoveries in the field of consciousness, and in cognitive neuroscience in general, which will exceed and outlast the direct outputs of the proposed studies.","title":"Goals"},{"location":"02_overview/#experiments","text":"The COGITATE consortium performed two experiments: In Experiment 1 (EXP1), two sets of clearly visible task relevant and irrelevant stimuli were shown to the subjects with different durations. The goal was to test the effects of maintenance of a percept in consciousness and task relevance and contradictory predictions regarding the involvement of prefrontal and posterior, category selective cortical areas in consciousness. Specifically, the main questions were: How is the persistence of a stimulus in consciousness reflected in cortical hemodynamic and electrophysiological activity, i.e., are the neural responses phasic or sustained throughout a conscious experience? Do activity patterns in prefrontal areas relate to visual consciousness per se or to its consequences, i.e., task-related processes? In Experiment 2 (EXP2), a novel paradigm was developed to test the key predictions of GNWT and IIT while overcoming a major obstacle in the field: creating more naturalistic conditions of invisibility that do not degrade the physical input. To achieve this goal, an engaging video game was used, with the help of which salient stimuli were presented for relatively long durations in the background. Sometimes the stimuli were not consciously seen due to attentional engagement by the game. This approach allowed us to uniquely study neural activity elicited by seen or unseen stimuli under naturalistic conditions, in which the stimuli could be either task relevant or task irrelevant.","title":"Experiments"},{"location":"02_overview/#experiment-1-conscious-perception","text":"","title":"Experiment 1: Conscious Perception"},{"location":"02_overview/#objective","text":"The primary aim of this experiment was to investigate neural activity in response to stimuli that are consciously perceived.
It was designed to manipulate two key factors: Relevance of the Stimulus to the Task: This factor was categorized into three levels\u2014Task-relevant target, Task-relevant non-target, and Task-irrelevant stimulus. Stimulus Duration: The stimuli were presented for durations of 500 ms, 1000 ms, and 1500 ms. This design framework allowed us to test several key hypotheses, including: Disentangling consciousness-related activations from task-related activations. Identifying brain regions that convey information about the content of consciousness. Examining the persistence of the content of consciousness over time.","title":"Objective"},{"location":"02_overview/#design","text":"This experiment followed a 3x3x4x2 factorial design, with the following items: Relevance of Stimulus to the Task (3) - Task-relevant target - Task-relevant non-target - Task-irrelevant stimulus Stimulus Duration (3) - 500 ms - 1000 ms - 1500 ms Stimulus Category (4) - Faces - Objects - Letters - False-fonts (meaningless symbols) Stimulus Orientation (2) - Side view (right or left view) - Front view","title":"Design"},{"location":"02_overview/#sample-size","text":"The sample sizes were determined based on common practices in the literature, resulting in a total of 122 subjects for fMRI, 102 for M-EEG, and 38 for iEEG. All subjects met specific criteria, including age and health conditions, to ensure data quality.","title":"Sample Size"},{"location":"02_overview/#task-description","text":"A sequence of images including faces, objects, letters or meaningless symbols (\u2018false fonts\u2019) with front or side (left or right) view was presented to the subjects. At the beginning of each sequence, the target images were presented and subjects were asked to memorize them and keep them in mind during the sequence. Subjects were instructed to press any button with their index finger when they saw targets (in either front or side views) as quickly and accurately as possible. The duration of each sequence was approximately 2 minutes. The next sequence started when the subjects pressed the space key. Here is an example of the tasks: For a comprehensive summary and more details about the experiments, please refer to the following supplementary resources: PLOS One Methods paper (COGITATE Main Scientific Paper 1 (MSP-1)): Melloni L, Mudrik L, Pitts M, Bendtz K, Ferrante O, et al. (2023) An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLOS ONE 18(2): e0268577. https://doi.org/10.1371/journal.pone.0268577 COGITATE Preregistration, v4 EXP 1 Demo video","title":"Task Description"},{"location":"02_overview/#experiment-2-video-game-engagement","text":"Currently in preparation. It will be released soon!","title":"Experiment 2: Video Game Engagement"},{"location":"02_overview/#task-code-and-stimuli-repositories","text":"The task code and stimuli used for EXP1 and for all modalities are available in the cogitate-experiment-code repository.","title":"Task Code and Stimuli repositories"},{"location":"03_cogdataset/","text":"COGITATE Dataset The COGITATE dataset is a comprehensive collection of multimodal neuroimaging data, encompassing a total of 262 subjects. COGITATE employs three distinct neuroimaging techniques: fMRI, M-EEG, and iEEG/ECoG. The MEG modality comprised 102 healthy subjects above the age of 18, with no known psychiatric or neurological issues.
These participants were sourced from the Centre for Human Brain Health at the University of Birmingham (Birmingham, United Kingdom) and the Center for MRI Research of Peking University (Beijing, China). Similarly, the fMRI modality included 122 healthy volunteers, all of whom were above the age of 18 and predominantly right-handed. These participants had no known history of psychiatric or neurological disorders and were recruited from the Yale Magnetic Resonance Research Center (New Haven, CT, United States) and the Donders Centre for Cognitive Neuroimaging (Nijmegen, Netherlands). In contrast, the iEEG modality involved a more specialized cohort of 38 patients diagnosed with pharmaco-resistant focal epilepsy. These participants ranged in age from 10 to 65 years, had an IQ above 70, and met specific health criteria. They were recruited from multiple medical centers specializing in epilepsy treatment, including the Comprehensive Epilepsy Center at New York University (New York, NY, United States), Brigham and Women\u2019s Hospital, Boston Children\u2019s Hospital (Boston, MA, United States), and the University of Wisconsin School of Medicine and Public Health (Madison, WI, United States). Age histograms across modalities Sex proportions across modalities Handedness proportions across modalities Demography of Subjects You can find the profile of participants for all modalities at subjects_demography . Here is a brief explanation of the information collected from the subjects. Demographic Information for M-EEG The following items are included in the subjects\u2019 demography for the M-EEG modality: Participant_ID (participant identifier), sex (biological sex of participant), age (age of participant at the time of testing), handedness (right, left or ambidextrous), included in MSP (whether the data was used for the experiments or not), phase* (determining in which stage the data is, phase 2/phaseII (optimization) or phase 3/phaseIII (replication)), QC** status (passed/not), if Not (in QC status) QC rejection reason, weight (weight of participant at the time of study in pounds), height (height of participant at the time of study in inches), primary and secondary language, race (ethnicity of the participant), education, compensation (whether the subject was paid or not), colorblind (whether the subject can distinguish colors and perceive them correctly), visual correction (none or using any glasses or contact lenses), eye dominance (which eye is dominant), eye chart results (the outcome of a visual acuity test performed using the eye chart) and dioptry (visual acuity of the participant in diopters). * Phase: The COGITATE project has three phases. In phase 1 , all data were acquired by theory-neutral teams. To ensure replicability of the results, the entire dataset was split into two halves, each with an equal mixture of data from each of the labs for each modality. In phase 2 , after evaluating data quality, the first half of the data were used for developing analysis tools (optimization of methods). The purpose of phase 2 was to define the best analysis practices and agree upon them, in consultation with expert advisors. In phase 3 , the replication phase, the second half of the data were analyzed using the agreed-upon procedures and protocols, thereby allowing an in-house replication of the results obtained in phase 2.
** QC (quality control): A number of items were checked for all the data of each modality; these are elaborated in the Quality Check and Exclusion Criteria sections. Demographic Information for fMRI All of the items are similar to the M-EEG modality. Demographic Information for iEEG In addition to the properties mentioned for the M-EEG modality, the following parameters were also provided for this modality: Electrode scheme (the scheme used for implanting the electrodes, Stereo, Subdural grid & Strips), number of implanted electrodes, implant hemisphere (brain hemisphere where the electrodes were implanted, right, left, both or bilateral), IQ (score and name of the test used for assessment (FSIQ, WISC, VCI, POI, WMI, PSI, AMI, VMI)), WADA (intracarotid sodium amobarbital, a test that determines which side of the subject\u2019s brain controls language and memory functions), seizure type (classification of seizure type), age of onset (age at which the first symptoms of seizure appeared), auditory normal hearing (indicator of whether the participant had normal hearing capabilities, yes or no), epilepsy seizure classification (categorization of epilepsy as per standard seizure classification), epilepsy seizure aura (description of any sensory or perceptual symptoms before a seizure occurred), epilepsy seizure semiology (signs and symptoms exhibited during epileptic seizures), epilepsy seizure frequency (frequency of seizures experienced by participant), epilepsy post ictal semiology (symptoms and signs after an epileptic seizure), epilepsy trigger (identified factors or circumstances that increased the likelihood of experiencing a seizure), epilepsy duration uncontrolled (the duration that seizures had not been successfully managed or medically controlled), epilepsy seizure onset zone (brain region identified as the initial site of seizure activity), epilepsy resection (details of any surgical resection performed for seizure control), epilepsy language lateralization (determination of the dominant hemisphere for language function), epilepsy past surgical history (record of any previous surgeries related to the treatment of epilepsy), epilepsy past medical history (medical history relevant to epilepsy diagnosis and treatment), epilepsy family history (presence of seizure or epilepsy disorders in family members), other neurological disorders (any other diagnosed neurological disorders besides epilepsy), epilepsy MRI findings (summary of MRI findings relevant to epilepsy diagnosis), epilepsy pathology findings (pathological findings from tissue analysis post-surgery or biopsy). Quality Check Data from all modalities were checked at three levels. The first level checks tested whether the datasets contained all expected files, kept the naming conventions, and had all personal information removed. The second level checks tested subjects\u2019 performance with respect to behavior. For Experiment 1 , subjects were excluded if their hit rate was lower than 80% or their false alarm (FA) rate was higher than 20% for M-EEG and fMRI, and for iEEG, a more relaxed criterion of 70% Hits and 30% FAs was used. Two M-EEG subjects were excluded due to low hit rates and one iEEG patient was excluded due to high FAs. The third level checks assessed the quality of the neural data.
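As an illustration of the behavioral check above, here is a minimal, hypothetical sketch of how the hit-rate/false-alarm screen could be computed from a subject's behavioral CSV file. The column names ("trial_type", "responded") are assumptions for illustration, not the actual schema of the released files.

```python
# Hypothetical sketch of the behavioral QC screen described above.
# Column names ("trial_type", "responded") are assumptions, not the
# actual schema of the released behavioral CSV files.
import pandas as pd

def passes_behavioral_qc(csv_path: str, modality: str) -> bool:
    df = pd.read_csv(csv_path)
    is_target = df["trial_type"] == "target"
    hit_rate = df.loc[is_target, "responded"].mean()    # button press on a target
    fa_rate = df.loc[~is_target, "responded"].mean()    # button press on a non-target
    if modality == "iEEG":                              # relaxed criteria for patients
        return hit_rate >= 0.70 and fa_rate <= 0.30
    return hit_rate >= 0.80 and fa_rate <= 0.20         # M-EEG and fMRI
```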
Exclusion Criteria The generic exclusion criteria used across Experiment 1 and Experiment 2 included: (a) insufficient number of trials in each of the experimental conditions (<30 for M-EEG or <20 for fMRI), due to excessive muscular artifacts, movement, noisy recording, or subjects deciding to stop the experiments. If a given analysis showed that a good enough signal could be obtained with fewer trials, these numbers were amended; and (b) low performance in the attention tasks. In Experiment 1 , this translates into: <80% Hits, >20% FAs for fMRI and M-EEG subjects; <70% Hits, >30% FAs for iEEG patients. In addition, data was excluded from analysis if it failed any of the predefined data quality checks. Description of COGITATE Data Although our data collection had a specific purpose, the data we gathered holds potential value for a range of diverse inquiries. Consequently, the COGITATE consortium has chosen to openly share all raw data collected (including the data that did not pass the quality criteria), to facilitate its utilization for various research endeavors and promote data reusability. We have made available two primary formats for the data acquired during the experimental phase of the COGITATE project, specifically Experiment 1 : Unprocessed/Raw Data BIDS Format 1. Unprocessed/Raw Data The unprocessed data format closely resembles the originally acquired data, having undergone minimal processing to ensure compliance with GDPR (General Data Protection Regulation)/ HIPAA (Health Insurance Portability & Accountability Act) anonymity standards. 2. BIDS Format BIDS format, widely adopted in cognitive neuroscience, enhances data reusability. To facilitate others in leveraging our data, we have released it in BIDS format. File Type Glossary Here are the various file formats used for each modality of the COGITATE dataset along with a short description of them. Eye Tracking & Behavioral Data Unprocessed/Raw release format - Filetype: ASC/CSV BIDS Format - Filetype: ASC/CSV The two eye trackers used within COGITATE are: 1. EyeLink eye tracker 2. Tobii eye tracker 1) EyeLink eye tracker : Most of the sites used this eye tracker, which produces data in the EDF format (EyeLink Data Format). This data was immediately converted to ASCII text files using the converter provided by Eyelink. These are the ASC files that we used in our data. 2) Tobii eye tracker : The other eye tracker was the Tobii eye tracker used by New York University Langone for ECoG data. This eye tracker produces data in the form of CSV files. The files generated by eye tracking systems contain information about eye movement and gaze behavior; they typically store a time-stamped sequence of gaze data points and include information such as: Timestamps: The exact time at which each gaze data point was recorded. Gaze Coordinates: The x and y coordinates on the screen where the person's gaze is directed. Pupil Diameter: The size of the person's pupil, which can provide insights into changes in visual processing or cognitive load. Fixations: Periods of stable gaze where the person is looking at a specific point without significant movement. Saccades: Rapid eye movements between fixations, indicating shifts in attention. Blinks: Instances when the person's eyes are closed, which can be important for data cleaning and analysis.
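For readers who want to work with the raw ASC files directly, the following is a rough sketch of extracting gaze samples. It assumes monocular samples of the form "timestamp x y pupil ..."; real ASC files (especially the binocular MEG recordings) carry extra columns and event lines (e.g. EFIX, ESACC, EBLINK), so treat this only as a starting point.

```python
# Sketch: extract gaze samples from an EyeLink ASC file.
# Assumes monocular samples "<timestamp> <x> <y> <pupil> ...";
# binocular recordings have additional columns per eye.
def read_asc_samples(path):
    samples = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields or not fields[0].isdigit():
                continue  # skip event/message lines; samples start with a timestamp
            try:
                t = int(fields[0])
                x, y, pupil = (float(v) for v in fields[1:4])
            except ValueError:
                continue  # missing data during blinks is often marked with "."
            samples.append((t, x, y, pupil))
    return samples
```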
Behavioral data is available in CSV format and it provides the following information: Blocks Events Trials Stimulus and jitter duration Subject's responses M-EEG Data Unprocessed/Raw release format Filetype: FIF BIDS Format Filetype: FIF File Format for the Input and Output of MEG and EEG data FIF files contain various types of information related to neuroimaging data, including: Raw sensor data: MEG and EEG measurements recorded from sensors placed on the scalp or near the head. Event information: Time-stamped triggers or markers indicating the timing of events, such as stimulus presentations or subject responses. Sensor locations and orientations: Information about the physical positions and orientations of sensors used in the measurements. Head geometry: Information about the shape and structure of the subject's head, which is crucial for accurate source localization. Covariance matrices: Statistical information about the relationships between sensor measurements at different time points or frequencies. Anatomical MRI data: High-resolution structural images of the subject's brain, used for source localization and spatial alignment. iEEG/ECoG Data Unprocessed/Raw release format Filetype: EDF BIDS Format Filetype: EDF European Data Format files used for storing and exchanging time-series biological and physiological data EDF files are designed to accommodate data from multiple channels, allowing researchers to store and manage data collected simultaneously from different sensors or electrodes. The format supports both raw signal data and associated metadata, including information about sampling rates, units of measurement, patient demographics, and recording conditions. MR/CT Data Unprocessed/Raw release format Filetype: DICOM/NIFTI BIDS Format Filetype: DICOM/NIFTI DICOM is a standard format utilized for storing CT (Computed Tomography) scans and MRI (Magnetic Resonance Imaging) data. These files encompass not only the image data but also essential metadata, including imaging parameters. NIFTI serves as another format employed for a subset of subjects where our standard procedure encountered challenges. With the exception of the MR and CT scans for 12 subjects within the iEEG data, all other datasets of similar nature are stored in DICOM format. Further details regarding these 12 problematic datasets are available in this section . NIFTI files encapsulate image data alongside metadata concerning spatial orientation, voxel dimensions, and additional imaging parameters. Data Acquisition The Cogitate dataset encompasses three distinct neuroimaging modalities, along with synchronized eye-tracking and behavioral data linked to each of these modalities. Here we detail the acquisition protocol for each modality in the corresponding data releases: M-EEG and iEEG. Stimuli Stimuli belonged to four categories that naturally fell into two groups that were clearly distinct from each other: pictures (20 faces and 20 objects) and symbols (20 letters and 20 false-fonts). Face stimuli were created using the FaceGen Modeler 3.1 program and object stimuli were taken from the Object Databank (Tarr, 1996). Faces and objects were grey-scaled (RGB: 125, 125, 125), and manipulated to have similar size and equal luminance using the SHINE toolbox (Willenbockel et al., 2010). Equal proportions of male and female faces were presented. They all had hair and belonged to different ethnicities (e.g., Caucasian, Asian, African, American) to facilitate face individuation.
The orientation of the stimuli was manipulated, such that half of the stimuli from each category had a side view and the other half a front view. All letter stimuli and false fonts were generated with MAXON CINEMA 4D Studio (RC-R20) 20.059 on macOS 10.14, appearing in gray (RGB: 125, 125, 125). Three views were rendered for each font set (real font, false/pseudo font) at 0\u00b0, 30\u00b0 and -30\u00b0 horizontal viewing angle with the following settings: Extrusion depth 9.79% of character height, camera distance 5.65 times character height and 18\u00b0 above the center of the letter (High Angle), with a simulated focal length of 135 mm (35 mm equiv.). All stimuli were presented on a rectangular aperture at an average visual angle of 6\u00b0 by 6\u00b0. Procedure Stimuli were presented sequentially, all supra-threshold, with half being task-relevant and the other half task-irrelevant. Only one stimulus was shown on the screen at any given time. To define task relevance, subjects were instructed to detect two targets from different categories, regardless of their orientation. This online reporting enabled an explicit assessment of subjects\u2019 performance, engaging report-related areas for later analysis. Each block began with notification of the two target stimuli, either pictorial (faces and objects) or symbolic (letters and false fonts), creating a clear distinction between relevant and irrelevant stimuli. At the start of each block, specific target stimuli were revealed with instructions such as \u201cdetect face A and object B\u201d or \"detect letter C and false-font D.\" Targets did not repeat across blocks. Each run included two blocks of the Face/Object task and two blocks of the Letter/False-font task, with the order counterbalanced across runs. Subjects were instructed to maintain central fixation throughout each trial. Gaze was monitored online through an eye tracker, with repeated calibrations ensuring good quality data. Each block comprised stimuli from all four categories, with each stimulus displayed for 500, 1000, or 1500 ms, followed by a blank interval, ensuring a consistent trial duration of 2000 ms. Within each block, three trial types were presented: i) Task Relevant Targets, consisting of the specific stimuli participants were tasked with detecting; ii) Task Relevant Non-Targets, encompassing stimuli from relevant categories that were not designated targets; and iii) Task Irrelevant Stimuli, comprising stimuli from the remaining categories. M-EEG Data Acquisition M-EEG recordings were acquired at the Centre for Human Brain Health (CHBH) of the University of Birmingham in the United Kingdom, and at the Center for MRI Research of Peking University (PKU) in China. Hardware Both centers had a 306-channel, whole-head TRIUX MEG system from MEGIN (York Instruments; formerly Elekta). The MEG system comprised 204 planar gradiometers and 102 magnetometers in a helmet-shaped array. Simultaneous EEG was recorded using an integrated EEG system and a 64-channel electrode cap. The MEG system was equipped with a zero boil-off Helium recycling system and the noise-resilient ARMOR sensors and placed in a shielded room (2 layers of mu-metal and 1 layer of aluminum). To reduce environmental noise, the integrated active shielding system was used at PKU. In order to cover the brain more homogeneously, the MEG gantry was positioned at 68 degrees.
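As a quick orientation to this sensor layout, the sketch below shows how the magnetometers, gradiometers, and EEG channels of a released FIF file could be separated with MNE-Python. The file name is taken from the directory listing earlier in this document; the exact path will differ per data bundle.

```python
# Sketch: inspecting the TRIUX sensor layout of a released FIF file
# with MNE-Python. The file path below is illustrative.
import mne

raw = mne.io.read_raw_fif("sub-CA103_ses-1_task-rest_meg.fif", preload=False)
mags = mne.pick_types(raw.info, meg="mag")    # expect 102 magnetometers
grads = mne.pick_types(raw.info, meg="grad")  # expect 204 planar gradiometers
eeg = mne.pick_types(raw.info, meg=False, eeg=True)  # integrated 64-channel EEG
print(f"{len(mags)} magnetometers, {len(grads)} gradiometers, {len(eeg)} EEG channels")
```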
Location of Electrodes and ECG/EOG Measurements The location of the fiducials, the positions of the 64 EEG electrodes and the participant\u2019s head shape were recorded using a 3-D digitizer system (Polhemus Isotrak). A set of bipolar electrodes was placed on the subject\u2019s chest (upper left and upper right chest position) to record the cardiac signal (ECG). Two sets of bipolar electrodes were placed around the eyes (two located at the outer canthi of the right and left eyes and two above and below the center of the right eye) to record eye movements and blinks (EOG). Ground and reference electrodes were placed on the back of the neck and on the right cheek, respectively. The impedance of all electrodes was checked to be below 10 kOhm. Standard Locations of EOG and ECG electrodes Head Position Indicator (HPI) Coils The participant\u2019s head position inside the MEG system was measured at the beginning and at the end of each run using four head position indicator (HPI) coils placed on the EEG cap. Specifically, the HPI coils were placed next to the left and right mastoids and on the left and right forehead. Their location relative to anatomical landmarks was digitized with a Polhemus Isotrak System. During the measurement, high-frequency (>200 Hz) signals were produced by these coils, and the localization of these signals was used to estimate the head position in the sensor space. To avoid the potential artifacts produced by the non-linear interaction between the signals generated by these coils, head position measurement was performed only during resting periods (as opposed to continuously). Standard locations of HPI coils. Coil Numbers: 1. Blue, 2. White, 3. Red, 4. Black, 5. Yellow Anatomical MRI Data Acquisition For each subject, a high-resolution T1-weighted MRI volume (3T Siemens MRI Prisma scanner) was acquired. At CHBH, data were acquired with a 32-channel coil (resolution: 1 x 1 x 1 mm; TR/TE = 2000/2.03 ms; TI = 880 ms; flip angle: 8\u00b0; 208 sagittal slices; field of view (FOV): 256 x 256 matrix) for source localization with individual realistic head modeling. At PKU, a 64-channel coil was used (resolution: 0.5 x 0.5 x 1 mm; TR/TE = 2530/2.98 ms; TI = 1100 ms; flip angle: 7\u00b0; 192 sagittal slices; FOV: 448 \u00d7 512 matrix). To avoid possible interference of body magnetization on the MEG recording, all MRI scans were acquired at least one week before the MEG session, or at any time afterwards. The FreeSurfer standard template (fsaverage) was used for participants lacking an anatomical scan (N=5). Behavioral Data Acquisition The task was executed using Matlab (PKU: R2018b; UB: R2019b) with Psychtoolbox v.3 (Pelli, 1997) on a custom PC at UB and a Dell XPS desktop PC at PKU. Visual stimuli were presented on a screen placed in front of the subjects with a PROPixx DLP LED projector (VPixx Technologies Inc.) at a resolution of 1920 x 1080 pixels and a refresh rate of 120 Hz. The distance between the subject\u2019s eyes and the screen differed between sites (CHBH: 119 cm, PKU: 85 cm) to achieve the same FOV of 36.6 x 21.2 degrees. Subjects responded with an 8-button response box (Millikey LH-8). Eye Tracking Data Acquisition Eye movements were monitored and recorded from both eyes (binocular eye-tracking) using the MEG-compatible EyeLink 1000 Plus eye-tracker (SR Research Ltd., Ottawa, Canada). Nine-point calibration was performed at the beginning of the experiment, and recalibration was performed at the beginning of each block if necessary.
Pupil size and corneal reflection data were collected at a sampling rate of 1000 Hz. Behavioral Data Code Scheme Each stimulus is coded as a 4-digit number. 1st digit = stimulus type (1 = face; 2 = object; 3 = letter; 4 = false font) 2nd digit = stimulus orientation (1 = center; 2 = left; 3 = right) 3rd & 4th digits = stimulus id (1...20; for faces 1...10 is male, 11...20 is female) e.g., \"1219\" = 1 is face, 2 is left orientation and 19 is female stimulus #19 Eye Tracker and MEG Code Scheme The channel names that contain the eye tracker data in the FIF file are as follows: MISC1 (X), MISC2 (Y), and MISC3 (pupil) Defining some terms Trial : Stimulus presentation followed by a fixation (the two add up to 2 sec), followed by a jitter of 200 to 2000 ms. Mini block : presentation of 34 to 38 stimuli, at the beginning of which the target stimuli were presented. Block : composed of 4 mini blocks. At the end of each block, there was a break. Break : Pause between 2 blocks Successive trigger scheme The triggers were sent successively. The first trigger represented the stimulus type, followed by orientation, stimulus duration, and task relevance, all interspaced by 50 ms. Additionally, a trigger was sent upon key press. 1st Trigger (on Stimulus Onset): Stimulus Type 1 to 20: faces - 1 to 10 males, - 11 to 20 females 21 to 40: objects 41 to 60: letters 61 to 80: false fonts 2nd Trigger (2 Frames after Stimulus Onset): Stimulus Orientation 101: Center 102: Left 103: Right 3rd Trigger (4 Frames after Stimulus Onset): Stimulus Duration 151: 500 ms 152: 1000 ms 153: 1500 ms 4th Trigger (6 Frames after Stimulus Onset): Stimulus Task Relevance 201: Task relevant target 202: Task relevant non-target 203: Task irrelevant 5th Trigger (8 Frames after Stimulus Onset): Trial ID Triggers 111-148: Trial number Response Trigger 255: Following button press. Stimulus Presentation End 96: Offset of stimulus presentation (onset of blank) 97: Offset of blank (onset of jitter period) Note that both of these are fixation periods; they are simply divided into blank and jitter. General Triggers to Mark Experiment Progression 86: Onset of experiment 81: Onset of recording 83: Offset of recording Miniblock ID Triggers 161-200: Miniblock ID trigger Zeroes 0: Zeros were sent between the successive triggers to reset the LPT, see below. These were also sent to the eye tracker, but they did not mean anything and can safely be ignored. How The LPT Triggers Were Sent Triggers were sent via the LPT port of the computer using the sendTrig function. This function sets the port to a specific state (whatever trigger we want to send) and logs the trigger afterwards, noting whether it was sent and when the send command was executed. After each trigger was sent, the port was reset to 0 one frame later. At the beginning of the experiment, a few triggers were sent to mark experiment onset and onset of recording. Then, a mini block was initiated. The participant was presented with the target screen and required to press the spacebar to proceed. When the participant pressed the space button, the miniblock ID was sent. The fixation appeared only once the miniblock trigger had been sent. This means that there was a small delay between key press and fixation onset. Following the first fixation, a jitter started, which was also logged. Then, the first stimulus was displayed. Upon the presentation of the stimulus, the successive triggers were initiated.
The first trigger occurred directly after the onset of the stimulus, indicating the stimulus ID (1-80). Then, after 2 frames, the orientation trigger (101-103) was sent, followed by the duration trigger (151 to 153) at 4 frames, the task demand trigger (201-203) at 6 frames, and finally, the trial ID trigger (111 to 148) at 8 frames. Empty Room Recording Prior to each experiment, MEG signals from the empty room were recorded for 3 minutes. Resting-State (rM-EEG) The resting-state data for each participant was also recorded, for 5 minutes, during which the subjects were asked to keep their eyes open and fixated on a point presented at the center of the screen. M-EEG signals were sampled at a rate of 1 kHz and band-pass filtered between 0.01 and 330 Hz prior to sampling. Task (tM-EEG) Following the empty room and rM-EEG recordings, subjects were asked to complete the task defined in the Procedure section. tM-EEG consisted of 10 runs, with 4 blocks each. Each block contained 34-38 trials: 32 non-targets (8 of each category) and 2-6 targets (number chosen randomly). Rest breaks between runs and blocks were included. Random jitter was added at the end of each trial (mean inter-trial interval of 0.4 s, jittered 0.2-2.0 s, truncated exponential distribution) to avoid periodic presentation of the stimuli. Task Runs Blocks Trials Total trials Experiment 1 10 4 34-38 per block 1440 Full Structure of Session The complete standard procedure of an M-EEG session is available in MEG Standard Operating Procedure . Inclusion Criteria The items below were assessed for the subjects before the data was acquired: Age range: 18 to 35 (since subjects over the age of 35 might have a hard time maintaining central fixation) Handedness: right Hearing problems: no Hearing aid: no Vision problems: no, or corrected-to-normal with soft lenses No MRI in the last week MRI compatible: no metal, medical implants, etc. No claustrophobia. Note: dental implants are allowed (particularly non-magnetic materials) unless they generate large effects on the MEG signals; this was checked prior to MEG recording. No known history of psychiatric or neurological disorders, e.g., Not have been formally diagnosed with attention deficit (hyperactivity) disorder (AD(H)D). Not have been formally diagnosed with autism spectrum disorder (ASD) Not suffer from epilepsy Quality Check and Exclusion Criteria For M-EEG, the first stage of the third-level checks focused on system-related and external noise generators. This was tested using the signal spectra in the empty room recording, the resting state session, and the experiment itself for all sensors. Any sensor and/or specific frequency revealing extensive noise upon visual inspection was flagged to document potential problems. Ultimately, this did not lead to any exclusions. Next, all experimental data blocks were visually inspected for abnormalities in spectra (peaks not explainable by physiology) and in ICA components, and checked for extremely noisy sensors (based on the score of differences between the original and Maxwell-filtered data > 7) and flat sensors. The latter step was performed in collaboration between the data monitoring team and members of the centers where data was acquired, to check whether any potential changes in preprocessing for particular subjects were needed. Finally, we tested whether all experimental cells (i.e., task-relevant non-targets and task-irrelevant stimuli for each of the four categories) had enough trials (N=30).
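The successive trigger scheme above can be decoded mechanically. Below is a minimal sketch; the stim channel name "STI101" is an assumption (a common name on TRIUX systems, not stated in this document) and should be verified against the released FIF files, as should the hypothetical file name.

```python
# Sketch: decoding the successive trigger scheme documented above.
# "STI101" is an assumed stim channel name; verify against the release.
import mne

def describe_trigger(code: int) -> str:
    """Translate one trigger value according to the scheme above."""
    if 1 <= code <= 80:
        category = ("face", "object", "letter", "false font")[(code - 1) // 20]
        return f"stimulus onset ({category}, stimulus id {code})"
    if 101 <= code <= 103:
        return {101: "center", 102: "left", 103: "right"}[code] + " orientation"
    if 151 <= code <= 153:
        return f"duration {(code - 150) * 500} ms"
    if 201 <= code <= 203:
        return {201: "task relevant target", 202: "task relevant non-target",
                203: "task irrelevant"}[code]
    if 111 <= code <= 148:
        return f"trial {code - 110}"
    if 161 <= code <= 200:
        return f"miniblock id {code - 160}"
    return {255: "button press", 96: "stimulus offset (blank onset)",
            97: "blank offset (jitter onset)", 86: "experiment onset",
            81: "recording onset", 83: "recording offset"}.get(code, "unknown")

raw = mne.io.read_raw_fif("sub-XX_task-dur_meg.fif")  # hypothetical path
events = mne.find_events(raw, stim_channel="STI101", shortest_event=1)
for sample, _, code in events[:12]:
    print(sample, code, describe_trigger(int(code)))
```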
iEEG Data Acquisition iEEG recordings were obtained from patients with pharmacologically resistant epilepsy undergoing invasive electrophysiological monitoring at the Comprehensive Epilepsy Center at New York University (NYU) Langone Health Center, Brigham and Women\u2019s Hospital, Children\u2019s Hospital Boston (Harvard Medical School), and the University of Wisconsin School of Medicine and Public Health (WU). Hardware Brain activity was recorded with a combination of intracranial subdural platinum-iridium electrodes embedded in SILASTIC sheets (2.3 mm diameter contacts, Ad-Tech Medical Instrument and PMT Corporation) and/or depth stereo-electroencephalographic platinum-iridium electrodes (PMT Corporation; 0.8-mm diameter, 2.0-mm length cylinders; separated from adjacent contacts by 1.5 to 2.43 mm), or Behnke-Fried depth stereo-electroencephalographic platinum-iridium electrodes (Ad-Tech Medical, BF08R-SP21X-0C2, 1.28 mm in diameter, 1.57 mm in length, 3 to 5.5 mm spacing). The decision to implant, the electrode targeting, and the duration of invasive monitoring were determined solely on clinical grounds and without reference to this or any other study. Electrodes were arranged as grid arrays (either 8 \u00d7 8 with 10 mm center-to-center spacing, 8 x 16 contacts with 3 mm spacing, or hybrid macro/micro 8 x 8 contacts with 10 mm spacing and 64 integrated microcontacts with 5 mm spacing), linear strips (1 \u00d7 8/12 contacts), depth electrodes (1 \u00d7 8/12 contacts), or a combination thereof. Subdural electrodes covered extensive portions of lateral and medial frontal, parietal, occipital, and temporal cortex of the left and/or right hemisphere. Recordings from grid, strip and depth electrode arrays were done using a Natus Quantum amplifier (Pleasanton, CA) or a Neuralynx Atlas amplifier (Bozeman, MT). A total of 4057 electrodes (892 grids, 346 strips, 2819 depths) were implanted across 32 patients with drug-resistant focal epilepsy undergoing clinically motivated invasive monitoring. 3512 electrodes (780 grids, 307 strips, 2425 depths) that were unaffected by epileptic activity, artifacts, or electrical noise were used in subsequent analyses. To determine the electrode localization for each patient, a postoperative CT (computed tomography) scan and a pre-operative T1 MRI were acquired and co-registered. Recordings were obtained continuously during the patients\u2019 stay in the hospital. All data was stored with stimulus and timing markers permitting offline synchronization. Anatomical MRI Data Acquisition T1-weighted MR data were acquired from the participants before they underwent surgery and electrode implantation. At NYU, imaging was performed using the Siemens Biograph mMR scanner. At Harvard, the imaging sequence utilized was MPRAGE (magnetization-prepared rapid gradient-echo), with a Siemens Skyra 3T scanner. At WU, imaging was conducted using the GE MEDICAL SYSTEMS SIGNA Artist scanner. MR scans were acquired for the spatial resolution they offer for brain tissue visualization. CT Data Acquisition Following surgery, post-operative CT scans were obtained from the subjects to assist in localizing the electrodes on specific brain tissue. At NYU, scans were performed using a Siemens SOMATOM Force scanner. At Harvard, imaging was conducted using the Medtronic O-arm MVS O2. At WU, scans were acquired utilizing the GE MEDICAL SYSTEMS Optima CT660 scanner.
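The released anatomical volumes can be opened with standard neuroimaging libraries. The sketch below is one way to do so, assuming nibabel and pydicom are installed (neither is prescribed by this release) and using hypothetical file names.

```python
# Sketch: opening the released anatomical volumes. File names are hypothetical.
import nibabel as nib   # NIFTI (used for the 12 subjects noted in the deviations section)
import pydicom          # DICOM (used for all other MR/CT datasets)

img = nib.load("sub-CF103_T1w.nii.gz")      # hypothetical NIFTI file
print(img.shape, img.header.get_zooms())    # matrix size and voxel dimensions
print(img.affine)                           # voxel-to-world transform

ds = pydicom.dcmread("ct_slice_001.dcm")    # hypothetical DICOM slice
print(ds.Modality, ds.Rows, ds.Columns)     # basic imaging metadata
```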
Please note: MR and CT data were collected for the subjects at Brigham and Women\u2019s Hospital and Children\u2019s Hospital Boston. However, due to data protection policies, they are not included in the COGITATE Data Release. Behavioral Data Acquisition The task was implemented using Matlab (Harvard: R2020b; NYU: R2020a; WU: R2021a), Psychtoolbox v.3 (Pelli, 1997), and run on a Dell Precision 5540 laptop with a 15.6\" Ultrasharp screen (screen size 345 x 195 mm2; resolution 1920 x 1080) at NYU and Harvard, and on a Dell D29M PC with an Acer V196WL 19\" LED LCD monitor (screen size 406.4 x 254 mm2; resolution 1440 x 900) at WU. The nominal distance between the subject\u2019s eyes and the screen was 80 cm; the actual distance was measured for each subject before the start of recording to ensure that the stimulus subtended 6 x 6 degrees of visual angle. Participants responded using an 8-button response box (Millikey LH-8; response hand(s) varied based on the setting in the patient\u2019s room). Eye Tracking Data Acquisition At Harvard and Wisconsin, an EyeLink 1000 Plus camera was used to collect eye-tracking data. A thirteen-point calibration was performed at the beginning of the experiment and repeated in-between blocks, if necessary, to meet precision requirements. At NYU, eye-tracking data was collected throughout the duration of the experiment using a Tobii-4C eye-tracker, and a nine-point calibration was performed several times during the experiment. Pupil size and corneal reflection data were collected at a sampling rate of 500 Hz at Harvard and Wisconsin and at a sampling rate of 90 Hz at NYU. The EyeLink system recorded monocular data, while the Tobii system recorded binocular data. In the former case, the recorded eye was determined by ocular dominance. The eye-tracking recording did not influence the experiment. Behavioral Data Code Scheme The behavioral code scheme is similar to that of the M-EEG modality, which is explained in this section . Eye Tracker Data Code The eye tracker code scheme for the iEEG modality follows a similar structure to that described for M-EEG data. You can find detailed explanations here . iEEG Code Scheme Photodiode Trigger Scheme For ECoG patients, the type of port utilized by the M-EEG team (LPT) was incompatible with our recording system. Consequently, a photodiode was employed. A photodiode is an electronic device that records changes in luminance and converts them into voltage. An example of a photodiode In the experimental code, it was ensured that whenever a new event occurred on the screen (such as stimulus onset or stimulus offset), a white flash appeared in the bottom right corner. The photodiode device was positioned atop the flashing square and connected to the amplifier recording the iEEG channel signals. This additional channel facilitated the identification of event onsets in our task. This type of recording only allows binary signals (the photodiode is either on or off); however, specific events were encoded with varying numbers of subsequent pulses. Stimulus Presentation Onset The square was flashed only once at the onset of each new stimulus. Stimulus Presentation Offset The square was flashed only once at the offset of each stimulus. Start of the Inter-Trial Interval The square was flashed only once at the beginning of the inter-trial interval.
The inter-trial interval was initiated 2 seconds after stimulus onset and persisted for a random duration (following a truncated exponential distribution between 0.2 and 2 seconds, with a mean of 0.4 seconds). Block Start The start of an experimental block was marked by sending 4 consecutive pulses. Block End The end of an experimental block was marked by sending 2 consecutive pulses. Experiment Start and End The beginning and end of the experiment were marked by sending 3 consecutive pulses. Schematic representation of the photodiode channel Log File Alignment The photodiode channel solely indicates when a specific event occurred in the iEEG signals, lacking any information about the nature of the event (unlike an LPT trigger in MEG). To identify specific events in the signal, the timing information from the log file was combined with that from the photodiode. The log file contains a description of each presented event along with a corresponding time stamp from the experimental computer. The photodiode channel recorded time stamps for each event, indicating when it occurred according to the acquisition computer clock. The goal was to align the log file and the photodiode so as to associate each event in the photodiode signal with the corresponding event description in the log file. This step was crucial since misalignment could lead to incorrect event descriptions in the iEEG signal, compromising the entire analysis. The procedure relied on the fact that both the log file and the photodiode had timestamps. These timestamps were recorded on different clocks. Unfortunately, computer clocks tend to drift away from one another, and these drifts accumulate to become quite significant over extended periods of time (the clocks could be several seconds apart after 1 hour). Therefore, the timestamps of the photodiode and the log file could not be used interchangeably. Over short periods of time, however, these drifts were negligible. This meant that the interval between two successive timestamps in the log file should be quite consistent with the interval between the two corresponding events in the photodiode. This provided us with the most thorough check possible: if the events in the log file and in the photodiode were aligned, then the intervals between successive events in the two sources should differ only minimally. Here is a step-by-step description of the alignment procedure. Extract the Photodiode Timestamps The timestamps from the photodiode triggers were extracted as the first step. As illustrated in the figure Schematic representation of the photodiode channel , a square pulse was generated for each event during the recording. The onset of each of these pulses was sought. To achieve this, a threshold was initially established, below which the photodiode was considered to be in the off state and above which it was considered to be on (based on visual inspection of the data, which was facilitated by the clean nature of photodiode signals). Subsequently, the signal was binarized using this threshold (signal_bin = signal > threshold), resulting in a signal consisting only of ones and zeros. Next, the discrete difference of the binary signal was computed (d(i) = y(i + 1) - y(i)). This operation produced a \u201c1\u201d when the photodiode transitioned from off to on (onset) and a \u201c-1\u201d when it transitioned from on to off (offset). Since only the onsets were of interest, the timestamps of the ones were extracted, representing the timestamps of the photodiode.
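A minimal numpy sketch of this extraction step, together with the interval comparison used in the alignment checks described next, is given below. The threshold value, tolerance, and variable names are illustrative assumptions, not taken from the release code.

```python
# Numpy sketch of photodiode onset extraction and the interval-based
# alignment check. Threshold and tolerance values are illustrative.
import numpy as np

def photodiode_onsets(signal: np.ndarray, threshold: float) -> np.ndarray:
    """Return sample indices where the photodiode switches from off to on."""
    binary = (signal > threshold).astype(int)    # binarize: on = 1, off = 0
    transitions = np.diff(binary)                # +1 at onsets, -1 at offsets
    return np.flatnonzero(transitions == 1) + 1  # keep onsets only

def check_alignment(photo_times, log_times, tol=0.01):
    """Compare inter-event intervals from the photodiode and the log file.

    Clock drift makes absolute timestamps incomparable, but over the short
    span between successive events the two interval series should agree.
    """
    if len(photo_times) != len(log_times):
        raise ValueError("event counts differ between photodiode and log file")
    deviation = np.abs(np.diff(photo_times) - np.diff(log_times))
    return deviation.max() <= tol, deviation

# Synthetic demo: a clean square pulse train with two events.
sig = np.zeros(1000)
sig[100:120] = 5.0
sig[400:420] = 5.0
print(photodiode_onsets(sig, threshold=2.0))  # -> [100 400]
```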
Verify Event Count Alignment The first step in aligning the photodiode events and the log files was to check if the number of events in each matched. If they did not match, then there was a problem. Aligning the Two Signals To ensure alignment of both signals, the discrete difference between the photodiode and log file timestamps was computed, providing the interval between successive events for each signal. The resulting arrays were then plotted atop each other. Misalignment between the two sources of timing information could be easily detected, as they did not overlap. Perfect overlap between the two was necessary to consider the signals aligned. Additionally, the difference between the two signals was computed to ensure minimal deviation. Integrating Information Once the two signals were properly aligned, the log file events could be used as descriptors of the events marked at the timestamps from the photodiode. Illustration of the alignment pipeline The alignment procedure allowed the information from the log file to be utilized in generating well-described events in the iEEG data. The events were encoded as \u201c/\u201d separated strings and stored in an events.tsv table. An extensive description of each event type and their levels are as follows: Task Name: Dur Task Description: Description of the experimental factors and their level with the correct syntax to access them from the MNE epochs object. Note that factor and level names are case-sensitive. We describe the syntax for each condition separately. However, note that you can fetch a combination of factors from the epochs using a forward slash. For example, if you wish to fetch the face target trials, you can combine both conditions like so: epochs['face/Relevant target'] will fetch all face target trials. In addition, the epochs objects are equipped with metadata, where the name of the column is the name of the factor, and the level follows the same nomenclature as below. This can be used for more sophisticated trial filtering and retrieval. Experimental Design: Event Type: Marks the different events occurring within a trial. Factor Type: Categorical Factor 1: Name: stimulus onset Description: Marks the onset of the visual stimuli. With epochs['stimulus onset'], extract all epochs time-locked to the visual stimulus onset. Factor 2: Name: stimulus offset Description: Marks the offset of the visual stimuli. With epochs['stimulus offset'], extract all epochs time-locked to the offset of the visual stimulus. Factor 3: Name: jitter onset Description: Marks the beginning of the inter-trial jitter. All trials lasted 2 sec., with an added jitter of 400ms on average. With epochs['jitter onset'], extract all epochs time-locked to the beginning of the jitter period (2 sec. after stimulus onset). Block: Marks the experimental blocks. Factor Type: Discrete Factor 1: Name: block_* Description: Experimental blocks. Our experiment consisted of 5 blocks, in between which participants were allowed to take a break. With epochs['block_1'], extract all epochs of the first experimental block. Miniblock: Marks the experimental miniblocks. Factor Type: Discrete Factor 1: Name: miniblock_* Description: Experimental miniblocks. Each experimental block consisted of 4 miniblocks. At the beginning of each miniblock, the two target stimuli were presented to the participant, which the participant had to remember to be able to detect in the stream of stimuli. With epochs['miniblock_1'], extract all epochs of the first experimental miniblock. 
Category: Category of the visual stimuli. Factor Type: Categorical Factor 1: Name: face Description: Identifies face trials. With epochs['face'], extract all epochs in which a face was presented. Factor 2: Name: object Description: Identifies object trials. With epochs['object'], extract all epochs in which an object was presented. Factor 3: Name: letter Description: Identifies letter trials. With epochs['letter'], extract all epochs in which a letter was presented. Factor 4: Name: false Description: Identifies false font trials (i.e., symbols). With epochs['false'], extract all epochs in which a false font was presented. Identity: Identity of the visual stimuli. Factor Type: Categorical Factor 1: Name: face_* Description: Identifies the identity of face trials. With epochs['face_*'], extract all epochs in which that specific face was presented. Identities 1-9 carry a leading zero. Factor 2: Name: object_* Description: Identifies the identity of object trials. With epochs['object_*'], extract all epochs in which that specific object was presented. Identities 1-9 carry a leading zero. Factor 3: Name: letter_* Description: Identifies the identity of letter trials. With epochs['letter_*'], extract all epochs in which that specific letter was presented. Identities 1-9 carry a leading zero. Factor 4: Name: false_* Description: Identifies the identity of false font trials (i.e., symbols). With epochs['false_*'], extract all epochs in which that specific false font was presented. Identities 1-9 carry a leading zero. Orientation: Orientation of the displayed stimuli. Factor Type: Categorical Factor 1: Name: Center Description: Identifies stimuli presented in the center orientation. With epochs['Center'], extract all epochs in which a stimulus was presented in the center orientation. Factor 2: Name: Left Description: Identifies stimuli presented in the Left orientation. With epochs['Left'], extract all epochs in which a stimulus was presented in the Left orientation. Factor 3: Name: Right Description: Identifies stimuli presented in the Right orientation. With epochs['Right'], extract all epochs in which a stimulus was presented in the Right orientation. Duration: The duration for which a visual stimulus was presented. Factor Type: Categorical Factor 1: Name: 500ms Description: Identifies stimuli presented for 500ms. With epochs['500ms'], extract all epochs in which the stimulus was displayed for 500ms. Factor 2: Name: 1000ms Description: Identifies stimuli presented for 1000ms. With epochs['1000ms'], extract all epochs in which the stimulus was displayed for 1000ms. Factor 3: Name: 1500ms Description: Identifies stimuli presented for 1500ms. With epochs['1500ms'], extract all epochs in which the stimulus was displayed for 1500ms. Task Relevance: Task relevance of a given trial. Factor Type: Categorical Factor 1: Name: Relevant target Description: Identifies target stimuli. Target stimuli are presented at the beginning of each miniblock, and participants must detect them among the sequence of presented stimuli by pressing a button. With epochs['Relevant target'], extract all target trials. Factor 2: Name: Relevant non-target Description: Identifies task-relevant non-target stimuli. We considered task-relevant stimuli that were of the same category as the target but of a different identity. With epochs['Relevant non-target'], extract all task-relevant non-target trials. Factor 3: Name: Irrelevant Description: Identifies task-irrelevant non-target stimuli. We considered task-irrelevant stimuli that were of a different category than the target.
With epochs['Irrelevant'], extract all task-irrelevant non-target trials. Response: Recorded response of the participants. Factor Type: Categorical Factor 1: Name: Hit Description: Participants correctly identified a target by pressing a button. With epochs['Hit'], extract all target trials for which the participants pressed a key. Factor 2: Name: CorrRej Description: Participants correctly rejected a non-target stimulus and did not press any button. With epochs['CorrRej'], extract all non-target trials for which the participants did not press a key. Factor 3: Name: Miss Description: Participants failed to press a button when a target stimulus was presented. With epochs['Miss'], extract all target trials in which participants failed to press a button. Factor 4: Name: FA Description: Participants mistakenly pressed a button when a non-target stimulus was presented. With epochs['FA'], extract all non-target trials in which participants pressed a button. Factor 5: Name: n.a. Description: For the events stimulus offset and jitter onset, the response is set to n.a., as the response relates to the visual stimulus, not to the other events. This should not be used to access the data. Surface Reconstruction and Electrode Localization Subject-specific pial surfaces were automatically reconstructed based on a pre-implant T1-weighted MR image using the FreeSurfer image analysis suite (\u2018recon-all\u2019, Dale et al., 1999). Post-implant CT images were co-registered with the pre-implant MR images using FLIRT (Jenkinson and Smith, 2001), as implemented in FSL (Smith et al., 2004). For NYU patients, we used a semi-automatic approach to generating electrode labels. For manual cases, co-registered MR and CT slices were examined using FSLView (Smith et al., 2004). For grids, we localized three corner electrodes, and the remaining electrode coordinates were then automatically interpolated along the shared plane using the known inter-electrode distances. Strip and depth electrodes were localized manually when they did not follow straight trajectories. When depth electrodes were in a straight line, the first and last electrodes were localized manually, and the electrodes in between were automatically interpolated and labeled based on the known inter-electrode distances and the serial labeling convention. For WU patients, electrodes were localized manually using the SubNuclear toolbox . Electrode locations were further refined within the space of the pre-operative MRI using three-dimensional non-linear thin-plate spline warping (Rohr et al., 2001), which corrected for post-operative shift and distortion. The warping was constrained with manually selected points through the brain, which were visually aligned with landmarks in the pre-implantation MRI and post-implantation CT. For Harvard subjects, individual contacts from depth electrodes were labeled manually from the CT image using the BioImage Suite\u2019s Electrode Editor tool (legacy version 3.5; Joshi et al., 2011). The coordinates in CT image-space were converted to coordinates within the patient\u2019s segmented MRI brain-space using the iELVis toolbox (yangWangElecPjct; Yang, Wang, et al., 2012; Groppe et al., 2017). For all sites, the electrode spatial coordinates were transformed from the individual patient space into the standard space of the Montreal Neurological Institute (MNI-152) template for plotting purposes.
At NYU, this transformation was performed using the DARTEL algorithm (Ashburner, 2007) implemented in SPM8 (Wellcome Department of Imaging Neuroscience, London, United Kingdom). At Harvard, this transformation was performed using the iELVis toolkit . At WU, the transformation was performed with the SubNuclear toolbox using the img2imgcoord utility. Finger Localizer Task In the Finger Localizer task, participants were presented with four circles, one of which was filled with a specific color, serving as a cue for participants to press the corresponding colored button on the response box. The circle remained filled for the duration of the response time, followed by an additional delay of 200 milliseconds. The inter-trial intervals (ITIs) were uniformly distributed, with a mean of 0.55 seconds and a range from 0.4 to 0.7 seconds. The experimental protocol comprised 80 trials, distributed equally among the four colors (20 trials per color), and the sequence of trials was randomized. This task aimed to identify brain regions responsible for motor control, particularly those governing finger movements, and to pinpoint electrodes selectively activated by specific motor responses, such as button presses. An illustration depicting a trial in which the participant is required to press the blue button Please note: Although participants completed this task concurrently with Experiment 1 , we did not utilize the data in the analysis, as it was primarily acquired for use in Experiment 2 . Consequently, the data pertaining to the Finger Localizer task is not included in this version of our data release. Task (tiEEG) Participants proceeded to Experiment 1 either before or after completing the Finger Localizer task . tiEEG consisted of 5 runs of 4 blocks each, with 34-38 trials per block (32 non-targets, 8 of each category, and 2-6 targets) and each trial lasting approximately 2.4 s, for a total of 720 trials. Rest breaks between runs and blocks were included. Random jitter was added at the end of each trial (mean inter-trial interval of 0.4 s, jittered 0.2-2.0 s, truncated exponential distribution) to avoid periodic presentation of the stimuli. Additional information about the task can be found here . Task Runs Blocks Trials Total trials Experiment 1 5 4 34-38 per block 720 Full Structure of Session The complete standard procedure of an iEEG session is available in iEEG Standard Operating Procedure . Inclusion Criteria For the iEEG studies, subjects were 10-65 years old, were able to provide informed consent, had an IQ > 70, were fluent in English, had self-reported normal hearing and normal or corrected-to-normal vision, and had cognitive and language abilities within or above the normal range in formal neuropsychological testing performed before surgery. They must not have had an electrographic seizure within 3 hours prior to testing. Quality Check A comprehensive quality assessment was conducted on the iEEG data. The data underwent manual annotation by epileptologists, excluding channels within the epileptic onset zone, as well as those exhibiting artifacts or showing complete flatness due to electrode contact issues. Channel rejection was independently performed by both the data monitoring and iEEG teams, with results compared to ensure consistency. Additionally, electrode reconstruction was verified to align with subjects' CT scans. Finally, we inspected the spectra for significant disturbances.
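Returning to the event descriptors defined above: once those annotations are attached to an MNE epochs object, trials can be retrieved by factor level or through the metadata table. A minimal sketch, with a hypothetical file name; the factor names should be verified against the released files.

```python
# Sketch: retrieving trials via the event descriptors defined above.
# The epochs file name is hypothetical.
import mne

epochs = mne.read_epochs("sub-CF101_task-dur-epo.fif")

faces = epochs["face"]                          # all face trials
face_targets = epochs["face/Relevant target"]   # combine factors with "/"
short_irrelevant = epochs["Irrelevant/500ms"]   # task-irrelevant 500 ms trials

# Metadata-based retrieval: one row per epoch, one column per factor.
print(epochs.metadata.columns.tolist())         # inspect available factors
```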
Exclusion Criteria Subjects who were unable to complete a sufficient number of trials due to excessive muscular artifacts, movement, noisy recordings, or a decision by the subject to terminate the experiment were excluded. Subjects who exhibited low performance in the attention task were also excluded \u2013 this translates to <70% Hits and >30% FAs. In addition, data was also excluded if it did not pass any of the pre-defined data quality checks. Deviations from Data Curation Procedure Our approach to defacing MR/CT DICOM images involved utilizing the built-in face masking tool from XNAT. However, for 12 iEEG subjects, we encountered difficulties in executing this step. This was primarily due to variations in the DICOM images, which could include cropped images, aliasing artifacts, broken headers, missing slices, non-equidistant slices within a series, and other issues. Below is the list of subjects for which the execution of the XNAT facemasking tool failed: Subject_ID: CF103, CF104, CF112, CF113, CF116, CF117, CF120, CF121, CF122, CF124, CF125, CF126 To address this issue, we implemented a slightly different workflow that allowed us to successfully deface the MR/CT images of these 12 subjects. However, unlike the XNAT facemasking tool, this approach could not regenerate the original DICOM images post-defacement; instead, it generated defaced NIFTI images as its primary output. For the current version of the data release, we have decided to share only the defaced NIFTI images for these subjects. Details about this workflow are provided below: Anonymization: MR/CT DICOM images underwent anonymization to remove the subject\u2019s Protected Health Information (PHI). NIFTI Conversion: Anonymized DICOM images were then converted to the NIFTI image format using the dcm2niix package (version: 1.0.20220505) (Li et al., 2016). Defacing of NIFTI: Defacing of the NIFTI images was performed using the PyDeface package (version: 2.0.2) (Gulban et al., 2022). Verification: This step involved checking the quality of the defaced NIFTI images using 2D/3D image plots to compare before and after the defacing stage. Alternative workflow for defacing 12 challenging MR/CT DICOM Images Miscellaneous: In the MR data for subject CF103, one DICOM slice was inadvertently dropped during the conversion process from DICOM to NIFTI format. However, the resulting NIFTI file remains functional and usable.","title":"COGITATE Dataset"},{"location":"03_cogdataset/#cogitate-dataset","text":"The COGITATE dataset is a comprehensive collection of multimodal neuroimaging data, encompassing a total of 262 subjects. COGITATE employs three distinct neuroimaging techniques: fMRI, M-EEG, and iEEG/ECoG. The MEG modality comprised 102 healthy subjects, all above the age of 18, with no known psychiatric or neurological issues. These participants were sourced from the Centre for Human Brain Health at the University of Birmingham (Birmingham, United Kingdom) and the Center for MRI Research of Peking University (Beijing, China). Similarly, the fMRI modality included 122 healthy volunteers, all of whom were above the age of 18 and predominantly right-handed. These participants had no known history of psychiatric or neurological disorders and were recruited from the Yale Magnetic Resonance Research Center (New Haven, CT, United States) and the Donders Centre for Cognitive Neuroimaging (Nijmegen, Netherlands).
In contrast, the iEEG modality involved a more specialized cohort of 38 patients diagnosed with pharmaco-resistant focal epilepsy. These participants ranged in age from 10 to 65 years, had an IQ above 70, and met specific health criteria. They were recruited from multiple medical centers specializing in epilepsy treatment, including the Comprehensive Epilepsy Center at New York University (New York, NY, United States), Brigham and Women\u2019s Hospital, Boston Children\u2019s Hospital (Boston, MA, United States), and the University of Wisconsin School of Medicine and Public Health (Madison, WI, United States). Age histograms across modalities Sex proportions across modalities Handedness proportions across modalities","title":"COGITATE Dataset"},{"location":"03_cogdataset/#demography-of-subjects","text":"You can find the profile of participants for all modalities at subjects_demography . Here is a brief explanation of the information collected from the subjects.","title":"Demography of Subjects"},{"location":"03_cogdataset/#demographic-information-for-m-eeg","text":"The following items are included in the subjects\u2019 demography for the M-EEG modality: Participant_ID (participant identifier), sex (biological sex of participant), age (age of participant at the time of testing), handedness (right, left or ambidextrous), included in MSP (whether the data was used for the experiments or not), phase* (determining in which stage the data is, phase 2/phaseII (optimization) or phase 3/phaseIII (replication)), QC** status (passed/not), if Not (in QC status) QC rejection reason, weight (weight of participant at the time of study in pounds), height (height of participant at the time of study in inches), primary and secondary language, race (ethnicity of the participant), education, compensation (whether the subject was paid or not), colorblind (determining whether the subject can distinguish the colors and perceive them correctly or not), visual correction (none or using any glasses or contact lenses), eye dominance (which eye is dominant), eye chart results (the outcome of a visual acuity test performed using the eye chart) and dioptry (visual acuity of the participant in Diopters). * Phase: The COGITATE project has three phases. In phase 1 , all data were acquired by theory-neutral teams. To ensure replicability of the results, the entire dataset was split into two halves, each with an equal mixture of data from each of the labs for each modality. In phase 2 , after evaluating data quality, the first half of the data were used for developing analysis tools (optimization of methods). The purpose of phase 2 was to define the best analysis practices, to be agreed upon in consultation with expert advisors. In phase 3 , the replication phase, the second half of the data were analyzed using the agreed-upon procedures and protocols, thereby allowing an in-house replication of the results obtained in phase 2.
** QC (quality control): A number of items were checked for all the data of each modality; these are elaborated in the Quality Check and Exclusion Criteria section.","title":"Demographic Information for M-EEG"},{"location":"03_cogdataset/#demographic-information-for-fmri","text":"All of the items are similar to the M-EEG modality.","title":"Demographic Information for fMRI"},{"location":"03_cogdataset/#demographic-information-for-ieeg","text":"In addition to the properties mentioned for the M-EEG modality, the following parameters were also provided for this modality: Electrode scheme (the scheme used for implanting the electrodes, Stereo, Subdural grid & Strips), number of implanted electrodes, implant hemisphere (brain hemisphere where the electrodes were implanted, right, left, both or bilateral), IQ (score and name of the test used for assessment (FSIQ, WISC, VCI, POI, WMI, PSI, AMI, VMI)), WADA (intracarotid sodium amobarbital, a test that determines which side of the subject\u2019s brain controls language and memory functions), seizure type (classification of seizure type), age of onset (age at which the first symptoms of seizure appeared), auditory normal hearing (indicator of whether the participant had normal hearing capabilities, yes or no), epilepsy seizure classification (categorization of epilepsy as per standard seizure classification), epilepsy seizure aura (description of any sensory or perceptual symptoms before a seizure occurred), epilepsy seizure semiology (signs and symptoms exhibited during epileptic seizures), epilepsy seizure frequency (frequency of seizures experienced by participant), epilepsy post ictal semiology (symptoms and signs after an epileptic seizure), epilepsy trigger (identified factors or circumstances that increased the likelihood of experiencing a seizure), epilepsy duration uncontrolled (the duration that seizures had not been successfully managed or medically controlled), epilepsy seizure onset zone (brain region identified as the initial site of seizure activity), epilepsy resection (details of any surgical resection performed for seizure control), epilepsy language lateralization (determination of the dominant hemisphere for language function), epilepsy past surgical history (record of any previous surgeries related to the treatment of epilepsy), epilepsy past medical history (medical history relevant to epilepsy diagnosis and treatment), epilepsy family history (presence of seizure or epilepsy disorders in family members), other neurological disorders (any other diagnosed neurological disorders besides epilepsy), epilepsy MRI findings (summary of MRI findings relevant to epilepsy diagnosis), epilepsy pathology findings (pathological findings from tissue analysis post-surgery or biopsy).","title":"Demographic Information for iEEG"},{"location":"03_cogdataset/#quality-check","text":"Data from all modalities were checked at three levels. The first level checks tested whether the datasets contained all expected files, kept the naming conventions, and had all personal information removed. The second level checks tested subjects\u2019 performance with respect to behavior. For Experiment 1 , subjects were excluded if their hit rate was lower than 80% or their false alarm (FA) rate was higher than 20% for M-EEG and fMRI; for iEEG, more relaxed criteria of 70% Hits and 30% FAs were used. Two M-EEG subjects were excluded due to low hit rates and one iEEG patient was excluded due to high FAs.
The third level checks assessed the quality of the neural data.","title":"Quality Check"},{"location":"03_cogdataset/#exclusion-criteria","text":"The generic exclusion criteria used across Experiment 1 and Experiment 2 included: (a) insufficient number of trials in each of the experimental conditions (<30 for M-EEG or <20 for fMRI), due to excessive muscular artifacts, movement, noisy recording, or subjects deciding to stop the experiments. If a given analysis showed that a good enough signal could be obtained with fewer trials, these numbers were amended; and (b) low performance in the attention tasks. In Experiment 1 , this translates into: <80% Hits, >20% FAs for fMRI and M-EEG subjects; <70% Hits, >30% FAs for iEEG patients. In addition, data was excluded from analysis if it did not pass any of the predefined data quality checks.","title":"Exclusion Criteria"},{"location":"03_cogdataset/#description-of-cogitate-data","text":"Although our data collection had a specific purpose, the data we gathered holds potential value for a range of diverse inquiries. Consequently, the COGITATE consortium has chosen to openly share all raw data collected (including the data that did not pass the quality criteria), to facilitate its utilization for various research endeavors and promote data reusability. We have made available two primary formats for the data acquired during the experimental phase of the COGITATE project, specifically Experiment 1 : Unprocessed/Raw Data BIDS Format","title":"Description of COGITATE Data"},{"location":"03_cogdataset/#1-unprocessedraw-data","text":"The unprocessed data format closely resembles the original acquired data, having undergone minimal processing to ensure compliance with GDPR (General Data Protection Regulation)/ HIPAA (Health Insurance Portability & Accountability Act) anonymity standards.","title":"1. Unprocessed/Raw Data"},{"location":"03_cogdataset/#2-bids-format","text":"BIDS format, widely adopted in cognitive neuroscience, enhances data reusability. To facilitate others in leveraging our data, we have released it in BIDS format.","title":"2. BIDS Format"},{"location":"03_cogdataset/#file-type-glossary","text":"Here are the various file formats used for each modality of the COGITATE dataset, along with a short description of each.","title":"File Type Glossary"},{"location":"03_cogdataset/#eye-tracking-behavioral-data","text":"Unprocessed/Raw release format - Filetype: ASC/CSV BIDS Format - Filetype: ASC/CSV The two eye trackers used within COGITATE are: 1. EyeLink eye tracker 2. Tobii eye tracker 1) EyeLink eye tracker : Most of the sites used this eye tracker, which produces data in the EDF format (EyeLink Data Format, not to be confused with the European Data Format used for the iEEG recordings). These data were immediately converted to ASCII text files using the converter provided by EyeLink; these are the ASC files included in our data release. 2) Tobii eye tracker : The other eye tracker was the Tobii eye tracker, used by New York University Langone for ECoG data. This eye tracker produces data in the form of CSV files. The files generated by eye-tracking systems contain information about eye movement and gaze behavior; they typically store a time-stamped sequence of gaze data points and include information such as: Timestamps: The exact time at which each gaze data point was recorded. Gaze Coordinates: The x and y coordinates on the screen where the person's gaze is directed. Pupil Diameter: The size of the person's pupil, which can provide insights into changes in visual processing or cognitive load.
Fixations: Periods of stable gaze where the person is looking at a specific point without significant movement. Saccades: Rapid eye movements between fixations, indicating shifts in attention. Blinks: Instances when the person's eyes are closed, which can be important for data cleaning and analysis. Behavioral data is available in CSV format and it provides below information: Blocks Events Trials Stimulus and jitter duration Subject's responses","title":"Eye Tracking & Behavioral Data"},{"location":"03_cogdataset/#m-eeg-data","text":"Unprocessed/Raw release format Filetype: FIF BIDS Format Filetype: FIF File Format for the Input and Output of MEG and EEG data FIF files contain various types of information related to neuroimaging data, including: Raw sensor data: MEG and EEG measurements recorded from sensors placed on the scalp or near the head. Event information: Time-stamped triggers or markers indicating the timing of events, such as stimulus presentations or subject responses. Sensor locations and orientations: Information about the physical positions and orientations of sensors used in the measurements. Head geometry: Information about the shape and structure of the subject's head, which is crucial for accurate source localization. Covariance matrices: Statistical information about the relationships between sensor measurements at different time points or frequencies. Anatomical MRI data: High-resolution structural images of the subject's brain, used for source localization and spatial alignment.","title":"M-EEG Data"},{"location":"03_cogdataset/#ieegecog-data","text":"Unprocessed/Raw release format Filetype: EDF BIDS Format Filetype: EDF European Data Format files used for storing and exchanging time-series biological and physiological data EDF files are designed to accommodate data from multiple channels, allowing researchers to store and manage data collected simultaneously from different sensors or electrodes. The format supports both raw signal data and associated metadata, including information about sampling rates, units of measurement, patient demographics, and recording conditions.","title":"iEEG/ECoG Data"},{"location":"03_cogdataset/#mrct-data","text":"Unprocessed/Raw release format Filetype: DICOM/NIFTI BIDS Format Filetype: DICOM/NIFTI DICOM is a standard format utilized for storing CT (Computed Tomography) scans and MRI (Magnetic Resonance Imaging) data. These files encompass not only the image data but also essential metadata, including imaging parameters. NIFTI serves as another format employed for a subset of subjects where our standard procedure encountered challenges. With the exception of the MR and CT scans for 12 subjects within the iEEG data, all other datasets of similar nature are stored in DICOM format. Further details regarding these 12 problematic datasets are available in this section . NIFTI files encapsulate image data alongside metadata concerning spatial orientation, voxel dimensions, and additional imaging parameters.","title":"MR/CT Data"},{"location":"03_cogdataset/#data-acquisition","text":"The Cogitate dataset encompasses three distinct neuroimaging modalities, along with synchronized eye-tracking and behavioral data linked to each of these modalities. 
Here we detail the acquisition protocol for each modality in the corresponding data release: M-EEG, iEEG","title":"Data Acquisition"},{"location":"03_cogdataset/#stimuli","text":"Stimuli belonged to four categories that naturally fell into two groups that were clearly distinct from each other: pictures (20 faces and 20 objects) and symbols (20 letters and 20 false-fonts). Face stimuli were created using the FaceGen Modeler 3.1 program and object stimuli were taken from the Object Databank (Tarr, 1996). Faces and objects were grey-scaled (RGB: 125, 125, 125), and manipulated to have similar size and equal luminance using the SHINE toolbox (Willenbockel et al., 2010). Equal proportions of male and female faces were presented. They all had hair and belonged to different ethnicities (e.g., Caucasian, Asian, African, American) to facilitate face individuation. The orientation of the stimuli was manipulated, such that half of the stimuli from each category had a side view and the other half a front view. All letter stimuli and false fonts were generated with MAXON CINEMA 4D Studio (RC-R20) 20.059 on macOS 10.14, appearing in gray (RGB: 125, 125, 125). Three views were rendered for each font set (real font, false/pseudo font) at 0\u00b0, 30\u00b0 and -30\u00b0 horizontal viewing angle with the following settings: Extrusion depth 9.79% of character height, camera distance 5.65 times character height and 18\u00b0 above the center of the letter (High Angle), with a simulated focal length of 135 mm (35 mm equiv.). All stimuli were presented on a rectangular aperture at an average visual angle of 6 \u030a by 6 \u030a.","title":"Stimuli"},{"location":"03_cogdataset/#procedure","text":"Stimuli were presented sequentially, all supra-threshold, with half being task-relevant and the other half task-irrelevant. Only one stimulus was shown on the screen at any given time. To define task relevance, subjects were instructed to detect two targets from different categories, regardless of their orientation. This online reporting enabled an explicit assessment of subjects\u2019 performance, engaging report-related areas for later analysis. Each block began with notification of the two target stimuli, either pictorial (faces and objects) or symbolic (letters and false fonts), creating a clear distinction between relevant and irrelevant stimuli. At the start of each block, specific target stimuli were revealed with instructions such as \u201cdetect face A and object B\u201d or \"detect letter C and false-font D.\" Targets did not repeat across blocks. Each run included two blocks of the Face/Object task and two blocks of the Letter/False-font task, with the order counterbalanced across runs. Subjects were instructed to maintain central fixation throughout each trial. Gaze was monitored online through an eye tracker, with repeated calibrations ensuring good quality data. Each block comprised stimuli from all four categories, with each stimulus displayed for 500, 1000, or 1500 ms, followed by a blank interval, ensuring a consistent trial duration of 2000 ms. 
Within each block, three trial types were presented: i) Task Relevant Targets, consisting of the specific stimuli participants were tasked with detecting; ii) Task Relevant Non-Targets, encompassing stimuli from relevant categories that were not designated targets; and iii) Task Irrelevant Stimuli, comprising stimuli from the remaining categories.","title":"Procedure"},{"location":"03_cogdataset/#m-eeg-data-acquisition","text":"M-EEG recordings were acquired at the Centre for Human Brain Health (CHBH) of University of Birmingham in the United Kingdom, and at the Center for MRI Research of Peking University (PKU) in China.","title":"M-EEG Data Acquisition"},{"location":"03_cogdataset/#hardware","text":"Both centers had a 306-channel, whole-head TRIUX MEG system from MEGIN (York Instruments; formerly Elekta). The MEG system comprised 204 planar gradiometers and 102 magnetometers in a helmet-shaped array. Simultaneous EEG was recorded using an integrated EEG system and a 64-channel electrode cap. The MEG system was equipped with a zero boil-off Helium recycling system and the noise-resilient ARMOR sensors and placed in a shielded room (2 layers of mu-metal and 1 layer of aluminum). To reduce environmental noise, the integrated active shielding system was used at PKU. In order to cover the brain more homogeneously, the MEG gantry was positioned at 68 degrees.","title":"Hardware"},{"location":"03_cogdataset/#location-of-electrodes-and-ecgeog-measurements","text":"The location of the fiducials, the positions of the 64 EEG electrodes and the participant\u2019s head shape were recorded using a 3-D digitizer system (Polhemus Isotrak). A set of bipolar electrodes were placed on the subject\u2019s chest (upper left and upper right chest position) to record the cardiac signal (ECG). Two sets of bipolar electrodes were placed around the eyes (two located at the outer canthi of the right and left eyes and two above and below the center of the right eye) to record eye movements and blinks (EOG). Ground and reference electrodes were placed on the back of the neck and on the right cheek, respectively. The impedance of all of the electrodes was checked to be below 10 kOhm. Standard Locations of EOG and ECG electrodes","title":"Location of Electrodes and ECG/EOG Measurements"},{"location":"03_cogdataset/#head-position-indicator-hpi-coils","text":"The participant\u2019s head position inside the MEG system was measured at the beginning and at the end of each run using four head position indicator (HPI) coils placed on the EEG cap. Specifically, the HPI coils were placed next to the left and right mastoids and on the left and right forehead. Their location relative to anatomical landmarks was digitized with a Polhemus Isotrak System. During the measurement, high frequency (>200 Hz) signals were produced by those coils and the localization of these signals was used to estimate the head position in the sensor space. To avoid the potential artifacts produced by the non-linear interaction between the signals generated by these coils, head position measurement was performed only during resting periods (as opposed to continuously). Standard locations of HPI coils. Coil Numbers: 1. Blue, 2. White, 3. Red, 4. Black, 5. Yellow","title":"Head Position Indicator (HPI) Coils"},{"location":"03_cogdataset/#anatomical-mri-data-acquisition","text":"For each subject, a high resolution T1-weighted MRI volume (3T Siemens MRI Prisma scanner) was acquired. 
At CHBH, a 32-channel coil with a resolution of 1 x 1 x 1 mm, TR/TE = 2000/2.03 ms; TI = 880 ms; 8\u00b0 flip angle, 208 sagittal slices and field of view (FOV): 256 x 256 matrix was acquired for source localization with individual realistic head modeling. At PKU, a 64-channel coil with a resolution of 0.5 x 0.5 x 1 mm, TR/TE = 2530/2.98 ms; TI = 1100 ms; 7\u00b0 flip angle, 192 sagittal slices; FOV: 448 \u00d7 512 matrix was used. To avoid possible interference of body magnetization on the MEG recording, all MRI scans were acquired at least one week before the MEG session, or at any time afterwards. The FreeSurfer standard template (fsaverage) was used for participants lacking an anatomical scan (N=5).","title":"Anatomical MRI Data Acquisition"},{"location":"03_cogdataset/#behavioral-data-acquisition","text":"The task was executed using Matlab (PKU: R2018b; UB: R2019b) with Psychtoolbox v.3 (Pelli, 1997) on a custom PC at UB and a Dell XPS desktop PC at PKU. Visual stimuli were presented on a screen placed in front of the subjects with a PROPixx DLP LED projector (VPixx Technologies Inc.) at a resolution of 1920 x 1080 pixels and a refresh rate of 120 Hz. The distance between the subject\u2019s eyes and the screen was different at each site (CHBH: 119 cm, PKU: 85 cm) to achieve the same FOV of 36.6 x 21.2 degrees. Subjects responded with an 8-button response box (Millikey LH-8).","title":"Behavioral Data Acquisition"},{"location":"03_cogdataset/#eye-tracking-data-acquisition","text":"Eye movements were monitored and recorded from both eyes (binocular eye-tracking) using the MEG-compatible EyeLink 1000 Plus eye-tracker (SR Research Ltd., Ottawa, Canada). Nine-point calibration was performed at the beginning of the experiment and repeated if necessary at the beginning of each block. Pupil size and corneal reflection data were collected at a sampling rate of 1000 Hz.","title":"Eye Tracking Data Acquisition"},{"location":"03_cogdataset/#behavioral-data-code-scheme","text":"Stimuli are coded as a 4-digit number. 1st digit = stimulus type (1 = face; 2 = object; 3 = letter; 4 = false font) 2nd digit = stimulus orientation (1 = center; 2 = left; 3 = right) 3rd & 4th digits = stimulus id (1...20; for faces 1...10 is male, 11...20 is female) e.g., \u201c1219\u201d = 1 is face, 2 is left orientation, and 19 is female stimulus #19 (see the decoding sketch below)","title":"Behavioral Data Code Scheme"},{"location":"03_cogdataset/#eye-tracker-and-meg-code-scheme","text":"The channels that contain the eye tracker data in the FIF file are as follows: MISC1 (X), MISC2 (Y), and MISC3 (pupil)","title":"Eye Tracker and MEG Code Scheme"},{"location":"03_cogdataset/#defining-some-terms","text":"Trial : Stimulus presentation followed by a fixation (the two add up to 2 sec) followed by a jitter of 200 ms to 2000 ms. Mini block : presentation of 34 to 38 stimuli, at the beginning of which the target stimuli were presented. Block : composed of 4 mini blocks. At the end of each block, there was a break. Break : Pause between two blocks","title":"Defining some terms"},{"location":"03_cogdataset/#successive-trigger-scheme","text":"The triggers were sent successively. The first trigger represented the stimulus type, followed by orientation, stimulus duration, and task relevance, all interspaced by 50 ms.
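As referenced in the behavioral data code scheme above, the 4-digit codes can be unpacked mechanically; a minimal Python sketch (the function name is hypothetical):

```python
STIM_TYPE = {1: "face", 2: "object", 3: "letter", 4: "false font"}
ORIENTATION = {1: "center", 2: "left", 3: "right"}

def decode_stimulus(code):
    """Decode a 4-digit behavioral stimulus code such as '1219'."""
    digits = f"{int(code):04d}"
    stim_type = STIM_TYPE[int(digits[0])]
    orientation = ORIENTATION[int(digits[1])]
    stim_id = int(digits[2:])          # 1..20
    sex = None
    if stim_type == "face":            # for faces, 1-10 male, 11-20 female
        sex = "male" if stim_id <= 10 else "female"
    return stim_type, orientation, stim_id, sex

print(decode_stimulus("1219"))  # -> ('face', 'left', 19, 'female')
```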
Additionally, a trigger was sent upon key press.","title":"Successive trigger scheme"},{"location":"03_cogdataset/#1st-trigger-on-stimulus-onset-stimulus-type","text":"1 to 20: faces - 1 to 10 males, - 11 to 20 females 21 to 40: objects 41 to 60: letters 61 to 80: false fonts","title":"1st Trigger (on Stimulus Onset): Stimulus Type"},{"location":"03_cogdataset/#2nd-trigger-2-frames-after-stimulus-onset-stimulus-orientation","text":"101: Center 102: Left 103: Right","title":"2nd Trigger (2 Frames after Stimulus Onset): Stimulus Orientation"},{"location":"03_cogdataset/#3rd-trigger-4-frames-after-stimulus-onset-stimulus-duration","text":"151: 500 ms 152: 1000 ms 153: 1500 ms","title":"3rd Trigger (4 Frames after Stimulus Onset): Stimulus Duration"},{"location":"03_cogdataset/#4th-trigger-6-frames-after-stimulus-onset-stimulus-task-relevance","text":"201: Task relevant target 202: Task relevant non-target 203: Task irrelevant","title":"4th Trigger (6 Frames after Stimulus Onset): Stimulus Task Relevance"},{"location":"03_cogdataset/#5th-trigger-8-frames-after-stimulus-onset-trial-id-triggers","text":"111-148: Trial number","title":"5th Trigger (8 Frames after Stimulus Onset): Trial ID Triggers"},{"location":"03_cogdataset/#response-trigger","text":"255: Following button press.","title":"Response Trigger"},{"location":"03_cogdataset/#stimulus-presentation-end","text":"96: Offset of stimulus presentation (onset of blank) 97: Offset of blank (onset of jitter period) Note that both of these are fixation periods; they are simply divided into blank and jitter.","title":"Stimulus Presentation End"},{"location":"03_cogdataset/#general-triggers-to-mark-experiment-progression","text":"86: Onset of experiment 81: Onset of recording 83: Offset of recording","title":"General Triggers to Mark Experiment Progression"},{"location":"03_cogdataset/#miniblock-id-triggers","text":"161-200: Miniblock ID trigger","title":"Miniblock ID Triggers"},{"location":"03_cogdataset/#zeroes","text":"0: Zeros were sent between the successive triggers to reset the LPT (see below). These were also sent to the eye tracker but did not carry any meaning and can safely be ignored.","title":"Zeroes"},{"location":"03_cogdataset/#how-the-lpt-triggers-were-sent","text":"Triggers were sent via the computer\u2019s LPT port using the sendTrig function. This function sets the port to a specific state (the trigger value to be sent) and then logs the trigger, noting whether it was sent and at what time the sending command was executed. For each trigger sent, the port was reset to 0 after one frame. At the beginning of the experiment, a few triggers were sent to mark experiment onset and recording onset. Then, a mini block was initiated. The participant was presented with the target screen and required to press the spacebar to proceed. When the participant pressed the space bar, the miniblock ID was sent. Only once the miniblock trigger had been sent did the fixation appear. This means that there was a small delay between key press and fixation onset. Following the first fixation, a jitter started, which was also logged. Then, the first stimulus was displayed. Upon the presentation of the stimulus, the successive triggers were initiated. The first trigger occurred directly after the onset of the stimulus, indicating the stimulus ID (1-80).
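The trigger values enumerated above can be mapped back to their meanings with a small lookup; a minimal Python sketch (the function name is hypothetical, and only the codes listed in this section are covered):

```python
def classify_trigger(code):
    """Map a raw trigger value to its meaning, per the scheme above."""
    if 1 <= code <= 20:
        return ("stimulus", "face", "male" if code <= 10 else "female")
    if 21 <= code <= 40:
        return ("stimulus", "object")
    if 41 <= code <= 60:
        return ("stimulus", "letter")
    if 61 <= code <= 80:
        return ("stimulus", "false font")
    if 101 <= code <= 103:
        return ("orientation", {101: "center", 102: "left", 103: "right"}[code])
    if 151 <= code <= 153:
        return ("duration_ms", {151: 500, 152: 1000, 153: 1500}[code])
    if 201 <= code <= 203:
        return ("task_relevance", {201: "relevant target",
                                   202: "relevant non-target",
                                   203: "irrelevant"}[code])
    if 111 <= code <= 148:
        return ("trial_id_code", code)
    if 161 <= code <= 200:
        return ("miniblock_id_code", code)
    markers = {255: "response (button press)", 96: "stimulus offset (blank onset)",
               97: "blank offset (jitter onset)", 86: "experiment onset",
               81: "recording onset", 83: "recording offset", 0: "LPT reset"}
    return ("marker", markers.get(code, "unknown"))
```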
Then, after 2 frames, the orientation trigger (101-103) was sent, followed by the duration trigger (151 to 153) at 4 frames, the task demand trigger (201-203) at 6 frames, and finally, the trial ID trigger (111 to 148) at 8 frames.","title":"How The LPT Triggers Were Sent"},{"location":"03_cogdataset/#empty-room-recording","text":"Prior to each experiment, MEG signals from the empty room were recorded for 3 minutes.","title":"Empty Room Recording"},{"location":"03_cogdataset/#resting-state-rm-eeg","text":"The resting-state data for each participant were also recorded for 5 minutes; subjects were asked to keep their eyes open and fixated on a point presented at the center of the screen. M-EEG signals were sampled at a rate of 1 kHz and band-pass filtered between 0.01 and 330 Hz prior to sampling.","title":"Resting-State (rM-EEG)"},{"location":"03_cogdataset/#task-tm-eeg","text":"Following the empty room and rM-EEG recordings, subjects were asked to complete the task defined in the Procedure section. tM-EEG consisted of 10 runs, with 4 blocks each. During each block, 34-38 trials were presented, with 32 non-targets (8 of each category) and 2-6 targets (number chosen randomly). Rest breaks between runs and blocks were included. Random jitter was added at the end of each trial (mean inter-trial interval of 0.4 s jittered 0.2-2.0 s, truncated exponential distribution) to avoid periodic presentation of the stimuli. Task Runs Blocks Trials Total trials Experiment 1 10 4 34-38 per block 1440","title":"Task (tM-EEG)"},{"location":"03_cogdataset/#full-structure-of-session","text":"Complete standard procedure of an M-EEG session is available in MEG Standard Operating Procedure .","title":"Full Structure of Session"},{"location":"03_cogdataset/#inclusion-criteria","text":"The items below were assessed for the subjects before the data was acquired: Age range: 18 to 35 (since over the age of 35 subjects might have a hard time maintaining central focus) Handedness: right Hearing problems: no Hearing aid: no Vision problems: no, or corrected-to-normal with soft lenses No MRI in the last week MRI compatible: no metal, medical implants, etc. No claustrophobia. Note: dental implants are allowed (particularly for non-magnetic materials) unless they generate large artifacts in the MEG signals, and this will be checked prior to MEG recording. No known history of psychiatric or neurological disorders, e.g., Not have been formally diagnosed with attention deficit (hyperactivity) disorder (AD(H)D). Not have been formally diagnosed with autism spectrum disorder (ASD) Not suffer from epilepsy","title":"Inclusion Criteria"},{"location":"03_cogdataset/#quality-check-and-exclusion-criteria","text":"For M-EEG, the first stage of the third-level checks focused on system-related and external noise generators. This was tested using the signal spectra of all sensors in the empty-room recording, the resting-state session, and the experiment itself. Any sensor and/or specific frequency revealing extensive noise upon visual inspection was flagged to document potential problems. Ultimately, this did not lead to any exclusions. Next, all experimental data blocks were visually inspected for abnormalities in spectra (peaks not explainable by physiology), and in ICA components, and checked for extremely noisy (based on the score of differences between the original and Maxwell-filtered data > 7) and flat sensors.
The latter step was performed in collaboration between the data monitoring team and members of the centers where data was acquired to check whether any potential changes in preprocessing for particular subjects were needed. Finally, we tested if all experimental cells (i.e. task-relevant non-targets and task-irrelevant stimuli for each one of the four categories) had enough trials (N=30).","title":"Quality Check and Exclusion Criteria"},{"location":"03_cogdataset/#ieeg-data-acquisition","text":"iEEG recordings were obtained from patients with pharmacologically resistant epilepsy undergoing invasive electrophysiological monitoring at the Comprehensive Epilepsy Center at New York University (NYU) Langone Health Center, Brigham and Women\u2019s Hospital, Children\u2019s Hospital Boston (Harvard Medical School), and University of Wisconsin School of Medicine and Public Health (WU).","title":"iEEG Data Acquisition"},{"location":"03_cogdataset/#hardware_1","text":"Brain activity was recorded with a combination of intracranial subdural platinum-iridium electrodes embedded in SILASTIC sheets (2.3 mm diameter contacts, Ad-Tech Medical Instrument and PMT Corporation) and/or depth stereo-electroencephalographic platinum-iridium electrodes (PMT Corporation; 0.8-mm diameter, 2.0-mm length cylinders; separated from adjacent contacts by 1.5 to 2.43 mm), or Behnke-Fried depth stereo-electroencephalographic platinum-iridium electrodes (Ad-Tech Medical, BF08R-SP21X-0C2, 1.28 mm in diameter, 1.57 mm in length, 3 to 5.5 mm spacing). The decision to implant, electrode targeting, and the duration of invasive monitoring were determined solely on clinical grounds and without reference to this or any other study. Electrodes were arranged as grid arrays (either 8 \u00d7 8 with 10 mm center-to-center spacing, 8 x 16 contacts with 3 mm spacing, or hybrid macro/micro 8 x 8 contacts with 10 mm spacing and 64 integrated microcontacts with 5 mm spacing), linear strips (1 \u00d7 8/12 contacts), depth electrodes (1 \u00d7 8/12 contacts), or a combination thereof. Subdural electrodes covered extensive portions of lateral and medial frontal, parietal, occipital, and temporal cortex of the left and/or right hemisphere. Recordings from grid, strip and depth electrode arrays were done using a Natus Quantum amplifier (Pleasanton, CA) or a Neuralynx Atlas amplifier (Bozeman, MT). A total of 4057 electrodes (892 grids, 346 strips, 2819 depths) were implanted across 32 patients with drug-resistant focal epilepsy undergoing clinically motivated invasive monitoring. Of these, 3512 electrodes (780 grids, 307 strips, 2425 depths) that were unaffected by epileptic activity, artifacts, or electrical noise were used in subsequent analyses. To determine the electrode localization for each patient, a postoperative CT (computed tomography) scan and a pre-operative T1 MRI were acquired and co-registered. Recordings were obtained continuously during the patients\u2019 stay in the hospital. All data was stored with stimulus and timing markers permitting offline synchronization.","title":"Hardware"},{"location":"03_cogdataset/#anatomical-mri-data-acquisition_1","text":"Before the participants underwent surgery and electrode implantation, T1-weighted MR data were acquired from them. At NYU, imaging was performed using the Siemens Biograph mMR scanner. At Harvard, the imaging sequence utilized was MPRAGE (magnetization-prepared rapid gradient-echo), with a Siemens Skyra 3T scanner.
At WU, imaging was conducted using the GE MEDICAL SYSTEMS SIGNA Artist scanner. The rationale behind acquiring MR scans was the spatial resolution they offer for brain tissue visualization.","title":"Anatomical MRI Data Acquisition"},{"location":"03_cogdataset/#ct-data-acquisition","text":"Following surgery, post-operative CT scans were obtained from the subjects to assist in localizing the electrodes on specific brain tissue. At NYU, scans were performed using a Siemens SOMATOM Force scanner. At Harvard, imaging was conducted using the Medtronic O-arm MVS O2. At WU, scans were acquired utilizing the GE MEDICAL SYSTEMS Optima CT660 scanner. Please note: MR and CT data were collected for the subjects at Brigham and Women\u2019s Hospital and Children\u2019s Hospital Boston. However, due to data protection policies, they are not included in the COGITATE Data Release.","title":"CT Data Acquisition"},{"location":"03_cogdataset/#behavioral-data-acquisition_1","text":"The task was implemented using Matlab (Harvard: R2020b; NYU: R2020a, WU: R2021a), Psychtoolbox v.3 (Pelli, 1997), and run on a Dell Precision 5540 laptop, with a 15.6\" Ultrasharp screen (screen size 345 x 195 mm2; resolution 1920 x 1080) at NYU and Harvard and on a Dell D29M PC with an Acer V196WL 19\" LED LCD monitor (screen size 406.4 x 254 mm2; resolution 1440 x 990) at WU. The nominal distance between the subject\u2019s eyes and the screen was 80 cm; the actual distance was measured for each subject before the start of recording to ensure that the stimulus subtended 6 x 6 degrees of visual angle. Participants responded using an 8-button response box (Millikey LH-8; response hand(s) varied based on the setting in the patient\u2019s room).","title":"Behavioral Data Acquisition"},{"location":"03_cogdataset/#eye-tracking-data-acquisition_1","text":"At Harvard and Wisconsin, an EyeLink 1000 Plus camera was used to collect eye-tracking data, and a thirteen-point calibration was performed several times during the experiment. The calibration was performed at the beginning of the experiment and repeated between blocks if necessary to meet precision requirements. At NYU, eye-tracking data was collected throughout the duration of the experiment using a Tobii-4C eye-tracker. A nine-point calibration was performed several times during the experiment. Pupil size and corneal reflection data were collected at a sampling rate of 500 Hz at Harvard and Wisconsin and at a sampling rate of 90 Hz at NYU. The Eyelink system recorded monocular data, while the Tobii system recorded binocular data. In the former case, only one eye was recorded, as determined by ocular dominance. The eye-tracking recording did not interfere with the experiment.","title":"Eye Tracking Data Acquisition"},{"location":"03_cogdataset/#behavioral-data-code-scheme_1","text":"The behavioral code scheme is similar to that of the M-EEG modality, which is explained in this section .","title":"Behavioral Data Code Scheme"},{"location":"03_cogdataset/#eye-tracker-data-code","text":"The eye tracker code scheme for the iEEG modality follows a similar structure to that described for M-EEG data. You can find detailed explanations here .","title":"Eye Tracker Data Code"},{"location":"03_cogdataset/#ieeg-code-scheme","text":"","title":"iEEG Code Scheme"},{"location":"03_cogdataset/#photodiode-trigger-scheme","text":"For ECOG patients, the type of port utilized by the M-EEG team (LPT) was incompatible with our recording system. Consequently, a photodiode was employed.
A photodiode is an electronic device that records changes in luminance and converts them into voltage. An example of a photodiode In the experimental code, it was ensured that when a new event occurred on the screen (such as stimulus onset or stimulus offset), a white flash appeared in the bottom right corner. The photodiode device was positioned atop the flashing square and connected to the amplifier recording the iEEG channel signals. This additional channel facilitated the identification of event onsets in our task. This type of recording only allows binary signals (the photodiode is either on or off). However, specific events were encoded with varying numbers of consecutive pulses. Stimulus Presentation Onset The flashing square was flashed only once at the onset of each new stimulus. Stimulus Presentation Offset The flashing square was flashed only once at the offset of each stimulus. Start of the Inter-Trial Interval The flashing square was flashed only once at the beginning of the inter-trial interval. The inter-trial interval was initiated 2 seconds after stimulus onset and persisted for a random duration (following a truncated exponential distribution between 0.2 and 2 seconds, with a mean of 0.4 seconds). Block Start The start of an experimental block was marked by sending 4 consecutive pulses. Block End The end of an experimental block was marked by sending 2 consecutive pulses. Experiment Start and End The beginning and end of the experiment were marked by sending 3 consecutive pulses. Schematic representation of the photodiode channel","title":"Photodiode Trigger Scheme"},{"location":"03_cogdataset/#log-file-alignment","text":"The photodiode channel solely indicates when a specific event occurred in the iEEG signals, lacking any information about the nature of the event (unlike an LPT trigger in MEG). To identify specific events in the signal, the timing information from the log file was combined with that from the photodiode. The log file contains a description of each presented event along with a corresponding time stamp from the experimental computer. The photodiode channel recorded time stamps for each event, indicating when it occurred according to the acquisition computer clock. The goal was to align the log file and the photodiode to associate each event in the photodiode signal with the corresponding event description in the log file. This step was crucial since misalignment could lead to incorrect event descriptions in the iEEG signal, compromising the entire analysis. The procedure relied on the fact that both the log file and the photodiode contained timestamps. These timestamps, however, were recorded by different clocks. Unfortunately, computer clocks tended to drift away from one another, and these drifts accumulated to be quite significant over extended periods of time (they could be several seconds apart after 1 hour). Therefore, the timestamps of the photodiode and the log file could not be used interchangeably. However, over short periods of time, these drifts were negligible. This meant that the interval between two successive timestamps in the log file should be highly consistent with the interval between the corresponding successive events in the photodiode. This provided us with the most thorough check possible: if the events in the log file and in the photodiode were aligned, then the intervals between successive events computed from each source should differ only minimally. Here is a step-by-step description of the alignment procedure.
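In code, the core of the steps detailed next (thresholding, binarizing, and differencing the photodiode channel) reduces to a few lines; a minimal NumPy sketch, with the threshold and variable names as illustrative placeholders:

```python
import numpy as np

def photodiode_onsets(signal, threshold):
    """Return sample indices where the photodiode switches from off to on."""
    binary = (np.asarray(signal) > threshold).astype(int)  # 1 = on, 0 = off
    d = np.diff(binary)                 # +1 at off->on (onset), -1 at on->off
    return np.flatnonzero(d == 1) + 1   # +1 because diff shifts indices by one

# Alignment check (names illustrative): the intervals between successive
# photodiode onsets should closely match the intervals between successive
# log-file timestamps; a count mismatch or a large deviation signals a problem.
# onsets_s = photodiode_onsets(photodiode_channel, threshold=2.0) / sfreq
# assert len(onsets_s) == len(logfile_timestamps)
# deviation = np.diff(onsets_s) - np.diff(logfile_timestamps)
```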
Extract the Photodiode Timestamps The timestamps from the photodiode triggers were extracted as the first step. As illustrated in the figure Schematic representation of the photodiode channel , a square pulse was generated for each event during the recording. The onset of each of these pulses was sought. To achieve this, a threshold was initially established, below which the photodiode was considered to be in the off state and above which it was considered to be on (based on visual inspection of the data, which was facilitated by the clean nature of photodiode signals). Subsequently, the signal was binarized using this threshold (signal_bin = signal > threshold), resulting in a signal consisting only of ones and zeros. Next, the discrete difference of the binary signal was computed (d(i) = y(i + 1) - y(i)). This operation produced a \u201c1\u201d when the photodiode transitioned from off to on (onset) and a \u201c-1\u201d when it transitioned from on to off (offset). Since only the onsets were of interest, the timestamps of the ones were extracted, representing the timestamps of the photodiode. Verify Event Count Alignment The first step in aligning the photodiode events and the log files was to check if the number of events in each matched. If they did not match, this indicated a problem that had to be resolved before proceeding. Aligning the Two Signals To ensure alignment of both signals, the discrete difference between the photodiode and log file timestamps was computed, providing the interval between successive events for each signal. The resulting arrays were then plotted atop each other. Misalignment between the two sources of timing information could be easily detected, as they did not overlap. Perfect overlap between the two was necessary to consider the signals aligned. Additionally, the difference between the two signals was computed to ensure minimal deviation. Integrating Information Once the two signals were properly aligned, the log file events could be used as descriptors of the events marked at the timestamps from the photodiode. Illustration of the alignment pipeline The alignment procedure allowed the information from the log file to be utilized in generating well-described events in the iEEG data. The events were encoded as \u201c/\u201d separated strings and stored in an events.tsv table. An extensive description of each event type and its levels follows: Task Name: Dur Task Description: Description of the experimental factors and their level with the correct syntax to access them from the MNE epochs object. Note that factor and level names are case-sensitive. We describe the syntax for each condition separately. However, note that you can fetch a combination of factors from the epochs using a forward slash. For example, if you wish to fetch the face target trials, you can combine both conditions like so: epochs['face/Relevant target'] will fetch all face target trials. In addition, the epochs objects are equipped with metadata, where the name of the column is the name of the factor, and the level follows the same nomenclature as below. This can be used for more sophisticated trial filtering and retrieval. Experimental Design: Event Type: Marks the different events occurring within a trial. Factor Type: Categorical Factor 1: Name: stimulus onset Description: Marks the onset of the visual stimuli. With epochs['stimulus onset'], extract all epochs time-locked to the visual stimulus onset. Factor 2: Name: stimulus offset Description: Marks the offset of the visual stimuli.
With epochs['stimulus offset'], extract all epochs time-locked to the offset of the visual stimulus. Factor 3: Name: jitter onset Description: Marks the beginning of the inter-trial jitter. All trials lasted 2 sec., with an added jitter of 400 ms on average. With epochs['jitter onset'], extract all epochs time-locked to the beginning of the jitter period (2 sec. after stimulus onset). Block: Marks the experimental blocks. Factor Type: Discrete Factor 1: Name: block_* Description: Experimental blocks. Our experiment consisted of 5 blocks, in between which participants were allowed to take a break. With epochs['block_1'], extract all epochs of the first experimental block. Miniblock: Marks the experimental miniblocks. Factor Type: Discrete Factor 1: Name: miniblock_* Description: Experimental miniblocks. Each experimental block consisted of 4 miniblocks. At the beginning of each miniblock, the two target stimuli were presented to the participant, which the participant had to remember to be able to detect in the stream of stimuli. With epochs['miniblock_1'], extract all epochs of the first experimental miniblock. Category: Category of the visual stimuli. Factor Type: Categorical Factor 1: Name: face Description: Identifies face trials. With epochs['face'], extract all epochs in which a face was presented. Factor 2: Name: object Description: Identifies object trials. With epochs['object'], extract all epochs in which an object was presented. Factor 3: Name: letter Description: Identifies letter trials. With epochs['letter'], extract all epochs in which a letter was presented. Factor 4: Name: false Description: Identifies false font trials (i.e., symbols). With epochs['false'], extract all epochs in which a false font was presented. Identity: Identity of the visual stimuli. Factor Type: Categorical Factor 1: Name: face_* Description: Identifies the identity of face trials. With epochs['face_*'], extract all epochs in which that specific face was presented. For identities 1-9, the identifier carries a leading zero (e.g., face_01). Factor 2: Name: object_* Description: Identifies the identity of object trials. With epochs['object_*'], extract all epochs in which that specific object was presented. For identities 1-9, the identifier carries a leading zero (e.g., object_01). Factor 3: Name: letter_* Description: Identifies the identity of letter trials. With epochs['letter_*'], extract all epochs in which that specific letter was presented. For identities 1-9, the identifier carries a leading zero (e.g., letter_01). Factor 4: Name: false_* Description: Identifies the identity of false font trials (i.e., symbols). With epochs['false_*'], extract all epochs in which that specific false font was presented. For identities 1-9, the identifier carries a leading zero (e.g., false_01). Orientation: Orientation of the displayed stimuli. Factor Type: Categorical Factor 1: Name: Center Description: Identifies stimuli presented in the center orientation. With epochs['Center'], extract all epochs in which a stimulus was presented in the center orientation. Factor 2: Name: Left Description: Identifies stimuli presented in the Left orientation. With epochs['Left'], extract all epochs in which a stimulus was presented in the Left orientation. Factor 3: Name: Right Description: Identifies stimuli presented in the Right orientation. With epochs['Right'], extract all epochs in which a stimulus was presented in the Right orientation. Duration: Duration a visual stimulus was presented for. Factor Type: Categorical Factor 1: Name: 500ms Description: Identifies stimuli presented for 500ms. With epochs['500ms'], extract all epochs in which the stimulus was displayed for 500ms.
Factor 2: Name: 1000ms Description: Identifies stimuli presented for 1000ms. With epochs['1000ms'], extract all epochs in which the stimulus was displayed for 1000ms. Factor 3: Name: 1500ms Description: Identifies stimuli presented for 1500ms. With epochs['1500ms'], extract all epochs in which the stimulus was displayed for 1500ms. Task Relevance: Task relevance of a given trial. Factor Type: Categorical Factor 1: Name: Relevant target Description: Identifies target stimuli. Target stimuli are presented at the beginning of each miniblock, and participants must detect them among the sequence of presented stimuli by pressing a button. With epochs['Relevant target'], extract all target trials. Factor 2: Name: Relevant non-target Description: Identifies task-relevant non-target stimuli. We considered task-relevant stimuli that were of the same category as the target but of a different identity. With epochs['Relevant non-target'], extract all task-relevant non-target trials. Factor 3: Name: Irrelevant Description: Identifies task-irrelevant non-target stimuli. We considered task-irrelevant stimuli that were of a different category than the target. With epochs['Irrelevant'], extract all task-irrelevant non-target trials. Response: Rated response of the participants. Factor Type: Categorical Factor 1: Name: Hit Description: Participants correctly identified a target by pressing a button. With epochs['Hit'], extract all target trials for which the participants pressed a key. Factor 2: Name: CorrRej Description: Participants correctly rejected a non-target stimulus and did not press any button. With epochs['CorrRej'], extract all non-target trials for which the participants did not press a key. Factor 3: Name: Miss Description: Participants failed to press a button when a target stimulus was presented. With epochs['Miss'], extract all target trials in which participants failed to press a button. Factor 4: Name: FA Description: Participants mistakenly pressed a button when a non-target stimulus was presented. With epochs['FA'], extract all non-target trials in which participants pressed a button. Factor 5: Name: n.a. Description: For the events stimulus offset and jitter onset, the response is set to n.a. as the response relates to the visual stimulus, not to the other events. This should not be used to access the data.","title":"Log File Alignment"},{"location":"03_cogdataset/#surface-reconstruction-and-electrode-localization","text":"Subject-specific pial surfaces were automatically reconstructed based on a pre-implant T1-weighted MR image using the Freesurfer image analysis suite (\u2018recon-all\u2019, Dale et al., 1999). Post-implant CT images were co-registered with the pre-implant MR images using FLIRT (Jenkinson and Smith, 2001), as implemented in FSL (Smith et al., 2004). For NYU patients, we used a semi-automatic approach to generating electrode labels. For manual cases, co-registered MR and CT slices were examined using FSLView (Smith et al., 2004). For grids, we localized three corner electrodes and the remaining electrode coordinates were then automatically interpolated along the shared plane using the known inter-electrode distances. Strip and depth electrodes were localized manually when they did not follow straight trajectories. When depth electrodes were in a straight line, the first and last electrodes were localized manually, and electrodes in between were automatically interpolated and labeled based on known inter-electrode distances and the serial labeling convention.
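Returning to the event scheme documented above: once the released epochs are loaded into MNE-Python, the factor levels can be combined with a forward slash when indexing, exactly as described; a short usage sketch (the file name is a placeholder):

```python
import mne

# Placeholder file name; any epochs file following the event scheme above works
epochs = mne.read_epochs("sub-XXXX_task-Dur-epo.fif")

faces = epochs["face"]                          # all face trials
face_targets = epochs["face/Relevant target"]   # combine factors with '/'
long_irrelevant = epochs["Irrelevant/1500ms"]   # irrelevant, 1500 ms stimuli

# The attached metadata (one column per factor) supports richer filtering
print(epochs.metadata.head())
```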
For WU patients, electrodes were localized manually using the SubNuclear toolbox . Electrode locations were further refined within the space of the pre-operative MRI using three-dimensional non-linear thin-plate spline warping (Rohr et al., 2001), which corrected for post-operative shift and distortion. The warping was constrained with manually selected points through the brain, which were visually aligned with landmarks in the pre-implantation MRI and post-implantation CT. For Harvard subjects, individual contacts from depth electrodes were labeled manually from the CT image using the BioImage Suite \u2019s Electrode Editor tool (legacy version 3.5; Joshi et al., 2011). The coordinates in CT image-space were converted to coordinates within the patient\u2019s segmented MRI brain-space using the iELVis toolbox (yangWangElecPjct; Yang, Wang, et al., 2012; Groppe et al., 2017). For all sites, the electrode spatial coordinates were transformed from the individual patient space into the standard space of the Montreal Neurological Institute (MNI-152) template for plotting purposes. At NYU, this transformation was performed using the DARTEL algorithm (Ashburner, 2007) implemented in SPM8 (Wellcome Department of Imaging Neuroscience, London, United Kingdom). At Harvard, this transformation was performed using the iELVis toolkit . At WU, the transformation was performed with the SubNuclear toolbox using the img2imgcoord utility.","title":"Surface Reconstruction and Electrode Localization"},{"location":"03_cogdataset/#finger-localizer-task","text":"In the Finger Localizer task, participants were presented with four circles, one of which was filled with a specific color, serving as a cue for participants to press the corresponding colored button on the response box. The filled state of the circle persisted for the duration of the response time, followed by an additional delay of 200 milliseconds. The Inter-Trial Intervals (ITIs) were uniformly distributed, with a mean of 0.55 seconds and a range from 0.400 to 0.700 seconds. The experimental protocol comprised 80 trials, distributed equally among the four colors, with 20 trials per color, and the sequence of trials was randomized. This task aimed to identify brain regions responsible for motor control, particularly those governing finger movements, and to pinpoint electrodes selectively activated by specific motor responses, such as button presses. An illustration depicting a trial in which the participant is required to press the blue button Please note: Although participants completed this task concurrently with Experiment 1 , we did not utilize the data in the analysis, as it was primarily acquired for use in Experiment 2 . Consequently, the data pertaining to the Finger Localizer task is not included in this version of our data release.","title":"Finger Localizer Task"},{"location":"03_cogdataset/#task-tieeg","text":"Participants proceeded to Experiment 1 either before or after completing the Finger Localizer task . tiEEG consisted of 5 runs containing 4 blocks each, with 34-38 trials per block (32 non-targets, 8 of each category, and 2-6 targets) and each trial lasting approximately 2.4 s, for a total of 720 trials. Rest breaks between runs and blocks were included. Random jitter was added at the end of each trial (mean inter-trial interval of 0.4 s jittered 0.2-2.0 s, truncated exponential distribution) to avoid periodic presentation of the stimuli. Additional information about the task can be found here .
Task Runs Blocks Trials Total trials Experiment 1 5 4 34-38 per block 720","title":"Task (tiEEG)"},{"location":"03_cogdataset/#full-structure-of-session_1","text":"Complete standard procedure of an iEEG session is available in iEEG Standard Operating Procedure .","title":"Full Structure of Session"},{"location":"03_cogdataset/#inclusion-criteria_1","text":"For the iEEG studies, subjects were 10-65 years old, able to provide informed consent, had IQ > 70, fluent in English, with self-reported normal hearing, normal or corrected-to-normal vision, and cognitive and language abilities within or above the normal range in formal neuropsychological testing performed before surgery. They must not have had an electrographic seizure within 3 hours prior to testing.","title":"Inclusion Criteria"},{"location":"03_cogdataset/#quality-check_1","text":"A comprehensive quality assessment was conducted on the iEEG data. The data underwent manual annotation by epileptologists, excluding channels within the epileptic onset zone, as well as those exhibiting artifacts or showing complete flatness due to electrode contact issues. Channel rejection was independently performed by both the data monitoring and iEEG teams, with results compared to ensure consistency. Additionally, electrode reconstruction was verified to align with subjects' CT scans. Finally, the spectra were inspected for significant disturbances.","title":"Quality Check"},{"location":"03_cogdataset/#exclusion-criteria_1","text":"Subjects who were unable to complete a sufficient number of trials due to excessive muscular artifacts, movement, noisy recordings, or a decision by the subject to terminate the experiment were excluded. Subjects who exhibited low performance on the attention task were also excluded \u2013 this translates to <70% Hits and >30% FAs. In addition, data was also excluded if it did not pass any of the pre-defined data quality checks.","title":"Exclusion Criteria"},{"location":"03_cogdataset/#deviations-from-data-curation-procedure","text":"Our approach to defacing MR/CT DICOM images involved utilizing the built-in face masking tool from XNAT. However, for 12 iEEG subjects, we encountered difficulties in executing this step. This was primarily due to variations in the DICOM images, which could include cropped images, aliasing artifacts, broken headers, missing slices, non-equidistant slices within a series, and other issues. Below is the list of subjects where the execution of the XNAT facemasking tool failed: Subject_ID: CF103, CF104, CF112, CF113, CF116, CF117, CF120, CF121, CF122, CF124, CF125, CF126 To address this issue, we implemented a slightly different workflow that allowed us to successfully deface MR/CT images of these 12 subjects. However, this new approach differed in its ability to regenerate the original DICOM images post-defacement (the original output from the XNAT facemasking tool). Instead, it generated defaced NIFTI images as the primary output. For the current version of the data release, we have decided to share only the defaced NIFTI images for these subjects. Details about this workflow are provided below: Anonymization: MR/CT DICOM images underwent anonymization to remove the subject\u2019s Protected Health Information (PHI). NIFTI Conversion: Anonymized DICOM images were then converted to the NIFTI image format using the dcm2niix package (version: 1.0.20220505) (Li et al., 2016).
Defacing of NIFTI: Defacing of the NIFTI images was performed using the PyDeface package (version: 2.0.2) (Gulban et al., 2022). Verification: This step involved checking the quality of the defaced NIFTI images using 2D/3D image plots to compare before and after the defacing stage. Alternative workflow for defacing 12 challenging MR/CT DICOM Images Miscellaneous: In the MR data for subject CF103, one DICOM slice was inadvertently dropped during the conversion process from DICOM to NIFTI format. However, the resulting NIFTI file remains functional and usable.","title":"Deviations from Data Curation Procedure"},{"location":"04_data/","text":"Data curation procedures A detailed explanation about the multiple steps that were taken to prepare the data for public release will be available in Appendix 7, Data Curation Standard Operating Procedure.","title":"Data curation procedures"},{"location":"04_data/#data-curation-procedurs","text":"A detailed explanation about the multiple steps that were taken to prepare the data for public release will be available in Appendix 7, Data Curation Standard Operating Procedure.","title":"Data curation procedures"},{"location":"05_access/","text":"Access to COGITATE Data There are two ways of accessing the COGITATE data: \"Live\" Database Release: XNAT (eXtensible Neuroimaging Archive Toolkit) Archival Format: Bundles NOTE: XNAT release not available yet! 1. XNAT This database offers a web interface for navigating the data and an API (Application Programming Interface) for programmatically retrieving specific datasets based on user interests. Comprehensive instructions on how to register, access, and query our database are provided below. Step 1: Registration If you are a new user and have not registered yet, you should visit Cogitate_XNAT_registration . Once the registration is done, a verification step, the same as in \u201cCreating an Account\u201d, is needed. If you have already registered, you can skip this step and login at Cogitate_XNAT . Step 2: Navigating at XNAT After completing the registration step, you can log in with your username and password. You can see the list of available datasets under the \u201cProjects\u201d tab. Once you click the project\u2019s name, you will see the list of subjects in the farthest left column. In each subject\u2019s folder, the demographic information of that subject and the various sets of data acquired for Experiment 1 are provided. As an example, for a subject with the ID of CA103, the MR session, Eye tracker and MEEG datasets are listed as in the figure below. In the MR session folder, you can view and access the MR scan of the subject along with the related imaging parameters. In the Eye tracker folder, the eye tracking data of different runs and some details related to them, including the recorded eye, sampling frequency, distance to screen and screen size are available. Under the MEEG folder, there are some tabs at the top where you can find information regarding the Case Report Form, Exit Questionnaire, experiment checklist form, data details within the BIDS framework, and at the bottom, you can download different runs of MEG data. Naming Convention on XNAT The data on XNAT is organized into subjects and sessions under a given project. The subjects are identified using the format \u201cCX???\u201d and the sessions follow the format CX???_MODALITY_VISIT_PARADIGMRUN e.g. CA103_MEEG_1_DurR1 indicates an MEEG measurement for subject ID CA103 during the first visit with Dur experimental paradigm run 1 (R1).
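Since the session names are fully systematic, they can be parsed programmatically; a minimal Python sketch (the function name is hypothetical, and the pattern covers names like the example above):

```python
import re

def parse_xnat_session(name):
    """Split a session name like 'CA103_MEEG_1_DurR1' into its parts."""
    m = re.fullmatch(
        r"(?P<subject>C[A-Z]\d{3})_(?P<modality>[A-Z]+)_"
        r"(?P<visit>\d+)_(?P<paradigm>[A-Za-z]+?)R(?P<run>\d+)",
        name,
    )
    return m.groupdict() if m else None

print(parse_xnat_session("CA103_MEEG_1_DurR1"))
# {'subject': 'CA103', 'modality': 'MEEG', 'visit': '1', 'paradigm': 'Dur', 'run': '1'}
```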
2. Bundles This approach involves providing a collection of links to the prepared bundles of the data and accompanying metadata, which are available in zip format. These links grant users the ability to download specific modalities, example datasets, or the complete dataset. Here is a brief explanation about how to access the data bundles: Step 1: Create a Data User Account Access to the data bundles requires a quick and easy registration process. Provide user information, including name and email address. Read and accept the Terms of Use and GDPR Requirements ( General Data Protection Regulation ). Once you register, you will receive four (4) emails to the email account you registered with. In some cases, checking your junk mail may be necessary. Welcome email : general information Data User Account Verification email: Within the verification email, you must click on the \u2018verify my account\u2019 option to finalize step 1 of creating a data user account in order to gain access to all current and future data releases. Resource Material : A handy email that contains all the important links that serve as reference materials to Cogitate data. Mailing List Subscription: In order to stay up-to-date and informed about news related to COGITATE data releases, you must activate your email subscription (this is in compliance with GDPR requirements). Tip: The registration procedure needed for accessing the data bundles is a separate step from what is required to access XNAT. Step 2: Login and logout of your Data User account To login to your account, go to the Login button on the top right of the page. Enter your email and password used when registering. You should now have access to the Cogitate Data User main page and Data Bundles. To log out of your account, go to the top navigation bar and hover over Data. In the dropdown menu, click on Data User Account. A panel will open on the right side of the screen - click on Account Settings at the bottom of that panel. Then the option to Sign out will appear under your username. Click on Sign out. Tip: The Login button will remain as 'Login' even after signing in to your account. The only way of knowing whether you are logged in or out is by clicking on Data User Account under the Data heading, or by being able to download data (i.e. indicating you are, in fact, logged in). Step 3: How To Download the Data Login to your account Scroll down and click on the \u201cAccess Data Bundles\u201d Click on the download button next to each dataset Naming Convention for Bundles Raw data bundles follow the naming convention below. The project root directory consists of subdirectories named after the subject's ID, which is of the format \u201cCX???\u201d. The subject directories consist of various subdirectories as described below. Except for the metadata directory, the sessions follow the pattern subject-ID_PARADIGM_MODALITY. If the modality data is paradigm-agnostic, e.g. MR or CT, then the paradigm is left blank. We currently have two paradigms in the data: EXP1, indicating the experiment described, and FingerLoc for the finger localiser. The session directories further contain individual scans following the format CX???_MODALITY_1_DurR1.EDF.
The metadata subdirectory further consists of various assessments and questionnaires that provide valuable information.","title":"Access to COGITATE Data"},{"location":"05_access/#access-to-cogitate-data","text":"There are two ways of accessing the COGITATE data: \"Live\" Database Release: XNAT (eXtensible Neuroimaging Archive Toolkit) Archival Format: Bundles NOTE: XNAT release not available yet!","title":"Access to COGITATE Data"},{"location":"05_access/#1-xnat","text":"This database offers a web interface for navigating the data and an API (Application Programming Interface) for programmatically retrieving specific datasets based on user interests. Comprehensive instructions on how to register, access, and query our database are provided below.","title":"1. XNAT"},{"location":"05_access/#step-1-registration","text":"If you are a new user and have not registered yet, you should visit Cogitate_XNAT_registration . Once the registration is done, a verification step, the same as in \u201cCreating an Account\u201d, is needed. If you have already registered, you can skip this step and login at Cogitate_XNAT .","title":"Step 1: Registration"},{"location":"05_access/#step-2-navigating-at-xnat","text":"After completing the registration step, you can log in with your username and password. You can see the list of available datasets under the \u201cProjects\u201d tab. Once you click the project\u2019s name, you will see the list of subjects in the farthest left column. In each subject\u2019s folder, the demographic information of that subject and the various sets of data acquired for Experiment 1 are provided. As an example, for a subject with the ID of CA103, the MR session, Eye tracker and MEEG datasets are listed as in the figure below. In the MR session folder, you can view and access the MR scan of the subject along with the related imaging parameters. In the Eye tracker folder, the eye tracking data of different runs and some details related to them, including the recorded eye, sampling frequency, distance to screen and screen size are available. Under the MEEG folder, there are some tabs at the top where you can find information regarding the Case Report Form, Exit Questionnaire, experiment checklist form, data details within the BIDS framework, and at the bottom, you can download different runs of MEG data.","title":"Step 2: Navigating at XNAT"},{"location":"05_access/#naming-convention-on-xnat","text":"The data on XNAT is organized into subjects and sessions under a given project. The subjects are identified using the format \u201cCX???\u201d and the sessions follow the format CX???_MODALITY_VISIT_PARADIGMRUN e.g. CA103_MEEG_1_DurR1 indicates an MEEG measurement for subject ID CA103 during the first visit with Dur experimental paradigm run 1 (R1).","title":"Naming Convention on XNAT"},{"location":"05_access/#2-bundles","text":"This approach involves providing a collection of links to the prepared bundles of the data and accompanying metadata, which are available in zip format. These links grant users the ability to download specific modalities, example datasets, or the complete dataset. Here is a brief explanation about how to access the data bundles:","title":"2. Bundles"},{"location":"05_access/#step-1-create-a-data-user-account","text":"Step 1: Create a Data User Account Access to the data bundles requires a quick and easy registration process. Provide user information, including name and email address.
Read and accept the Terms of Use and GDPR Requirements ( General Data Protection Regulation ). Once you register, you will receive four (4) emails to the email account you registered with. In some cases, checking your junk mail may be necessary. Welcome email : general information Data User Account Verification email: Within the verification email, you must click on the \u2018verify my account\u2019 option to finalize step 1 of creating a data user account in order to gain access to all current and future data releases. Resource Material : A handy email that contains all the important links that serve as reference materials to Cogitate data. Mailing List Subscription: In order to stay up-to-date and informed about news related to COGITATE data releases, you must activate your email subscription (this is in compliance with GDPR requirements). Tip: The registration procedure needed for accessing the data bundles is a separate step from what is required to access XNAT.","title":"Step 1: Create a Data User Account"},{"location":"05_access/#step-2-login-and-logout-of-your-data-user-account","text":"Step 2: Login and logout of your Data User account To login to your account, go to the Login button on the top right of the page. Enter your email and password used when registering. You should now have access to the Cogitate Data User main page and Data Bundles. To log out of your account, go to the top navigation bar and hover over Data. In the dropdown menu, click on Data User Account. A panel will open on the right side of the screen - click on Account Settings at the bottom of that panel. Then the option to Sign out will appear under your username. Click on Sign out. Tip: The Login button will remain as 'Login' even after signing in to your account. The only way of knowing whether you are logged in or out is by clicking on Data User Account under the Data heading, or by being able to download data (i.e. indicating you are, in fact, logged in). Step 3: How To Download the Data Login to your account Scroll down and click on the \u201cAccess Data Bundles\u201d Click on the download button next to each dataset","title":"Step 2: Login and logout of your Data User account"},{"location":"05_access/#naming-convention-for-bundles","text":"Raw data bundles follow the naming convention below. The project root directory consists of subdirectories named after the subject's ID, which is of the format \u201cCX???\u201d. The subject directories consist of various subdirectories as described below. Except for the metadata directory, the sessions follow the pattern subject-ID_PARADIGM_MODALITY. If the modality data is paradigm-agnostic, e.g. MR or CT, then the paradigm is left blank. We currently have two paradigms in the data: EXP1, indicating the experiment described, and FingerLoc for the finger localiser. The session directories further contain individual scans following the format CX???_MODALITY_1_DurR1.EDF. The metadata subdirectory further consists of various assessments and questionnaires that provide valuable information.","title":"Naming Convention for Bundles"},{"location":"06_experiment1/","text":"Experiment 1: Directory Structure of Data Bundles Raw Data Raw data files are organized hierarchically: Experiment modality --> Subjects --> data folders The metadata related to each level of the hierarchy is contained in a mandatory folder called 'metadata'. Each data folder follows a naming convention, {subject_context_modality[_modifier]}; the sections of the name are separated by underscores.
This naming convention aims at making it easy to identify the data files that relate to the same moment in time and that were acquired simultaneously. subject -> this refers to the subject ID context -> the task or context. This section is optional and can be empty, e.g. if a subject had a standalone MR scan the context is left blank, resulting in a double underscore, like in the case of the CT scan or MR scan in the above example modality -> the type of data collected The Cogitate consortium collected several types of data/metadata during the experiments: BEH : behavioral events ET : Eye tracking data MR : Magnetic resonance data (anatomical scans) MEEG : Magneto-Electroencephalographic data EXQU : Exit Questionnaire CRF : Case Report Form All metadata related to the subject can be found under the aptly named 'metadata' folder under the subject folder (this refers mainly to the EXQU and CRF files). The remaining metadata for the experiment as well as the demographic information on the subjects can be found in the metadata folder above the subject. This folder includes experiment-wide metadata in json format and a csv table with the demographic data of all subjects: devices : A list of devices used to collect the data protocols : a link to the Standard Operating Procedure (SOP) document used for the data collection subjects_demographics : the full set of subjects and their metadata for the specific experiment modality tasks_ taskname : a description of the behavioral task or context with which we named the data bundles. wirings : a pdf file showing how the devices are connected to each other depicting the experimental setup. Raw M-EEG Data Directory Structure COG_MEEG_EXP1_RELEASE/ \u251c\u2500\u2500 metadata/ # Experiment modality level metadata folder \u2502 \u251c\u2500\u2500 devices_MEEG.json # List of devices used to collect the data \u2502 \u251c\u2500\u2500 protocols_MEEG.json # A link to the Standard Operating Procedures (SOP) \u2502 \u251c\u2500\u2500 subjects_demographics_MEEG.json # Demographic information of MEEG subjects \u2502 \u251c\u2500\u2500 tasks_EXP1.json # Description of the 1st Cogitate task \u2502 \u251c\u2500\u2500 tasks_RestinEO.json # Description of the Resting state task \u2502 \u251c\u2500\u2500 tasks_Rnoise.json # Description of the Rnoise task \u2502 \u2514\u2500\u2500 wirings_MEEG.PDF # Wiring diagram of devices_MEEG.json connections \u2514\u2500\u2500 CB036 # Subject folder \u251c\u2500\u2500 metadata/ # Subject level metadata folder \u2502 \u251c\u2500\u2500 CB036_EXP1_CRF.json # Subject Case Report Form (CRF) \u2502 \u2514\u2500\u2500 CB036_EXP1_EXQU.json # Subject Exit Questionnaire responses \u251c\u2500\u2500 CB036_EXP1_BEH/ # Behavioral Events data collected during EXP1 \u251c\u2500\u2500 CB036_EXP1_LPTTriggers/ # Trigger data for synchronization \u251c\u2500\u2500 CB036_EXP1_MEEG/ # MEEG data collected during EXP1 (fif) \u251c\u2500\u2500 CB036_EXP1_ET/ # Eye Tracking data collected during EXP1 (asc) \u251c\u2500\u2500 CB036_RestinEO_MEEG/ # MEEG data collected during RestingEO task (fif) \u251c\u2500\u2500 CB036_RestinEO_ET/ # Eye Tracking data collected during RestingEO task \u251c\u2500\u2500 CB036_Rnoise_MEEG/ # MEEG data collected during Rnoise task (fif) \u2514\u2500\u2500 CB036__MR/ # MR anatomical scan data (fif) Raw iEEG Data Directory Structure COG_ECOG_EXP1_RELEASE/ # Experiment modality top level folder \u251c\u2500\u2500 metadata/ # Experiment modality level metadata folder \u2502 \u251c\u2500\u2500 devices_ECOG.json # List of devices used
to collect the data \u2502 \u251c\u2500\u2500 protocols_ECOG.json # A link to the Standard Operating Procedure (SOP) document used for the data collection \u2502 \u251c\u2500\u2500 subjects_demographics_ECOG.json # Full set of experiment modality subjects with their respective demographic information \u2502 \u251c\u2500\u2500 tasks_EXP1.json # Description of the 1st Cogitate task \u2502 \u251c\u2500\u2500 tasks_FingerLoc.json # Description of the Finger Localizer task \u2502 \u2514\u2500\u2500 wirings_ECOG.pdf # Wiring pdf file showing how the devices described in devices_ECOG.json are connected to each other \u2514\u2500\u2500 CE103 # Subject folder \u251c\u2500\u2500 metadata/ # Subject level metadata folder \u2502 \u251c\u2500\u2500 CE103_EXP1_CRF.json # Subject Case Report Form (CRF) \u2502 \u2514\u2500\u2500 CE103_EXP1_EXQU.json # Subject Exit Questionnaire responses \u251c\u2500\u2500 CE103_EXP1_BEH/ # Behavioral Events data collected during EXP1 \u251c\u2500\u2500 CE103_EXP1_ECOG/ # ECOG data files collected during EXP1 \u251c\u2500\u2500 CE103_EXP1_ET/ # Eye Tracking data collected during EXP1 \u251c\u2500\u2500 CE103_FingerLoc_ECOG/ # ECOG data collected during the Finger Localizer task \u251c\u2500\u2500 CE103_FingerLoc_BEH/ # Behavioral event data collected during the Finger Localizer task \u251c\u2500\u2500 CE103__CT/ # CT scan data (no task) \u251c\u2500\u2500 CE103__MR/ # MR anatomical data \u2514\u2500\u2500 CE103__ElecCoords/ # Contains coordinate output files of MR/CT coregistration and electrode reconstruction pipeline BIDS Format The BIDS (Brain Imaging Data Structure) file structure for M-EEG (magneto-electroencephalography) and iEEG (intracranial EEG) data adheres to a standardized format for organizing neuroimaging data. Each file follows a structured naming convention indicating subject, session, task, and data type. Here's a breakdown of the key elements within each modality's data directory structure: - dataset_description.json: Provides general information about the dataset. - participants.json and participants.tsv: Contain demographic information about subjects. - README.md: Offers an overview of the data and BIDS format. - Subject-specific data: Organized under sub-[SubjectID]/. - Session-specific data: Organized under ses-[SessionID]/. - Anatomical and functional data: Stored in appropriate folders (anat/ for anatomical, meg/ for MEG, and ieeg/ for iEEG). - Metadata: Metadata related to subjects and experiments is stored in metadata/ directories. This structured approach ensures clarity and consistency in data organization, facilitating ease of access and analysis for researchers and collaborators.
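Once a BIDS bundle is downloaded, the data can be read with standard tooling such as MNE-BIDS; a minimal sketch, where the root path is a local placeholder and the entities follow the file names shown in the tree below:

```python
from mne_bids import BIDSPath, read_raw_bids

bids_root = "COG_MEEG_EXP1_BIDS_RELEASE"  # placeholder: local path to the release
bids_path = BIDSPath(root=bids_root, subject="CA103", session="1",
                     task="dur", run="01", datatype="meg")
raw = read_raw_bids(bids_path)  # resolves sub-CA103_ses-1_task-dur_run-01_meg.fif
print(raw.info)
```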
BIDS M-EEG Data Directory Structure COG_MEEG_EXP1_BIDS_RELEASE/ |-- dataset_description.json # General information about the BIDS version, type of dataset, Authors, Acknowledgments, Funding, Ethics Approvals, and the link to the COGITATE website |-- derivatives # Contains metadata and coreg (coregistration) | |-- additional_metadata # Contains all of the metadata | | |-- dataset_description.json # General information about BIDS version, type of dataset | | |-- METADATA # Metadata including the list of devices, link to the COGITATE GitHub repository, types of tasks, stimuli and responses, and the wiring diagram of the MEG data | | | |-- analysis.json # Analysis steps, their order, and the link to the analysis code repository | | | |-- devices_MEEG.json # List of devices used for MEG data acquisition | | | |-- protocols.json # Links to the COGITATE wiki and the MEG SOP | | | |-- tasks_EXP1_MEEG.json # Description of the behavioral task, stimuli and responses | | | |-- tasks_RestinEO_MEEG.json # Description of the resting-state task and the type of response | | | |-- tasks_Rnoise_MEEG.json # Description of the empty room task | | | `-- wiring_MEEG.pdf # Wiring diagram of the MEG setup | | |-- README.md # An explanation of the additional_metadata directory | | |-- sub-CA103 # Subject folder | | | `-- METADATA # Contains the Case Report Form, Exit Questionnaire and subject\u2019s demographics | | | |-- CA103_CRF.json # Case Report Form | | | |-- CA103_demographics.json # Subject\u2019s demographics | | | `-- CA103_EXQU.json # Exit Questionnaire | `-- coreg # The results of the coregistration | |-- dataset_description.json # BIDS version, Data Type, and description of the files of this directory | |-- README.md | |-- sub-CA103 # Subject folder | | `-- ses-1 # Session 1/visit 1 | | `-- meg # MEG folder | | `-- sub-CA103_ses-1_trans.fif # The output of coregistering the MEG sensors and head to the anatomical data |-- participants.json # General information about subjects\u2019 demographics |-- participants.tsv # Subjects\u2019 demographics in tsv format |-- README.md # Overview of the MEG data and the BIDS format |-- sub-CA103 # Subject folder | `-- ses-1 # Session 1/visit 1 | |-- anat # Folder of anatomical data | | |-- sub-CA103_ses-1_T1w.json # Anatomical landmark coordinates | | `-- sub-CA103_ses-1_T1w.nii.gz # Anatomical data | |-- meg # Folder of MEG data | | |-- sub-CA103_ses-1_acq-calibration_meg.dat # Calibration data | | |-- sub-CA103_ses-1_acq-crosstalk_meg.fif # Crosstalk data | | |-- sub-CA103_ses-1_coordsystem.json # Information about the MEG and head coil coordinate systems, units, description and anatomical landmark coordinates | | |-- sub-CA103_ses-1_task-dur_run-01_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during run 1 | | |-- sub-CA103_ses-1_task-dur_run-01_events.json # Description of sample, value and trial type | | |-- sub-CA103_ses-1_task-dur_run-01_events.tsv # Contains information about the events/stimuli presented during Experiment 1, run 1, the event\u2019s onset time and duration, type of event, event code (trigger code) and sample | | |-- sub-CA103_ses-1_task-dur_run-01_meg.fif # Contains the raw/unprocessed MEG data during the task of Experiment 1/session 1, run 1 | | |-- sub-CA103_ses-1_task-dur_run-01_meg.json # Contains power line and sampling frequencies, duration of recording, and MEG, EOG, ECG and trigger channel counts during run 1 | | |-- sub-CA103_ses-1_task-dur_run-02_channels.tsv |
| |-- sub-CA103_ses-1_task-dur_run-02_events.json | | |-- sub-CA103_ses-1_task-dur_run-02_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-02_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-02_meg.json | | |-- sub-CA103_ses-1_task-dur_run-03_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-03_events.json | | |-- sub-CA103_ses-1_task-dur_run-03_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-03_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-03_meg.json | | |-- sub-CA103_ses-1_task-dur_run-04_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-04_events.json | | |-- sub-CA103_ses-1_task-dur_run-04_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-04_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-04_meg.json | | |-- sub-CA103_ses-1_task-dur_run-05_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-05_events.json | | |-- sub-CA103_ses-1_task-dur_run-05_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-05_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-05_meg.json | | |-- sub-CA103_ses-1_task-noise_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during noise recording | | |-- sub-CA103_ses-1_task-noise_meg.fif # Contains the raw/unprocessed MEG data during noise recording of Experiment 1/session 1 | | |-- sub-CA103_ses-1_task-noise_meg.json # Contains power line and sampling frequencies, duration of recording, and MEG, EOG, ECG and trigger channel counts during noise recording | | |-- sub-CA103_ses-1_task-rest_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during resting-state recording | | |-- sub-CA103_ses-1_task-rest_meg.fif # Contains the raw/unprocessed MEG data during resting-state recording of Experiment 1/session 1 | | `-- sub-CA103_ses-1_task-rest_meg.json # Contains power line and sampling frequencies, duration of recording, and MEG, EOG, ECG and trigger channel counts during resting-state recording | `-- sub-CA103_ses-1_scans.tsv # List of MEG data files BIDS iEEG Data Directory Structure COG_ECOG_EXP1_BIDS_RELEASE/ |-- dataset_description.json # General information about the BIDS version, type of dataset, Authors, Acknowledgments, Funding, Ethics Approvals, and the link to the COGITATE website |-- derivatives # Directory containing derived data | |-- fs # Outputs of FreeSurfer processing | | `-- sub-CF102 # Subject folder | | |-- label # Contains files representing segmented brain regions | | |-- mri # Contains various outputs of the FreeSurfer MRI processing pipeline, such as brain masks, tissue segmentations, and cortical surface reconstructions | | |-- scripts # Contains relevant information related to the execution and status tracking of FreeSurfer's recon-all pipeline for MRI data processing, including build and status stamps, logs, and environment settings | | |-- stats # Statistical data related to various anatomical and morphometric measurements derived from brain segmentation and parcellation processes | | |-- surf # Contains various surface representations of the cerebral cortex, including vertex-wise measurements such as cortical area, curvature, thickness, sulcal depth, and surface normals, for both left and right hemispheres, derived from structural MRI data | | `-- touch # Contains information about completion of various processing steps related to surface generation, segmentation, registration, normalization, and quality control for both left and right hemispheres |--
participants.json # Demographic information about participants |-- participants.tsv # Subjects\u2019 demographics in tsv format |-- README |-- sub-CF102 # Subject folder | `-- ses-1 # Session 1/visit 1 | |-- ieeg # Folder of iEEG data | | |-- sub-CF102_ses-1_laplace_mapping_ieeg.json # Contains electrode groups and their references for Laplace mapping for session 1 | | |-- sub-CF102_ses-1_space-ACPC_coordsystem.json # Contains information about the coordinate system during session 1 | | |-- sub-CF102_ses-1_space-ACPC_electrodes.tsv # Contains spatial coordinates (x, y, z)/locations of electrodes on the subject's brain surface | | |-- sub-CF102_ses-1_task-Dur_channels.tsv # Contains information about the iEEG data channels during task and session 1 including their names, type, units, frequency cutoffs, description, sampling frequency, and status | | |-- sub-CF102_ses-1_task-Dur_events.json # Contains description for \u201csample\u201d, \u201cvalue\u201d, and \u201ctrial_type\u201d | | |-- sub-CF102_ses-1_task-Dur_events.tsv # Contains event-related data during the task and session 1 including onset, duration, trial type, value and sample | | |-- sub-CF102_ses-1_task-Dur_ieeg.eeg # Contains the iEEG data during task and session 1 | | |-- sub-CF102_ses-1_task-Dur_ieeg.json # Contains metadata for the iEEG recorded during the task and session 1 | | |-- sub-CF102_ses-1_task-Dur_ieeg.vhdr # The header file describing the iEEG recording during the task and session 1 | | `-- sub-CF102_ses-1_task-Dur_ieeg.vmrk # A marker file containing annotations or event markers corresponding to the events during the task and session 1 | `-- sub-CF102_ses-1_scans.tsv","title":"Experiment 1: Directory Structure of Data Bundles"},{"location":"06_experiment1/#experiment-1-directory-structure-of-data-bundles","text":"","title":"Experiment 1: Directory Structure of Data Bundles"},{"location":"06_experiment1/#raw-data","text":"Raw data files are organized hierarchically: Experiment modality --> Subjects --> data folders The metadata related to each level of the hierarchy is contained in a mandatory folder called 'metadata'. Each data folder follows the naming convention {subject_context_modality[_modifier]}; the sections of the name are separated by underscores. This naming convention aims at making it easy to identify the data files that relate to the same moment in time and that were acquired simultaneously. subject -> this refers to the subject ID context -> the task or context. This section is optional and can be empty, e.g. if a subject had a standalone MR scan the context is left blank, resulting in a double underscore, like in the case of the CT scan or MR scan in the above example modality -> the type of data collected The Cogitate consortium collected several types of data/metadata during the experiments: BEH : behavioral events ET : Eye tracking data MR : Magnetic resonance data (anatomical scans) MEEG : Magneto-Electroencephalographic data EXQU : Exit Questionnaire CRF : Case Report Form All metadata related to the subject can be found under the aptly named 'metadata' folder under the subject folder (this refers mainly to the EXQU and CRF files). The remaining metadata for the experiment as well as the demographic information on the subjects can be found in the metadata folder above the subject. This folder includes experiment-wide metadata in json format and a csv table with the demographic data of all subjects: devices : A list of devices used to collect the data protocols : a link to the Standard Operating Procedure (SOP) document used for the data collection subjects_demographics : the full set of subjects and their metadata for the specific experiment modality tasks_ taskname : a description of the behavioral task or context with which we named the data bundles. wirings : a PDF file showing how the devices are connected to each other, depicting the experimental setup.","title":"Raw Data"},
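As a quick illustration of the {subject_context_modality[_modifier]} convention described above, the hypothetical helper below splits a bundle folder name into its parts; it is only a sketch, not part of the release.

```python
# Hypothetical helper illustrating the bundle naming convention; an empty
# context (e.g. a standalone CT or MR scan) shows up as a double underscore.
def parse_bundle_name(name: str) -> dict:
    subject, context, modality, *modifier = name.split("_")
    return {"subject": subject,
            "context": context or None,           # None for standalone scans
            "modality": modality,
            "modifier": modifier[0] if modifier else None}

print(parse_bundle_name("CB036_EXP1_MEEG"))  # task-linked MEEG bundle
print(parse_bundle_name("CE103__CT"))        # context is None: standalone CT
```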
{"location":"06_experiment1/#raw-m-eeg-data-directory-structure","text":"COG_MEEG_EXP1_RELEASE/ \u251c\u2500\u2500 metadata/ # Experiment modality level metadata folder \u2502 \u251c\u2500\u2500 devices_MEEG.json # List of devices used to collect the data \u2502 \u251c\u2500\u2500 protocols_MEEG.json # A link to the Standard Operating Procedures (SOP) \u2502 \u251c\u2500\u2500 subjects_demographics_MEEG.json # Demographic information of MEEG subjects \u2502 \u251c\u2500\u2500 tasks_EXP1.json # Description of the 1st Cogitate task \u2502 \u251c\u2500\u2500 tasks_RestinEO.json # Description of the Resting state task \u2502 \u251c\u2500\u2500 tasks_Rnoise.json # Description of the Rnoise task \u2502 \u2514\u2500\u2500 wirings_MEEG.PDF # Wiring diagram of devices_MEEG.json connections \u2514\u2500\u2500 CB036 # Subject folder \u251c\u2500\u2500 metadata/ # Subject level metadata folder \u2502 \u251c\u2500\u2500 CB036_EXP1_CRF.json # Subject Case Report Form (CRF) \u2502 \u2514\u2500\u2500 CB036_EXP1_EXQU.json # Subject Exit Questionnaire responses \u251c\u2500\u2500 CB036_EXP1_BEH/ # Behavioral Events data collected during EXP1 \u251c\u2500\u2500 CB036_EXP1_LPTTriggers/ # Trigger data for synchronization \u251c\u2500\u2500 CB036_EXP1_MEEG/ # MEEG data collected during EXP1 (fif) \u251c\u2500\u2500 CB036_EXP1_ET/ # Eye Tracking data collected during EXP1 (asc) \u251c\u2500\u2500 CB036_RestinEO_MEEG/ # MEEG data collected during RestingEO task (fif) \u251c\u2500\u2500 CB036_RestinEO_ET/ # Eye Tracking data collected during RestingEO task \u251c\u2500\u2500 CB036_Rnoise_MEEG/ # MEEG data collected during Rnoise task (fif) \u2514\u2500\u2500 CB036__MR/ # MR anatomical scan data (fif)","title":"Raw M-EEG Data Directory Structure"},{"location":"06_experiment1/#raw-ieeg-data-directory-structure","text":"COG_ECOG_EXP1_RELEASE/ # Experiment modality top level folder \u251c\u2500\u2500 metadata/ # Experiment modality level metadata folder \u2502 \u251c\u2500\u2500 devices_ECOG.json # List of devices used to collect the data \u2502 \u251c\u2500\u2500 protocols_ECOG.json # A link to the Standard Operating Procedure (SOP) document used for the data collection \u2502 \u251c\u2500\u2500 subjects_demographics_ECOG.json # Full set of experiment modality subjects with their respective demographic information \u2502 \u251c\u2500\u2500 tasks_EXP1.json # Description of the 1st Cogitate task \u2502 \u251c\u2500\u2500 tasks_FingerLoc.json # Description of the Finger Localizer task \u2502 \u2514\u2500\u2500 wirings_ECOG.pdf # Wiring PDF file showing how the devices described in devices_ECOG.json are connected to each other \u2514\u2500\u2500 CE103 # Subject folder \u251c\u2500\u2500 metadata/ # Subject level metadata folder \u2502 \u251c\u2500\u2500 CE103_EXP1_CRF.json # Subject Case Report Form (CRF) \u2502 \u2514\u2500\u2500 CE103_EXP1_EXQU.json # Subject Exit Questionnaire responses \u251c\u2500\u2500 CE103_EXP1_BEH/
# Behavioral Events data collected during EXP1 \u251c\u2500\u2500 CE103_EXP1_ECOG/ # ECOG data files collected during EXP1 \u251c\u2500\u2500 CE103_EXP1_ET/ # Eye Tracking data collected during EXP1 \u251c\u2500\u2500 CE103_FingerLoc_ECOG/ # ECOG data collected during the Finger Localizer task \u251c\u2500\u2500 CE103_FingerLoc_BEH/ # Behavioral event data collected during the Finger Localizer task \u251c\u2500\u2500 CE103__CT/ # CT scan data (no task) \u251c\u2500\u2500 CE103__MR/ # MR anatomical data \u2514\u2500\u2500 CE103__ElecCoords/ # Contains coordinate output files of the MR/CT coregistration and electrode reconstruction pipeline","title":"Raw iEEG Data Directory Structure"},{"location":"06_experiment1/#bids-format","text":"The BIDS (Brain Imaging Data Structure) file structure for M-EEG (magneto-electroencephalography) and iEEG (intracranial EEG) data adheres to a standardized format for organizing neuroimaging data. Each file follows a structured naming convention indicating subject, session, task, and data type. Here's a breakdown of the key elements within each modality's data directory structure: - dataset_description.json: Provides general information about the dataset. - participants.json and participants.tsv: Contain demographic information about subjects. - README.md: Offers an overview of the data and BIDS format. - Subject-specific data: Organized under sub-[SubjectID]/. - Session-specific data: Organized under ses-[SessionID]/. - Anatomical and functional data: Stored in appropriate folders (anat/ for anatomical, meg/ for MEG, and ieeg/ for iEEG). - Metadata: Metadata related to subjects and experiments is stored in metadata/ directories. This structured approach ensures clarity and consistency in data organization, facilitating ease of access and analysis for researchers and collaborators.","title":"BIDS Format"},{"location":"06_experiment1/#bids-m-eeg-data-directory-structure","text":"COG_MEEG_EXP1_BIDS_RELEASE/ |-- dataset_description.json # General information about the BIDS version, type of dataset, Authors, Acknowledgments, Funding, Ethics Approvals, and the link to the COGITATE website |-- derivatives # Contains metadata and coreg (coregistration) | |-- additional_metadata # Contains all of the metadata | | |-- dataset_description.json # General information about BIDS version, type of dataset | | |-- METADATA # Metadata including the list of devices, link to the COGITATE GitHub repository, types of tasks, stimuli and responses, and the wiring diagram of the MEG data | | | |-- analysis.json # Analysis steps, their order, and the link to the analysis code repository | | | |-- devices_MEEG.json # List of devices used for MEG data acquisition | | | |-- protocols.json # Links to the COGITATE wiki and the MEG SOP | | | |-- tasks_EXP1_MEEG.json # Description of the behavioral task, stimuli and responses | | | |-- tasks_RestinEO_MEEG.json # Description of the resting-state task and the type of response | | | |-- tasks_Rnoise_MEEG.json # Description of the empty room task | | | `-- wiring_MEEG.pdf # Wiring diagram of the MEG setup | | |-- README.md # An explanation of the additional_metadata directory | | |-- sub-CA103 # Subject folder | | | `-- METADATA # Contains the Case Report Form, Exit Questionnaire and subject\u2019s demographics | | | |-- CA103_CRF.json # Case Report Form | | | |-- CA103_demographics.json # Subject\u2019s demographics | | | `-- CA103_EXQU.json # Exit Questionnaire | `-- coreg # The results of the coregistration | |-- dataset_description.json # BIDS version, Data Type, and description of the files of
this directory | |-- README.md | |-- sub-CA103 # Subject folder | | `-- ses-1 # Session 1/visit 1 | | `-- meg # MEG folder | | `-- sub-CA103_ses-1_trans.fif # The output of coregistering the MEG sensors and head to the anatomical data |-- participants.json # General information about subjects\u2019 demographics |-- participants.tsv # Subjects\u2019 demographics in tsv format |-- README.md # Overview of the MEG data and the BIDS format |-- sub-CA103 # Subject folder | `-- ses-1 # Session 1/visit 1 | |-- anat # Folder of anatomical data | | |-- sub-CA103_ses-1_T1w.json # Anatomical landmark coordinates | | `-- sub-CA103_ses-1_T1w.nii.gz # Anatomical data | |-- meg # Folder of MEG data | | |-- sub-CA103_ses-1_acq-calibration_meg.dat # Calibration data | | |-- sub-CA103_ses-1_acq-crosstalk_meg.fif # Crosstalk data | | |-- sub-CA103_ses-1_coordsystem.json # Information about the MEG and head coil coordinate systems, units, description and anatomical landmark coordinates | | |-- sub-CA103_ses-1_task-dur_run-01_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during run 1 | | |-- sub-CA103_ses-1_task-dur_run-01_events.json # Description of sample, value and trial type | | |-- sub-CA103_ses-1_task-dur_run-01_events.tsv # Contains information about the events/stimuli presented during Experiment 1, run 1, the event\u2019s onset time and duration, type of event, event code (trigger code) and sample | | |-- sub-CA103_ses-1_task-dur_run-01_meg.fif # Contains the raw/unprocessed MEG data during the task of Experiment 1/session 1, run 1 | | |-- sub-CA103_ses-1_task-dur_run-01_meg.json # Contains power line and sampling frequencies, duration of recording, and MEG, EOG, ECG and trigger channel counts during run 1 | | |-- sub-CA103_ses-1_task-dur_run-02_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-02_events.json | | |-- sub-CA103_ses-1_task-dur_run-02_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-02_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-02_meg.json | | |-- sub-CA103_ses-1_task-dur_run-03_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-03_events.json | | |-- sub-CA103_ses-1_task-dur_run-03_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-03_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-03_meg.json | | |-- sub-CA103_ses-1_task-dur_run-04_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-04_events.json | | |-- sub-CA103_ses-1_task-dur_run-04_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-04_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-04_meg.json | | |-- sub-CA103_ses-1_task-dur_run-05_channels.tsv | | |-- sub-CA103_ses-1_task-dur_run-05_events.json | | |-- sub-CA103_ses-1_task-dur_run-05_events.tsv | | |-- sub-CA103_ses-1_task-dur_run-05_meg.fif | | |-- sub-CA103_ses-1_task-dur_run-05_meg.json | | |-- sub-CA103_ses-1_task-noise_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the recorded data during noise recording | | |-- sub-CA103_ses-1_task-noise_meg.fif # Contains the raw/unprocessed MEG data during noise recording of Experiment 1/session 1 | | |-- sub-CA103_ses-1_task-noise_meg.json # Contains power line and sampling frequencies, duration of recording, and MEG, EOG, ECG and trigger channel counts during noise recording | | |-- sub-CA103_ses-1_task-rest_channels.tsv # Contains information on the channel names, types, units, sampling rate, status, and frequency cutoffs of the filter applied to the
recorded data during resting-state recording | | |-- sub-CA103_ses-1_task-rest_meg.fif # Contains the raw/unprocessed MEG data during resting-state recording of Experiment 1/session 1 | | `-- sub-CA103_ses-1_task-rest_meg.json # Contains power line and sampling frequencies, duration of recording, and MEG, EOG, ECG and trigger channel counts during resting-state recording | `-- sub-CA103_ses-1_scans.tsv # List of MEG data files","title":"BIDS M-EEG Data Directory Structure"},{"location":"06_experiment1/#bids-ieeg-data-directory-structure","text":"COG_ECOG_EXP1_BIDS_RELEASE/ |-- dataset_description.json # General information about the BIDS version, type of dataset, Authors, Acknowledgments, Funding, Ethics Approvals, and the link to the COGITATE website |-- derivatives # Directory containing derived data | |-- fs # Outputs of FreeSurfer processing | | `-- sub-CF102 # Subject folder | | |-- label # Contains files representing segmented brain regions | | |-- mri # Contains various outputs of the FreeSurfer MRI processing pipeline, such as brain masks, tissue segmentations, and cortical surface reconstructions | | |-- scripts # Contains relevant information related to the execution and status tracking of FreeSurfer's recon-all pipeline for MRI data processing, including build and status stamps, logs, and environment settings | | |-- stats # Statistical data related to various anatomical and morphometric measurements derived from brain segmentation and parcellation processes | | |-- surf # Contains various surface representations of the cerebral cortex, including vertex-wise measurements such as cortical area, curvature, thickness, sulcal depth, and surface normals, for both left and right hemispheres, derived from structural MRI data | | `-- touch # Contains information about completion of various processing steps related to surface generation, segmentation, registration, normalization, and quality control for both left and right hemispheres |-- participants.json # Demographic information about participants |-- participants.tsv # Subjects\u2019 demographics in tsv format |-- README |-- sub-CF102 # Subject folder | `-- ses-1 # Session 1/visit 1 | |-- ieeg # Folder of iEEG data | | |-- sub-CF102_ses-1_laplace_mapping_ieeg.json # Contains electrode groups and their references for Laplace mapping for session 1 | | |-- sub-CF102_ses-1_space-ACPC_coordsystem.json # Contains information about the coordinate system during session 1 | | |-- sub-CF102_ses-1_space-ACPC_electrodes.tsv # Contains spatial coordinates (x, y, z)/locations of electrodes on the subject's brain surface | | |-- sub-CF102_ses-1_task-Dur_channels.tsv # Contains information about the iEEG data channels during task and session 1 including their names, type, units, frequency cutoffs, description, sampling frequency, and status | | |-- sub-CF102_ses-1_task-Dur_events.json # Contains description for \u201csample\u201d, \u201cvalue\u201d, and \u201ctrial_type\u201d | | |-- sub-CF102_ses-1_task-Dur_events.tsv # Contains event-related data during the task and session 1 including onset, duration, trial type, value and sample | | |-- sub-CF102_ses-1_task-Dur_ieeg.eeg # Contains the iEEG data during task and session 1 | | |-- sub-CF102_ses-1_task-Dur_ieeg.json # Contains metadata for the iEEG recorded during the task and session 1 | | |-- sub-CF102_ses-1_task-Dur_ieeg.vhdr # The header file describing the iEEG recording during the task and session 1 | | `-- sub-CF102_ses-1_task-Dur_ieeg.vmrk # A marker file containing annotations or event markers corresponding to the events during the task and session 1 | `-- sub-CF102_ses-1_scans.tsv","title":"BIDS iEEG Data Directory Structure"},
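To illustrate the iEEG layout above, here is a minimal sketch of reading one subject's recording and its events table with MNE and pandas. The local root path is an assumption; the filenames mirror the example subject in the listing.

```python
# Minimal sketch, assuming the BIDS iEEG release is available locally.
import mne
import pandas as pd

root = "COG_ECOG_EXP1_BIDS_RELEASE/sub-CF102/ses-1/ieeg"  # hypothetical path
raw = mne.io.read_raw_brainvision(
    f"{root}/sub-CF102_ses-1_task-Dur_ieeg.vhdr", preload=False)

# The events table lists onset, duration, trial type, value and sample.
events = pd.read_csv(f"{root}/sub-CF102_ses-1_task-Dur_events.tsv", sep="\t")
print(events[["onset", "duration", "trial_type"]].head())
```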
{"location":"07_bug-reports/","text":"Support and Report Bugs There are various ways the COGITATE team can support you as a Data User, depending on the type of help you are seeking. For general questions, email us at: cogitate-support@ae.mpg.de For reporting issues or bugs: Visit github.com/cogitate-consortium/cogitate-data and ensure you are logged into your GitHub account Don\u2019t have a GitHub account? Create one here . Click on Issues Create New Issue Please provide as much detail as possible Click Submit new issue. Your issue will be logged with our Data Release team and dealt with in a timely manner.","title":"Support and Report Bugs"},{"location":"07_bug-reports/#support-and-report-bugs","text":"There are various ways the COGITATE team can support you as a Data User, depending on the type of help you are seeking. For general questions, email us at: cogitate-support@ae.mpg.de For reporting issues or bugs: Visit github.com/cogitate-consortium/cogitate-data and ensure you are logged into your GitHub account Don\u2019t have a GitHub account? Create one here . Click on Issues Create New Issue Please provide as much detail as possible Click Submit new issue. Your issue will be logged with our Data Release team and dealt with in a timely manner.","title":"Support and Report Bugs"},{"location":"08_links/","text":"Links and Reference Materials COGITATE Website COGITATE Main Scientific Paper COGITATE Preregistration v4 - December 2022 COGITATE GitHub Repositories COGITATE Wiki Subjects Demography YouTube Channel videos: - COGITATE Experiment 1 - HOW TO create a COGITATE Data User account - HOW TO login and logout of your COGITATE Data User account - HOW TO Download a COGITATE Data Bundle XNAT Support Modality Place of Acquisition Number of Subjects Reference Materials M-EEG University of Birmingham, Center for Human Brain Health (CHBH) 102 M-EEG Wiring Diagram M-EEG Experiment 1 Code M-EEG SOP Peking University (PKU) fMRI Donders Center for Cognitive Neuroimaging (DCCN) 122 Yale Magnetic Resonance Research Center (MRRC) ECoG (iEEG) Harvard University at Boston Children\u2019s Hospital 38 iEEG Wiring Diagram iEEG Experiment 1 Code iEEG SOP New York University Langone (NYU) University of Wisconsin","title":"Links and Reference Materials"},{"location":"08_links/#links-and-reference-materials","text":"COGITATE Website COGITATE Main Scientific Paper COGITATE Preregistration v4 - December 2022 COGITATE GitHub Repositories COGITATE Wiki Subjects Demography YouTube Channel videos: - COGITATE Experiment 1 - HOW TO create a COGITATE Data User account - HOW TO login and logout of your COGITATE Data User account - HOW TO Download a COGITATE Data Bundle XNAT Support Modality Place of Acquisition Number of Subjects Reference Materials M-EEG University of Birmingham, Center for Human Brain Health (CHBH) 102 M-EEG Wiring Diagram M-EEG Experiment 1 Code M-EEG SOP Peking University (PKU) fMRI Donders Center for Cognitive Neuroimaging (DCCN) 122 Yale Magnetic Resonance Research Center (MRRC) ECoG (iEEG) Harvard University at Boston Children\u2019s Hospital 38 iEEG Wiring Diagram iEEG Experiment 1 Code iEEG SOP New York University Langone (NYU) University of Wisconsin","title":"Links and Reference Materials"},{"location":"09_appendices/","text":"Appendices Appendix 1. Screening Form M-EEG Screening Form This questionnaire should be filled out by the participant before the experiment.
It asks for some information such as the name, weight (kg), email, date of birth and phone number of the subject, followed by a number of questions including: Have you participated in a MEG study before? Do you suffer from any medical condition that may be relevant (e.g. epilepsy, diabetes, asthma)? Do you suffer from claustrophobia? Have you been formally diagnosed with attention deficit (hyperactivity) disorder (AD(H)D)? Have you been formally diagnosed with autism spectrum disorder (ASD)? Other information (e.g. spectacle prescription) Appendix 2. Case Report Form This form was for reporting any issues that might have happened during the experiment. After the end of the experiment and saving the data, the operator filled out this form. M-EEG Case Report Form For the M-EEG modality, the following items were asked about for the different sections of the data acquisition (Data for Empty Room, Resting-state, run 1, \u2026, run 5) in the Case Report Form: Eye tracking not working Eye tracking showing bad patterns Problems in task performance Strong head motion Strong body motion Trigger monitoring Bad MEG sensors Bad EEG sensors Notes: explanation of the issue in more detail iEEG Case Report Form The iEEG case report form contained essential details such as the subject's ID, task name, and the distance from the stimulus interface to the subject in centimeters. It also included comprehensive information about the eye-tracking setup, specifying which eye was tracked, the sampling rate, calibration method, pupil shape, and camera lens used. Additionally, trigger information, including the trigger channels utilized and other relevant parameters, was documented. Furthermore, some fields were provided for notes and miscellaneous information. Here are the specific fields listed in the iEEG case report form: Subject ID Task Name Stimulus Interface Computer Name Distance (cm) Practice Run Pre-Task Equipment Disinfected Post-Task Equipment Disinfected Other Notes Screen Brightness Eye Tracking Sticker Glasses or Lenses Trigger Information: Trigger Channel Trigger Reference Channel Trigger Channel Gain Audio Channels Eye Tracker Setup: Eye Tracked Sampling Rate Calibration Type Pupil Camera Lens Appendix 3. Exit Questionnaire This form should be filled out by the participant after the experiment and asks the following questions: How difficult was it to stay focused for the entire duration of the experiment? Did you notice that the stimuli duration varied? When stimuli were presented for a short period of time, were you able to focus on them for as long as they were presented? When stimuli were presented for a medium period of time, were you able to focus on them for as long as they were presented? When stimuli were presented for a long period of time, were you able to focus on them for as long as they were presented? Was it difficult to keep the targets in memory for the entirety of a block? For each stimulus category, how hard was it to recognize them among the other stimuli? [Faces] For each stimulus category, how hard was it to recognize them among the other stimuli? [Object] For each stimulus category, how hard was it to recognize them among the other stimuli? [Letters] For each stimulus category, how hard was it to recognize them among the other stimuli? [Symbols] Were specific stimuli orientations harder to recognize than others? If yes, which ones? What did you think of the block length? Would the task have been easier if the stimulus duration didn't vary so much? Do you have any additional comments to share? Appendix 4.
MEG Standard Operating Procedure Appendix 5. GDPR Requirements Appendix 6. Terms of Use Appendix 7. Data Curation Standard Operating Procedure Appendix 8. iEEG Standard Operating Procedure Appendix 9. Documentation Changes","title":"Appendices"},{"location":"09_appendices/#appendices","text":"","title":"Appendices"},{"location":"09_appendices/#appendix-1-screening-form","text":"","title":"Appendix 1. Screening Form"},{"location":"09_appendices/#m-eeg-screening-form","text":"This questionnaire should be filled out by the participant before the experiment. It asks for some information such as the name, weight (kg), email, date of birth and phone number of the subject, followed by a number of questions including: Have you participated in a MEG study before? Do you suffer from any medical condition that may be relevant (e.g. epilepsy, diabetes, asthma)? Do you suffer from claustrophobia? Have you been formally diagnosed with attention deficit (hyperactivity) disorder (AD(H)D)? Have you been formally diagnosed with autism spectrum disorder (ASD)? Other information (e.g. spectacle prescription)","title":"M-EEG Screening Form"},{"location":"09_appendices/#appendix-2-case-report-form","text":"This form was for reporting any issues that might have happened during the experiment. After the end of the experiment and saving the data, the operator filled out this form.","title":"Appendix 2. Case Report Form"},{"location":"09_appendices/#m-eeg-case-report-form","text":"For the M-EEG modality, the following items were asked about for the different sections of the data acquisition (Data for Empty Room, Resting-state, run 1, \u2026, run 5) in the Case Report Form: Eye tracking not working Eye tracking showing bad patterns Problems in task performance Strong head motion Strong body motion Trigger monitoring Bad MEG sensors Bad EEG sensors Notes: explanation of the issue in more detail","title":"M-EEG Case Report Form"},{"location":"09_appendices/#ieeg-case-report-form","text":"The iEEG case report form contained essential details such as the subject's ID, task name, and the distance from the stimulus interface to the subject in centimeters. It also included comprehensive information about the eye-tracking setup, specifying which eye was tracked, the sampling rate, calibration method, pupil shape, and camera lens used. Additionally, trigger information, including the trigger channels utilized and other relevant parameters, was documented. Furthermore, some fields were provided for notes and miscellaneous information. Here are the specific fields listed in the iEEG case report form: Subject ID Task Name Stimulus Interface Computer Name Distance (cm) Practice Run Pre-Task Equipment Disinfected Post-Task Equipment Disinfected Other Notes Screen Brightness Eye Tracking Sticker Glasses or Lenses Trigger Information: Trigger Channel Trigger Reference Channel Trigger Channel Gain Audio Channels Eye Tracker Setup: Eye Tracked Sampling Rate Calibration Type Pupil Camera Lens","title":"iEEG Case Report Form"},{"location":"09_appendices/#appendix-3-exit-questionnaire","text":"This form should be filled out by the participant after the experiment and asks the following questions: How difficult was it to stay focused for the entire duration of the experiment? Did you notice that the stimuli duration varied? When stimuli were presented for a short period of time, were you able to focus on them for as long as they were presented? When stimuli were presented for a medium period of time, were you able to focus on them for as long as they were presented?
When stimuli were presented for a long period of time, were you able to focus on them for as long as they were presented? Was it difficult to keep the targets in memory for the entirety of a block? For each stimulus category, how hard was it to recognize them among the other stimuli? [Faces] For each stimulus category, how hard was it to recognize them among the other stimuli? [Object] For each stimulus category, how hard was it to recognize them among the other stimuli? [Letters] For each stimulus category, how hard was it to recognize them among the other stimuli? [Symbols] Were specific stimuli orientations harder to recognize than others? If yes, which ones? What did you think of the block length? Would the task have been easier if the stimulus duration didn't vary so much? Do you have any additional comments to share?","title":"Appendix 3. Exit Questionnaire"},{"location":"09_appendices/#appendix-4-meg-standard-operating-procedure","text":"","title":"Appendix 4. MEG Standard Operating Procedure"},{"location":"09_appendices/#appendix-5-gdpr-requirements","text":"","title":"Appendix 5. GDPR Requirements"},{"location":"09_appendices/#appendix-6-terms-of-use","text":"","title":"Appendix 6. Terms of Use"},{"location":"09_appendices/#appendix-7-data-curation-standard-operating-procedure","text":"","title":"Appendix 7. Data Curation Standard Operating Procedure"},{"location":"09_appendices/#appendix-8-ieeg-standard-operating-procedure","text":"","title":"Appendix 8. iEEG Standard Operating Procedure"},{"location":"09_appendices/#appendix-9-documentation-changes","text":"","title":"Appendix 9. Documentation Changes"},{"location":"10_acknowledgements/","text":"Acknowledgements This project and data are made possible through the support of a grant from Templeton World Charity Foundation . The opinions expressed throughout our documentation and data are those of the authors and do not necessarily reflect the views of Templeton World Charity Foundation. Members of the Cogitate Consortium contributed an incredible amount of time, effort and expertise to design the experiments, collect the data and subsequently curate the data and code for public release. We would like to acknowledge the significant contributions of the following team members: Experimental design: Yoav Roll, Alex Lepauvre, Aya Khalaf, Katarina Bendtz, Felix Bernoully, Konstantinos Vasileiadis, Aris Semertzidis, Nikos Gregos Data Collection: fMRI Donders (Floris de Lange, David Richter, Sarah Kusch, Monique Smulders) Yale (Hal Blumenfeld, Aya Khalaf, Abdel Sharaf) M-EEG Birmingham (Oscar Ferrante, Ole Jensen, Dorottya Hetenyi, Tara Ghafari) PKU (Ling Liu, Huan Luo, Shujun Yang) iEEG NYU (Sasha Devore, Simon Henin, Stephanie Montenegro, Jay Jeschke, Alia Seedat) Harvard (Gabriel Kreiman, Katarina Bendtz, David Mazumder, Marcelo Armendariz) WI (Urszula Gorska) Data Quality Checks: Rony Hirschhorn (Tel Aviv University) Urszula Gorska (WI) Csaba Kozma (WI) Liad Mudrik (Tel Aviv University) Michael Pitts (Reed College) Data Curation: COGITATE\u2019s Data Release Team: Niccol\u00f2 Bonacchi, Tanya Brown, Kyle Kahraman, Lucia Melloni, Praveen Sripad, Fatemeh Taheriyan, Diptyajit Das Flywheel Cloud Services (XNAT): James Dickson, Angela Farrar, Dan Marcus","title":"Acknowledgements"},{"location":"10_acknowledgements/#acknowledgements","text":"This project and data are made possible through the support of a grant from Templeton World Charity Foundation .
The opinions expressed throughout our documentation and data are those of the authors and do not necessarily reflect the views of Templeton World Charity Foundation. Members of the Cogitate Consortium contributed an incredible amount of time, effort and expertise to design the experiments, collect the data and subsequently curate the data and code for public release. We would like to acknowledge the significant contributions of the following team members: Experimental design: Yoav Roll, Alex Lepauvre, Aya Khalaf, Katarina Bendtz, Felix Bernoully, Konstantinos Vasileiadis, Aris Semertzidis, Nikos Gregos Data Collection: fMRI Donders (Floris de Lange, David Richter, Sarah Kusch, Monique Smulders) Yale (Hal Blumenfeld, Aya Khalaf, Abdel Sharaf) M-EEG Birmingham (Oscar Ferrante, Ole Jensen, Dorottya Hetenyi, Tara Ghafari) PKU (Ling Liu, Huan Luo, Shujun Yang) iEEG NYU (Sasha Devore, Simon Henin, Stephanie Montenegro, Jay Jeschke, Alia Seedat) Harvard (Gabriel Kreiman, Katarina Bendtz, David Mazumder, Marcelo Armendariz) WI (Urszula Gorska) Data Quality Checks: Rony Hirschhorn (Tel Aviv University) Urszula Gorska (WI) Csaba Kozma (WI) Liad Mudrik (Tel Aviv University) Michael Pitts (Reed College) Data Curation: COGITATE\u2019s Data Release Team: Niccol\u00f2 Bonacchi, Tanya Brown, Kyle Kahraman, Lucia Melloni, Praveen Sripad, Fatemeh Taheriyan, Diptyajit Das Flywheel Cloud Services (XNAT): James Dickson, Angela Farrar, Dan Marcus","title":"Acknowledgements"},{"location":"11_glossary/","text":"Glossary Term Definition Wiring Diagram A diagram that shows how different devices or sensors were connected Standard Operating Procedure (SOP) A document that outlines the steps or procedures to be followed for each modality GDPR (General Data Protection Regulation) A comprehensive data protection and privacy regulation in the European Union (EU) and the European Economic Area (EEA) - It governs how the data of the users who register to download COGITATE data may be used HIPAA (Health Insurance Portability & Accountability Act) Standards for the protection of sensitive patient health information - HIPAA is used to identify and remove personal identifiers in the curation process BIDS (Brain Imaging Data Structure) A standard format for organizing and describing neuroimaging data - This is one of the formats of the released COGITATE datasets XNAT (Extensible Neuroimaging Archive Toolkit) An open-source software platform designed for managing, sharing, and analyzing neuroimaging and related data in research settings - This is the platform used for the COGITATE project to store the data and facilitate access to it for users","title":"Glossary"},{"location":"11_glossary/#glossary","text":"Term Definition Wiring Diagram A diagram that shows how different devices or sensors were connected Standard Operating Procedure (SOP) A document that outlines the steps or procedures to be followed for each modality GDPR (General Data Protection Regulation) A comprehensive data protection and privacy regulation in the European Union (EU) and the European Economic Area (EEA) - It governs how the data of the users who register to download COGITATE data may be used HIPAA (Health Insurance Portability & Accountability Act) Standards for the protection of sensitive patient health information - HIPAA is used to identify and remove personal identifiers in the curation process BIDS (Brain Imaging Data Structure) A standard format for organizing and describing neuroimaging data - This is one of the formats of
the released COGITATE datasets XNAT (Extensible Neuroimaging Archive Toolkit) An open-source software platform designed for managing, sharing, and analyzing neuroimaging and related data in research settings - This is the platform used for the COGITATE project to store the data and facilitate access to it for users","title":"Glossary"},{"location":"12_references/","text":"References Tarr, M. J. (1996). The Object Databank. Carnegie Mellon University, Tarr Lab page Databank direct link . Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: the SHINE toolbox. Behavior Research Methods, 42(3), 671-684. https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/index.html https://doi.org/10.1371/journal.pone.0268577 https://osf.io/gm3vd Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial vision, 10(4), 437-442.","title":"References"},{"location":"12_references/#references","text":"Tarr, M. J. (1996). The Object Databank. Carnegie Mellon University, Tarr Lab page Databank direct link . Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: the SHINE toolbox. Behavior Research Methods, 42(3), 671-684. https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/index.html https://doi.org/10.1371/journal.pone.0268577 https://osf.io/gm3vd Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial vision, 10(4), 437-442.","title":"References"},{"location":"13_faq/","text":"FAQ Cogitate M-EEG BIDS FAQs Where are the trans files? The trans.fif files are provided under /derivatives/coreg. They are the results of the coregistration. This information is also included as landmarks in the sidecar JSON of the NIFTI file. What is under derivatives/additional_metadata? This directory contains some additional metadata collected along with the data. The top level METADATA directory contains some files common to all subjects: analysis.json, devices_MEEG.json, protocols.json, tasks_EXP1_MEEG.json, tasks_RestinEO_MEEG.json, tasks_Rnoise_MEEG.json, and wiring_MEEG.pdf. Subject level directories contain three files: CXXXX_CRF.json, CXXXX_demographics.json and CXXXX_EXQU.json, which are respectively the case report form, subject-specific demographics and the exit questionnaire. The demographics information is redundant with the information in the participants.tsv files. What does BATCH1 mean? The M-EEG datasets for COGITATE are being released in two batches in order to facilitate the BIOMAG Connectivity Challenge 2024. This means that one half of the data is initially made available to the participants of the challenge, followed by the second half a few months later. This is where BATCH1 and BATCH2 come in. I cannot find the EOG channel in the data? The EOG channel for site A can be found in EOG004 and the EOG channel for site B can be found in BIO002 . Typically in our code, we prefer to rename the channels as below. # Fix EOG001 channel name (required for CA only) if 'EOG004' in raw.ch_names: raw.rename_channels({'EOG004': 'EOG001'}) # Fix EOG002 channel name (required for CB only) eog_ch = raw.copy().pick_types(meg=False, eeg=False, eog=True) if len(eog_ch.ch_names) < 2: raw.set_channel_types({'BIO002': 'eog'}) raw.rename_channels({'BIO002': 'EOG002'}) What do the channels MISC1, MISC2 and MISC3 contain?
These channels contain the eye tracker data: MISC1 holds the X gaze position, MISC2 the Y gaze position, and MISC3 the pupil size. This information is, however, also shared separately in the eye tracking data release.","title":"FAQ"},{"location":"13_faq/#faq","text":"","title":"FAQ"},{"location":"13_faq/#cogitate-m-eeg-bids-faqs","text":"","title":"Cogitate M-EEG BIDS FAQs"},{"location":"13_faq/#where-are-the-trans-files","text":"The trans.fif files are provided under /derivatives/coreg. They are the results of the coregistration. This information is also included as landmarks in the sidecar JSON of the NIFTI file.","title":"Where are the trans files?"},{"location":"13_faq/#what-is-under-derivativesadditional_metadata","text":"This directory contains some additional metadata collected along with the data. The top level METADATA directory contains some files common to all subjects: analysis.json, devices_MEEG.json, protocols.json, tasks_EXP1_MEEG.json, tasks_RestinEO_MEEG.json, tasks_Rnoise_MEEG.json, and wiring_MEEG.pdf. Subject level directories contain three files: CXXXX_CRF.json, CXXXX_demographics.json and CXXXX_EXQU.json, which are respectively the case report form, subject-specific demographics and the exit questionnaire. The demographics information is redundant with the information in the participants.tsv files.","title":"What is under derivatives/additional_metadata?"},{"location":"13_faq/#what-does-batch1-mean","text":"The M-EEG datasets for COGITATE are being released in two batches in order to facilitate the BIOMAG Connectivity Challenge 2024. This means that one half of the data is initially made available to the participants of the challenge, followed by the second half a few months later. This is where BATCH1 and BATCH2 come in.","title":"What does BATCH1 mean?"},{"location":"13_faq/#i-cannot-find-the-eog-channel-in-the-data","text":"The EOG channel for site A can be found in EOG004 and the EOG channel for site B can be found in BIO002 . Typically in our code, we prefer to rename the channels as below. # Fix EOG001 channel name (required for CA only) if 'EOG004' in raw.ch_names: raw.rename_channels({'EOG004': 'EOG001'}) # Fix EOG002 channel name (required for CB only) eog_ch = raw.copy().pick_types(meg=False, eeg=False, eog=True) if len(eog_ch.ch_names) < 2: raw.set_channel_types({'BIO002': 'eog'}) raw.rename_channels({'BIO002': 'EOG002'})","title":"I cannot find the EOG channel in the data?"},{"location":"13_faq/#what-do-the-channels-misc1-misc2-and-misc3-contain","text":"These channels contain the eye tracker data: MISC1 holds the X gaze position, MISC2 the Y gaze position, and MISC3 the pupil size. This information is, however, also shared separately in the eye tracking data release.","title":"What do the channels MISC1, MISC2 and MISC3 contain?"}]} \ No newline at end of file
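As a small usage sketch for the FAQ answer above, the snippet below pulls the eye tracker traces out of a raw M-EEG file. The filename is illustrative, and the MISC channel labels should be checked against raw.ch_names, since files may use zero-padded names such as MISC001.

```python
# Minimal sketch: extract gaze and pupil traces from the MISC channels.
import mne

raw = mne.io.read_raw_fif("sub-CA103_ses-1_task-dur_run-01_meg.fif")  # illustrative file
eyetrack = raw.copy().pick(["MISC1", "MISC2", "MISC3"])  # verify names in raw.ch_names
x_gaze, y_gaze, pupil = eyetrack.get_data()  # one array per channel
```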