improve indentation of headings and consistency of sections
robertoostenveld committed Jan 19, 2024
1 parent 5ea4011 commit a91a843
Showing 17 changed files with 60 additions and 43 deletions.
2 changes: 0 additions & 2 deletions _includes/shared/tutorial/meg_language.md
@@ -1,5 +1,3 @@
### Details of the MEG language dataset

The MEG dataset used in the tutorials is from a language study on semantically congruent and incongruent sentences that is described in detail in Wang et al. (2012). Three types of sentences were used in the experiment. In the fully congruent condition (FC) the sentences ended with a high-cloze probability word, e.g., _De klimmers bereikten eindelijk de top van de berg_ (_The climbers finally reached the top of the mountain_). In the fully incongruent condition (FIC) the sentences ended with a semantically anomalous word which was totally unexpected given the sentential context, e.g., _De klimmers bereikten eindelijk de top van de tulp_ (_The climbers finally reached the top of the tulip_). The third type of sentences ended with a semantically anomalous word that had the same initial phonemes (and lexical stress) as the high-cloze words from the congruent condition: initially congruent (IC). There were 87 trials per condition for each of the three conditions, and a set of 87 filler sentences was added. From the EEG literature it is known that a stronger negative potential is produced following incongruent compared to congruent sentence endings about 300-500 ms after the word onset. This response is termed the N400 effect¹ ². For more information about the materials, take a look at the published EEG experiment using the same sentence materials³.

In the study described here, the subjects were seated in a relaxed position under the MEG helmet. Their task was to attentively listen to spoken sentences. They were informed that some of the sentences would be semantically anomalous. Acoustic transducers were used to deliver the auditory stimuli. After a 300 ms warning tone, followed by a 1200 ms pause, a sentence was presented. Each subsequent trial began 4100 ms after the offset of the previous sentence. To reduce eye blinks and movements in the time interval in which the sentence was presented, subjects were instructed to fixate on an asterisk presented visually 1000 ms prior to the beginning of the sentence. The asterisk remained on the screen until 1600 ms after the onset of the spoken sentence. Subjects were encouraged to blink when the asterisk was not displayed on the screen.
8 changes: 3 additions & 5 deletions _includes/shared/tutorial/mmfaces.md
@@ -1,12 +1,10 @@
## Details on the multimodal faces dataset

The "mmfaces" dataset contains EEG, MEG, functional MRI and structural MRI data from research participants, recorded during multiple runs of a simple task performed on a large number of Famous, Unfamiliar and Scrambled faces. It is described in more detail in the data descriptor publication [doi:10.1038/sdata.2015.1](https://www.nature.com/articles/sdata20151) and analyzed in detail in [doi:10.3389/fnhum.2011.00076](http://journal.frontiersin.org/Journal/10.3389/fnhum.2011.00076/abstract).

The original multimodal dataset included simultaneous MEG/EEG recordings on 19 healthy subjects. In the original study, three subjects (sub001, sub005, sub016) were excluded from further analysis.

The dataset used to be available from the [MRC-CBU FTP server](ftp://ftp.mrc-cbu.cam.ac.uk/personal/rik.henson/wakemandg_hensonrn/), but is nowadays maintained on [OpenNeuro](https://openneuro.org/datasets/ds000117).

### Stimulation details
#### Stimulation details

- The start of a trial was indicated with a fixation cross of random duration between 400 and 600 ms
- The face stimulus was superimposed on the fixation cross for a random duration of 800 to 1,000 ms
@@ -17,7 +15,7 @@ The dataset used to be available from the [MRC-CBU FTP server](ftp://ftp.mrc-cbu
- For the purposes of our analysis, we treat these two stimulation patterns together
- To maintain attention, subjects were asked to judge the symmetry of the image and respond with a keypress

### MEG/EEG acquisition details
#### MEG/EEG acquisition details

The MEG data consist of 102 magnetometers and 204 planar gradiometers from a Neuromag/Elekta/Megin VectorView system. The same system was used to simultaneously record EEG data from 70 electrodes (using a nose reference), which are stored in the same “FIF” format file. The above FTP site includes a raw .fif file for each run/subject, but also a second .fif file in which the MEG data have been “cleaned” using Signal-Space Separation as implemented in MaxFilter 2.1.

@@ -35,6 +33,6 @@ A Polhemus was used to digitize three fiducial points and a large number of othe
- Two sets of bipolar electrodes were used to measure the vertical (left eye; EEG062) and horizontal electro-oculograms (EEG061). Another set was used to measure the ECG (EEG063)
- A fixed 34 ms delay exists between the appearance of a trigger in the trigger channel STI101 and the appearance of the stimulus on the screen; a sketch of how this delay could be compensated for is shown below
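
A minimal sketch of how such a fixed delay could be compensated for when defining trials; the variables `cfg.dataset`, `cfg.trl` (as returned by **[ft_definetrial](/reference/ft_definetrial)**) and `hdr` are assumptions for illustration, not part of this dataset description:

    hdr   = ft_read_header(cfg.dataset);        % read the header to obtain the sampling rate
    delay = round(0.034 * hdr.Fs);              % express the fixed 34 ms delay in samples
    cfg.trl(:,1:2) = cfg.trl(:,1:2) + delay;    % shift the begin and end sample of every trial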

### MRI acquisition details
#### MRI acquisition details

The MRI data were acquired on a 3T Siemens TIM Trio, and include a 1x1x1mm T1-weighted structural MRI (sMRI) as well as a large number of 3x3x4mm T2\*-weighted functional MRI (fMRI) EPI volumes acquired during 9 runs of the same task (performed by the same subjects with a different set of stimuli on a separate visit). (The FTP site also contains DTI and ME-FLASH MRI images from the same subject, which could be used for improved head modeling, for example, but these are not used here.) For a full description of the data and paradigm, see README.txt on the FTP site or [Wakeman & Henson](http://journal.frontiersin.org/Journal/10.3389/fnhum.2011.00076/abstract).
3 changes: 2 additions & 1 deletion _includes/shared/workshop/natmeg2014/meg_audodd.md
@@ -1,4 +1,4 @@
For the EEG-MEG workshop at NatMEG we recorded a dataset of a single subject to allow you to work through all the different steps involved in EEG-MEG analysis: from event-related averaging to frequency analysis, source modeling and statistics.
For the 2014 EEG-MEG workshop at NatMEG we recorded a dataset of a single subject to allow you to work through all the different steps involved in EEG and MEG analysis: from event-related averaging to frequency analysis, source modeling and statistics.

### The oddball paradigm

@@ -24,6 +24,7 @@ For a recent overview of sensorimotor rhythms, including the beta rebound, pleas
### Training with feedback and blink trials

Before the recording, the subject performed the experiment in a short training session to get acquainted with the task. Whenever the subject was too late in responding (>2 seconds), or pressed the wrong button, feedback was provided. In the actual experiment the subject was always on time and responded with the correct hand each time.

Finally, after each response, a blink trial was presented in which the subject was asked to blink, so that they could remain fixated on the fixation cross - without blinking - throughout the time period in which we are interested in the brain signal.

### Stimuli
16 changes: 8 additions & 8 deletions tutorial/continuous.md
@@ -15,6 +15,14 @@ If your experiment consists of a sequence of trials, you may also want to start

Using this approach, you can read all data from the file into memory, apply filters, re-reference (in case of EEG), identify and subtract artifacts using ICA, and subsequently cut the data into segments or trials of interest.

### The datasets used in this tutorial

In this tutorial we will be using two datasets, one with EEG data and one with MEG data.

The [SubjectEEG.zip](https://download.fieldtriptoolbox.org/tutorial/SubjectEEG.zip) EEG dataset was acquired by Irina Siminova in a study investigating semantic processing of stimuli presented as pictures, as visually displayed text, or as auditorily presented words. Data was acquired with a 64-channel BrainProducts BrainAmp EEG amplifier from 60 scalp electrodes placed in an electrode cap and one electrode placed under the right eye; the signals "EOGv" and "EOGh" are computed after acquisition using re-referencing. During acquisition all channels were referenced to the left mastoid and an electrode placed at the earlobe was used as the ground. Channels 1-60 correspond to electrodes that are located on the head, except for channel 53, which is located at the right mastoid. Channels 61, 62, 63 are not connected to an electrode at all. Channel 64 is connected to an electrode placed below the left eye. Hence we have 62 channels of interest: 60 from the head + eogh + eogv. More details on the experiment and data can be found [here](/tutorial/eeg_language).

The [Subject01.zip](https://download.fieldtriptoolbox.org/tutorial/Subject01.zip) MEG dataset was acquired by Lin Wang in a language study on semantically congruent and incongruent sentences. Three types of sentences were used in the experiment: fully congruent (FC), fully incongruent (FIC), and initially congruent (IC). There were 87 trials per condition for each of the three conditions, and a set of 87 filler sentences (not used here). Note that the data was originally acquired and written to disk as 3-second trigger-locked epochs with discontinuities between the epochs; this data can therefore not be treated as continuous. More details on the experiment and data can be found [here](/tutorial/meg_language).

## Procedure

The following steps are taken to read data, to apply filters and to reference the data (in case of EEG), and optionally to select interesting segments of data around events or triggers or by cutting the continuous data into convenient constant-length segments.
@@ -26,14 +34,6 @@ The following steps are taken to read data, to apply filters and to reference th
- segment the continuous data into trials using **[ft_redefinetrial](/reference/ft_redefinetrial)**
- segment the continuous data into one-second pieces using **[ft_redefinetrial](/reference/ft_redefinetrial)** (see the sketch below)
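
A rough sketch of these two segmentation steps; the variable names `data_continuous` and `trl` are placeholders, not the names used later in this tutorial:

    % cut the continuous data into trials around the events of interest
    cfg     = [];
    cfg.trl = trl;                        % trial definition, e.g., obtained from ft_definetrial
    data_trials = ft_redefinetrial(cfg, data_continuous);

    % or cut the continuous data into one-second pieces
    cfg         = [];
    cfg.length  = 1;                      % segment length in seconds
    cfg.overlap = 0;                      % no overlap between the segments
    data_segmented = ft_redefinetrial(cfg, data_continuous);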

## The datasets used in this tutorial

In this tutorial we will be using two datasets, one with EEG data and one with MEG data.

The [SubjectEEG.zip](https://download.fieldtriptoolbox.org/tutorial/SubjectEEG.zip) EEG dataset was acquired by Irina Siminova in a study investigating semantic processing of stimuli presented as pictures, as visually displayed text, or as auditorily presented words. Data was acquired with a 64-channel BrainProducts BrainAmp EEG amplifier from 60 scalp electrodes placed in an electrode cap and one electrode placed under the right eye; the signals "EOGv" and "EOGh" are computed after acquisition using re-referencing. During acquisition all channels were referenced to the left mastoid and an electrode placed at the earlobe was used as the ground. Channels 1-60 correspond to electrodes that are located on the head, except for channel 53, which is located at the right mastoid. Channels 61, 62, 63 are not connected to an electrode at all. Channel 64 is connected to an electrode placed below the left eye. Hence we have 62 channels of interest: 60 from the head + eogh + eogv. More details on the experiment and data can be found [here](/tutorial/eeg_language).

The [Subject01.zip](https://download.fieldtriptoolbox.org/tutorial/Subject01.zip) MEG dataset was acquired by Lin Wang in a language study on semantically congruent and incongruent sentences. Three types of sentences were used in the experiment: fully congruent (FC), fully incongruent (FIC), and initially congruent (IC). There were 87 trials per condition for each of the three conditions, and a set of 87 filler sentences (not used here). Note that the data was originally acquired and written to disk as 3-second trigger-locked epochs with discontinuities between the epochs; this data can therefore not be treated as continuous. More details on the experiment and data can be found [here](/tutorial/meg_language).

## Reading continuous EEG data into memory

The simplest method for preprocessing and reading the data into memory is by calling the **[ft_preprocessing](/reference/ft_preprocessing)** function with only the dataset as configuration argument.
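
For example, along these lines (a minimal sketch; the filename is a placeholder for the BrainVision header file contained in SubjectEEG.zip):

    cfg         = [];
    cfg.dataset = 'subject01.vhdr';   % placeholder filename, use the actual file from SubjectEEG.zip
    data_eeg    = ft_preprocessing(cfg);
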
16 changes: 8 additions & 8 deletions tutorial/ica_artifact_cleaning.md
@@ -35,7 +35,7 @@ In the schematic outline of the procedure you can see that we will use either **

Furthermore, the schematic outline shows that we might have to cycle back to **[ft_rejectvisual](/reference/ft_rejectvisual)** or **[ft_databrowser](/reference/ft_databrowser)** in case we identify atypical artifacts or bad channels _after_ having done the ICA. In that case we iterate the initial cleaning and repeat the ICA.

### Preprocessing
## Preprocessing

The MEG data is recorded with a 151-channel CTF system. Although the data is continuous, the CTF dataset is organized in trials of 10 seconds. As there are no discontinuities between trials, we can treat it as a continuous recording.
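
A minimal sketch of reading it as a continuous recording (the dataset name is a placeholder, not the actual filename used in this tutorial):

    cfg            = [];
    cfg.dataset    = 'SubjectICA.ds';   % placeholder name for the CTF dataset
    cfg.continuous = 'yes';             % treat the 10-second trials as one continuous recording
    data           = ft_preprocessing(cfg);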

@@ -61,11 +61,11 @@ The ICA decomposition can take a long time, especially if you have to do it mult

For the purpose of this tutorial and since we want you to try out various settings, we will continue with the downsampled data. If you apply this on your own data, please do try to work with the original instead of the downsampled data; that will improve the quality of the decomposition.
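
Downsampling can be done along these lines with **[ft_resampledata](/reference/ft_resampledata)** (a sketch; the target sampling rate of 300 Hz is only an example):

    cfg            = [];
    cfg.resamplefs = 300;         % example target sampling rate in Hz
    cfg.demean     = 'yes';       % remove the mean prior to resampling
    data_resampled = ft_resampledata(cfg, data);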

### Rejecting atypical artifacts
## Rejecting atypical artifacts

ICA assumes a mixing of stationary components and cannot estimate more components than the number of channels. If you have a few infrequent and atypical artifacts, these will be represented in components. This comes at the cost of losing some components for the interesting stuff, and may lead to a suboptimal decomposition. Hence we first want to remove sparse atypical artifacts.

#### Using ft_databrowser
### Using ft_databrowser

We can use **[ft_databrowser](/reference/ft_databrowser)** to have a look at the data. Since we are looking at infrequent artifacts that might happen anywhere in the data, we should look at _all_ channels and at the _complete_ time course. It helps to "zoom out" for the time, so that we see a large time window of the data at once.
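
For instance (a sketch; the 60-second window is just an example of "zooming out"):

    cfg           = [];
    cfg.viewmode  = 'vertical';   % plot all channels above each other
    cfg.blocksize = 60;           % show 60 seconds of data at a time
    cfg           = ft_databrowser(cfg, data);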

@@ -131,7 +131,7 @@ Note that here we are reusing the `data` and the `data_clean` variables in diffe
To get back to the original data, you can always do `data = data_orig`.
{% include markup/end %}

#### Using ft_rejectvisual
### Using ft_rejectvisual

Another strategy to remove infrequent and atypical artifacts is to use **[ft_rejectvisual](/reference/ft_rejectvisual)**. However, this requires the data to be segmented in trials. In this case we can segment the data in a continuous stream of one-second segments.
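
A sketch of this approach, first cutting the data into one-second segments and then using the summary mode of ft_rejectvisual (variable names are illustrative):

    % cut the continuous data into one-second segments
    cfg        = [];
    cfg.length = 1;
    data_segmented = ft_redefinetrial(cfg, data);

    % reject segments (and channels) with atypical artifacts using the summary mode
    cfg        = [];
    cfg.method = 'summary';
    data_segmented_clean = ft_rejectvisual(cfg, data_segmented);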

@@ -162,7 +162,7 @@ We can now "stitch" the segmented data back together in a continuous representat

If you inspect `data_clean` in ft_databrowser, you will see that parts of the data are not visible; those are replaced by NaNs.

### ICA decomposition
## ICA decomposition

We use **[ft_componentanalysis](/reference/ft_componentanalysis)** for the ICA decomposition. It has many options, and supports different methods for decomposing the data, including PCA and different ICA algorithms. Here we will be using the Extended Infomax algorithm via the `runica` method from EEGLAB. You do not have to have EEGLAB installed for this; the required functions are included in the `fieldtrip/external` directory.
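
The core call looks like this (a minimal sketch; the complete code with the settings used in this tutorial follows further down):

    cfg        = [];
    cfg.method = 'runica';    % infomax ICA as implemented in the EEGLAB runica function
    data_comp  = ft_componentanalysis(cfg, data_clean);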

@@ -188,7 +188,7 @@ To perform the ICA decomposition, you can use the following code. Here we use a

You should do the ICA on channels of the same type that together represent a linear mixture of the underlying sources. If you have EEG and EOG channels that are recorded using the same reference, the EEG and EOG channels can go in there together. If you have EEG relative to one reference and bipolar EOG channels, you should not combine them in a single ICA decomposition. When you have combined MEG and EEG data, the MEG and EEG channels will both see the brain and artifactual sources, but have different sensitivity to them and to effects due to movement of the head relative to the MEG helmet; in that case we also recommend running the ICA separately on the MEG and on the EEG channels.

### Identifying artifactual components
## Identifying artifactual components

The decomposed data structure `data_comp` represents the topographies of the components, i.e., how each source projects to the channels, as well as the temporal activation of each source. These can both be used to identify components that correspond to artifacts.
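
Both aspects can be inspected along the following lines (a sketch; the layout file is an assumption for this 151-channel CTF system):

    % plot the spatial topography of the first 20 components
    cfg           = [];
    cfg.component = 1:20;
    cfg.layout    = 'CTF151_helmet.mat';   % assumed layout for the 151-channel CTF system
    ft_topoplotIC(cfg, data_comp);

    % browse through the component time courses together with their topographies
    cfg          = [];
    cfg.viewmode = 'component';
    cfg.layout   = 'CTF151_helmet.mat';
    ft_databrowser(cfg, data_comp);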

@@ -218,7 +218,7 @@ Again you write down the components that represent artifacts.

Note that eye components in MEG look different than eye components in EEG; that is due to the orientation of the MEG channels: on the left side of the head the channels point to the right and on the right side they point to the left, which causes their "polarity" to flip. With EEG electrodes all frontal electrodes have the same polarity. In combination with the component time series you can figure out which component reflects horizontal eye movements and which reflects blinks.

### Identifying bad channels
## Identifying bad channels

Both EEG and MEG give a relatively blurry representation of the physiological activity. Components that are very localized in space, i.e., that are only active on a single or very few channels, are not likely to represent physiological sources in the brain or from the heart. If you had not removed the SQUID jumps earlier, those would show up as very localized artifacts. EEG electrodes that move or that have a sudden change in impedance can also show up as very localized.

@@ -228,7 +228,7 @@ It can also happen that you find one or a few components that are very localized

With high-density EEG, you may sometimes see very localized muscle twitches, especially over the temporal region but possibly also elsewhere over the scalp. These can be less or more frequent, depending on your participant and task. Although these are spatially quite compact, they do represent a physiological source and ICA is an appropriate technique to remove them.

### Removing artifactual components
## Removing artifactual components

After identification of the artifactual components you can use **[ft_rejectcomponent](/reference/ft_rejectcomponent)** to back-project all components to the channel-level representation of the data, excluding the artifacts.
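
For example (a sketch; the component numbers are hypothetical and should be replaced by the ones that you identified yourself):

    cfg           = [];
    cfg.component = [3 7];    % hypothetical indices of the artifactual components
    data_fixed    = ft_rejectcomponent(cfg, data_comp, data_clean);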

2 changes: 2 additions & 0 deletions tutorial/meg_language.md
@@ -3,4 +3,6 @@ title: Details of the MEG language dataset
tags: [meg-language]
---

# Details of the MEG language dataset

{% include /shared/tutorial/meg_language.md %}
2 changes: 2 additions & 0 deletions tutorial/mmfaces.md
@@ -3,4 +3,6 @@ title: Details on the multimodal faces dataset
tags: [mmfaces]
---

# Details on the multimodal faces dataset

{% include /shared/tutorial/mmfaces.md %}
5 changes: 3 additions & 2 deletions tutorial/preprocessing.md
@@ -26,6 +26,8 @@ Examples for both ways are described in this tutorial, and both ways depend on **

The output of ft_definetrial is a configuration structure containing the field cfg.trl. This is a matrix representing the relevant parts of the raw datafile which are to be selected for further processing. Each row in the `trl` matrix represents a single epoch-of-interest, and the `trl` matrix has at least 3 columns. The first column defines (in samples) the beginpoint of each epoch with respect to how the data are stored in the raw datafile. The second column defines (in samples) the endpoint of each epoch, and the third column specifies the offset (in samples) of the first sample within each epoch with respect to timepoint 0 within that epoch.
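
A sketch of how such a trial definition could be obtained; the dataset name, trigger specification and pre/post-stimulus windows are examples, not necessarily the values used later in this tutorial:

    cfg                     = [];
    cfg.dataset             = 'Subject01.ds';        % example dataset
    cfg.trialdef.eventtype  = 'backpanel trigger';   % example event type
    cfg.trialdef.eventvalue = 3;                     % example trigger value
    cfg.trialdef.prestim    = 1;                     % seconds before the trigger
    cfg.trialdef.poststim   = 2;                     % seconds after the trigger
    cfg = ft_definetrial(cfg);
    % cfg.trl now contains one row per epoch: [beginsample endsample offset]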

### The dataset used in this tutorial

{% include /shared/tutorial/meg_language.md %}

## Procedure
@@ -90,7 +92,6 @@ Save the preprocessed data to disk

save PreprocData dataFIC dataIC dataFC -append


These functions demonstrate how to extract trials from a dataset based on trigger information. Note that some of these trials will be contaminated with various artifacts such as eye blinks or MEG sensor jumps. Artifact rejection is described in [Preprocessing - Visual artifact rejection](/tutorial/visual_artifact_rejection).

## Use your own function for trial selection
@@ -166,7 +167,7 @@ The output structure `dataMytrialfun` now contains fewer trials than before: onl
grad: [1x1 struct]
cfg: [1x1 struct]

More on the trialinfo field can be found in the [faq](/faq/is_it_possible_to_keep_track_of_trial-specific_information_in_my_fieldtrip_analysis_pipeline).
More on the `trialinfo` field can be found in the [faq](/faq/is_it_possible_to_keep_track_of_trial-specific_information_in_my_fieldtrip_analysis_pipeline).

## Suggested further reading
