25. Sample data: perspective taking fMRI data
These data come from an fMRI perspective-taking experiment in which four movies (two themes x two perspectives) were shown to participants. The movie themes were Junk Food Consumption (F) and Pet Abandonment (P); each theme was filmed in two versions, in either the first-person or third-person perspective. In addition to fMRI data, each participant also gave subjective ratings on different questionnaires. Data were collected with support from Drs. Wen-Jui Kuo (Institute of Neuroscience, National Yang Ming Chiao Tung University) and Ching-Ching Chang (Humanities and Social Science Center, Academia Sinica).
The goal is to predict answers to the questionnaires (behaviors) from each participant's fMRI data. We use the Partial Least Squares (PLS) regression method. A good review of Partial Least Squares analysis is [here]. A robust PLS implementation is available [here].
Eighteen healthy participants (age: 23 +/- 2.3 years; nine female) gave written informed consent before participating in the study.
Four tailored movie clips were presented to participants. The movies consisted of two themes (“Junk Food Consumption” (“F”) and “Pet Abandonment” (“P”)), each in two perspectives (first-person (“1”) and third-person (“3”)). In the following, “F1” and “P1” denote showing movies “F” and “P” in the first-person perspective, and “F3” and “P3” denote showing them in the third-person perspective. “Junk Food Consumption” (F) depicted a young female video blogger who enjoyed eating junk food, which eventually led to lipid plaques in her blood vessels and induced a heart attack; the actress fainted during her live stream and needed assisted living afterward. “Pet Abandonment” (P) depicted a puppy adopted by a young couple; the actress decided to abandon the dog after the relationship broke up. Days after the abandonment, she regretted the decision and searched for the dog, but in vain. For each theme, two movies were filmed by the same crew, one in the first-person (1st pp) and one in the third-person (3rd pp) perspective. The movie director carefully controlled the duration of scenes in each movie so that the duration of each theme matched between the two perspectives without compromising the flow of the storytelling. Each movie lasted around 200 s.
After viewing each movie clip inside the MRI scanner, participants were instructed to answer two questions on a 10-point Likert scale (1: least agreeable; 10: most agreeable). The first question (Q1) asked about the persuasiveness of the movie clip (F: “Junk food consumption should be avoided”; P: “Pet abandonment is not acceptable”). The second question (Q2) asked for a subjective prediction of the related behavior (F: “I will avoid junk food consumption”; P: “I will not abandon my pets”).
All MRI data were measured on a 3T MRI scanner (Skyra, Siemens, Erlangen, Germany) using a 64-channel head-neck receiver coil array. Structural images were acquired with the magnetization-prepared rapid gradient echo (MPRAGE) pulse sequence (repetition time TR = 2530 ms, echo time TE = 3.03 ms, isotropic voxel resolution = 1 mm, field of view FOV = 256 mm, flip angle = 7°, matrix size = 224 × 256, generalized auto-calibrating partial parallel acquisition (GRAPPA) acceleration factor = 2). Functional images were acquired with T2*-weighted echo-planar imaging (EPI) (TR/TE = 2400/30 ms, FOV = 216 mm, flip angle = 90°, number of slices = 42, resolution = 3 × 3 × 3 mm, GRAPPA acceleration factor = 2). EPI lasted for the duration of each movie.
Behavioral and fMRI data can be accessed via the [University of Toronto OneDrive folder].
The following is a more detailed explanation of [the analysis script].
Here we define the behavioral categories and subcategories recorded in the Excel file.
target_category=[2 4]; % behavioral category columns to read from the Excel file
target_subcategory=[2 1]; % subcategory within each category
file_xls='score_042018.xls';
These behavioral ratings can be read as follows.
questionnaire=[];
for q_idx=1:length(target_category)
    [dummy,q]=get_questionnaire_corr(file_xls,target_category(q_idx),target_subcategory(q_idx),target_film);
    questionnaire=cat(2,questionnaire,q); % concatenate ratings across questions
end;
We first read the cortical parcellation labels defined by FreeSurfer in the atlas (MNI305) space. Here is an example of reading the labels of the left hemisphere.
if(isunix&&~ismac)
    [vertices label ctab] = read_annotation('/usr/local/freesurfer/7.1.0-1/subjects/fsaverage/label/lh.aparc.a2009s.annot');
end;
if(ismac)
    [vertices label ctab] = read_annotation('/Applications/freesurfer/subjects/fsaverage/label/lh.aparc.a2009s.annot');
end;
Then we read the fMRI time series from STC files. We removed the first three and last four time points to avoid transient effects. We then converted each image voxel's time series into Z scores.
stc=[];
for subj_idx=1:length(subject)
    fn=sprintf('%s_2_fsaverage_sfmcstc-%s.stc',subject{subj_idx},hemi{hemi_idx});
    [tmp,a,b,c]=inverse_read_stc(sprintf('%s/%s/epi_data/unpack/%s/%s',root_path,subject{subj_idx},stc_folder{stc_idx},fn));
    ll=size(tmp,2);
    tmp=tmp(:,4:ll-4); % discard the first three and last four time points
    if(subj_idx>1)
        if(size(tmp,2)>size(stc,2))
            tmp=tmp(:,1:end-1); % trim one time point to match earlier subjects
        end;
    end;
    %fMRI signal into percentage change
    %tmp=bsxfun(@rdivide,tmp,mean(tmp,2));
    tmp=bsxfun(@minus,tmp,mean(tmp,2)); % remove the temporal mean
    tmp=bsxfun(@rdivide,tmp,std(tmp,0,2)); % scale to unit variance (Z score)
    stc(:,:,subj_idx)=tmp;
    fprintf('[%s]::<%s>::<%s>\t%05d time points\n',stc_folder{stc_idx},subject{subj_idx},hemi{hemi_idx},size(stc,2));
end;
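For illustration, the same per-voxel normalization can be sketched in Python on synthetic data (the array shape is hypothetical; `ddof=1` mirrors MATLAB's `std(...,0,...)` normalization by N-1):

```python
import numpy as np

rng = np.random.default_rng(0)
tmp = rng.normal(loc=100.0, scale=5.0, size=(10, 80))  # voxels x time, synthetic

# drop the first three and last four time points, as in the MATLAB snippet
tmp = tmp[:, 3:-4]

# z-score each voxel's time series
tmp = tmp - tmp.mean(axis=1, keepdims=True)
tmp = tmp / tmp.std(axis=1, ddof=1, keepdims=True)
```

After this step every voxel's time series has zero mean and unit variance, so later model fits are not dominated by voxels with large raw signal amplitudes.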
The following code models slowly varying confounds in the time series with polynomial and sinusoidal functions.
timeVec=[1:size(stc,2)]';
D_poly=ones(length(timeVec),1); % constant term
D_sinu=[];
for i=1:confound_polynomial_order
    tmp=timeVec.^(i);
    D_poly(:,i+1)=fmri_scale(tmp(:),1,0); % scale each polynomial regressor between 0 and 1
end;
for i=1:confound_sinusoidal_order
    D_sinu(:,i*2-1)=sin(timeVec.*i./timeVec(end).*pi);
    D_sinu(:,i*2)=cos(timeVec.*i./timeVec(end).*pi);
end;
D=cat(2,D_poly,D_sinu); % confound design matrix
if(~isempty(D))
    D_prep=D*inv(D'*D)*D'; % projection matrix onto the confound subspace
else
    D_prep=[];
end;
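The confound design and its removal can be sketched in Python on a synthetic time series (assuming `fmri_scale(x,1,0)` rescales a regressor to the range [0, 1]; the projection `D_prep` is the hat matrix of the confound regressors, so subtracting `D_prep @ y` leaves a residual orthogonal to every confound):

```python
import numpy as np

n_t = 100
t = np.arange(1, n_t + 1, dtype=float)
poly_order, sinu_order = 2, 3   # hypothetical confound orders

cols = [np.ones(n_t)]           # constant term
for i in range(1, poly_order + 1):
    p = t ** i
    cols.append((p - p.min()) / (p.max() - p.min()))   # scale to [0, 1]
for i in range(1, sinu_order + 1):
    cols.append(np.sin(t * i / t[-1] * np.pi))
    cols.append(np.cos(t * i / t[-1] * np.pi))

D = np.column_stack(cols)           # confound design matrix
D_prep = D @ np.linalg.pinv(D)      # projection onto the confound subspace

y = 0.01 * t + np.sin(2 * np.pi * t / n_t)   # synthetic drifting signal
y_clean = y - D_prep @ y                     # regress out the confounds
```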
Then, the confounds were removed from the time series at each image voxel by regression. The variable STC{stc_idx,hemi_idx}
represents the time series for a label (ROI x time points x subjects) in the condition coded by stc_idx
for either the left or right hemisphere.
all_roi_counter=1;
roi_counter=1;
stc_tmp=[];
stc_avg_tmp=[];
stc_std_tmp=[];
for roi_idx=1:ctab.numEntries
    fprintf('ROI <%s>...[%03d|%03d]...\n',ctab.struct_names{roi_idx},roi_idx,length(ctab.struct_names));
    roi_vertex_idx=find(label==ctab.table(roi_idx,5)); % vertices carrying this annotation ID
    [dummy,vv]=intersect(a,vertices(roi_vertex_idx));
    roi_vertex=vv;
    if(~isempty(roi_vertex))
        data=[];
        for v_idx=1:length(roi_vertex)
            if(~isempty(D_prep))
                data(v_idx,:,:)=squeeze(stc(roi_vertex(v_idx),:,:))-D_prep*squeeze(stc(roi_vertex(v_idx),:,:)); % regress out confounds
            else
                data(v_idx,:,:)=squeeze(stc(roi_vertex(v_idx),:,:));
            end;
        end;
        stc_avg_tmp(roi_counter,:,:)=squeeze(mean(data,1)); %roi x time x subject
        stc_std_tmp(roi_counter,:,:)=squeeze(std(data,0,1)); %roi x time x subject
        if(hemi_idx==1)
            roi{all_roi_counter}=sprintf('%s_%s','L',ctab.struct_names{roi_idx});
        else
            roi{all_roi_counter}=sprintf('%s_%s','R',ctab.struct_names{roi_idx});
        end;
        roi_counter=roi_counter+1;
        all_roi_counter=all_roi_counter+1;
    end;
end;
%stc_tmp=cat(1,stc_avg_tmp,stc_std_tmp);
stc_tmp=stc_avg_tmp;
STC{stc_idx,hemi_idx}=stc_tmp; %roi x time x subject
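The ROI aggregation itself (collect the vertices carrying a given annotation label, then take the mean and standard deviation across those vertices) can be sketched in Python with synthetic data and hypothetical labels:

```python
import numpy as np

rng = np.random.default_rng(7)
n_vert, n_time, n_subj = 1000, 80, 6
stc = rng.normal(size=(n_vert, n_time, n_subj))   # vertex x time x subject
label = rng.integers(0, 4, size=n_vert)           # hypothetical annotation labels

roi_id = 2
roi_vertex = np.where(label == roi_id)[0]         # vertices in this ROI
data = stc[roi_vertex]
roi_avg = data.mean(axis=0)                       # time x subject, ROI-mean series
roi_std = data.std(axis=0, ddof=1)                # time x subject, spread across vertices
```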
Each movie clip was partitioned into 17 scenes, and the time series within each scene were temporally averaged. Note that we first concatenated data from both hemispheres.
for stc_idx=1:length(stc_folder)
    tmp=cat(1,STC{stc_idx,1},STC{stc_idx,2});
    %tmp=tmp(:,1:115,:);
    TMP=[];
    for scene_idx=1:length(scene_index{stc_idx}.scene_on_idx)
        if(scene_index{stc_idx}.scene_off_idx(scene_idx)<=size(tmp,2))
            TMP(:,scene_idx,:)=mean(tmp(:,scene_index{stc_idx}.scene_on_idx(scene_idx):scene_index{stc_idx}.scene_off_idx(scene_idx),:),2);
        else
            if(scene_index{stc_idx}.scene_on_idx(scene_idx)<=size(tmp,2))
                TMP(:,scene_idx,:)=mean(tmp(:,scene_index{stc_idx}.scene_on_idx(scene_idx):end,:),2);
            end;
        end;
    end;
    tmp_buffer{stc_idx}=TMP;
end;
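A Python sketch of the scene-wise averaging with hypothetical scene boundaries; as in the MATLAB code, a scene whose offset runs past the recorded data is averaged up to the last available time point:

```python
import numpy as np

rng = np.random.default_rng(1)
n_roi, n_time, n_subj = 5, 80, 6
tmp = rng.normal(size=(n_roi, n_time, n_subj))   # roi x time x subject

# hypothetical 0-based scene on/off sample indices
scene_on  = [0, 20, 45]
scene_off = [19, 44, 90]   # last scene deliberately runs past the data

TMP = np.zeros((n_roi, len(scene_on), n_subj))
for s, (on, off) in enumerate(zip(scene_on, scene_off)):
    off = min(off, n_time - 1)                    # clamp to the available data
    TMP[:, s, :] = tmp[:, on:off + 1, :].mean(axis=1)   # average within the scene
```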
NOTE: The RSIMPLS method is described in: Hubert, M., and Vanden Branden, K. (2003), "Robust Methods for Partial Least Squares Regression", Journal of Chemometrics, 17, 537-549.
Before performing robust Partial Least Squares (rPLS), we first separated the data from all subjects into a "modeling" and a "validation" group. The relationship between fMRI and behavioral data was estimated by rPLS using the modeling-group data. This relationship was then used to predict the behavioral data from the fMRI data in the validation group. Both fMRI and behavioral data were normalized into Z scores before calling rPLS.
Codes of rPLS can be found [here].
index=crossvalind('kfold',size(stc,3)/2,n_fold);
validation_idx=find(index==1);
%validation_idx=cat(1,validation_idx,validation_idx+size(stc,3)/2); % this is to concatenate data from 1st and 3rd perspectives
model_idx=setdiff([1:size(stc,3)],validation_idx);
stc1n=stc0(:,:,model_idx); %data for model identification
stc1=stc0(:,:,validation_idx); %data for validation
% Z-score the fMRI data of both groups
stc1n=bsxfun(@minus,stc1n,stc_avg);
stc1n=bsxfun(@rdivide,stc1n,stc_std);
stc1=bsxfun(@minus,stc1,stc_avg);
stc1=bsxfun(@rdivide,stc1,stc_std);
questionnaire_model_tmp=questionnaire(model_idx,:);
questionnaire_validation_tmp=questionnaire(validation_idx,:);
questionnaire_tmp=questionnaire;
% Z-score the behavioral ratings using modeling-group statistics
q_avg=mean(questionnaire_model_tmp,1);
q_std=std(questionnaire_model_tmp,0,1);
questionnaire_model_tmp=bsxfun(@minus,questionnaire_model_tmp,q_avg);
questionnaire_model_tmp=bsxfun(@rdivide,questionnaire_model_tmp,q_std);
questionnaire_validation_tmp=bsxfun(@minus,questionnaire_validation_tmp,q_avg);
questionnaire_validation_tmp=bsxfun(@rdivide,questionnaire_validation_tmp,q_std);
questionnaire_tmp=bsxfun(@minus,questionnaire_tmp,q_avg);
questionnaire_tmp=bsxfun(@rdivide,questionnaire_tmp,q_std);
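The cross-validation bookkeeping can be sketched in Python (synthetic data; `rng.permutation(n) % n_fold` is a simple stand-in for `crossvalind`). The key point is that the validation data are normalized with statistics estimated from the modeling group only:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_feat = 18, 30
X = rng.normal(size=(n_subj, n_feat))                    # subjects x fMRI features
Y = rng.integers(1, 11, size=(n_subj, 2)).astype(float)  # Likert ratings (Q1, Q2)

n_fold = 6
fold = rng.permutation(n_subj) % n_fold        # simple k-fold assignment
val_idx = np.where(fold == 0)[0]               # validation group
mod_idx = np.setdiff1d(np.arange(n_subj), val_idx)   # modeling group

# normalize BOTH groups using modeling-group statistics only
x_avg, x_std = X[mod_idx].mean(0), X[mod_idx].std(0, ddof=1)
Xm = (X[mod_idx] - x_avg) / x_std
Xv = (X[val_idx] - x_avg) / x_std
y_avg, y_std = Y[mod_idx].mean(0), Y[mod_idx].std(0, ddof=1)
Ym = (Y[mod_idx] - y_avg) / y_std
Yv = (Y[val_idx] - y_avg) / y_std
```

Reusing the modeling-group mean and standard deviation on the validation group avoids leaking information from the held-out subjects into the model.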
We used robust Partial Least Squares (rPLS) to model the relationship between the fMRI data and the behavioral data. The variable k
controls the number of latent variables in the model. Bpls
is the estimated regression matrix.
result=rsimpls(stc1n',questionnaire_model_tmp,'k',n_comp(n_comp_idx),'k_max',8,'plots',0,'h',12);
Bpls=cat(1,result.int,result.slope);
Now we use Bpls
to "predict" behaviors, both for the modeling group and for the validation group. Note that the predicted behaviors must be scaled and shifted back, because the modeling data were normalized.
tmp=([ones(size(stc1n.',1),1) stc1n.']*Bpls).'; % predictions for the modeling group
tmp=bsxfun(@plus,bsxfun(@times,tmp',q_std),q_avg)'; % undo the Z-score normalization
y_orig_pred(:,:,cv_idx,n_comp_idx)=tmp;
y_orig_res(:,:,cv_idx,n_comp_idx)=y_orig_pred(:,:,cv_idx,n_comp_idx)-questionnaire(model_idx,:)'; % modeling residuals
tmp=([ones(size(stc1.',1),1) stc1.']*Bpls).'; % predictions for the validation group
tmp=bsxfun(@plus,bsxfun(@times,tmp',q_std),q_avg)';
y_orig_pred_cv(:,:,cv_idx,n_comp_idx)=tmp;
y_orig_res_cv(:,:,cv_idx,n_comp_idx)=y_orig_pred_cv(:,:,cv_idx,n_comp_idx)-questionnaire(validation_idx,:)'; % validation residuals
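For readers without the rsimpls toolbox, here is a minimal, non-robust NIPALS-style PLS sketch in Python on synthetic data; it is only a stand-in for rsimpls and omits all of the robustness machinery. With as many components as the rank of X, PLS coincides with ordinary least squares, which makes the sketch easy to check:

```python
import numpy as np

def pls2(X, Y, k):
    """Minimal NIPALS-style PLS2 with k latent components.
    Plain, non-robust stand-in for rsimpls; X and Y must be centered."""
    p, q = X.shape[1], Y.shape[1]
    W, P, Q = np.zeros((p, k)), np.zeros((p, k)), np.zeros((q, k))
    Xr = X.copy()
    for i in range(k):
        # weight: top left singular vector of the cross-covariance
        w = np.linalg.svd(Xr.T @ Y, full_matrices=False)[0][:, 0]
        t = Xr @ w                       # latent score
        P[:, i] = Xr.T @ t / (t @ t)     # X loading
        Q[:, i] = Y.T @ t / (t @ t)      # Y loading
        W[:, i] = w
        Xr = Xr - np.outer(t, P[:, i])   # deflate X
    # map the score-space regression back to the original features
    return W @ np.linalg.solve(P.T @ W, Q.T)

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 5)); X -= X.mean(0)
Y = X @ rng.normal(size=(5, 2)) + 0.1 * rng.normal(size=(30, 2))
Y -= Y.mean(0)
B = pls2(X, Y, k=5)   # k = rank(X), so PLS equals the least-squares solution here
Y_pred = X @ B
```

In practice one chooses k much smaller than the number of features, exactly what the k and n_comp variables control in the MATLAB script above.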
Finally, we can assess the significance of the rPLS model by permutation, scrambling the fMRI time series.
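The permutation idea can be sketched in Python: scramble the correspondence between fMRI features and behaviors, recompute a fit-quality score many times, and compare the observed score against this null distribution (the plain least-squares score below is an illustrative stand-in for the rPLS fit):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 20, 6
X = rng.normal(size=(n, p))                     # synthetic fMRI features
y = 0.8 * X[:, 0] + 0.5 * rng.normal(size=n)    # behavior tied to one feature

def score(X, y):
    """Fit quality: correlation between y and its least-squares prediction."""
    y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(y, y_hat)[0, 1]

observed = score(X, y)
# null distribution: scramble the rows of X to break the X-y correspondence
null = np.array([score(X[rng.permutation(n)], y) for _ in range(500)])
p_value = (np.sum(null >= observed) + 1) / (null.size + 1)
```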
Results of PLS regression, including the errors in the modeling and validation processes, the significance of each model, and the prediction performance, can be shown by [the script]. Note that the PLS regression and rendering scripts work on the Pet Abandonment (P) movies only. You can duplicate the analysis and rendering for the other dataset, the Junk Food Consumption (F) movies.
Synchronized fMRI signals across viewers can be calculated using [this script], where the data from viewing the (F) movie in the first-person perspective are used to calculate the inter-subject correlation (ISC) values at each brain surface location among six (6) subjects (subj_01 to subj_06). Four output STC files are created: the Z scores of the ISC values over all subject pairs and the medians of those Z scores, for the left and right hemispheres.
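The core ISC computation at a single surface location (pairwise correlations across subjects, Fisher z-transformed, then summarized by the median) can be sketched in Python with synthetic time series:

```python
import numpy as np

rng = np.random.default_rng(6)
n_subj, n_time = 6, 120
shared = rng.normal(size=n_time)                         # stimulus-driven component
ts = shared + 0.8 * rng.normal(size=(n_subj, n_time))    # one location, all subjects

# pairwise inter-subject correlations, Fisher z-transformed
z = []
for i in range(n_subj):
    for j in range(i + 1, n_subj):
        r = np.corrcoef(ts[i], ts[j])[0, 1]
        z.append(np.arctanh(r))                          # Fisher z transform
isc_median = np.median(z)                                # summary across pairs
```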
The fMRI STC files must be prepared before running this script.
Use etc_render_fsbrain.m to examine the ISC distribution: