All tasks#

Apart from the MRI data, IBC is also a great resource for fMRI tasks. We have run over 80 different tasks, gathered from our fellow researchers in the community, that altogether probe a large variety of cognitive domains in the human brain. The following figure depicts how much of the human cortex we have covered with these experiments.

The code and stimuli for all these tasks are openly available in the individual-brain-charting/public_protocols repository. Most of them were implemented with Python, MATLAB or Octave and are hence readily usable. However, some were originally implemented with proprietary software; those implementations are provided on the repository as well, but you will still need access to the corresponding software to run them.

Below, you can find the paradigm descriptions, conditions, contrasts as well as sample stimulation videos for each of these tasks. To help you find relevant tasks, we have also tagged each of them with some of the broad cognitive_domains they intend to probe. These tags are based on the definitions from Cognitive Atlas.

ArchiStandard#

visual_orientation visual_arithmetic_processing visual_sentence_comprehension visual_word_recognition auditory_word_recognition

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: In-house custom-made sticks, each featuring one top button; one stick held in each hand

  • Audio device: MRConfon MKII

  • See demo

The ARCHI tasks are a battery of localizers comprising a wide range of psychological domains. The ArchiStandard task, described in Pinel et al. (2007), probes basic functions, such as button presses with the left or right hand, viewing horizontal and vertical checkerboards, reading and listening to short sentences, and mental computations (subtractions). Visual stimuli were displayed in four 250-ms epochs, separated by 100-ms intervals (i.e., 1.3 s in total). Auditory stimuli were generated from a recorded male voice (i.e., a total of 1.6 s for motor instructions, 1.2-1.7 s for sentences, and 1.2-1.3 s for subtractions). The auditory or visual stimuli were presented to the participants for passive viewing or button response in event-related paradigms. Informal inquiries undertaken after the MRI session confirmed that the experimental tasks were understood and followed correctly.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for ArchiStandard

| Condition | Description |
| --- | --- |
| audio_computation | Mental subtraction, indicated by auditory instruction |
| audio_sentence | Listen to narrative sentences |
| horizontal_checkerboard | Visualization of flashing horizontal checkerboards |
| vertical_checkerboard | Visualization of flashing vertical checkerboards |
| video_computation | Mental subtraction, indicated by visual instruction |
| video_left_button_press | Left-hand three-times button press, indicated by visual instruction |
| video_right_button_press | Right-hand three-times button press, indicated by visual instruction |
| video_sentence | Read narrative sentences |

Contrasts for ArchiStandard

| Contrast | Description |
| --- | --- |
| audio_computation | mental subtraction upon audio instruction |
| audio_left_button_press | left hand button presses upon audio instructions |
| audio_right_button_press | right hand button presses upon audio instructions |
| audio_sentence | listen to narrative sentence |
| cognitive-motor | narrative/computation vs. button presses |
| computation | mental subtraction |
| computation-sentences | mental subtraction vs. sentence reading |
| horizontal-vertical | horizontal vs. vertical checkerboard |
| horizontal_checkerboard | watch horizontal checkerboard |
| left-right_button_press | left vs. right hand button press |
| listening-reading | listening to sentence vs. reading a sentence |
| motor-cognitive | button presses vs. narrative/computation |
| reading-checkerboard | read sentence vs. checkerboard |
| reading-listening | reading sentence vs. listening to sentence |
| right-left_button_press | right vs. left hand button press |
| sentences | read or listen to sentences |
| sentences-computation | sentence reading vs. mental subtraction |
| vertical-horizontal | vertical vs. horizontal checkerboard |
| vertical_checkerboard | watch vertical checkerboard |
| video_computation | mental subtraction upon video instruction |
| video_left_button_press | left hand button presses upon video instructions |
| video_right_button_press | right hand button presses upon video instructions |
| video_sentence | read narrative sentence |
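
The conditions listed above enter a general linear model as regressors, and each contrast is a weighted combination of those regressors. As a purely illustrative sketch (not the IBC analysis code), the snippet below shows how a contrast such as computation-sentences could be estimated with nilearn, assuming a BIDS-style events file whose trial_type column uses the condition names above; file names and the repetition time are hypothetical placeholders.

```python
# Illustrative sketch only: estimating one ArchiStandard contrast with nilearn.
# File names and the repetition time (t_r) are hypothetical placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# BIDS-style events: columns 'onset', 'duration', 'trial_type' (condition names above)
events = pd.read_table("sub-01_task-ArchiStandard_events.tsv")

model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=5.0)
model = model.fit("sub-01_task-ArchiStandard_bold.nii.gz", events=events)

# 'computation-sentences': mental subtraction vs. sentence reading/listening
z_map = model.compute_contrast(
    "audio_computation + video_computation - audio_sentence - video_sentence"
)
z_map.to_filename("computation-sentences_z_map.nii.gz")
```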

ArchiSpatial#

grasping hand_chirality_recognition visual_tracking visual_orientation hand_side_recognition

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Audio device: MRConfon MKII

  • See demo

The ARCHI tasks are a battery of localizers comprising a wide range of psychological domains. ArchiSpatial includes (1) ocular saccades, (2) grasping and (3) orientation judgments on objects (the latter two tasks were performed on the same visual stimuli in order to characterize grasping-specific activity), and judging whether a hand photograph showed (4) the left or right hand or (5) the front or back of the hand. The same input stimuli were presented twice in order to characterize the specific response to hand-side judgment.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for ArchiSpatial

| Condition | Description |
| --- | --- |
| object_grasp | Mimicry of object grasping with the right hand, with the corresponding object displayed on the screen |
| object_orientation | Mimic the orientation of a rhombus, displayed as the image background on the screen, using the right hand and fingers |
| rotation_hand | Mental judgment on whether the hand displayed in the image is a left or a right hand |
| rotation_side | Mental judgment on the palmar-dorsal direction of a hand displayed as visual stimulus |
| saccades | Ocular movements performed according to the displacement of a fixation cross from the center towards peripheral points in the displayed image |

Contrasts for ArchiSpatial

| Contrast | Description |
| --- | --- |
| grasp-orientation | object grasping vs. orientation reporting |
| hand-side | left or right hand vs. hand palm or back |
| object_grasp | object grasping |
| object_orientation | image orientation reporting |
| rotation_hand | left or right hand |
| rotation_side | hand palm or back vs. fixation |
| saccades | saccade vs. fixation |

ArchiSocial#

mentalization sound_perception visual_sentence_comprehension auditory_sentence_recognition auditory_perception

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Audio device: MRConfon MKII

  • See demo

The ARCHI tasks are a battery of localizers comprising a wide range of psychological domains. ArchiSocial relies on (1) the interpretation of short stories involving false beliefs or not, (2) observation of moving objects with or without a putative intention, and (3) listening to speech and non-speech sounds.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for ArchiSocial

| Condition | Description |
| --- | --- |
| false_belief_audio | Interpret short stories (presented as auditory stimuli) through mental reply (no active response was involved), featuring a false-belief plot |
| false_belief_video | Interpret short stories (presented as visual stimuli) through mental reply (no active response was involved), featuring a false-belief plot |
| mechanistic_audio | Interpret short stories (presented as auditory stimuli) through mental reply (no active response was involved), featuring a cause-consequence plot |
| mechanistic_video | Interpret short stories (presented as visual stimuli) through mental reply (no active response was involved), featuring a cause-consequence plot |
| non_speech_sound | Listen passively to short samples of natural sounds |
| speech_sound | Listen passively to short samples of human voices |
| triangle_mental | Watch short movies of triangles, which exhibit a putative interaction |
| triangle_random | Watch short movies of triangles, which exhibit a random movement |

Contrasts for ArchiSocial

| Contrast | Description |
| --- | --- |
| false_belief-mechanistic | false-belief story or tale vs. mechanistic story or tale |
| false_belief-mechanistic_audio | false-belief tale vs. mechanistic tale |
| false_belief-mechanistic_video | false-belief story vs. mechanistic story |
| false_belief_audio | false-belief tale |
| false_belief_video | false-belief story |
| mechanistic_audio | listening to a mechanistic tale |
| mechanistic_video | reading a mechanistic story |
| non_speech_sound | listen to natural sound |
| speech-non_speech | listen to voice sound vs. natural sound |
| speech_sound | listen to voice sound |
| triangle_mental | mental motion of triangle |
| triangle_mental-random | mental motion vs. random motion |
| triangle_random | randomly drifting triangle |

ArchiEmotional#

gender_discrimination visual_orientation visual_representation visual_pattern_recognition emotional_expression

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Audio device: MRConfon MKII

  • See demo

The ARCHI tasks are a battery of localizers comprising a wide range of psychological domains. ArchiEmotional includes (1) gender judgments on faces, and (2) trustworthiness and expression judgments, based on complete portraits or on photographs of eyes.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for ArchiEmotional

| Condition | Description |
| --- | --- |
| expression_control | Mental assessment of the slope of a gray-scale grid image (obtained from scrambling an eyes’ image) that may be tilted or not |
| expression_gender | Gender evaluation of the presented human eye images |
| expression_intention | Trustworthiness evaluation of the presented human eye images |
| face_control | Mental assessment of the slope of a gray-scale grid image (obtained from scrambling a face’s image) that may be tilted or not |
| face_gender | Gender evaluation of the presented human faces |
| face_trusty | Trustworthiness evaluation of the presented human faces |

Contrasts for ArchiEmotional

| Contrast | Description |
| --- | --- |
| expression_control | look at scrambled eyes image |
| expression_gender | guess gender from eyes image |
| expression_gender-control | guess the gender from eyes image vs. view scrambled image |
| expression_intention | guess intention from eyes image |
| expression_intention-control | guess intention from eyes image vs. view scrambled image |
| expression_intention-gender | guess intention vs. gender from eyes image |
| face_control | look at scrambled image |
| face_gender | guess the gender from face image |
| face_gender-control | guess the gender from face image vs. view scrambled image |
| face_trusty | assess face trustworthiness |
| face_trusty-control | assess face trustworthiness vs. view scrambled image |
| face_trusty-gender | assess face trustworthiness vs. gender |
| trusty_and_intention-control | assess face trustworthiness or guess expression intention vs. scrambled image |
| trusty_and_intention-gender | assess face trustworthiness or guess expression intention vs. guess the gender |

HcpEmotion#

feature_comparison emotional_face_recognition visual_form_recognition

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The HCP tasks used herein reproduce, with minor changes, a subset of the task-fMRI paradigms originally developed for the Human Connectome Project (Barch et al., 2013). The main purpose of the HcpEmotion task was to capture neural activity arising from fear or anger responses. To elicit stronger effects, affective facial expressions were used as visual stimuli, given their importance in adaptive social behavior (Hariri et al., 2002). The paradigm was thus composed of two categories of blocks: (1) the face block and (2) the shape block. All blocks consisted of a series of events in which images with faces or shapes were displayed, respectively. There were always three faces/shapes per image: one at the top and two at the bottom. Participants were asked to decide which face/shape at the bottom, i.e. the left or the right one, matched the one displayed at the top, by pressing the index or middle finger’s button of the response box, respectively. The task comprised twelve blocks per run, i.e. six face blocks and six shape blocks, and the two block categories were presented in alternation within each run. All blocks contained six trials and were always initiated by a three-second cue. The trials, in turn, included a visual-stimulus period of two seconds and a fixation-cross period of one second; the total duration of a trial was thus three seconds.
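
The block and trial timing described above can be laid out as a simple schedule. The sketch below is purely illustrative (it is not the E-Prime protocol code): it assumes a face block comes first, which the text does not specify, and omits any rest periods between blocks.

```python
# Illustrative sketch of the HcpEmotion block/trial timing described above.
# Assumes a face block comes first and ignores any rest between blocks.
CUE_S, STIM_S, FIX_S, TRIALS_PER_BLOCK = 3.0, 2.0, 1.0, 6

def build_run_schedule(n_blocks=12):
    """Return a list of (onset_s, duration_s, label) events for one run."""
    schedule, t = [], 0.0
    for block in range(n_blocks):
        category = "face" if block % 2 == 0 else "shape"  # alternating categories
        schedule.append((t, CUE_S, f"{category}_cue"))
        t += CUE_S
        for _ in range(TRIALS_PER_BLOCK):
            schedule.append((t, STIM_S, category))      # match-to-sample display
            t += STIM_S
            schedule.append((t, FIX_S, "fixation"))     # 1 s fixation cross
            t += FIX_S
    return schedule

events = build_run_schedule()
print(f"run length ~ {events[-1][0] + events[-1][1]:.0f} s")  # 12 * (3 + 6 * 3) = 252 s
```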

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for HcpEmotion

| Condition | Description |
| --- | --- |
| face | Images with faces were displayed |
| shape | Images with shapes were displayed |

Contrasts for HcpEmotion

| Contrast | Description |
| --- | --- |
| face | emotional face comparison |
| face-shape | emotional face comparison vs. shape comparison |
| shape | shape comparison |
| shape-face | shape comparison vs. emotional face comparison |

HcpGambling#

reward_processing punishment_processing

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The HCP tasks used herein reproduce, with minor changes, a subset of the task-fMRI paradigms originally developed for the Human Connectome Project (Barch et al., 2013). The HcpGambling task was adapted from the HCP Incentive Processing paradigm, and its aim was to localize brain structures that take part in the reward system, namely the basal ganglia complex. The paradigm included eight blocks, each composed of eight events. For every event, the participants played a game: the goal was to guess, while a question mark was shown on the screen, whether the next number to be displayed, ranging from one to nine, would be more or less than five. The answer was given by pressing the index or middle finger’s button of the response box, respectively. Feedback on the correct number was provided afterwards. There was an equal number of blocks in which the participants experienced either reward or loss for most of the events. Concretely, six out of the eight events within a block pertained to one of these two outcomes; the remaining events corresponded to the opposite outcome or to a neutral outcome, i.e. when the correct number was five. The task thus consisted of eight blocks per run, half related to reward and half to loss. The order of the two block categories was pseudo-randomized within a run, but fixed for all participants. A fixation-cross period of fifteen seconds was displayed between blocks. All blocks contained eight trials. Each trial included a question-mark visual stimulus lasting up to 1.5 seconds, a feedback period of one second and a fixation-cross period of one second; the total duration of a trial was thus approximately 3.5 seconds.
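
As a rough illustration of how one run's outcomes could be laid out under the description above, the sketch below (not the actual protocol code) assumes the two non-dominant events of each block split into one opposite and one neutral outcome, which the text leaves unspecified, and uses a fixed seed so the order is reproducible.

```python
# Illustrative sketch of one HcpGambling run's outcome layout, following the
# description above: 6 of 8 events per block carry the block's main outcome and
# the remaining 2 are the opposite or a neutral outcome. The one-opposite /
# one-neutral split and the fixed seed are arbitrary assumptions.
import random

def block_outcomes(block_type, rng):
    """block_type is 'reward' or 'punishment'; return 8 shuffled trial outcomes."""
    other = "punishment" if block_type == "reward" else "reward"
    outcomes = [block_type] * 6 + [other, "neutral"]
    rng.shuffle(outcomes)
    return outcomes

rng = random.Random(0)                       # fixed seed: same order for everyone
block_types = ["reward"] * 4 + ["punishment"] * 4
rng.shuffle(block_types)                     # pseudo-random block order within a run
run = [block_outcomes(t, rng) for t in block_types]
```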

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for HcpGambling

| Condition | Description |
| --- | --- |
| punishment | The participant experiences loss |
| reward | The participant experiences reward |

Contrasts for HcpGambling

| Contrast | Description |
| --- | --- |
| punishment | negative gambling outcome |
| punishment-reward | negative vs. positive gambling outcome |
| reward | gambling with positive outcome |
| reward-punishment | positive vs. negative gambling outcome |

HcpMotor#

response_selection right_toe_response_execution tongue_response_execution response_execution right_hand_response_execution

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

The HCP tasks used herein reproduce, with minor changes, a subset of the task-fMRI paradigms originally developed for the Human Connectome Project (Barch et al., 2013). The HcpMotor task was designed with the intent of extracting maps of gross motor topography, in particular motor skills associated with movements of the foot, hand and tongue. There were thus five categories of blocks, relating to motor tasks involving (1) the left foot, (2) the right foot, (3) the left hand, (4) the right hand, and (5) the tongue, respectively. The blocks always started with visual cues indicating which part of the body should be moved. The cues were then followed by a set of events, indicated in turn by arrows flashing on the screen, during which the participants performed the corresponding movements. The task was formed by five blocks per category, with a total of twenty blocks per run. The order of the block categories was pseudo-randomized during each run, but fixed for all participants. A fixation-dot period of fifteen seconds was inserted between some blocks. All blocks contained ten trials. Every trial included a one-second cue and a twelve-second performance period, during which arrows flashed ten times on the screen to indicate the number of movements to be performed. The total duration of a trial was thus thirteen seconds.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for HcpMotor

| Condition | Description |
| --- | --- |
| cue | Fixation dot |
| left_foot | Visual cue indicating the left foot should be moved |
| left_hand | Visual cue indicating the left hand should be moved |
| right_foot | Visual cue indicating the right foot should be moved |
| right_hand | Visual cue indicating the right hand should be moved |
| tongue | Visual cue indicating the tongue should be moved |

Contrasts for HcpMotor

| Contrast | Description |
| --- | --- |
| cue | movement cue |
| left_foot | move left foot |
| left_foot-avg | move left foot vs. right foot, hands and tongue |
| left_hand | move left hand |
| left_hand-avg | move left hand vs. right hand, feet and tongue |
| right_foot | move right foot |
| right_foot-avg | move right foot vs. left foot, hands and tongue |
| right_hand | move right hand |
| right_hand-avg | move right hand vs. left hand, feet and tongue |
| tongue | move tongue |
| tongue-avg | move tongue vs. hands and feet |

HcpLanguage#

auditory_sentence_recognition auditory_arithmetic_processing narrative_comprehension

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The HCP tasks used herein reproduce, with minor changes, a subset of the task-fMRI paradigms originally developed for the Human Connectome Project (Barch et al., 2013). The HcpLanguage task was used as a localizer of brain regions involved in semantic processing, with a special focus on the anterior temporal lobe (ATL) (Binder et al., 2011). The paradigm comprised two categories of blocks: (1) story blocks, and (2) math blocks. The math block served as a control task in this context, since it was likely to recruit other brain regions under comparable attentional demands. Both types of blocks presented auditory stimuli in short epochs, each ending with a question followed by two possible answers. During story blocks, participants listened to stories, and the question targeted their respective topics. Math blocks, conversely, presented arithmetic problems for which the correct solution had to be selected. The answer was given after the two possible options were presented, by pressing the index or middle finger’s button of the response box for the first or second option, respectively. The difficulty level of the problems in both categories was adjusted throughout the experiment, in order to keep the participants engaged in the task and thus ensure accurate performance (Binder et al., 2011). The task was composed of eleven blocks per run: in the first run, six story blocks were interleaved with five math blocks, and the reverse proportion and order were used in the second run. The number of trials per block varied between one and four; nevertheless, it was ensured that the two block categories had matching presentation lengths in every run. A two-second cue at the beginning of each block indicated its category. The duration of the trials within a block varied between ten and thirty seconds. Finally, the presentation of the auditory stimuli was always accompanied by the display of a fixation cross on the screen throughout the entire run.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for HcpLanguage

| Condition | Description |
| --- | --- |
| math | Auditorily-cued mental addition |
| story | Listening to tales |

Contrasts for HcpLanguage

| Contrast | Description |
| --- | --- |
| math | mental additions |
| math-story | mental additions vs. listening to tale |
| story | listening to tale |
| story-math | listening to tale vs. mental additions |

HcpRelational#

visual_pattern_recognition feature_comparison visual_form_recognition relational_comparison

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The HCP tasks used herein reproduce, with minor changes, a subset of the task-fMRI paradigms originally developed for the Human Connectome Project (Barch et al., 2013). The HcpRelational task employed a relational matching-to-sample paradigm, featuring a second-order comparison of relations between two pairs of objects. It served primarily as a localizer of the rostrolateral prefrontal cortex, since relational matching mechanisms were shown to elicit activation in this region (Smith et al., 2007). As in some previous tasks, the paradigm comprised two categories of blocks: (1) the relational-processing block, and (2) the control-matching block. Each block consisted of a set of events. In the relational-processing block, the visual stimuli were images showing two pairs of objects, one pair at the top and the other at the bottom of the image. Objects within a pair could differ along two dimensions: shape and texture. Participants had to identify whether the pair of objects at the top differed along a specific dimension and, subsequently, determine whether the pair at the bottom changed along the same dimension. In the control-matching block, one pair of objects was displayed at the top of the image and a single object at the bottom, together with a cue in the middle of the image referring to one of the two possible dimensions. Participants had to indicate whether the object at the bottom matched either of the two objects at the top along the cued dimension. If there was a match, they pressed the index finger’s button of the response box; otherwise, they pressed the middle finger’s button.

The task comprised twelve blocks per run, six per block category, interleaved within the run. A fixation-cross period of sixteen seconds was inserted between some blocks. All blocks contained six trials and were always initiated by a two-second cue. Each trial consisted of a visual-stimulus-plus-response period followed by a fixation-cross period, lasting up to ten seconds in total. The duration of the former depended on the block type: it lasted nine seconds in the relational-processing block and 7.6 seconds in the control-matching block.
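
The two decision rules described above can be written down compactly. The sketch below is illustrative only; the encoding of objects as (shape, texture) pairs and the example stimuli are hypothetical.

```python
# Illustrative sketch of the two HcpRelational decision rules; objects are
# encoded as (shape, texture) tuples, which is a hypothetical representation.

def relational_trial(top_pair, bottom_pair):
    """Does the bottom pair differ along the same dimension as the top pair?"""
    differing = [dim for dim in (0, 1) if top_pair[0][dim] != top_pair[1][dim]]
    dim = differing[0]          # the description implies one differing dimension
    return bottom_pair[0][dim] != bottom_pair[1][dim]

def control_trial(top_pair, bottom_object, cued_dim):
    """Does the bottom object match either top object on the cued dimension?"""
    return any(obj[cued_dim] == bottom_object[cued_dim] for obj in top_pair)

# 'index' button for a match, 'middle' button otherwise
print(relational_trial((("circle", "dots"), ("square", "dots")),
                       (("star", "lines"), ("star", "lines"))))        # False
print(control_trial((("circle", "dots"), ("square", "lines")),
                    ("triangle", "dots"), cued_dim=1))                 # True
```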

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for HcpRelational

| Condition | Description |
| --- | --- |
| match | Simple visual matching |
| relational | Relational processing of visual objects |

Contrasts for HcpRelational

| Contrast | Description |
| --- | --- |
| match | visual feature matching vs. fixation |
| relational | relational comparison vs. fixation |
| relational-match | relational comparison vs. matching |

HcpSocial#

mentalization animacy_decision motion_detection animacy_perception

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The HCP tasks used herein reproduce, with minor changes, a subset of the task-fMRI paradigms originally developed for the Human Connectome Project (Barch et al., 2013). The HcpSocial task was intended to provide evidence for task-specific activation in brain structures presumably implicated in social cognition. The paradigm included two categories of blocks, in which movies were presented during short epochs. The movies consisted of triangle-shaped clip art moving in a predetermined fashion. In the block category of interest, putative social interactions could be inferred from the movements, whereas in the other category, the control block, the objects appeared to move randomly. Participants had to decide whether the movements appeared to represent a social interaction, pressing the index finger’s button if they did, the ring finger’s button if they did not, and the middle finger’s button in case of uncertainty. The task consisted of ten blocks per run, half from each of the aforementioned categories; block order was pseudo-randomized for every run, but fixed for all participants. There was only one trial per block, consisting of a twenty-second video-clip presentation plus a response period of at most three seconds, indicated by a momentary instruction on the screen. The total duration of a block was thus approximately twenty-three seconds. A fixation-cross period of fifteen seconds was always displayed between blocks.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for HcpSocial

| Condition | Description |
| --- | --- |
| mental | Watching a movie with mental motion |
| random | Watching a movie with random motion |

Contrasts for HcpSocial

| Contrast | Description |
| --- | --- |
| mental | mental motion vs. fixation |
| mental-random | mental motion vs. random motion |
| random | random motion vs. fixation |

HcpWm#

updating working_memory face_maintenance visual_place_recognition visual_face_recognition

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The HCP tasks used herein reproduce, with minor changes, a subset of the task-fMRI paradigms originally developed for the Human Connectome Project (Barch et al., 2013). The HcpWm task was adapted from the classical n-back task to serve as a functional localizer for the evaluation of working-memory (WM) capacity and related processes. The paradigm integrated two categories of blocks, (1) the “0-back” WM-task block and (2) the “2-back” WM-task block, both equally represented within a run. A cue was always displayed at the beginning of each block, indicating its task type. Blocks were formed by a set of events during which pictures of faces, places, tools or body parts were shown on the screen. Each block was dedicated to one specific category of pictures, and the four categories were always presented in every run. At each event, participants had to decide whether the image matched the reference or not, by pressing the index or middle finger’s button of the response box, respectively. The task consisted of sixteen blocks per run, split between the two block categories; within each category, there were four pairs of blocks, referring respectively to the four classes of pictures mentioned above. The order of the blocks, regardless of their category and corresponding class of pictures, was pseudo-randomized for every run, but fixed for all participants. A fixation-cross period of fifteen seconds was introduced between some blocks. All blocks contained ten trials and were always initiated by a cue of 2.5 seconds. Trials, in turn, included the presentation of a picture for two seconds and a very short fixation-cross period of half a second; the total duration of one trial was thus 2.5 seconds.
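
The matching rule of the two block types can be made concrete with a small sketch. This is illustrative only: the assumption that the 0-back reference is a target picture announced at the block cue follows the usual n-back convention and is not spelled out in the text, and the picture names are hypothetical.

```python
# Illustrative sketch of the HcpWm matching rule. The 0-back reference is assumed
# to be a target picture announced at the block cue (usual n-back convention);
# picture names are hypothetical.

def nback_answers(pictures, n, target=None):
    """Expected answers: 'index' button for a match, 'middle' for no match."""
    answers = []
    for i, pic in enumerate(pictures):
        if n == 0:
            match = pic == target                        # 0-back: compare to the cued target
        else:
            match = i >= n and pic == pictures[i - n]    # n-back: compare n trials back
        answers.append("index" if match else "middle")
    return answers

trial_pics = ["hammer", "saw", "hammer", "drill", "hammer"]
print(nback_answers(trial_pics, n=0, target="hammer"))  # index, middle, index, middle, index
print(nback_answers(trial_pics, n=2))                   # middle, middle, index, middle, index
```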

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for HcpWm

| Condition | Description |
| --- | --- |
| 0back_body | 0-back, pictures of body parts were displayed |
| 0back_face | 0-back, pictures of faces were displayed |
| 0back_place | 0-back, pictures of places were displayed |
| 0back_tools | 0-back, pictures of tools were displayed |
| 2back_body | 2-back, pictures of body parts were displayed |
| 2back_face | 2-back, pictures of faces were displayed |
| 2back_place | 2-back, pictures of places were displayed |
| 2back_tools | 2-back, pictures of tools were displayed |

Contrasts for HcpWm

| Contrast | Description |
| --- | --- |
| 0back-2back | 0-back vs. 2-back |
| 0back_body | body image 0-back task vs. fixation |
| 0back_face | face image 0-back task vs. fixation |
| 0back_place | place image 0-back task vs. fixation |
| 0back_tools | tool image 0-back task vs. fixation |
| 2back-0back | 2-back vs. 0-back |
| 2back_body | body image 2-back task vs. fixation |
| 2back_face | face image 2-back task vs. fixation |
| 2back_place | place image 2-back task vs. fixation |
| 2back_tools | tool image 2-back task vs. fixation |
| body-avg | body images vs. face, place and tool images |
| face-avg | face images vs. body, place and tool images |
| place-avg | place images vs. face, body and tool images |
| tools-avg | tool images vs. face, place and body images |

RSVPLanguage#

word_maintenance string_maintenance visual_word_recognition working_memory visual_string_recognition

Implementation

  • Software: Expyriment 0.7.0 (Python 2.7)

  • Response device: In-house custom-made sticks, each featuring one top button; one stick held in each hand

  • Audio device: MRConfon MKII

  • See demo

The Rapid-Serial-Visual-Presentation (RSVP) language task, adapted from the study by Humphries et al. (2006) on syntactic and semantic processing in auditory sentence comprehension, targets similar modules in the context of reading. This adaptation allows for additional insights into visual word recognition, sublexical processing, and other aspects of active reading. The paradigm employs a block-design presentation strategy, with each block representing an epoch within a trial. These epochs correspond to different experimental conditions and involve the consecutive visual presentation of ten constituents composed of letters. All linguistic content of the conditions other than “consonant strings”, i.e. grammar rules, lexicon and phonemes, was drawn from the French language. To ensure continuous engagement, participants were prompted immediately after each sentence to determine whether the current constituent, or ‘probe’, belonged to the preceding sentence. They responded by pressing the left button for ‘yes’ and the right button for ‘no’.

Data were collected in a single session comprising six runs, each consisting of sixty trials. Within each run, ten trials were dedicated to each condition. Trial order was pseudo-randomized within and between runs, ensuring no repeated trials in a session. The presentation order of trials varied across participants. Each trial included several experimental stages: fixation cross display (2 seconds), brief blank screen (0.5 seconds), linguistic stimuli block (4 seconds), variable blank screen jitter (1-1.5 seconds), second fixation cross display (0.5 seconds), probe display (0.5 seconds), and response period (up to 2 seconds). This resulted in a total trial duration of ten seconds. Additionally, three extra seconds of blank screen preceded the first trial in every run. Opposite phase-encoding directions were applied during acquisition of each half of the total runs.
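
The trial structure above can be written as a simple stage list. The sketch below is illustrative only; it draws the jitter uniformly from the stated range and simply lays the stages end to end, since the text does not detail how the response window and jitter trade off against the quoted ten-second total.

```python
# Illustrative sketch of one RSVPLanguage trial as the sequence of stages quoted
# above; the jitter is drawn uniformly from 1-1.5 s and the stages are simply
# laid end to end.
import random

def rsvp_trial_stages(rng):
    return [
        ("fixation_cross", 2.0),
        ("blank", 0.5),
        ("linguistic_block", 4.0),               # ten constituents presented in succession
        ("blank_jitter", rng.uniform(1.0, 1.5)),
        ("fixation_cross", 0.5),
        ("probe", 0.5),
        ("response_window", 2.0),                # up to 2 s; left = 'yes', right = 'no'
    ]

rng = random.Random(42)
for name, duration in rsvp_trial_stages(rng):
    print(f"{name:<17s}{duration:.2f} s")
```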

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for RSVPLanguage

| Condition | Description |
| --- | --- |
| complex | Constituents, i.e. words, formed syntactically and semantically congruent sentences with more than one clause (high sentence-structure complexity) |
| consonant_string | Syntactically and semantically non-congruent sentences composed of non-vocable constituents |
| probe | Presented word, for which one has to assess whether it was in the previously presented sequence or not |
| pseudoword_list | Syntactically and semantically non-congruent sentences composed of non-lexical vocable constituents |
| read_jabberwocky | Syntactically congruent sentences composed of non-lexical vocable constituents |
| simple | Constituents, i.e. words, formed syntactically and semantically congruent sentences of one single clause (low sentence-structure complexity) |
| word_list | Syntactically non-congruent sentences but with semantic content |

Contrasts for RSVPLanguage

| Contrast | Description |
| --- | --- |
| complex | read sentence with complex syntax vs. fixation |
| complex-consonant_string | read complex sentence vs. consonant strings |
| complex-simple | read sentence with complex vs. simple syntax |
| consonant_string | read and encode consonant strings vs. fixation |
| jabberwocky | read jabberwocky vs. fixation |
| jabberwocky-consonant_string | read jabberwocky vs. consonant strings |
| jabberwocky-pseudo | read jabberwocky vs. pseudowords |
| probe | word probe |
| pseudo-consonant_string | read pseudowords vs. consonant strings |
| pseudoword_list | read pseudowords vs. fixation |
| sentence-consonant_string | read sentence vs. consonant strings |
| sentence-jabberwocky | read sentence vs. jabberwocky |
| sentence-pseudo | read sentence vs. pseudowords |
| sentence-word | read sentence vs. words |
| simple | read sentence with simple syntax vs. fixation |
| simple-consonant_string | read simple sentence vs. consonant strings |
| word-consonant_string | read words vs. consonant strings |
| word-pseudo | read words vs. pseudowords |
| word_list | read words vs. fixation |

RestingState#

Implementation

  • Software: not specified

Participants underwent two sessions, each consisting of two 15-minute runs of resting-state fMRI data. This resulted in a total of 1 hour of resting-state data per subject. Participants were instructed to remain still, keep their eyes open, and focus on a crosshair displayed on the screen. For more information on the acquisition parameters used for the resting-state data, refer to Acquisition parameters for resting-state BOLD-contrast images.

MTTWE#

west_cardinal-direction_judgment spatial_localization semantic_categorization auditory_perception east_cardinal-direction_judgment

Implementation

  • Software: Expyriment 0.7.0 / pygame 1.9.3

  • Response device: In-house custom-made sticks, each featuring one top button; one stick held in each hand

  • Repository

  • See demo

The Mental Time Travel (MTT) task battery was built on prior NeuroSpin studies focused on chronosthesia and mental space navigation (Gauthier et al., 2016, Gauthier et al., 2016, Gauthier et al., 2018). These studies involved judging the ordinality of historical events via egocentric mapping. In contrast, our task assessed neural correlates for both mental time and space judgment using narratives and allocentric mapping. To eliminate subject-specific representations, we used fictional scenarios with fabricated stories and characters on different islands.

Each island had two stories plotted in a two-dimensional mesh of nodes, each representing a specific action. The narratives were presented as audio to prevent graphical memory retrieval, and participants learned the stories chronographically, without taking visual notes. The stories of each island evolved both in time and in one single cardinal direction. The cardinal directions, cued in the MTTWE task, were West-East (WE). In addition, the stories of each island evolved spatially in opposite ways. So, the two stories plotted in the West-East island evolved across time from west to east and east to west, respectively.

The task followed a block-design paradigm, featuring three audio stimulus conditions: (1) Reference, providing context for time or space judgment; (2) Cue, instructing the type of mental judgment to be made, i.e. “Before or After?” for the time judgment or “West or East?” for the space judgment; and (3) Event, the action to be judged. Each trial began with a two-second Reference followed by silence, then a two-second Cue with silence, and four Events presented for two seconds each, interspersed by a three-second Response condition. The total trial duration was 39 seconds.

A black fixation cross was always present on the screen, and participants were instructed to keep their eyes open. At the end of each trial, the cross briefly turned red, signaling the next trial. Participants responded by pressing the left- or right-hand button to indicate their judgments based on the Cue, either temporal or spatial. If the Cue indicated a time judgment, participants were to judge whether the preceding Event occurred before or after the Reference. If the Cue indicated a space judgment, participants were to judge whether the Event occurred west or east of the Reference.

One data collection session consisted of three runs, each comprising twenty trials. Half of these trials focused on time navigation, and the other half on space navigation. Both types of navigation shared five different references, resulting in two trials with the same reference for each type of navigation. These two trials differed in the distance between the node of the Reference and the node of each Event, with ‘close’ referring to two consecutive nodes, and ‘far’ indicating two nodes interspersed by another node. Within trials, half of the Events related to past or western actions, and the other half to future or eastern actions with respect to the Reference.

Trial order was shuffled within runs, ensuring each run featured a unique sequence of trials based on reference type (both in time and space) and cue. Given only two types of answers, events were randomized according to their correct answer within each trial. This randomized sequence was consistent across all participants for each run and is available in the task’s Github repository. It’s important to note that the sequence of trials for all runs is predetermined and provided as inputs for a specific session in the protocol.
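
As an illustration of how such a fixed pseudo-random trial order could be produced, the sketch below enumerates the twenty trials of a run (five references, two cues, two distances) and shuffles them with a fixed seed. The labels and the seed are hypothetical; the sequences actually used in the protocol are the precomputed ones shipped with the repository.

```python
# Illustrative sketch of a fixed pseudo-random trial order (identical across
# participants). Labels and the seed are hypothetical; the protocol ships
# precomputed sequences.
import itertools
import random

references = ["ref1", "ref2", "ref3", "ref4", "ref5"]          # hypothetical labels
trials = [
    {"reference": ref, "cue": cue, "distance": dist}
    for ref, cue, dist in itertools.product(references, ("time", "space"), ("close", "far"))
]                                                               # 5 x 2 x 2 = 20 trials per run

rng = random.Random(2016)                 # fixed seed: same order on every call
run_order = rng.sample(trials, k=len(trials))
```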

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for MTTWE

| Condition | Description |
| --- | --- |
| we_after_event | Action to be judged whether it takes place before or after the reference; it actually takes place after the reference, in the west-east island |
| we_all_event_response | Motor responses performed after every event condition in the west-east island |
| we_all_space_cue | Cue indicating a question about spatial orientation in the west-east island |
| we_all_time_cue | Cue indicating a question about time orientation in the west-east island |
| we_average_reference | Action in the story serving as reference for the time or space judgment in the same trial, in the west-east island |
| we_before_event | Action to be judged whether it takes place before or after the reference; it actually takes place before the reference, in the west-east island |
| we_eastside_event | Action to be judged whether it takes place west or east of the reference; it actually takes place east of the reference |
| we_westside_event | Action to be judged whether it takes place west or east of the reference; it actually takes place west of the reference |

Contrasts for MTTWE

| Contrast | Description |
| --- | --- |
| eastside-westside_event | events occurring eastside vs. westside |
| we_after-before_event | events occurring after vs. before in west-east island |
| we_after_event | events occurring after vs. fixation in west-east island |
| we_all_event_response | motor responses performed after every event condition in the west-east island |
| we_all_space-time_cue | spatial vs. time cues in west-east island |
| we_all_space_cue | spatial cue of the next event in west-east island |
| we_all_time-space_cue | time vs. spatial cues in west-east island |
| we_all_time_cue | time cue of the next event in west-east island |
| we_average_event | figuring out the space or time of an event in west-east island |
| we_average_reference | updating one's position in space and time in west-east island |
| we_before-after_event | events occurring before vs. after in west-east island |
| we_before_event | events occurring before vs. fixation in west-east island |
| we_eastside_event | events occurring eastside vs. fixation |
| we_space-time_event | event in space vs. event in time in west-east island |
| we_space_event | figuring out the position of an event in west-east island |
| we_time-space_event | event in time vs. event in space in west-east island |
| we_time_event | figuring out the time of an event in west-east island |
| we_westside_event | events occurring westside vs. fixation |
| westside-eastside_event | events occurring westside vs. eastside |

MTTNS#

spatial_localization south_cardinal-direction_judgment semantic_categorization auditory_perception cardinal-direction_judgment

Implementation

  • Software: Expyriment 0.7.0 / pygame 1.9.4

  • Response device: In-house custom-made sticks, each featuring one top button; one stick held in each hand

  • Repository

  • See demo

The Mental Time Travel (MTT) task battery was developed following previous studies conducted at the NeuroSpin platform on chronosthesia and mental space navigation (Gauthier et al., 2016, Gauthier et al., 2016, Gauthier et al., 2018). The MTTNS task is exactly the same as the MTTWE task, except that the cardinal directions cued in the task were North-South (NS). In addition, the two stories plotted in the South-North island evolved across time from north to south and from south to north, respectively. The MTTNS task was performed in a separate session from the MTTWE task.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for MTTNS

| Condition | Description |
| --- | --- |
| sn_after_event | Action to be judged whether it takes place before or after the reference; it actually takes place after the reference, in the south-north island |
| sn_all_event_response | Motor responses performed after every event condition in the south-north island |
| sn_all_space_cue | Cue indicating a question about spatial orientation in the south-north island |
| sn_all_time_cue | Cue indicating a question about time orientation in the south-north island |
| sn_average_reference | Action in the story serving as reference for the time or space judgment in the same trial, in the south-north island |
| sn_before_event | Action to be judged whether it takes place before or after the reference; it actually takes place before the reference, in the south-north island |
| sn_northside_event | Action to be judged whether it takes place south or north of the reference; it actually takes place north of the reference |
| sn_southside_event | Action to be judged whether it takes place south or north of the reference; it actually takes place south of the reference |

Contrasts for MTTNS

| Contrast | Description |
| --- | --- |
| northside-southside_event | events occurring northside vs. southside |
| sn_after-before_event | events occurring after vs. before in south-north island |
| sn_after_event | events occurring after vs. fixation in south-north island |
| sn_all_event_response | motor responses performed after every event condition in the south-north island |
| sn_all_space-time_cue | spatial vs. time cues in south-north island |
| sn_all_space_cue | spatial cue of the next event in south-north island |
| sn_all_time-space_cue | time vs. spatial cues in south-north island |
| sn_all_time_cue | time cue of the next event in south-north island |
| sn_average_event | figuring out the space or time of an event in south-north island |
| sn_average_reference | updating one's position in space and time in south-north island |
| sn_before-after_event | events occurring before vs. after in south-north island |
| sn_before_event | events occurring before vs. fixation in south-north island |
| sn_northside_event | events occurring northside vs. fixation |
| sn_southside_event | events occurring southside vs. fixation |
| sn_space-time_event | event in space vs. event in time in south-north island |
| sn_space_event | figuring out the position of an event in south-north island |
| sn_time-space_event | event in time vs. event in space in south-north island |
| sn_time_event | figuring out the time of an event in south-north island |
| southside-northside_event | events occurring southside vs. northside |

PreferenceFood#

confidence_judgment incentive_salience reward_valuation judgment food_cue_reactivity

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The Preference task battery was adapted from the Pleasantness Rating task described in Lebreton et al. (2015), in order to capture the neural correlates underlying decision-making about potentially rewarding outcomes (the “positive-incentive value”) as well as the level of confidence in such decisions. The whole task battery is composed of four tasks, each pertaining to the presentation of items of a certain kind; the PreferenceFood task was dedicated to “food items”. The task was organized as a block-design experiment with one condition per trial. Every trial started with a fixation cross, whose duration was jittered between 0.5 and 4.5 seconds, after which a picture of an item was displayed on the screen together with a rating scale and a cursor. Participants were to indicate how pleasant the presented stimulus was by sliding the cursor along the scale. The index and ring fingers’ buttons moved the cursor to the left at low and high speed, respectively, whereas the middle and little fingers’ buttons moved it to the right at low and high speed; the thumb’s button was used to validate the answer. The scale ranged from 1 to 100. The value 1 corresponded to the choices “unpleasant” or “indifferent”, the middle of the scale corresponded to the choice “pleasant”, and the value 100 corresponded to the choice “very pleasant”. The ratings therefore related only to the estimation of the positive-incentive value of the items displayed.
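
The button-to-cursor mapping described above can be sketched as follows. This is illustrative only: the step sizes for the low- and high-speed movements are arbitrary assumptions, as the text does not quantify them.

```python
# Illustrative sketch of the Preference rating-scale response mapping described
# above; the step sizes for low- and high-speed movements are arbitrary assumptions.
SLOW, FAST = 1, 5

BUTTON_STEP = {
    "index":  -SLOW,   # slow move to the left
    "ring":   -FAST,   # fast move to the left
    "middle": +SLOW,   # slow move to the right
    "little": +FAST,   # fast move to the right
}

def update_cursor(position, button):
    """Return (new_position, validated) after one button event on the 1-100 scale."""
    if button == "thumb":                      # thumb validates the current rating
        return position, True
    return min(100, max(1, position + BUTTON_STEP.get(button, 0))), False

position, validated = 50, False
for press in ("ring", "ring", "index", "thumb"):   # example key sequence
    position, validated = update_cursor(position, press)
print(position, validated)                          # 39 True
```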

The task was presented twice in two fully dedicated runs. The stimuli were always different between runs of the same task. As a consequence, no stimulus was ever repeated in any trial and, thus, no item was ever assessed more than once by the participants. Although each trial had a variable duration, according to the time spent by the participant in the assessment, no run lasted longer than eight minutes and sixteen seconds. To avoid any selection bias in the sequence of stimuli, the order of their presentation was shuffled across trials and between runs of the same type. This shuffle is embedded in the code of the protocol and, thus, the sequence was determined upon launching it. Consequently, the sequence of stimuli was also random across subjects.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for PreferenceFood

| Condition | Description |
| --- | --- |
| food_constant | Classify the level of pleasantness of a food item displayed on the screen in terms of willingness to eat it. This condition serves as an occurrence regressor when formulated as visual evaluation of an item vs. fixation |
| food_linear | Classify the level of pleasantness of a food item displayed on the screen in terms of willingness to eat it. This condition captures the linear effect of pleasantness (akin to judgement effects) when formulated as visual preference vs. no preference |
| food_quadratic | Classify the level of pleasantness of a food item displayed on the screen in terms of willingness to eat it. This condition captures the quadratic effect of pleasantness (akin to confidence effects) when formulated as confidence in preference vs. no confidence |

Contrasts for PreferenceFood

| Contrast | Description |
| --- | --- |
| food_constant | evaluation of food |
| food_linear | linear effect of food preference |
| food_quadratic | quadratic effect of food preference |

PreferencePaintings#

incentive_salience confidence_judgment visual_form_discrimination visual_color_discrimination reward_valuation

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • See demo

The Preference task battery was adapted from the Pleasantness Rating task described in Lebreton et al. (2015), in order to capture the neural correlates underlying decision-making about potentially rewarding outcomes (the “positive-incentive value”) as well as the level of confidence in such decisions. The whole task battery is composed of four tasks, each pertaining to the presentation of items of a certain kind; the PreferencePaintings task was dedicated to “paintings”. The task was organized as a block-design experiment with one condition per trial. Every trial started with a fixation cross, whose duration was jittered between 0.5 and 4.5 seconds, after which a picture of an item was displayed on the screen together with a rating scale and a cursor. Participants were to indicate how pleasant the presented stimulus was by sliding the cursor along the scale. The index and ring fingers’ buttons moved the cursor to the left at low and high speed, respectively, whereas the middle and little fingers’ buttons moved it to the right at low and high speed; the thumb’s button was used to validate the answer. The scale ranged from 1 to 100. The value 1 corresponded to the choices “unpleasant” or “indifferent”, the middle of the scale corresponded to the choice “pleasant”, and the value 100 corresponded to the choice “very pleasant”. The ratings therefore related only to the estimation of the positive-incentive value of the items displayed.

The task was presented twice in two fully dedicated runs. The stimuli were always different between runs of the same task. As a consequence, no stimulus was ever repeated in any trial and, thus, no item was ever assessed more than once by the participants. Although each trial had a variable duration, according to the time spent by the participant in the assessment, no run lasted longer than eight minutes and sixteen seconds. To avoid any selection bias in the sequence of stimuli, the order of their presentation was shuffled across trials and between runs of the same type. This shuffle is embedded in the code of the protocol and, thus, the sequence was determined upon launching it. Consequently, the sequence of stimuli was also random across subjects.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for PreferencePaintings

| Condition | Description |
| --- | --- |
| painting_constant | Classify the level of pleasantness of a painting displayed on the screen in terms of willingness to possess it. This condition serves as an occurrence regressor when formulated as visual evaluation of an item vs. fixation |
| painting_linear | Classify the level of pleasantness of a painting displayed on the screen in terms of willingness to possess it. This condition captures the linear effect of pleasantness (akin to judgement effects) when formulated as visual preference vs. no preference |
| painting_quadratic | Classify the level of pleasantness of a painting displayed on the screen in terms of willingness to possess it. This condition captures the quadratic effect of pleasantness (akin to confidence effects) when formulated as confidence in preference vs. no confidence |

Contrasts for PreferencePaintings

| Contrast | Description |
| --- | --- |
| painting_constant | evaluation of paintings |
| painting_linear | linear effect of paintings preference |
| painting_quadratic | quadratic effect of paintings preference |

PreferenceFaces#

incentive_salience confidence_judgment reward_valuation facial_attractiveness_recognition face_perception

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The Preference task battery was adapted from the Pleasantness Rating task described in Lebreton et al. (2015), in order to capture the neural correlates underlying decision-making about potentially rewarding outcomes (the “positive-incentive value”) as well as the level of confidence in such decisions. The whole task battery is composed of four tasks, each pertaining to the presentation of items of a certain kind; the PreferenceFaces task was dedicated to “human faces”. All tasks were organized as block-design experiments with one condition per trial. Every trial started with a fixation cross, whose duration was jittered between 0.5 and 4.5 seconds, after which a picture of an item was displayed on the screen together with a rating scale and a cursor. Participants were to indicate how pleasant the presented stimulus was by sliding the cursor along the scale. The index and ring fingers’ buttons moved the cursor to the left at low and high speed, respectively, whereas the middle and little fingers’ buttons moved it to the right at low and high speed; the thumb’s button was used to validate the answer. The scale ranged from 1 to 100. The value 1 corresponded to the choices “unpleasant” or “indifferent”, the middle of the scale corresponded to the choice “pleasant”, and the value 100 corresponded to the choice “very pleasant”. The ratings therefore related only to the estimation of the positive-incentive value of the items displayed.

The task was presented twice in two fully dedicated runs. The stimuli were always different between runs of the same task. As a consequence, no stimulus was ever repeated in any trial and, thus, no item was ever assessed more than once by the participants. Although each trial had a variable duration, according to the time spent by the participant in the assessment, no run lasted longer than eight minutes and sixteen seconds. To avoid any selection bias in the sequence of stimuli, the order of their presentation was shuffled across trials and between runs of the same type. This shuffle is embedded in the code of the protocol and, thus, the sequence was determined upon launching it. Consequently, the sequence of stimuli was also random across subjects.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for PreferenceFaces

Condition

Description

face_constant

Classify the level of pleasantness of a human face displayed on the screen in terms of willingness to meet the person portrayed, this condition serves as an occurrence regressor when formulated as visual evaluation of an item vs. fixation

face_linear

Classify the level of pleasantness of a human face displayed on the screen in terms of willingness to meet the person portrayed. this condition captures the linear effect of pleasantness (akin to judgement effects) when formulated as visual preference vs. no preference

face_quadratic

Classify the level of pleasantness of a human face displayed on the screen in terms of willingness to meet the person portrayed. This condition captures the quadratic effect of pleasantness (akin to confidence effects) when formulated as confidence in preference vs. no confidence

Contrasts for PreferenceFaces

Contrast

Description

face_constant

evaluation of faces

face_linear

linear effect of face preference

face_quadratic

quadratic effect of face preference

PreferenceHouses#

confidence_judgment incentive_salience reward_valuation visual_place_recognition judgment

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The Preference task battery was adapted from the Pleasantness Rating task described in (Lebreton et al., 2015), in order to capture the neural correlates underlying decision-making for potentially rewarding outcomes (aka “positive-incentive value”) as well as the level of confidence in such decisions. The whole task battery is composed of four tasks, each of them pertaining to the presentation of items of a certain kind; the PreferenceHouses task was dedicated to “houses”. All tasks were organized as a block-design experiment with one condition per trial. Every trial started with a fixation cross, whose duration was jittered between 0.5 seconds and 4.5 seconds, after which a picture of an item was displayed on the screen together with a rating scale and a cursor. Participants were to indicate how pleasant the presented stimulus was by sliding the cursor along the scale. The index- and ring-finger buttons of the response box moved the cursor to the left at low and high speed, respectively, whereas the middle- and little-finger buttons moved it to the right at low and high speed, respectively; the thumb button was used to validate the answer. The scale ranged between 1 and 100. The value 1 corresponded to the choices “unpleasant” or “indifferent”; the middle of the scale corresponded to the choice “pleasant”; and the value 100 corresponded to the choice “very pleasant”. Therefore, the ratings related only to the estimation of the positive-incentive value of the items displayed.

The task was presented twice in two fully dedicated runs. The stimuli were always different between runs of the same task. As a consequence, no stimulus was ever repeated in any trial and, thus, no item was ever assessed more than once by the participants. Although each trial had a variable duration, according to the time spent by the participant in the assessment, no run lasted longer than eight minutes and sixteen seconds. To avoid any selection bias in the sequence of stimuli, the order of their presentation was shuffled across trials and between runs of the same type. This shuffle is embedded in the code of the protocol and, thus, the sequence was determined upon launching it. Consequently, the sequence of stimuli was also random across subjects.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for PreferenceHouses

Condition

Description

house_constant

Classify the level of pleasantness of a house displayed on the screen in terms of willingness to live in that house. This condition serves as an occurrence regressor when formulated as visual evaluation of an item vs. fixation

house_linear

Classify the level of pleasantness of a house displayed on the screen in terms of willingness to live in that house. This condition captures the linear effect of pleasantness (akin to judgement effects) when formulated as visual preference vs. no preference

house_quadratic

Classify the level of pleasantness of a house displayed on the screen in terms of willingness to live in that house. This condition captures the quadratic effect of pleasantness (akin to confidence effects) when formulated as confidence in preference vs. no confidence

Contrasts for PreferenceHouses

Contrast

Description

house_constant

evaluation of houses

house_linear

linear effect of house preference

house_quadratic

quadratic effect of house preference

TheoryOfMind#

theory_of_mind semantic_processing narrative_comprehension

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • Repository

  • See demo

This battery of tasks was adapted from the original task-fMRI localizers of the Saxe Lab, which were intended to identify functional regions-of-interest in the Theory-of-Mind network and Pain Matrix regions. Minor changes were made to the versions of the tasks described herein. Because the cohort of this dataset is composed solely of native French speakers, the verbal stimuli were translated into French; the durations of the reading period and the response period within conditions were therefore slightly increased. The TheoryOfMind task was a localizer intended to identify brain regions involved in theory-of-mind and social cognition by contrasting activation during two distinct story conditions: belief judgments, i.e. reading a false-belief story that portrayed characters with false beliefs about their own reality; and fact judgments, i.e. reading a story about a false photograph, map or sign (Dodell-Feder et al., 2011). The task was organized as a block-design experiment with one condition per trial. Every trial started with a fixation cross of twelve seconds, followed by the main condition, which comprised a reading period of eighteen seconds and a response period of six seconds. During this response period, participants were to judge whether a statement about the story previously displayed was true or false, by pressing the corresponding button of the response box with the index or middle finger, respectively. The total duration of the trial amounted to thirty-six seconds. There were ten trials in a run, followed by an extra fixation-cross period of twelve seconds at the end of the run. Two runs were dedicated to this task in one single session. The designs, i.e. the sequences of conditions across trials, for the two possible runs were pre-determined by the authors of the original study and hard-coded in the original protocol. The IBC-adapted protocols contain exactly the same designs. For all subjects, design 1 was employed for the PA-run and design 2 for the AP-run.
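
The trial structure above is fully deterministic, so the run timeline is easy to reconstruct. Below is a minimal sketch (not the original protocol code) of that timeline; the alternating condition order is only an example, since the actual designs were hard-coded by the original authors:

```python
# Trial structure described above: 12 s fixation, 18 s story, 6 s response.
FIXATION, STORY, RESPONSE = 12.0, 18.0, 6.0
TRIAL = FIXATION + STORY + RESPONSE          # 36 s per trial
N_TRIALS = 10

# Hypothetical condition sequence for one run; "design 1" and "design 2" were
# pre-determined by the original authors, so this ordering is illustrative only.
conditions = ["belief", "photo"] * (N_TRIALS // 2)

events = []
for i, condition in enumerate(conditions):
    trial_start = i * TRIAL
    events.append({"trial_type": condition,
                   "onset": trial_start + FIXATION,   # story starts after fixation
                   "duration": STORY})

run_duration = N_TRIALS * TRIAL + 12.0               # extra 12 s fixation: 372 s
print(run_duration, events[:2])
```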

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for TheoryOfMind

Condition

Description

belief

Read a false-belief story

photo

Read a false-photograph story

Contrasts for TheoryOfMind

Contrast

Description

belief

manipulation of belief judgments

belief-photo

belief vs. factual judgments

photo

manipulation of fact judgments

EmotionalPain#

empathy imagined_emotional_pain imagined_physical_pain narrative_comprehension

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task also belongs to the battery of tasks adapted from the original task-fMRI localizers of the Saxe Lab, which were intended to identify functional regions-of-interest in the Theory-of-Mind network and Pain Matrix regions. The EmotionalPain task was an emotional-pain localizer intended to identify brain regions involved in theory-of-mind and Pain Matrix areas, by contrasting activation during two distinct story conditions: reading stories that portrayed characters suffering from emotional pain or from physical pain (Jacoby et al., 2016). The experimental design of this task is identical to the one employed for the TheoryOfMind localizer, except that the reading period lasted twelve seconds instead of eighteen seconds. During the response period, the participant had to judge the amount of pain experienced by the character(s) portrayed in the previous story. For no pain, they had to press with their thumb on the corresponding button of the response box; for mild pain, they had to press with their index finger; for moderate pain, with the middle finger; and for strong pain, with the ring finger.
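
For reference, the four-level response rule described above can be written out as a simple mapping (the dictionary below is only an illustrative summary of the text, not part of the protocol code):

```python
# Pain rating -> finger/button of the response box, as described above.
PAIN_RESPONSE_BUTTONS = {
    "no pain": "thumb",
    "mild pain": "index",
    "moderate pain": "middle",
    "strong pain": "ring",
}

print(PAIN_RESPONSE_BUTTONS["moderate pain"])  # -> "middle"
```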

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for EmotionalPain

Condition

Description

emotional_pain

Read story about fictional characters suffering from emotional pain

physical_pain

Read story about fictional characters suffering from physical pain

Contrasts for EmotionalPain

Contrast

Description

emotional-physical_pain

emotional vs. physical pain story

emotional_pain

reading emotional pain story

physical_pain

reading physical pain story

PainMovie#

empathy mentalization imagined_emotional_pain imagined_physical_pain theory_of_mind

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Audio device: MRConfon MKII

This task also belongs to the battery of tasks adapted from the original task-fMRI localizers of the Saxe Lab, which were intended to identify functional regions-of-interest in the Theory-of-Mind network and Pain Matrix regions. The PainMovie task was a pain-movie localizer and consisted of displaying “Partly Cloudy”, a six-minute movie from Disney Pixar, in order to study the responses implicated in theory-of-mind and Pain Matrix brain regions (Jacoby et al., 2016, Richardson et al., 2018). Two main conditions were thus hand-coded in the movie, according to (Richardson et al., 2018), as follows: mental movie, in which characters were “mentalizing”; and physical pain movie, in which characters were experiencing physical pain. Such conditions were intended to evoke brain responses from the theory-of-mind and pain-matrix networks, respectively. All moments in the movie not focused on the direct interaction of the main characters were considered as a baseline period.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for PainMovie

Condition

Description

movie_mental

Watch movie-scene wherein characters experience changes in beliefs, desires, and/or emotions

movie_pain

Watch movie-scene wherein characters experience physical pain

Contrasts for PainMovie

Contrast

Description

movie_mental

movie with events about changes in beliefs desires and emotions

movie_mental-pain

mental events vs. physically painful events

movie_pain

movie with physically painful events

VSTM#

visual_orientation shape_recognition visual_form_discrimination visual_buffer visual_working_memory

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • See demo

This battery of tasks was adapted from the control experiment described in (Knops et al., 2014). Minor changes were made for the IBC implementation of this battery; they are described later in this section. In the Visual Short-Term Memory (VSTM) task, participants were presented with a certain number of bars, varying from one to six. Every trial started with the presentation of a black fixation dot in the center of the screen for 0.5 seconds. While still on the screen, the black fixation dot was then displayed together with a certain number of tilted bars - variable between trials from one to six - for 0.15 seconds. Afterwards, a white fixation dot was shown for 1 second. It was next replaced by the presentation of the test stimulus for 1.7 seconds, displaying an identical number of tilted bars in identical positions together with a green fixation dot. The participants were to remember the orientation of the bars from the previous sample and answer with one of the two possible button presses, i.e. with the index or middle finger respectively, depending on whether one of the bars in the current display had changed orientation by 90° or not, which was the case in half of the trials. The test display was replaced by another black fixation dot for a fixed duration of 3.8 seconds. Thus, the trial was 7.15 seconds long. There were seventy-two trials in a run and four runs in one single session. Pairs of runs were launched consecutively. To avoid selection bias in the sequence of stimuli, the order of the trials was shuffled according to numerosity and change of orientation within runs and across participants. Both the response period and the period of the fixation dot at the end of each trial were made constant.
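
As a rough illustration of the trial bookkeeping described above (a minimal sketch, not the original PTB-3 protocol), the 72 trials can be built by fully crossing the six numerosities with the change/no-change factor and shuffling the result, with the fixed within-trial timing summing to 7.15 s:

```python
import itertools
import random

# Numerosities 1-6 crossed with change / no-change, repeated to reach 72 trials.
N_TRIALS = 72
cells = list(itertools.product(range(1, 7), [True, False]))   # 12 design cells
trials = cells * (N_TRIALS // len(cells))                     # 6 repeats of each
random.Random(0).shuffle(trials)

# Fixed trial timing described above, in seconds.
TIMING = {"fixation": 0.5, "sample": 0.15, "delay": 1.0,
          "test": 1.7, "end_fixation": 3.8}
trial_duration = sum(TIMING.values())                         # 7.15 s
print(trial_duration, trials[:5])
```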

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for VSTM

Condition

Description

vstm_constant

Judge whether any bar changed orientation within two consecutive displays of bar sets on the screen, response to numerosity vs. fixation

vstm_linear

Judge whether any bar changed orientation within two consecutive displays of bar sets on the screen, linear response to numerosity

vstm_quadratic

Judge whether any bar changed orientation within two consecutive displays of bar sets on the screen, response to quadratic numerosity effect

Contrasts for VSTM

Contrast

Description

vstm_constant

visual orientation

vstm_linear

linear effect of numerosity in visual orientation

vstm_quadratic

quadratic effect of numerosity in visual orientation

Enumeration#

shape_recognition enumeration visual_working_memory visual_buffer numerosity

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • See demo

The Enumeration task was also part of the battery of tasks adapted from the control experiment described in (Knops et al., 2014). Minor changes were made for the IBC implementation of this battery; they are described later in this section. In this task, participants were presented with a certain number of tilted dark-gray bars on a light-gray background, varying from one to eight. Every trial started with the presentation of a black fixation dot in the center of the screen for 0.5 seconds. While still on the screen, the black fixation dot was then displayed together with a certain number of tilted bars for 0.15 seconds. It was followed by a response period of 1.7 s, in which only a green fixation dot was displayed on the screen. The participants were to remember the number of bars that had just been shown and answer accordingly, by pressing the corresponding button: once with the thumb’s button for one bar; once with the index finger’s button for two bars; once with the middle finger’s button for three bars; once with the ring finger’s button for four bars; twice with the thumb’s button for five bars; twice with the index finger’s button for six bars; twice with the middle finger’s button for seven bars; twice with the ring finger’s button for eight bars. Afterwards, another black fixation dot was displayed for a fixed duration of 7.8 seconds. The trial length was thus 9.95 seconds. There were ninety-six trials in a run and two (consecutive) runs in one single session. To avoid selection bias in the sequence of stimuli, the order of the trials was shuffled according to numerosity within runs and across participants. Both the response period and the period of the fixation dot at the end of each trial were made constant. The answers were registered via a button-press response box instead of an audio recording of oral responses as in the original study.
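
The response rule above follows a regular pattern: one press of thumb/index/middle/ring for one to four bars, and two presses of the same fingers for five to eight bars. A small illustrative helper, with assumed names, could encode it as follows:

```python
FINGERS = ["thumb", "index", "middle", "ring"]

def enumeration_response(n_bars: int) -> tuple[str, int]:
    """Return (finger, number of presses) for a display of n_bars (1-8)."""
    if not 1 <= n_bars <= 8:
        raise ValueError("the task only presents 1 to 8 bars")
    finger = FINGERS[(n_bars - 1) % 4]       # thumb/index/middle/ring, cycled
    presses = 1 if n_bars <= 4 else 2        # one press for 1-4, two for 5-8
    return finger, presses

print(enumeration_response(3))  # ('middle', 1)
print(enumeration_response(7))  # ('middle', 2)
```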

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Enumeration

Condition

Description

enumeration_constant

Occurrence regressor for the enumeration response to constant numerosity when compared against fixation

enumeration_linear

Capture the linear effect of enumeration response to numerosity

enumeration_quadratic

Capture the quadratic effect of enumeration response to numerosity interaction

Contrasts for Enumeration

Contrast

Description

enumeration_constant

enumeration

enumeration_linear

linear effect of numerosity in enumeration

enumeration_quadratic

quadratic effect of numerosity in enumeration

Self#

episodic_memory reading self-reference_effect judgment recognition

Implementation

  • Software: Expyriment 0.7.0 (Python 2.7)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • See demo

The Self task was adapted from (Genon et al., 2014), originally developed to investigate the Self-Reference Effect in older adults. This effect pertains to the encoding mechanism of information referring to the self, characterized as a memory-advantaged process. Consequently, memory-retrieval performance is also better for information encoded in reference to the self than to other people, objects or concepts. The present task was thus composed of two phases, each of them relying on encoding and recognition procedures. The encoding phase was intended to map brain regions related to the encoding of items in reference to the self, whereas the recognition phase was conceived to isolate the memory network specifically involved in the retrieval of those items. The phases were interspersed, so that the recognition phase was always related to the encoding phase presented immediately before. The encoding phase had two blocks. Each block was composed of a set of trials pertaining to the same condition. For both conditions, a different adjective was presented at every trial on the screen. The participants were to judge whether or not the adjective described themselves – self-reference encoding condition – or another person – other-reference encoding condition – by pressing with the index finger on the corresponding button of the response box for “yes” and with the middle finger for “no”. The other person was a public figure in France, around the same age range as the cohort, whose gender matched the gender of every participant.

Two public figures were mentioned, one at a time, across all runs; four public figures – two of each gender – were selected beforehand. In this way, we ensured that all participants were able to successfully characterize the same individuals, holding equal the levels of familiarity and affective attributes with respect to these individuals. In the recognition phase, participants were to remember whether or not the adjectives had also been displayed during the previous encoding phase, by pressing with the index finger on the corresponding button of the response box for “yes” and with the middle finger for “no”. This phase was composed of a single block of trials, pertaining to three categories of conditions. New adjectives were presented during one half of the trials, whereas the other half were in reference to the adjectives displayed in the previous phase. Thus, trials referring to the adjectives from self-reference encoding were part of the self-reference recognition category, and trials referring to the other-reference encoding were part of the other-reference recognition category.

There were four runs in one session. The first three had three phases; the fourth and last run had four phases. Their total durations were twelve and 15.97 minutes, respectively. Blocks of both phases started with an instruction condition of five seconds, containing a visual cue. The cue was related to the judgment to be performed next, according to the type of condition featured in that block. A set of trials, showing different adjectives, was presented afterwards. Each trial had a duration of five seconds, within which a response was to be provided by the participant. During the trials of the encoding blocks, participants had to press the button with their left or right hand, depending on whether or not they believed the adjective on display described someone (i.e. self or other, respectively for the self-reference encoding or other-reference encoding conditions). During the trials of the recognition block, participants had to answer in the same way, depending on whether or not they believed the adjective had been presented before. A fixation cross was always presented between trials, whose duration was jittered between 0.3 seconds and 0.5 seconds. A rest period was introduced between encoding and recognition phases, whose duration was also jittered between ten and fourteen seconds. Long intervals between these two phases, i.e. longer than ten seconds, ensured the measurement of long-term memory processes during the recognition phase, at the age range of the cohort (Newell et al., 1972, Ericsson et al., 1995). Fixation-cross periods of three and fifteen seconds were also introduced at the beginning and end of each run, respectively. Lastly, all adjectives were presented in the lexical form according to the gender of the participant. There were also two sets of adjectives: one set was presented as new adjectives during the recognition phase and the other set was used for all remaining conditions of both phases.

To avoid cognitive bias across the cohort, the sets were switched for the other half of the participants. In addition, adjectives were never repeated across runs, but their sequence was fixed for the same runs and across participants using the same set. Finally, the pseudo-randomization of the trials of the recognition phase was pre-determined by the authors of the original study, according to their category (i.e. self-reference recognition, other-reference recognition or new), such that no more than three consecutive trials of the same category were presented within a block.
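
The actual recognition sequences were pre-determined by the original authors; the sketch below only illustrates the stated constraint (no more than three consecutive trials of the same category) with a simple rejection-sampling shuffle over hypothetical trial counts:

```python
import random

def max_run_length(seq):
    """Length of the longest run of identical consecutive items."""
    longest, run = 1, 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def pseudo_randomize(trials, max_run=3, seed=0):
    """Shuffle until no category appears more than max_run times in a row."""
    rng = random.Random(seed)
    seq = list(trials)
    while True:
        rng.shuffle(seq)
        if max_run_length(seq) <= max_run:
            return seq

# Illustrative trial counts: half new adjectives, half previously encoded ones.
trials = (["self-reference_recognition"] * 8
          + ["other-reference_recognition"] * 8
          + ["new"] * 16)
print(pseudo_randomize(trials)[:10])
```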

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Self

Condition

Description

instructions

Presentation of a question related to the succeeding block

memory

Successful identification with an overt response that a new adjective has never been presented before

no_recognition

Unsuccessful identification with an overt response that a new adjective has been presented before

other-reference_encoding

Judge with overt response whether or not a certain adjective, displayed on the screen, qualifies someone else

other-reference_recognition

Successful recognition with an overt response of an adjective, displayed on the screen, as having been already presented during one “other-reference encoding” trial of the preceding encoding phase

self-reference_encoding

Judge with overt response whether or not a certain adjective, displayed on the screen, qualifies oneself

self-reference_recognition

Successful recognition with an overt response of an adjective, displayed on the screen, as having been already presented during one “self-reference encoding” trial of the preceding encoding phase

Contrasts for Self

Contrast

Description

correct_rejection

identification of a new adjective

encode_other

encoding of adjectives processed with other-reference

encode_self

encoding of adjectives processed with self-reference

encode_self-other

self-reference effect

false_alarm

erroneous response

instructions

read instruction in form of a question

recognition_hit

recognition of adjectives previously displayed

recognition_hit-correct_rejection

recognition of an adjective previously displayed

recognition_other_hit

recognition of adjectives previously displayed with other-reference

recognition_self-other

memory retrieval of encoded information with self-reference

recognition_self_hit

recognition of adjectives previously displayed with self-reference

Bang#

audiovisual_perception speech_perception speech_processing action_perception language_processing

Implementation

  • Software: Expyriment 0.9.0 (Python 2.7)

  • Audio device: MagnaCoil (Magnacoustics)

The Bang task was adapted from the study described in (Campbell et al., 2015), dedicated to investigating aging effects on neural responsiveness during naturalistic viewing. The task relies on watching (viewing and listening to) an edited version of the episode “Bang! You’re Dead” from the TV series “Alfred Hitchcock Presents”. The original black-and-white, 25-minute episode was condensed to seven minutes and fifty-five seconds while preserving its narrative. The plot of the final movie includes scenes with characters talking to each other as well as scenes with no verbal communication. This task was performed during a single run in one unique session. Participants were never informed of the title of the movie before the end of the session. Ten seconds of acquisition were added at the end of the run. The total duration of the run was thus eight minutes and five seconds.

Note: We used the MagnaCoil (Magnacoustics) audio device for all subjects except for subject-08, for whom we employed MRConfon MKII.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Bang

Condition

Description

no_talk

Watch contiguous scenes with no speech

talk

Watch contiguous scenes with speech

Contrasts for Bang

Contrast

Description

no_talk

non-speech section in movie watching

talk

speech sections in movie watching

talk-no_talk

speech vs. non-speech sections in movie watching

Clips#

Implementation

  • Software: Python 2.7

  • Audio device: MRConfon MKII

  • See demo

The Clips battery is an adaptation of (Nishimoto et al., 2011), in which participants were to view naturalistic scenes edited as video clips of ten and a half minutes each. Each run was always dedicated to the data collection of one video clip at a time. As in the original study, runs were grouped into two tasks pertaining to the acquisition of training data and test data, respectively. Scenes from the training-clips (ClipsTrn) task were shown only once. In contrast, runs from the test-clips (ClipsVal) task were composed of approximately one-minute-long excerpts extracted from the clips presented during training. Excerpts were concatenated to construct the sequence of every ClipsVal run; each sequence was predetermined by randomly permuting many excerpts that were repeated ten times each across all runs. The same randomized sequences, employed across ClipsVal runs, were used to collect data from all participants.

There were twelve and nine runs dedicated to the collection of the ClipsTrn and ClipsVal tasks, respectively. Data from nine runs of each task were acquired, interleaved, across three full sessions; the three remaining runs devoted to training-data collection were acquired in half of one last session, before the Retinotopy tasks. To ensure the same topographic reference of the visual field for all participants, a colored fixation point was always presented at the center of the images. This point changed color three times per second to ensure that it remained visible regardless of the colors in the movie. Ten and twenty extra seconds of acquisition were respectively added at the beginning and end of every run. The total duration of each run was thus ten minutes and fifty seconds. Note that images from the test-clips task (ClipsVal) were presented three times to each participant. More precisely, in a given session, three test runs showing the same images were acquired, with the order of images varying between runs. Regardless of the session, one can find the order of images on our GitHub repository for the first, second and third test-clips runs. Lastly, the order of images for the training-clips is the same in all training runs and can be found on our GitHub repository.
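
As a rough illustration of how such a test sequence could be assembled (a minimal sketch, not the original protocol; the excerpt labels, repetition count and run length are assumptions), a pool of excerpt identifiers is repeated, permuted and split into runs:

```python
import random

rng = random.Random(0)

excerpts = [f"excerpt_{i:02d}" for i in range(9)]   # hypothetical excerpt labels
N_REPEATS = 10                                      # repetitions across all runs
EXCERPTS_PER_RUN = 10                               # ~10 x 1 min per run

pool = excerpts * N_REPEATS
rng.shuffle(pool)                                   # pre-determined permutation
runs = [pool[i:i + EXCERPTS_PER_RUN]
        for i in range(0, len(pool), EXCERPTS_PER_RUN)]
print(len(runs), runs[0])
```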

WedgeClock#

lower-left_vision visual_color_discrimination upper-right_vision upper-left_vision lower-right_vision

Implementation

  • Software: Psychopy (Python 2.7)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The Retinotopy protocols on IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. Within the Wedge protocol, the Wedge Clock task consists of visual stimuli of a slowly rotating clockwise checkerboard. The phase of the periodic response at the rotation frequency, measured at each voxel, corresponds to the assessment of perimetric parameters related to the polar angle (Sereno et al., 1995). Under IBC, two runs were dedicated to this task (one run for each phase-encoding direction). Each run was five-and-a-half minutes long. They were programmed for the same session, following the last three training-data runs of the Clips task. Similarly to the Clips task, a point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across all participants. Participants were thus to fixate continuously on this point, whose color flickered between red, green, blue and yellow throughout the entire run. To keep the participants engaged in the task, they were instructed that after each run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of every run.
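
The phase-encoded logic mentioned above (the response phase at the rotation frequency indexes polar angle) can be illustrated with a toy simulation. This is a minimal sketch, not the IBC analysis pipeline; the TR, run length and number of rotation cycles are assumed values:

```python
import numpy as np

TR = 2.0                      # repetition time in seconds (assumed)
N_SCANS = 165                 # ~5.5 min run (assumed)
N_CYCLES = 8                  # full wedge rotations per run (assumed)

rng = np.random.default_rng(0)
# Simulated voxel time course: a sinusoid at the rotation frequency plus noise.
true_phase = 1.2
signal = np.cos(2 * np.pi * N_CYCLES * np.arange(N_SCANS) / N_SCANS - true_phase)
voxel = signal + 0.5 * rng.standard_normal(N_SCANS)

# The phase of the Fourier component at the stimulation frequency recovers the
# voxel's preferred polar angle (up to the hemodynamic delay, ignored here).
spectrum = np.fft.rfft(voxel)
estimated_phase = -np.angle(spectrum[N_CYCLES])
print(round(estimated_phase, 2), "vs. true", true_phase)
```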

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for WedgeClock

Condition

Description

left_meridian

Visual representation in the left half-plane of the visual field delimited by its vertical meridian

lower_left

Visual representation in the lower-left quadrant of the visual field delimited by its vertical and horizontal meridians

lower_meridian

Visual representation in the lower half-plane of the visual field delimited by its horizontal meridian

lower_right

Visual representation in the lower-right quadrant of the visual field delimited by its vertical and horizontal meridians

right_meridian

Visual representation in the right half-plane of the visual field delimited by its vertical meridian

upper_left

Visual representation in the upper-left quadrant of the visual field delimited by its vertical and horizontal meridians

upper_meridian

Visual representation in the upper half-plane of the visual field delimited by its horizontal meridian

upper_right

Visual representation in the upper-right quadrant of the visual field delimited by its vertical and horizontal meridians

Contrasts for WedgeClock

Contrast

Description

left_meridian

visual representation in the left half-plane of the visual field delimited by its vertical meridian

lower_left

visual representation in the lower-left quadrant of the visual field delimited by its vertical and horizontal meridians

lower_meridian

visual representation in the lower half-plane of the visual field delimited by its horizontal meridian

lower_right

visual representation in the lower-right quadrant of the visual field delimited by its vertical and horizontal meridians

right_meridian

visual representation in the right half-plane of the visual field delimited by its vertical meridian

upper_left

visual representation in the upper-left quadrant of the visual field delimited by its vertical and horizontal meridians

upper_meridian

visual representation in the upper half-plane of the visual field delimited by its horizontal meridian

upper_right

visual representation in the upper-right quadrant of the visual field delimited by its vertical and horizontal meridians

WedgeAnti#

lower-left_vision visual_color_discrimination upper-right_vision upper-left_vision lower-right_vision

Implementation

  • Software: Psychopy (Python 2.7)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The Retinotopy protocols on IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. Within the Wedge protocol, the Wedge Anticlock task consists of visual stimuli of a slowly rotating counterclockwise checkerboard. The phase of the periodic response at the rotation frequency, measured at each voxel, corresponds to the assessment of perimetric parameters related to the polar angle (Sereno et al., 1995). Under IBC, two runs were dedicated to this task (one run for each phase-encoding direction). Each run was five-and-a-half minutes long. A point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across all participants. Participants were thus to fixate continuously on this point, whose color flickered between red, green, blue and yellow throughout the entire run. To keep the participants engaged in the task, they were instructed that after each run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of every run.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for WedgeAnti

Condition

Description

left_meridian

Visual representation in the left half-plane of the visual field delimited by its vertical meridian

lower_left

Visual representation in the lower-left quadrant of the visual field delimited by its vertical and horizontal meridians

lower_meridian

Visual representation in the lower half-plane of the visual field delimited by its horizontal meridian

lower_right

Visual representation in the lower-right quadrant of the visual field delimited by its vertical and horizontal meridians

right_meridian

Visual representation in the right half-plane of the visual field delimited by its vertical meridian

upper_left

Visual representation in the upper-left quadrant of the visual field delimited by its vertical and horizontal meridians

upper_meridian

Visual representation in the upper half-plane of the visual field delimited by its horizontal meridian

upper_right

Visual representation in the upper-right quadrant of the visual field delimited by its vertical and horizontal meridians

Contrasts for WedgeAnti

Contrast

Description

left_meridian

visual representation in the left half-plane of the visual field delimited by its vertical meridian

lower_left

visual representation in the lower-left quadrant of the visual field delimited by its vertical and horizontal meridians

lower_meridian

visual representation in the lower half-plane of the visual field delimited by its horizontal meridian

lower_right

visual representation in the lower-right quadrant of the visual field delimited by its vertical and horizontal meridians

right_meridian

visual representation in the right half-plane of the visual field delimited by its vertical meridian

upper_left

visual representation in the upper-left quadrant of the visual field delimited by its vertical and horizontal meridians

upper_meridian

visual representation in the upper half-plane of the visual field delimited by its horizontal meridian

upper_right

visual representation in the upper-right quadrant of the visual field delimited by its vertical and horizontal meridians

ContRing#

visual_color_discrimination foveal_vision mid-peripheral_vision far-peripheral_vision

Implementation

  • Software: Psychopy (Python 2.7)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The Retinotopy protocols on IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. The Contracting Ring task consists of visual stimuli depicting a thick, contracting ring. The phase of the periodic response at the contraction frequency, measured at each voxel, corresponds to the assessment of the perimetric parameters related to eccentricity (Sereno et al., 1995). Under IBC, one run was dedicated to this task (ap phase-encoding direction), which was five-and-a-half minutes long. A point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across all participants. Participants were thus to fixate continuously on this point, whose color flickered between red, green, blue and yellow throughout the entire run. To keep the participants engaged in the task, they were instructed that after the run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of the run.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for ContRing

Condition

Description

foveal

Visual representation in the fovea

middle

Visual representation in the mid-periphery of the visual field

peripheral

Visual representation in the far-periphery of the visual field

Contrasts for ContRing

Contrast

Description

foveal

visual representation in the fovea

middle

visual representation in the mid-periphery of the visual field

peripheral

visual representation in the far-periphery of the visual field

ExpRing#

visual_color_discrimination foveal_vision mid-peripheral_vision far-peripheral_vision

Implementation

  • Software: Psychopy (Python 2.7)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The Retinotopy protocols on IBC include classic retinotopic paradigms, namely the Wedge and the Ring tasks. The Expanding Ring task consists of visual stimuli depicting a thick, dilating ring. The phase of the periodic response at the dilation frequency, measured at each voxel, corresponds to the assessment of the perimetric parameters related to eccentricity (Sereno et al., 1995). Under IBC, one run was dedicated to this task (pa phase-encoding direction), which was five-and-a-half minutes long. A point was displayed at the center of the visual stimulus in order to keep the perimetric origin constant across all participants. Participants were thus to fixate continuously on this point, whose color flickered between red, green, blue and yellow throughout the entire run. To keep the participants engaged in the task, they were instructed that after the run, they would be asked which color had been presented most often. Additionally, ten seconds of a non-flickering, red fixation cross were displayed at the end of the run.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for ExpRing

Condition

Description

foveal

Visual representation in the fovea

middle

Visual representation in the mid-periphery of the visual field

peripheral

Visual representation in the far-periphery of the visual field

Contrasts for ExpRing

Contrast

Description

foveal

visual representation in the fovea

middle

visual representation in the mid-periphery of the visual field

peripheral

visual representation in the far-periphery of the visual field

Raiders#

Implementation

  • Software: Expyriment 0.9.0 (Python 2.7)

  • Audio device: MRConfon MKII

The Raiders task was adapted from (Haxby et al., 2011), in which the full-length action movie Raiders of the Lost Ark was presented to the participants. The main goal of the original study was the estimation of hyperalignment parameters that transform the voxel space of functional data into a feature space of brain responses linked to the visual characteristics of the movie displayed. Similarly, herein, the movie was shown to the IBC participants in contiguous runs determined according to the chapters of the movie defined in the DVD. This task was completed in two sessions. In order to use the acquired fMRI data in train-test split and cross-validation experiments, we performed three extra runs at the end of the second session, in which the first three chapters of the movie were repeated. To account for stabilization of the BOLD signal, ten seconds of acquisition were added at the end of the run. Note: there was some lag between the onset of each run and the initiation of the stimuli (movie), which may vary between runs and subjects. This lag should be taken into account when analyzing the data. Find more details in the section Lags in Raiders movie.

Lec2#

reading word_maintenance working_memory inhibition language_processing

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

This task belongs to a battery of 8 different localizers that tap into a wide array of cognitive functions, provided to us by the Labex Cortex group at the University of Lyon. Originally described in (Perrone-Bertolotti et al., 2012), this task focuses on silent reading. During the task, participants were presented with two intermixed stories, shown word by word at a rapid rate. One of the stories was written in black (on a gray screen) and the other in white. Consecutive words with the same color formed a meaningful and simple short story in French. Participants were instructed to read the black story in order to report it at the end of the block, while ignoring the white one. Each block comprised 400 words, with 200 black words (attend condition) and 200 white words (ignore condition) for the two stories. The time sequence of colors within the 400-word series was randomized, so that participants could not predict whether the subsequent word was to be attended or not; however, the randomization was constrained to forbid series of more than three consecutive words with the same color. Data was acquired in two runs, and each word was presented for 100 ms, with a jittered inter-stimulus interval centered around 700 ms.
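
The color randomization constraint described above (no more than three consecutive words of the same color, with 200 words of each color) can be sketched as follows; this is an illustrative reimplementation in Python, not the original Presentation script:

```python
import random

def color_sequence(n_per_color=200, max_run=3, seed=0):
    """Randomized black/white sequence with runs of at most max_run."""
    rng = random.Random(seed)
    remaining = {"black": n_per_color, "white": n_per_color}
    seq = []
    while any(remaining.values()):
        # Colors still available that would not create a run longer than max_run.
        options = [c for c, left in remaining.items()
                   if left and not (len(seq) >= max_run
                                    and all(s == c for s in seq[-max_run:]))]
        if not options:           # dead end (rare): restart the sequence
            remaining = {"black": n_per_color, "white": n_per_color}
            seq = []
            continue
        choice = rng.choice(options)
        seq.append(choice)
        remaining[choice] -= 1
    return seq

seq = color_sequence()
print(len(seq), seq[:12])
```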

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Lec2

Condition

Description

attend

A black word is rapidly presented and the participant must silently read it to form a short story together with the rest of black words

unattend

A white word is rapidly presented and the participant must ignore it

Contrasts for Lec2

Contrast

Description

attend

response to attended text

attend-unattend

response to attended vs. unattended text

unattend

response to unattended text

Audi#

sound_perception auditory_perception auditory_sentence_recognition music_perception auditory_attention

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Audio device: MagnaCoil (Magnacoustics)

This task belongs to a battery of 8 different localizers that tap into a wide array of cognitive functions, provided to us by the Labex Cortex group at the University of Lyon. This task was originally described in (Perrone-Bertolotti et al., 2012) together with the Lec2 localizer. Participants listened to sounds of several categories, with the instruction that three of them would be presented again at the end of the task, together with three novel sounds, and that they should be able to detect the previously played items. There were three speech and speech-like categories, including sentences told by a computerized voice in a language familiar to the participant (French) or unfamiliar (Suomi), and reversed speech, originally in French (the same sentences as the “French” category, played backwards). These categories were compared with nonspeech-like human sounds (coughing and yawning), music, environmental sounds, and animal sounds. Participants were instructed to close their eyes while listening to three sounds of each category, each lasting 12 s, along with three 12 s intervals with no stimulation, serving as a baseline (Silence). Consecutive sounds were separated by a 3 s silent interval. The sequence was pseudorandom, ensuring that two sounds of the same category did not follow each other.
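
The pseudorandom ordering constraint (no two consecutive sounds of the same category) can be met with a greedy draw from the fullest remaining categories, as sketched below; the category names and the three exemplars per category follow the text, but the code itself is only an illustration, not the original script:

```python
import random
from collections import Counter

def no_adjacent_repeats(counts, seed=0):
    """Order items so that no two consecutive ones share a category."""
    rng = random.Random(seed)
    remaining = Counter(counts)
    sequence, last = [], None
    while sum(remaining.values()):
        candidates = [c for c in remaining if remaining[c] > 0 and c != last]
        best = max(remaining[c] for c in candidates)
        last = rng.choice([c for c in candidates if remaining[c] == best])
        sequence.append(last)
        remaining[last] -= 1
    return sequence

categories = ["speech", "suomi", "reverse", "alphabet", "cough", "yawn",
              "laugh", "tear", "human", "music", "environment", "animals"]
print(no_adjacent_repeats({c: 3 for c in categories})[:12])
```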

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Audi

Condition

Description

alphabet

French voice saying the alphabet

animals

Real-life animal sounds

cough

Concatenated sounds of people coughing

environment

Real-life complex environmental sounds

human

Other human sounds

laugh

Concatenated sounds of people laughing

music

Real-life complex musical sounds

reverse

French speech stimuli played in reverse

silence

Silence, used as a baseline

speech

French speech stimuli

suomi

Suomi speech stimuli

tear

Concatenated sounds of people crying

yawn

Concatenated sounds of people yawning

Contrasts for Audi

Contrast

Description

alphabet

listen to letters

alphabet-silence

listen to letters

animals

listen to animals

animals-silence

listen to animals

cough

listen to coughing

cough-silence

listen to coughing

environment

listen to environment sounds

environment-silence

listen to environment sounds

human

listen to human sounds

human-silence

listen to human sounds

laugh

listen to laugh

laugh-silence

listen to laugh

music

listen to music

music-silence

listen to music

reverse

listen to reversed speech

reverse-silence

listen to reversed speech

silence

listen to silence

speech

listen to speech

speech-silence

listen to speech

suomi

listen to unknown language

suomi-silence

listen to unknown language

tear

listen to tears

tear-silence

listen to tears

yawn

listen to yawning

yawn-silence

listen to yawning

Visu#

visual_representation visual_perception object_categorization reading visual_word_recognition

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

This task belongs to a battery of 8 different localizers that tap into a wide array of cognitive functions, provided to us by the Labex Cortex group at the University of Lyon. This task, described in (Vidal et al., 2010), is a visual odd-ball paradigm in which participants were instructed to press a button (index finger) every time they saw a fruit. Images of the target category and other non-target categories were rapidly presented in a pre-randomized order. Stimuli were presented for a duration of 200 ms every 1000-1200 ms, in series of 5 pictures interleaved with 3-second pause periods during which participants could freely blink. Each non-target category was presented 50 times during the experiment, and data was acquired in two separate runs.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Visu

Condition

Description

animal

Viewing the image of an animal

characters

Viewing a string of random characters

face

Viewing the image of a human face

fruit

Viewing the image of a fruit

house

Viewing the image of a house

pseudoword

Viewing a string that forms a pseudoword

scene

Viewing the image of a naturalistic scene

scrambled

Scrambled image, used as baseline

tool

Viewing the image of a tool

Contrasts for Visu

Contrast

Description

animal

view an animal

animal-scrambled

view an animal

characters

view characters

characters-scrambled

view characters

face

view a face image

face-scrambled

view a face image

house

view a house

house-scrambled

view a house

pseudoword

view a pseudoword

pseudoword-scrambled

view a pseudoword

scene

view a scene

scene-scrambled

view a scene

scrambled

view a scrambled image

target_fruit

view a target object

tool

view a tool

tool-scrambled

view a tool

Lec1#

reading visual_word_recognition visual_pseudoword_recognition visual_string_recognition language_processing

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • Audio device: MagnaCoil (Magnacoustics)

This task belongs to a battery of 8 different localizers that tap into a wide array of cognitive functions, provided to us by the Labex Cortex group at the University of Lyon. This task, described in (Saignavong et al., 2017), was originally used to test whether brain activity can be detected in single trials with intracerebral EEG-fMRI recordings. During the task, participants were presented with three vertically arranged rows, each indicated by the presence of two “+” symbols, one at each side, with empty space between them. For each row, a different type of verbal stimulus was presented, and the participant was instructed to make a decision depending on the type of stimulus. The top row presented words, and the decision was an animacy decision (“Is it a living entity?”). The middle row presented pseudowords, and the decision was whether the pseudoword had one or two syllables. Finally, the bottom row presented consonant strings, and participants were instructed to answer whether the string was all-uppercase or all-lowercase. The first option was selected by pressing with the index finger on the response box, whereas the second option was selected with the middle finger. The trials were presented in blocks, and each block contained a sequence of 5 stimuli for each of the three conditions. The order of these conditions within each block was randomized across blocks, but fixed for all participants. The “+” symbols for the row corresponding to the next condition turned white to indicate which condition was next. There were two runs with 6 blocks each, each block comprising 15 trials, which were presented for 2000 ms, with an inter-stimulus interval of 500 ms.
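
For reference, the three rows, their decisions and the two-button response rule described above can be summarized in a simple mapping (an illustrative summary of the text, not part of the protocol code):

```python
# Condition (row) -> decision asked and the meaning of each response button.
LEC1_DECISIONS = {
    "word":          {"question": "Is it a living entity?",
                      "index": "yes", "middle": "no"},
    "pseudoword":    {"question": "One or two syllables?",
                      "index": "one syllable", "middle": "two syllables"},
    "random_string": {"question": "All uppercase or all lowercase?",
                      "index": "uppercase", "middle": "lowercase"},
}

print(LEC1_DECISIONS["pseudoword"]["index"])  # -> "one syllable"
```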

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Lec1

Condition

Description

pseudoword

A pseudoword is presented and the participant has to answer whether it has one or two syllables

random_string

A string of random consonants is presented and the participant has to answer if it is all-uppercase or all-lowercase

word

A word is presented and the participant has to decide whether it is a living entity or not

Contrasts for Lec1

Contrast

Description

pseudoword

read a pseudoword

pseudoword-random_string

read a pseudoword vs. a random string

random_string

read a random string

word

read a word

word-pseudoword

read a word vs. a pseudoword

word-random_string

read a word vs. a random string

MVEB#

string_maintenance visual_attention visual_buffer visual_working_memory numerosity

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task belongs to a battery of 8 different localizers that tap into a wide array of cognitive functions, provided to us by the Labex Cortex group at the University of Lyon. This task, described in (Hamamé et al., 2012), aims to assess verbal working memory (the name stands for the “verbal working memory” task). The participants were presented with a string of 6 characters, of which two, four or six were letters (the rest were “#” symbols). After the string disappeared, a single letter appeared on the screen. The participant then had to indicate whether this single letter was part of the previously presented string. This was indicated by the participant with a 5-button response box, with one button for “yes” (index finger) and another for “no” (middle finger). The cognitive load was manipulated with the number of letters, and one condition was included in which all the letters of the initial string were identical. Each trial commenced with the presentation of a 1500 ms fixation cross, followed by the array of characters (probe) for 1500 ms; after an intermediate period of 3000 ms, the cue character was presented for 1500 ms. Thirty-six trials were presented in each run, and data was acquired in two separate runs.
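
As a rough illustration of the run structure above (a minimal sketch, not the original Presentation script), the six load conditions can be balanced over the 36 trials and combined with the fixed within-trial timing:

```python
import itertools
import random

# Fixed within-trial timing described above, in milliseconds.
TIMING_MS = {"fixation": 1500, "probe": 1500, "delay": 3000, "cue": 1500}

# 2/4/6 letters x different/same, balanced over the 36 trials of one run.
conditions = [f"{n}_letters_{kind}"
              for n, kind in itertools.product((2, 4, 6), ("different", "same"))]
trials = conditions * (36 // len(conditions))     # 6 repetitions of each condition
random.Random(0).shuffle(trials)

trial_ms = sum(TIMING_MS.values())                # 7500 ms per trial
print(trial_ms, trials[:6])
```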

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for MVEB

Condition

Description

2_letters_different

The subject must remember 2 characters from a presented string of different letters

2_letters_same

The subject must remember the presented character from a string of 2 identical letters

4_letters_different

The subject must remember 4 characters from a presented string of different letters

4_letters_same

The subject must remember the presented character from a string of 4 identical letters

6_letters_different

The subject must remember 6 characters from a presented string of different letters

6_letters_same

The subject must remember the presented character from a string of 6 identical letters

letter_occurrence_response

Subject’s index finger response, indicating whether the letter was part of the previously presented string

Contrasts for MVEB

Contrast

Description

2_letters_different

maintaining two letters

2_letters_different-same

maintaining two letters vs. one

2_letters_same

maintaining one letter

4_letters_different

maintaining four letters

4_letters_different-same

maintaining four letters vs. one

4_letters_same

maintaining one letter

6_letters_different

maintaining six letters

6_letters_different-2_letters_different

maintaining six letters vs. two

6_letters_different-same

maintaining six letters vs. one

6_letters_same

maintaining one letter

letter_occurrence_response

respond by button pressing whether the letter currently displayed was presented before or not

MVIS#

visual_attention spatial_working_memory visual_working_memory numerosity

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task belongs to a battery of 8 different localizers that tap into a wide array of cognitive functions, provided to us by the Labex Cortex group at the University of Lyon. This task, described in (Hamamé et al., 2012), and whose name stands for the visuo-spatial working memory task, consists of a series of trials in which the participant was presented with a 4x4 grid where two, four or six dots appeared at different positions; after that, the grid became empty and, finally, a single dot appeared on it. The participant then had to indicate whether this single dot was in the same position as any of the previously presented ones. This was indicated by the participant with a 5-button response box, with one button for “yes” (index finger) and another for “no” (middle finger). The cognitive load was manipulated with the number of dots, and one condition was included in which one of the dots was highlighted, signifying that it was the only position to retain. Each trial commenced with the presentation of a 1500 ms fixation cross, followed by the array of dots (probe) for 1500 ms. The empty grid was presented for 3000 ms, and the cue dot was presented for 1500 ms. Thirty-six trials were presented in each run. The data was acquired in two runs.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for MVIS

Condition

Description

2_dots

2 positions to remember

2_dots_control

1 position to remember because of the highlighted dot

4_dots

4 positions to remember

4_dots_control

1 position to remember because of the highlighted dot

6_dots

6 positions to remember

6_dots_control

1 position to remember because of the highlighted dot

dot_displacement_response

Subject’s index finger response, indicating whether the dot was in the same position as any of the previously presented ones

Contrasts for MVIS

Contrast

Description

2_dots-2_dots_control

maintain position of two dots vs. one

4_dots-4_dots_control

maintain position of four dots vs. one

6_dots-2_dots

maintain position of six dots vs. two

6_dots-6_dots_control

maintain position of six dots vs. one

dot_displacement_response

respond by button pressing whether the dot currently displayed shares the same location as any of those shown before

dots-control

maintain position of two to six dots vs. one

Moto#

reading saccadic_eye_movement

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: In-house custom-made sticks featuring one-top button, each one to be used in each hand

This task belongs to a battery of eight localizers, provided to us by the Labex Cortex group at the University of Lyon, that tap a wide array of cognitive functions. It is a basic motor localizer for several body parts. The participants were presented with three small gray squares over a black background image. At the beginning of each block, a text prompt appeared on screen to indicate the body part to be moved next. Afterwards, the left and right squares turned white to indicate movement of the corresponding side. For example, in the hands condition, the participant had to perform a small movement of the left hand when the left square turned white, and likewise for the right hand. Ten movements were prompted in each block, five for the right body part and five for the left, consecutively for each side and always in the same order. There were two distinct blocks for each body part. In each trial, the white square was presented for 1000 ms, with 1500 ms between trials, for a total duration of 25 s per block, with a total of 12 blocks. Data were acquired in two separate runs.
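As a quick sanity check on the numbers above (10 cued movements per block, 1000 ms cue, 1500 ms between trials, 12 blocks), here is a minimal Python sketch of the block timing; the variable names are ours, not those of the original Presentation script, and the instruction prompt at the start of each block is not counted.

```python
# Nominal Moto block timing, reconstructed from the description above.
# 10 cued movements per block: 1.0 s white square + 1.5 s gap = 2.5 s each.
# The instruction prompt shown at the start of each block is not included.
CUE_S, GAP_S, MOVEMENTS_PER_BLOCK, N_BLOCKS = 1.0, 1.5, 10, 12

block_duration = MOVEMENTS_PER_BLOCK * (CUE_S + GAP_S)   # 25.0 s, as stated
print(f"block duration: {block_duration} s")
print(f"movement time across a run: {N_BLOCKS * block_duration} s")  # 300 s
```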

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Moto

Condition

Description

finger_left

Movement of the left index finger, indicated by a button-press

finger_right

Movement of the right index finger, indicated by a button-press

fixation

Gaze fixation on the central square

foot_left

Movement of the left foot

foot_right

Movement of the right foot

hand_left

Movement of the left hand

hand_right

Movement of the right hand

saccade_left

Movement of the eyes to the left

saccade_right

Movement of the eyes to the right

tongue_left

Movement of the tongue to the left

tongue_right

Movement of the tongue to the right

Contrasts for Moto

Contrast

Description

finger_left-fixation

left finger tapping vs. fixation

finger_right-fixation

right finger tapping vs. fixation

foot_left-fixation

move left foot vs. fixation

foot_right-fixation

move right foot vs. fixation

hand_left-fixation

move left hand vs. fixation

hand_right-fixation

move right hand vs. fixation

instructions

read instructions

saccade-fixation

saccade vs. fixation

tongue-fixation

move tongue vs. fixation

MCSE#

lower-right_vision lower-left_vision upper-right_vision visual_search upper-left_vision

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • See demo

This task belongs to a battery of eight localizers, provided to us by the Labex Cortex group at the University of Lyon, that tap a wide array of cognitive functions. The task, described in (Ossandón et al., 2012), was originally used to study whether visual search for a salient target can be thought of as a purely bottom-up process, or whether it requires input from top-down attentional processes. Each trial consisted of the presentation of an array of 35 “L” letters, rotated at different angles, together with a target “T” letter (36 stimuli in total). Subjects were instructed to search for the target and indicate whether it was on the left or right side of the grid, by pressing with the index or middle finger, respectively, on a 5-button response box. There were two conditions: high salience (the target is gray while the other stimuli are black) and low salience (all stimuli are gray). The two conditions were presented in alternating blocks, with 6 blocks of 10 trials each. Each trial was presented for 3 s with an inter-stimulus interval of 1 s. There was also a 20 s fixation cross between blocks. Data were acquired in two separate runs.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for MCSE

Condition

Description

high_salience_left

Looking for a salient letter in the left visual field

high_salience_right

Looking for a salient letter in the right visual field

low_salience_left

Looking for a non-salient letter in the left visual field

low_salience_right

Looking for a non-salient letter in the right visual field

Contrasts for MCSE

Contrast

Description

high-low_salience

looking for a high-salient symbol

high_salience_left

looking for a salient symbol in left visual field

high_salience_right

looking for a salient symbol in right visual field

low+high_salience

looking for a symbol

low-high_salience

looking for a low-salient symbol

low_salience_left

looking for a low-salient symbol in left visual field

low_salience_right

looking for a low-salient symbol in right visual field

salience_left-right

looking for a symbol in left vs. right visual field

salience_right-left

looking for a symbol in right vs. left visual field

Audio#

sound_perception auditory_perception music_perception auditory_attention speech_perception

Implementation

  • Software: Expyriment 0.9.0 (Python 3.6)

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • Audio device: MagnaCoil (Magnacoustics)

This task, originally described in (Santoro et al., 2017), is an auditory localizer. During each run, the participants were presented with sounds from different categories, and were instructed to press a button with the index finger whenever two consecutive sounds were identical. From a group of 288 sounds, divided into 6 different categories, 4 sets were created. Each set contained 72 sounds drawn from the 6 categories, and each sound was present in only one of the sets. Furthermore, each set was pre-randomized in 3 different orders, and the same sequences were used for all participants. On top of the 72 sounds, each run also included 5 silences and 5 repeated sounds from the original 72. In total, each run consisted of 82 trials of 2 seconds each. It is important to note that the data for this task were acquired using an interrupted acquisition sequence, to minimize the effect that scanner noise can have on the auditory processing targeted by the experiment. To this end, the inter-stimulus interval followed a repeating sequence of 4, 4, and 6 seconds, meaning that the interval was 4 s after the first trial, 4 s after the second, 6 s after the third, after which the sequence repeated until the end of the run. The variability of the ISI and the silent trials prevented the stimulus presentation from being predictable in time.

Note: We used the MagnaCoil (Magnacoustics) audio device for all subjects except for subject-08, for whom we employed Optoacoustics.
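The repeating 4-4-6 s gap pattern described above fully determines the nominal stimulus onsets. Below is a minimal Python sketch (an illustration, not the original Expyriment code) that generates those onsets for the 82 two-second trials of a run, assuming the 4/4/6 s values are offset-to-onset gaps.

```python
from itertools import cycle

# Illustrative onset generator for the Audio task, based on the description
# above: 82 trials of 2 s each, separated by a repeating 4 s, 4 s, 6 s
# inter-stimulus-interval pattern (sparse acquisition between stimuli).
# Whether the gaps are offset-to-onset is an assumption of this sketch.
N_TRIALS, STIM_S = 82, 2.0
isi_pattern = cycle([4.0, 4.0, 6.0])

onsets, t = [], 0.0
for _ in range(N_TRIALS):
    onsets.append(t)
    t += STIM_S + next(isi_pattern)

print(onsets[:4])                      # [0.0, 6.0, 12.0, 20.0]
print(f"last onset = {onsets[-1]:.0f} s")   # 540 s under these assumptions
```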

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Audio

Condition

Description

animal

Sound of animal noises

catch

Repetition of the previous sound

music

Musical sound

nature

Naturalistic sound

silence

No sound

speech

Human speech sound

tool

Sound of tool usage

voice

Non-speech human sound

Contrasts for Audio

Contrast

Description

animal

listen to animals

animal-others

listen to animals vs. other sounds

animal-silence

listen to animals vs. silence

mean-silence

listening to sounds vs. silence

music

listen to music

music-others

listen to music vs. other sounds

music-silence

listen to music vs. silence

nature

listen to nature

nature-others

listen to nature vs. other sounds

nature-silence

listen to nature vs. silence

speech

listen to speech

speech-others

listen to speech vs. other sounds

speech-silence

listen to speech vs. silence

tool

listen to tool

tool-others

listen to tool vs. other sounds

tool-silence

listen to tool vs. silence

voice

listen to voice

voice-others

listen to voice vs. other sounds

voice-silence

listen to voice vs. silence

Attention#

spatial_attention attentional_focusing selective_attention saccadic_eye_movement

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning.

The Attention task is a version of the classical flanker task (Eriksen and Eriksen, 1974), in which the participant has to judge the direction the target flanker (an arrow) is pointing (left/right). The target is surrounded by four other flankers that can be congruent or incongruent with it, thus capturing selective attention and inhibitory processes. Two different buttons (the index and middle fingers’ buttons, respectively) were assigned to left/right responses, and the participant had to indicate the direction of the central arrow in a horizontal group of 5 arrows. In each trial, one or two positional cues were presented above and below the center of the screen. When one cue was given, the flankers would appear centered around it, whereas when two cues were presented, the flankers would appear centered around one of them. The four flankers surrounding the target always point in the same direction, and can be congruent or incongruent with the direction the target flanker is facing. The task was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions.

For the original version of this task, the authors provide a simulator, which contains the original design.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Attention

Condition

Description

congruent

The stimulus is congruent (same direction) with the rest of the arrows shown.

double_cue

The stimulus is not spatially cued, so the subject doesn’t know where the arrows will be shown (both stars appear).

incongruent

The stimulus is not congruent (opposite direction) with the rest of the arrows shown.

spatial

The stimulus is spatially cued, so the subject knows where the arrows will be shown (only one star appears).

Contrasts for Attention

Contrast

Description

double_congruent

no spatial cue + no distractors in the probe

double_cue

cues appear in both possible locations of the probe at the same time

double_incongruent

no spatial cue + distractors in the probe

double_incongruent-double_congruent

ignore distractors vs. no distractors without spatial cue

incongruent-congruent

ignore distractors vs. no distractors

spatial_congruent

cued probe no distractors

spatial_cue

cued probe

spatial_cue-double_cue

cued vs. uncued probe

spatial_incongruent

cued probe with distractors in the probe

spatial_incongruent-spatial_congruent

ignore distractors vs. no distractors with spatial cue

StopSignal#

shape_recognition proactive_control shape_perception

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning. The StopSignal task was originally used to localize activation related to the inhibition of a prominent motor response (Bissett and Logan, 2011).

Four different polygonal shapes composed the set from which one was presented in each trial. Two of them were assigned to the button corresponding to the index finger, and two to the button corresponding to the middle finger. The participants were instructed to press the correct button as fast as possible, except if a red star appeared on top of the target stimulus. There were 12 practice trials followed by 123 test trials divided into 3 blocks of 41 trials each, with a 9-second resting period between blocks. During practice, feedback was provided to indicate correct and incorrect responses, as well as responses that were too slow. No stop trials (red star) were present during practice, although the instructions pertaining to the red star were presented before practice; this was meant to build up a prepotent motor response in order to better capture inhibitory processes. On stop trials, there was a jittered delay between the target stimulus and the stop signal that ranged from 400 to 1000 ms. The duration of the stop signal was fixed at 500 ms, the duration of the target stimulus was 850 ms, and the inter-trial fixation cross duration was centered around 2250 ms. The task was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions.
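To make the stop-trial timing above concrete, here is a small Python sketch of how one stop trial could be laid out, with the stop-signal delay jittered in the 400–1000 ms range quoted above. The uniform draw, the event names and the assumption that the stop signal follows target onset are ours, made for illustration; this is not the original JavaScript implementation.

```python
import random

# Illustrative stop-trial timing for the StopSignal task, using the values
# quoted above: target shown for 850 ms, stop signal (red star) for 500 ms,
# stop-signal delay jittered between 400 and 1000 ms. The uniform draw and
# the event names are assumptions made for this sketch.
TARGET_MS, STOP_SIGNAL_MS = 850, 500

def stop_trial(t0_ms=0, rng=random):
    ssd_ms = rng.uniform(400, 1000)            # jittered stop-signal delay
    return [
        (t0_ms, "target_onset", TARGET_MS),
        (t0_ms + ssd_ms, "stop_signal_onset", STOP_SIGNAL_MS),
    ]

print(stop_trial(rng=random.Random(0)))
```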

For the original version of this task, the authors provide a simulator which contains the original design.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for StopSignal

Condition

Description

go

Respond to the stimulus

stop

Hold motor response

Contrasts for StopSignal

Contrast

Description

go

shape recognition

stop

shape recognition, stopped response

stop-go

response inhibition

TwoByTwo#

visual_perception cue_switch

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning.

The TwoByTwo protocol was designed to study responses to task switching and cue switching on every trial, with the aim of assessing the activity elicited by switching either or both the task and the cue, and how switching one affects the response to the other. It consisted of presenting colored single-digit numbers from 1 to 9, preceded by a cue string indicating which task must be performed. For each trial, the task could either be to judge whether the number is greater or less than 5, or to judge whether the digit is colored blue or orange. For each of the two tasks, two different strings could be used as the cue: for the first task, the cue could display either ‘Magnitude’ or ‘High/Low’, both indicating that the participant must judge the quantity; for the second task, the cue could read either ‘Color’ or ‘Orange/Blue’, both indicating that the task is to judge the color. Two different buttons (index/middle finger) were assigned to the orange/high and blue/low options, respectively. The task is composed of 16 practice trials, followed by 240 test trials divided into 3 blocks of 80 trials each. The order of cue and task switching was randomized. The task was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions.
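The condition names listed further below combine two binary factors, task switch/stay and cue switch/stay, that are derived by comparing each trial with the preceding ones. The Python sketch below spells out that labeling logic; it is our illustration, not the expfactory code, and the example trial sequence is made up.

```python
# Illustrative derivation of the TwoByTwo taskstay/taskswitch and
# cuestay/cueswitch labels. Cue strings follow the description above; the
# example trial list is invented. Note that cue stay/switch is judged
# against the last cue used for the *current* task, as described below.
CUE_TO_TASK = {
    "Magnitude": "number", "High/Low": "number",
    "Color": "color", "Orange/Blue": "color",
}

def label_trials(cues):
    """Return a switch label for every trial after the first."""
    labels, last_cue_for_task, prev_task = [], {}, None
    for cue in cues:
        task = CUE_TO_TASK[cue]
        if prev_task is not None:
            task_lab = "taskstay" if task == prev_task else "taskswitch"
            ref = last_cue_for_task.get(task)   # last cue used for this task
            # first occurrence of a task has no reference cue; for simplicity
            # this sketch labels it "cueswitch"
            cue_lab = "cuestay" if cue == ref else "cueswitch"
            labels.append(f"{task_lab}_{cue_lab}")
        last_cue_for_task[task] = cue
        prev_task = task
    return labels

print(label_trials(["Color", "Orange/Blue", "High/Low", "High/Low", "Color"]))
# -> ['taskstay_cueswitch', 'taskswitch_cueswitch',
#     'taskstay_cuestay', 'taskswitch_cueswitch']
```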

For the original version of this task, the authors provide a simulator, which contains a slightly different version of the task that switches between three different tasks instead of two.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for TwoByTwo

Condition

Description

cue_taskstay_cuestay

Appearance of the cue on screen when both the task and the cue are the same with respect to the previous trial

cue_taskstay_cueswitch

Appearance of the cue on screen when only the cue switches with respect of the previous trial, for example the color task is repeated but the cue changes from ‘Color’ to ‘Orange/Blue’

cue_taskswitch_cuestay

Appearance of the cue on screen when the task switches but the cue stays the same it was the previous trial for that task. For example, the task switches from color to number and the presented cue is the same as the previous number trial

cue_taskswitch_cueswitch

Appearance of the cue on screen when both the task and the cue switch, for example the task goes from color to number and the cue changes from ‘Magnitude’ to ‘High/Low’ compared to the previous number trial

stim_taskstay_cuestay

Appearance of the stimulus on screen when both the task and the cue are the same with respect to the previous trial

stim_taskstay_cueswitch

Appearance of the stimulus on screen when only the cue switches with respect of the previous trial, for example the color task is repeated but the cue changes from ‘Color’ to ‘Orange/Blue’

stim_taskswitch_cuestay

Appearance of the stimulus on screen when the task switches but the cue stays the same it was the previous trial for that task. For example, the task switches from color to number and the presented cue is the same as the previous number trial

stim_taskswitch_cueswitch

Appearance of the stimulus on screen when both the task and the cue switch, for example the task goes from color to number and the cue changes from ‘Magnitude’ to ‘High/Low’ compared to the previous number trial

Contrasts for TwoByTwo

Contrast

Description

cue_switch-stay

effect of cue switch

cue_taskstay_cuestay

both task and cue repeats

cue_taskstay_cueswitch

task repeats cue switch

cue_taskswitch_cuestay

task switches, cue repeats

cue_taskswitch_cueswitch

both task and cue switch

stim_taskstay_cuestay

both task and cue repeats

stim_taskstay_cueswitch

task repeats cue switch

stim_taskswitch_cuestay

task switches, cue repeats

stim_taskswitch_cueswitch

both task and cue switch

task_switch-stay

effect of task switch

Discount#

incentive_salience selective_control

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning.

Discount is a decision-making task in which the participant has to decide whether to take a figurative amount of 20 dollars today or a larger amount in a set number of days. The task is composed of 1 practice trial followed by 120 test trials, divided into 2 blocks of 60 trials each. The amount of money and the number of days differ from trial to trial, and each trial lasts 4 seconds. The task was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions.

For the original version of this task, the authors provide a simulator, which contains a slightly different version of the task in which participants choose between two different amounts at two different delays, instead of the fixed 20-dollars-today set-up.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Discount

Condition

Description

amount

Effect of reward gain

delay

Effect of reward delay

Contrasts for Discount

Contrast

Description

amount

effect of reward gain

delay

effect of delay on reward

SelectiveStopSignal#

shape_recognition proactive_control shape_perception

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning.

Similar to the StopSignal task, the SelectiveStopSignal task required participants to refrain from responding if a red star appeared after the target stimulus was presented. In this task, however, the red star only indicates the need to inhibit the motor response on one of the two sides (the critical side), while it should be ignored on the other (the noncritical side). The motor response is given by pressing the corresponding button of the response box with the index finger. The task is composed of 12 practice trials, followed by 250 test trials divided into 5 blocks of 50 trials each. The task was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions.

For the original version of this task, the authors provide a simulator which contains the original design.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for SelectiveStopSignal

Condition

Description

go_critical

Answer to the visual stimulus (critical side)

go_noncritical

Answer to the visual stimulus (noncritical side)

ignore

Answer regardless of the stop signal

stop

Hold motor response

Contrasts for SelectiveStopSignal

Contrast

Description

go_critical

respond with the correct finger depending on the image displayed (side instructed to stop if the stop signal appears)

go_critical-stop

inhibit the motor response

go_noncritical

respond with the correct finger depending on the image displayed (side instructed to ignore the stop signal)

go_noncritical-ignore

ignore stop signal vs. simply respond

ignore

respond anyway even if the stop signal appears

ignore-stop

ignore stop signal vs. inhibit motor response

stop

stop the response if the stop signal appears

stop-ignore

inhibit motor response vs. ignore stop signal

Stroop#

visual_perception conflict_detection proactive_control

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning.

In this adaptation of the classic Stroop task (Stroop, 1935), the participants must press one of three buttons depending on the color of the presented word. In contrast to the classic pen-and-paper version of the task, the congruent and incongruent trials are intermixed. The three words/colors presented were red, green and blue, whose button presses corresponded on the response box to the index, middle and ring fingers, respectively.

For the original version of this task, the authors provide a simulator which contains the original design.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Stroop

Condition

Description

congruent

Color and word are the same

incongruent

Color and word are different

Contrasts for Stroop

Contrast

Description

congruent

word and word color are the same

incongruent

word and color are not the same

incongruent-congruent

conflict between automatic and instructed response

ColumbiaCards#

risk_aversion risk_processing reward_processing

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning.

The ColumbiaCards task is a gambling task in which the participants are presented with a set of cards facing down. In each trial, a different number of cards appears, and the participant is informed of the amount gained per good card uncovered, the amount lost when a bad card is uncovered, and the number of bad cards in the set. The participant can uncover as many cards as they want by pressing the index finger’s button on the response box, before pressing the middle finger’s button to end the trial and start the next one. Uncovering a bad card automatically ends the trial. In each trial, the total number of cards, the number of bad cards, the amount gained per card uncovered and the amount lost if a bad card is uncovered all change. The order of the cards is pre-determined for each trial, but the participant does not know it. The task is composed of 88 trials divided into 4 blocks of 22 trials each, and was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions.
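The three parametric conditions listed below (gain, loss, num_loss_cards) are exactly the quantities a participant needs to weigh when deciding whether to turn over another card. As an illustration of that trade-off (not part of the task code), the following Python sketch computes the expected value of uncovering one more card, assuming cards are drawn uniformly from those still face down; the parameter names and example numbers are ours.

```python
# Expected value of uncovering one more card in a ColumbiaCards trial,
# assuming every face-down card is equally likely to be the next one.
# Parameter names are ours; the task itself only displays these quantities.
def expected_value_next_card(gain_per_good, loss_amount, n_loss_cards, n_remaining):
    p_loss = n_loss_cards / n_remaining
    return (1 - p_loss) * gain_per_good - p_loss * loss_amount

# Hypothetical example: 24 cards left, 3 of them bad, +10 per good card,
# -250 if a bad card is uncovered.
print(expected_value_next_card(10, 250, 3, 24))   # -> -22.5 (negative EV)
```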

For the original version of this task, the authors provide a simulator which contains the original design.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for ColumbiaCards

Condition

Description

gain

Expected gain in gambling

loss

Expected loss in gambling

num_loss_cards

Probability of losing in gambling

Contrasts for ColumbiaCards

Contrast

Description

gain

expected gain

loss

expected loss

num_loss_cards

probability of losing

DotPatterns#

shape_recognition proactive_control

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning.

The DotPatterns task presents the participant with pairs of stimuli (a cue followed by a probe), separated by a fixation cross. The participant has to press a button (index finger) as fast as possible after the presentation of the probe, and only one specific cue-probe combination is instructed to be responded to differently. This task was designed to capture activation related to the expectancy of the probe elicited by the correct cue. The task is composed of 160 trials divided into 4 blocks of 40 trials each. Each cue and probe lasted 500 ms, with the fixation cross separating them lasting 2000 ms. It was acquired in two runs, within the same session as other tasks from the battery and using different phase-encoding directions.
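The contrast table below uses the conventional AX/AY/BX/BY shorthand for the four cue-probe pair types. The small Python mapping here (an illustration, not the task code) spells out how the condition names relate to those labels.

```python
# Mapping between the DotPatterns condition names used in this documentation
# and the AX-CPT shorthand used in the contrast table below
# (A = correct cue, B = incorrect cue, X = correct probe, Y = incorrect probe).
PAIR_LABELS = {
    "correct_cue_correct_probe": "AX",      # target pair
    "correct_cue_incorrect_probe": "AY",
    "incorrect_cue_correct_probe": "BX",
    "incorrect_cue_incorrect_probe": "BY",
}
for condition, shorthand in PAIR_LABELS.items():
    print(f"{shorthand}: {condition}")
```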

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for DotPatterns

Condition

Description

correct_cue_correct_probe

Target pair, captures expectancy after correct cue

correct_cue_incorrect_probe

Nontarget pair that also captures expectancy after correct cue

cue

Attend to the cue stimulus

incorrect_cue_correct_probe

Incorrect pair. The probe is correct but the cue is not

incorrect_cue_incorrect_probe

Incorrect pair, both are incorrect

Contrasts for DotPatterns

Contrast

Description

correct_cue-incorrect_cue

effect of cognitive control

correct_cue_correct_probe

both cue and probe are correct (AX)

correct_cue_incorrect_probe

the cue is correct but the probe is not (AY)

correct_cue_incorrect_probe-correct_cue_correct_probe

incorrect vs. correct probe with correct cue

correct_cue_incorrect_probe-incorrect_cue_correct_probe

effect of cognitive control

cue

attend to cue

incorrect_cue_correct_probe

cue is incorrect but probe is correct (BX)

incorrect_cue_incorrect_probe

both cue and probe are incorrect (BY)

incorrect_cue_incorrect_probe-correct_cue_incorrect_probe

effect of cognitive control

incorrect_cue_incorrect_probe-incorrect_cue_correct_probe

shape recognition

incorrect_probe-correct_probe

shape recognition

WardAndAllport#

goal_hierarchy visual_perception search_depth planning working_memory

Implementation

  • Software: JavaScript, Python 2.7

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

This task is part of a battery of tasks from the Experiment Factory, published in (Eisenberg et al., 2017) and presented using the expfactory-python package. The battery was used to capture several aspects of self-regulation, including behavioral inhibition, decision making and planning abilities, among others. Our adjustments consisted of translating all written stimuli and instructions into French and fixing a total time limit for those experiments that otherwise let the participants respond at their own pace. All these modifications were made with great care not to alter the psychological states that the original tasks were designed to capture during scanning.

The WardAndAllport task is a digital version of the WATT3 task (Ward & Allport, 1997; Shallice, 1982), and its main purpose is to capture activation related to planning abilities. To do so, the task uses a factorial manipulation of two parameters: search depth and goal hierarchy. Search depth involves mentally constructing the steps necessary to reach the goal state and the interdependency between those steps; it is expressed by the presence or absence of intermediate movements necessary for an optimal solution of each problem. Goal hierarchy refers to whether the order in which the three balls have to be put in their goal positions can be completely extracted from looking at the goal state, or whether it requires the participant to integrate information between the goal and starting states (resulting in unambiguous or partially ambiguous goal states, respectively). Detailed explanations and examples of each of the four categories can be found in Kaller et al., 2011.
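The four planning conditions listed further below are simply the crossing of these two factors. A short Python sketch (illustrative only; the condition names are taken from the table below) makes the factorial structure explicit.

```python
from itertools import product

# The 2x2 factorial design of WardAndAllport: goal hierarchy (unambiguous /
# partially ambiguous goal state) crossed with search depth (direct /
# intermediate move). The names match the conditions listed below.
goal_hierarchy = ["unambiguous", "ambiguous"]
search_depth = ["direct", "intermediate"]
conditions = [f"planning_{g}_{s}" for g, s in product(goal_hierarchy, search_depth)]
print(conditions)
# ['planning_unambiguous_direct', 'planning_unambiguous_intermediate',
#  'planning_ambiguous_direct', 'planning_ambiguous_intermediate']
```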

The task was divided into 4 practice trials, followed by 48 test trials divided into 3 blocks of 14 trials each, separated by 10-second resting periods. Data were only acquired during the test trials, although the practice trials were also performed inside the scanner with the corresponding equipment. In each trial, the participant saw two configurations of the towers: the test towers on the left and the target towers on the right. The towers on the right showed the final configuration of balls required to complete the trial. Three buttons were assigned to the left (index finger’s button), middle (middle finger’s button) and right (ring finger’s button) columns respectively, and each button press would either take the upper ball of the selected column or drop the ball in hand at the top of the selected column. In the upper-left corner, a gray square with the text “Ball in hand” showed the ball currently in hand. All trials could be solved in 3 movements, counting taking a ball and putting it elsewhere as a single movement. The time between the end of one trial and the beginning of the next was 1000 ms.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for WardAndAllport

Condition

Description

planning_ambiguous_direct

Partially ambiguous goal state without intermediate movement

planning_ambiguous_intermediate

Partially ambiguous goal state with intermediate movement

planning_unambiguous_direct

Unambiguous goal state without intermediate movement

planning_unambiguous_intermediate

Unambiguous goal state with intermediate movement

Contrasts for WardAndAllport

Contrast

Description

ambiguous-unambiguous

effect of goal hierarchy

intermediate-direct

effect of search depth

move_ambiguous_direct

complex goal hierarchy + simple search depth

move_ambiguous_intermediate

complex goal hierarchy + complex search depth

move_unambiguous_direct

simple goal hierarchy + simple search depth

move_unambiguous_intermediate

simple goal hierarchy + complex search depth

planning_ambiguous_direct

complex goal hierarchy + simple search depth

planning_ambiguous_intermediate

complex goal hierarchy + complex search depth

planning_unambiguous_direct

simple goal hierarchy + simple search depth

planning_unambiguous_intermediate

simple goal hierarchy + complex search depth

BiologicalMotion1#

local_motion_coherence vertical_flip biological_motion global_motion_coherence motion_detection

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • See demo

The phenomenon known as biological motion was first described in (Johansson, 1973), and consists of point-light displays arranged and moving in a way that resembles a person moving. The task we used was originally developed by (Chang et al., 2018). During the task, the participants were shown a point-light “walker” and had to decide whether the walker’s orientation was to the left or right, by pressing the index finger’s button or the middle finger’s button on the response box, respectively. The stimuli were divided into 6 categories: three types of walkers, as well as their inverted versions. The division into categories focuses on three types of information the participant can get from the walker: global information, local information and orientation. Global information refers to the general structure of the body and the spatial relationships between its parts. Local information refers to kinematics, the speed of the points and mirror-symmetric motion. Please see Chang et al., 2018 for more details about the stimuli. The data were acquired in 4 runs. Each run comprises 12 blocks with 8 trials per block. The stimulus duration was 500 ms and the inter-stimulus interval 1500 ms (16 s per block in total). Each block was followed by a fixation block that also lasted 16 s. Each run contained four of the six conditions, repeated 3 times each. There were two different types of runs: type 1 and type 2. This section refers to run type 1, which contained both global types (natural and inverted) and both local-natural types. For run type 2, refer to BiologicalMotion2.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for BiologicalMotion1

Condition

Description

global_inverted

Global walker, inverted upside-down

global_upright

Structural information is preserved, but individual local trajectories are mirror-symmetric. Global-only in the original paper

natural_inverted

Local natural walker, inverted along the horizontal axis

natural_upright

Local information is preserved, but the points are randomly shuffled along the X-axis, rendering global cues uninformative. “Local-natural” in the original experiment

Contrasts for BiologicalMotion1

Contrast

Description

global-natural

effect of global information on motion perception

global_inverted

global reversed biological motion vs. fixation

global_upright

global biological motion vs. fixation

global_upright-global_inverted

effect of orientation on motion perception

global_upright-natural_upright

effect of global information on motion perception

inverted-upright

effect of orientation on motion perception

natural-global

Negative effect of global information on motion perception

natural_inverted

local reversed biological motion vs. fixation

natural_upright

local biological motion vs. fixation

natural_upright-natural_inverted

effect of orientation on motion perception

BiologicalMotion2#

local_motion_coherence vertical_flip biological_motion scrambled_motion motion_detection

Implementation

  • Software: Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

The phenomenon known as biological motion was first described in (Johansson, 1973), and consists of point-light displays arranged and moving in a way that resembles a person moving. The task we used was originally developed by (Chang et al., 2018). During the task, the participants were shown a point-light “walker” and had to decide whether the walker’s orientation was to the left or right, by pressing the index finger’s button or the middle finger’s button on the response box, respectively. The stimuli were divided into 6 categories: three types of walkers, as well as their inverted versions. The division into categories focuses on three types of information the participant can get from the walker: global information, local information and orientation. Global information refers to the general structure of the body and the spatial relationships between its parts. Local information refers to kinematics, the speed of the points and mirror-symmetric motion. Please see Chang et al., 2018 for more details about the stimuli. The data were acquired in 4 runs. Each run comprises 12 blocks with 8 trials per block. The stimulus duration was 500 ms and the inter-stimulus interval 1500 ms (16 s per block in total). Each block was followed by a fixation block that also lasted 16 s. Each run contained four of the six conditions, repeated 3 times each. This section refers to run type 2, which contained both local-natural and both local-modified types (run type 1 is described under BiologicalMotion1).

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for BiologicalMotion2

Condition

Description

modified_inverted

Local modified walker, inverted along the horizontal axis

modified_upright

Neither structural nor local information is carried by this type of walker; it combines both types of modifications used for the previous two categories. “Local-modified” in the original study

natural_inverted

Local natural walker, inverted along the horizontal axis

natural_upright

Local information is preserved, but the points are randomly shuffled along the X-axis, rendering global cues uninformative. “Local-natural” in the original experiment

Contrasts for BiologicalMotion2

Contrast

Description

inverted-upright

effect of orientation on motion perception

modified-natural

Negative effect of local information on motion perception

modified_inverted

scrambled motion information, inverted, vs. fixation

modified_upright

scrambled motion information vs. fixation

modified_upright-modified_inverted

effect of orientation on motion perception

natural-modified

effect of local information on motion perception

natural_inverted

local reversed biological motion vs. fixation

natural_upright

local biological motion vs. fixation

natural_upright-modified_upright

effect of local information on motion perception

natural_upright-natural_inverted

effect of orientation on motion perception

LePetitPrince#

Implementation

  • Software: Expyriment 0.9.0 (Python 3.6)

  • Audio device: OptoACTIVE (Optoacoustics)

This experiment is a natural-language comprehension protocol, originally implemented by (Bhattasali et al., 2019; Hale et al., 2022). Complex naturalistic language stimuli of this kind have also been used to study other processes, such as semantic maps (Huth et al., 2016). The data were acquired in two different sessions, comprising five and four runs, respectively. Each run comprised three chapters of the story “Le Petit Prince” in French. During each run, the participant was presented with the audio of the story. In between runs, the experimenters asked some multiple-choice questions, as well as two or three open-ended questions about the contents of the previous run, in order to keep the participants engaged. The length of the runs varied between nine and thirteen minutes. There was also a six-minute localizer at the end of the second acquisition, in order to accurately map language areas for each participant.

Note: We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for subject-08, for whom we employed MRConfon MKII.

MathLanguage#

visual_arithmetic_processing visual_perception visual_sentence_comprehension auditory_sentence_recognition auditory_perception

Implementation

  • Software: Expyriment 0.9.0 (Python 3.6)

  • Response device: In-house custom-made sticks featuring one-top button, each one to be used in each hand

  • Audio device: OptoACTIVE (Optoacoustics)

  • Repository

  • See demo

The Mathematics and Language protocol was taken from (Amalric et al., 2016). This task aims to comprehensively capture the activation related to several types of mathematical and non-mathematical facts, presented as sentences. During the task, the participants are presented with a series of sentences, each in one of two modalities: auditory or visual. The categories include, among others, theory-of-mind statements, arithmetic facts and geometry facts. After each sentence, the participant has to indicate whether they believe the presented fact to be true or false, by pressing the button in the left or right hand, respectively. A second version of each run (runs B) was generated by reversing the modality of each trial, so that trials that were visual in the original runs (runs A) would be auditory in the corresponding B version, and vice versa. Each participant performed four A-type runs, followed by three B-type runs due to time constraints. Each run had an equal number of trials of each category, and the order of the trials was the same for all subjects.

Note: We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for subject-05 and subject-08, who completed the session using MRConfon MKII.
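The B-type runs are thus the A-type runs with the presentation modality of every trial flipped. A minimal Python sketch of that flip is shown below; it is illustrative only, the example trial labels are made up, and only the flipping rule itself comes from the description above.

```python
# Illustrative modality flip used to derive a B-type run from an A-type run:
# every *_auditory trial becomes *_visual and vice versa. The example trial
# labels are invented; only the flipping rule follows the description above.
def flip_modality(condition):
    if condition.endswith("_auditory"):
        return condition[: -len("_auditory")] + "_visual"
    if condition.endswith("_visual"):
        return condition[: -len("_visual")] + "_auditory"
    return condition

run_a = ["arithmetic_fact_visual", "theory_of_mind_auditory", "wordlist_visual"]
run_b = [flip_modality(c) for c in run_a]
print(run_b)
# ['arithmetic_fact_auditory', 'theory_of_mind_visual', 'wordlist_auditory']
```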

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for MathLanguage

Condition

Description

arithmetic_fact_auditory

Listen to arithmetic fact

arithmetic_fact_visual

Read arithmetic fact

arithmetic_principle_auditory

Listen to arithmetic principle

arithmetic_principle_visual

Read arithmetic principle

colorlessg_auditory

Jabberwocky sentence presented as auditory stimulus

colorlessg_visual

Jabberwocky sentence presented as visual stimulus

context_auditory

Beep sound indicating that the following stimuli will be audio

context_visual

Red cross indicating that the following stimuli will be visual

general_auditory

Listen to sentence

general_visual

Read sentence

geometry_fact_auditory

Listen to geometric fact

geometry_fact_visual

Read geometric fact

theory_of_mind_auditory

Listen to false-belief tale

theory_of_mind_visual

Read false-belief tale

wordlist_auditory

Listen to word list

wordlist_visual

Read word list

Contrasts for MathLanguage

Contrast

Description

arithmetic_fact-othermath

arithmetic fact vs other maths

arithmetic_fact_auditory

listen to arithmetic fact

arithmetic_fact_visual

read arithmetic fact

arithmetic_principle-othermath

arithmetic principle vs other maths

arithmetic_principle_auditory

listen to arithmetic principle

arithmetic_principle_visual

read arithmetic principle

auditory-visual

listen to vs. read instruction

colorlessg-wordlist

jabberwocky vs word list

colorlessg_auditory

auditory jabberwocky sentence parsing

colorlessg_visual

visual jabberwocky sentence parsing

context-general

cue vs language statement

context-theory_of_mind

cue vs false belief

context_auditory

audio cue

context_visual

visual cue

general-colorlessg

listen to sentence vs jabberwocky

general_auditory

listen to sentence

general_visual

read sentence

geometry-othermath

geometry vs other maths

geometry_fact_auditory

listen to geometric fact

geometry_fact_visual

read geometric fact

math-nonmath

math vs others

nonmath-math

others vs math

theory_of_mind-context

false belief vs cue

theory_of_mind-general

false belief vs general statement

theory_of_mind_and_context-general

false belief and cue vs general statement

theory_of_mind_auditory

auditory false-belief tale

theory_of_mind_visual

read false-belief tale

visual-auditory

read vs. listen to instruction

wordlist_auditory

listen to word list

wordlist_visual

read word list

SpatialNavigation#

spatial_localization spatial_memory visual_search spatial_working_memory navigation

Implemented using proprietary software

  • Software: Vizard 6

  • Response device: Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4)

  • Repository

This protocol, an adaptation of the one used in (Diersch et al., 2021), was originally designed to capture the effects of spatial encoding and orientation learning in different age groups. The task requires subjects to navigate and orient themselves in a complex virtual environment resembling a typical German historic city center, consisting of town houses, shops and restaurants. There are three parts to this task: introduction (outside the scanner), encoding (in the scanner) and retrieval (in the scanner). Before entering the scanner, the participants went through an introduction phase, during which they were free to navigate the virtual environment with the objective of collecting eight red balls scattered throughout various streets of the virtual city. During this part, the participants could familiarize themselves with the different buildings and learn the location of the two target buildings: the Town Hall and the Church. After they collected all the red balls, a short training session of the main task was performed to ensure correct understanding of the instructions.

Then, participants went to the scanner. The task began with the encoding phase. During this period, the participant had to passively watch the camera move from one target building to the other, in such a way that every street of the virtual environment is passed through in every direction possible. Participants were instructed to pay close attention to the spatial layout of the virtual environment and the location of the target landmarks. Passive transportation instead of self-controlled traveling was chosen to ensure that every participant experienced the virtual environment for the same amount of time. After the encoding phase, the retrieval phase started, which consisted of 8 experimental trials and 4 control trials per run. In each trial, the participant was positioned near an intersection within the virtual environment, which was enveloped in a dense fog, limiting visibility. Subsequently, the camera automatically approached the intersection and centered itself. The participant’s task was to indicate the direction of the target building, which was displayed as a miniature picture at the bottom of the screen. Control and experimental trials were identical, but during control trials the participant had to point to one of the buildings of the intersection that had been colored in blue instead of the target building. All of the runs, except the first one, began with the encoding phase, followed by the retrieval phase. In the initial run, a control trial of the retrieval phase preceded the standard design of the encoding phase followed by the retrieval phase.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for SpatialNavigation

Condition

Description

control

Camera approaches intersection in a control trial

encoding_phase

Encode location of key building

experimental

Camera approaches intersection in an experimental trial

intersection

Camera approaches intersection during encoding phase

navigation

Camera navigates through the streets during encoding phase

pointing_control

Participant rotates camera to point to blue building in control trial

pointing_experimental

Participant rotates camera to point to key building in experimental trial

Contrasts for SpatialNavigation

Contrast

Description

control

spatial navigation

experimental

spatial navigation

experimental-control

spatial navigation

experimental-intersection

spatial navigation

intersection

spatial localization

navigation

spatial navigation

pointing_control

pointing to a landmark

pointing_experimental

pointing to a landmark

retrieval

retrieving a landmark

GoodBadUgly#

Implementation

  • Software: Expyriment 0.9.0 (Python 2.7)

The GoodBadUgly task was adapted from the study of (Mantini et al., 2012), which investigated the correspondence between monkey and human brains using naturalistic stimuli. The task relies on watching (viewing and listening to) the whole movie “The Good, the Bad and the Ugly” by Sergio Leone. The original 177-minute movie was cut into 10-minute segments (except for the first two and the last one) to match the segment length of the original study, which presented only three 10-minute segments from the middle of the movie. This resulted in a total of 18 segments. For IBC, the French-dubbed version “Le Bon, la Brute et le Truand” was presented. The task was performed during three acquisition sessions with seven segments each, one segment per run. The first three segments were repeated during the last acquisition, after the movie was completed. The total duration of the run was ten minutes for the majority of the segments, around eight minutes for the first two runs, and four and a half minutes for the last run. Note: there was some lag between the onset of each run and the start of the stimulus (the movie), which may vary between runs and subjects. This lag should be taken into account when analyzing the data. Find more details in the section Lags in GoodBadUgly movie.
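Because the lag between run onset and movie onset varies across runs and subjects, one simple way to account for it during analysis is to shift the event onsets before building the design matrix. The Python sketch below assumes a per-run lag value and a BIDS-style events table with an onset column; both are assumptions made for illustration, and the actual measured lags are given in the section referenced above.

```python
import pandas as pd

# Hedged sketch: shift event onsets by the measured lag between the start of
# the fMRI run and the actual start of the movie. The events-table layout
# (BIDS-style 'onset'/'duration' columns) and the lag value are assumptions;
# see the "Lags in GoodBadUgly movie" section for the actual measurements.
def shift_onsets(events: pd.DataFrame, lag_seconds: float) -> pd.DataFrame:
    shifted = events.copy()
    shifted["onset"] = shifted["onset"] + lag_seconds
    return shifted

events = pd.DataFrame({"onset": [0.0, 10.0, 20.0], "duration": [10.0, 10.0, 10.0]})
print(shift_onsets(events, lag_seconds=1.2))
```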

EmoMem#

visual_perception positive_emotion negative_emotion imagination visual_cue

Implementation

  • Software: Octave 4.4 + Psychtoolbox 3.0

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This task is part of the CamCAN (Cambridge Centre for Ageing and Neuroscience) battery, designed to understand how individuals can best retain cognitive abilities into old age. Our adjustments consisted of translating all stimuli and instructions into French, replacing Matlab functions with Octave functions where needed, and removing the custom Matlab toolbox mrisync that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were made taking care not to alter the psychological states that the original tasks were designed to capture. The Emotional Memory task was designed to provide an assessment of implicit and explicit memory, and of how they are affected by emotional valence. At IBC we conducted only the encoding part of the task (the Study phase described in (Shafto et al., 2014)), not the Test phase, which took place outside the scanner in the original study. In each trial, participants were presented with a background picture for 2 seconds, followed by a foreground picture of an object superimposed on it. Participants were instructed to imagine a “story” linking the background and foreground pictures, and after an 8-second presentation the next trial began. The manipulation of emotional valence exclusively affected the background image, which could be negative, neutral, or positive. Participants were asked to indicate the moment they thought of a story or a connection between the object and the background image by pressing a button. In all, 120 trials were presented over 2 runs.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for EmoMem

Condition

Description

negative_image

Negative background image

neutral_image

Neutral background image

object

Neutral object

positive_image

Positive background image

Contrasts for EmoMem

Contrast

Description

negative-neutral_image

negative vs neutral image

negative_image

viewing a negative image

neutral_image

viewing a neutral image

object

foreground object and imagination task

positive-neutral_image

positive vs neutral image

positive_image

viewing a positive image

EmoReco#

negative_emotion face_perception gender_perception emotional_expression

Implemented using proprietary software

  • Software: E-Prime 2.0

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • See demo

This task is part of the CamCAN (Cambridge Centre for Ageing and Neuroscience) battery, designed to understand how individuals can best retain cognitive abilities into old age. Our adjustments consisted of translating all stimuli and instructions into French, replacing Matlab functions with Octave functions where needed, and removing the custom Matlab toolbox mrisync that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were made taking care not to alter the psychological states that the original tasks were designed to capture. The Emotion Recognition task compares brain activity when observing angry versus neutral expressions, and assesses how individuals differ in regulating responses to negative emotional expressions (Shafto et al., 2014). The expressions were presented on female and male faces (15 each), and each face had an angry and a neutral version. Emotions were presented in blocks of angry and neutral expressions, with equal numbers of female and male faces in each block. In each trial, participants were asked to report the gender of the face by pressing the corresponding button. There were 12 blocks per emotion and each block consisted of 5 trials. In all, 60 trials were presented in each of the 2 runs.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for EmoReco

Condition

Description

angry_female

Angry emotion on female face

angry_male

Angry emotion on male face

neutral_female

Neutral emotion on female face

neutral_male

Neutral emotion on male face

Contrasts for EmoReco

Contrast

Description

angry

angry face perception

angry-neutral

angry vs neutral face perception

angry_female

angry female face perception

angry_male

angry male face perception

female-male

female vs male face perception

male-female

male vs female face perception

neutral

neutral face perception

neutral-angry

neutral vs angry face perception

neutral_female

neutral female face perception

neutral_male

neutral male face perception

StopNogo#

shape_recognition proactive_control shape_perception

Implemented using proprietary software

  • Software: Presentation

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • See demo

This task is a part of the CamCAN (Cambridge Centre for Ageing and Neuroscience) battery, designed to understand how individuals can best retain cognitive abilities into old age. The adjustments concerned the translation of all stimuli and instructions into French, replacing MATLAB functions with Octave functions as needed, and eliminating the use of a custom MATLAB toolbox, mrisync, that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were done taking care to not alter the psychological state that the original tasks were designed to capture. The StopNogo task assesses systems involved in action restraint and action cancellation by randomly interleaving Go, Stop and No-Go trials (Shafto et al., 2014). On Go trials, participants viewed a black arrow pointing left or right for 1000 ms, and indicated the direction of the arrow by pressing the left/right buttons with their right hand. On Stop trials, the arrow changed color from black to red after a short, variable stop-signal delay. Participants were instructed not to respond to the red arrow, so Stop trials required canceling the initiated response to the black arrow. The stop-signal delay varied trial-to-trial in steps of 50 ms, and a staircase procedure was used to maintain a performance level of 66% successful inhibition. Finally, on No-Go trials, the arrow was red from the start of the trial (stop-signal delay of 0) and participants were required to make no response.
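
As an aside, the staircase described above can be written in a few lines. The sketch below is a generic weighted up/down rule that settles near 66% successful inhibition; the function name, step sizes and SSD bounds are illustrative assumptions, not the published CamCAN algorithm (which adjusts the delay in 50 ms steps and may use a different update rule).

```python
# Minimal sketch (assumption, not the original task code): a weighted
# up/down staircase for the stop-signal delay (SSD). With an up-step of
# 25 ms after a successful stop and a down-step of 50 ms after a failed
# stop, the SSD settles where p * 25 = (1 - p) * 50, i.e. p ~ 66%
# successful inhibition.

def update_ssd(ssd_ms, stop_succeeded, up_step=25, down_step=50,
               ssd_min=0, ssd_max=900):
    """Return the stop-signal delay (ms) to use on the next Stop trial."""
    if stop_succeeded:
        ssd_ms += up_step    # stopping was easy enough: make it harder
    else:
        ssd_ms -= down_step  # stopping failed: make it easier
    return max(ssd_min, min(ssd_max, ssd_ms))
```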

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for StopNogo

Condition

Description

go

Arrow stays black; press button corresponding to arrow direction

nogo

Arrow starts out red so do not press button

successful_stop

Arrow starts out black but turns red; motor response inhibited

unsuccessful_stop

Arrow starts out black but turns red; motor response not inhibited

Contrasts for StopNogo

Contrast

Description

go

shape recognition

nogo

no response

nogo-go

response inhibition

successful+nogo-unsuccessful

successful inhibition (stop and no-go) vs failed inhibition

successful_stop

shape recognition, stopped response

unsuccessful-successful_stop

effect of failed inhibition

unsuccessful_stop

shape recognition, failed stopped response

Catell#

visual_form_discrimination oddball_detection

Implementation

  • Software: Octave 4.4 + Psychtoolbox 3.0

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • See demo

This task is a part of the CamCAN (Cambridge Centre for Ageing and Neuroscience) battery, designed to understand how individuals can best retain cognitive abilities into old age. The adjustments concerned the translation of all stimuli and instructions into French, replacing MATLAB functions with Octave functions as needed, and eliminating the use of a custom MATLAB toolbox, mrisync, that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were done taking care to not alter the psychological state that the original tasks were designed to capture. The Catell task was used to provide a measure of the neural activity underpinning fluid intelligence (Shafto et al., 2014). On each trial, participants were presented with 4 images and had to identify the “odd one out”. While some trials presented easily identifiable differences between the oddball and the other images, others were more challenging, requiring participants to detect abstract patterns to identify the oddball. Participants completed alternating blocks of easy and difficult trials, each lasting 30 seconds. In total, they performed four blocks of easy problems and four blocks of difficult problems. In each trial, a stimulus appeared on the screen and remained until the participant responded, with the block automatically ending after 30 seconds and the next block beginning immediately. Participants were encouraged to take as much time as needed and were advised to respond only when confident in their answers. This design led to a variable number of trials per block across individuals, while maintaining a consistent duration for each type of problem (easy and hard).

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Catell

Condition

Description

easy

Easy oddball trial where the non-oddball images are similar and very different from the oddball

hard

Difficult oddball trial where all images are similar

Contrasts for Catell

Contrast

Description

easy

easy oddball task

hard

hard oddball task

hard-easy

hard vs easy oddball task

FingerTapping#

motor_control motor_planning

Implementation

  • Software: Octave 4.4 + Psychtoolbox 3.0

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • See demo

This task is a part of the CamCAN (Cambridge Centre for Ageing and Neuroscience) battery, designed to understand how individuals can best retain cognitive abilities into old age. The adjustments concerned the translation of all stimuli and instructions into French, replacing MATLAB functions with Octave functions as needed, and eliminating the use of a custom MATLAB toolbox, mrisync, that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were done taking care to not alter the psychological state that the original tasks were designed to capture. The FingerTapping task studied executive control and action decisions in aging and neurodegenerative diseases (Shafto et al., 2014). Participants were presented with an image of a right hand and were instructed to press a button with one of their four right-hand fingers in response to a cue. The cue was either a specified cue, in which a single opaque circle indicated which finger to press, or a chosen cue, in which 3 circles appeared opaque, indicating that participants had to choose one of the 3 corresponding fingers to press. Cues were presented for 1 second with a stimulus onset asynchrony of 2.5 seconds, and were pseudorandomly ordered so that participants did not see four or more trials of the same condition (action selection, specified or null) in a row. The task included 40 specified trials (10 for each finger) and 40 chosen trials, interspersed with 40 blank trials in which no cue was presented.
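
The pseudorandom ordering constraint (no four or more trials of the same condition in a row) can be enforced with a simple rejection-sampling shuffle. The sketch below is an illustrative reconstruction under assumed names and counts taken from the description above; it is not the original CamCAN script.

```python
import random

def make_trial_sequence(n_per_condition=40, max_run=3, seed=None):
    """Shuffle specified/chosen/null trials until no condition repeats
    more than `max_run` times in a row (simple rejection sampling;
    each attempt is cheap, so retrying is fine)."""
    rng = random.Random(seed)
    trials = (["specified"] * n_per_condition
              + ["chosen"] * n_per_condition
              + ["null"] * n_per_condition)
    while True:
        rng.shuffle(trials)
        run, ok = 1, True
        for prev, cur in zip(trials, trials[1:]):
            run = run + 1 if cur == prev else 1
            if run > max_run:
                ok = False
                break
        if ok:
            return trials
```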

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for FingerTapping

Condition

Description

chosen

Participant chooses 1 out of 3 highlighted fingers to tap

null

No finger tap

specified

Finger to tap is highlighted

Contrasts for FingerTapping

Contrast

Description

chosen

uncued finger tapping

chosen-null

uncued vs inhibited finger tapping

chosen-specified

uncued vs cued finger tapping

null

inhibited finger tapping

specified

cued finger tapping

specified-null

cued vs inhibited finger tapping

VSTMC#

visual_attention spatial_working_memory

Implementation

  • Software: Octave 4.4 + Psychtoolbox 3.0

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • See demo

This task is a part of the CamCAN (Cambridge Centre for Ageing and Neuroscience) battery, designed to understand how individuals can best retain cognitive abilities into old age. The adjustments concerned the translation of all stimuli and instructions into French, replacing MATLAB functions with Octave functions as needed, and eliminating the use of a custom MATLAB toolbox, mrisync, that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were done taking care to not alter the psychological state that the original tasks were designed to capture. The Visual Short-Term Memory task was designed to assess the neural processes underlying visual short-term memory. In each trial, participants saw three arrays of colored dots: one red, one yellow, and one blue. The dot displays were presented in rapid succession, beginning with a 250 ms fixation period followed by a 500 ms presentation of the dot display. To manipulate set size, one, two, or three of the dot displays moved in a single direction, which had to be remembered. The remaining displays rotated around a central axis and served as distractors, which had to be ignored. After the presentation of the third display, an 8-second delay followed, during which participants had to remember the direction(s) of motion of the non-rotating dots. Subsequently, the probe display appeared, with a colored circle indicating which dot display to recall (red, yellow, or blue). Within the circle, there was a pointer that had to be adjusted to indicate the direction in which the target dot display had been moving. Participants were given 5 seconds to adjust the pointer to match the direction of the to-be-remembered dot display. On 90% of trials the probed motion directions were 7, 127, or 247 degrees.
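
A minimal sketch of the load manipulation described above, assuming hypothetical helper names: for a given memory load, some colours are assigned a coherent motion direction (targets) while the rest rotate (distractors), and one target colour is later probed. This restates the design in code form and is not the original task script.

```python
import random

COLOURS = ["red", "yellow", "blue"]
# Directions probed on ~90% of trials, per the task description.
PROBED_DIRECTIONS = [7, 127, 247]

def make_vstmc_trial(load, rng=random):
    """Assign each colour either a coherent motion direction (target)
    or 'rotate' (distractor) for a trial of the given memory load."""
    assert load in (1, 2, 3)
    targets = rng.sample(COLOURS, load)
    directions = rng.sample(PROBED_DIRECTIONS, load)
    trial = {colour: "rotate" for colour in COLOURS}   # distractors rotate
    trial.update(dict(zip(targets, directions)))       # targets move coherently
    probe_colour = rng.choice(targets)                 # colour cued at recall
    return {"motion": trial, "probe": probe_colour}
```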

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for VSTMC

Condition

Description

resp_load1

Response period of stim_load1 trials

resp_load2

Response period of stim_load2 trials

resp_load3

Response period of stim_load3 trials

stim_load1

Dots in only one colour move coherently in a given direction

stim_load2

Dots in two colours move coherently in two different directions

stim_load3

Dots in all three colours move coherently in 3 different directions

Contrasts for VSTMC

Contrast

Description

resp

response to motion

resp_load1

response to motion direction of one set of points

resp_load2

response to motion direction of two sets of points

resp_load3

response to motion direction of three sets of points

resp_load3-load1

difference in response to one vs three sets of points

stim

attending to sets of points

stim_load1

attending to one set of points

stim_load2

attending to two sets of points

stim_load3

attending to three sets of points

stim_load3-load1

difference in attending to motion of one vs three sets of points

RewProc#

repetition risk_aversion visual_perception reward_valuation reward_processing

Implementation

  • Software: Psychopy 2021.1.3 (Python 3.8.5)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • Repository

  • See demo

The Reward Processing protocol was adapted from O’Doherty et al., 2001 and O’Doherty et al., 2003, which aimed at discerning the role of the orbitofrontal cortex (OFC) using a similar emotion-related visual reversal-learning task, in which choosing the correct stimulus led to a probabilistically determined “monetary” reward and choosing the incorrect stimulus led to a monetary loss.

In each trial of a run of this protocol, two unfamiliar and easily discriminable fractal patterns were displayed on a gray background, positioned to the left and right of a central fixation cross. At the beginning of the task, one of these two patterns was arbitrarily designated as “correct” and the other as “incorrect”. The task for the participants was to select one of these two patterns. Selecting the correct pattern led to a monetary gain with a 70% probability, and a monetary loss with a 30% probability. Selecting the incorrect pattern led to a monetary gain with a 30% probability and a monetary loss with a 70% probability (reversed gain-loss probability contingencies). After either pattern was selected, a black box appeared around the chosen pattern, followed by feedback indicating the amount of symbolic money, either 20 or 10 units, that was gained or lost in that trial. The probability of receiving either 10 or 20 units was equal. Furthermore, once the participant had selected the correct pattern a criterion number of times, i.e. 5 consecutive times, a reversal of the gain-loss probability contingencies could occur, governed by a Poisson process: on any post-criterion trial, there was a 25% probability that the gain-loss probabilities reversed.
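
The gain/loss and reversal contingencies described above reduce to two simple rules. The following Python sketch restates them under assumed function names; it is a sketch of the design, not the task’s actual implementation.

```python
import random

def rewproc_outcome(chose_correct, rng=random):
    """Sample the feedback for one trial: +/-10 or +/-20 units.
    The 'correct' pattern pays off with 70% probability, the
    'incorrect' one with 30% (reversed contingencies)."""
    p_gain = 0.70 if chose_correct else 0.30
    sign = 1 if rng.random() < p_gain else -1
    amount = rng.choice([10, 20])   # both magnitudes equally likely
    return sign * amount

def maybe_reverse(consecutive_correct, criterion=5, p_reversal=0.25, rng=random):
    """After the criterion of 5 consecutive correct choices is reached,
    the correct/incorrect assignment reverses with 25% probability on
    each post-criterion trial."""
    return consecutive_correct >= criterion and rng.random() < p_reversal
```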

The data was acquired in 2 runs during one scanning session. Each run comprised 85 trials. The timing of trial events in the IBC implementation of the task differed from those in the two aforementioned studies. This adjustment was made after a discussion with the authors, who believed that the timing in the final IBC implementation was more appropriate for achieving adequately separated events, minimizing temporal correlations while maintaining a reasonable total trial length. Specifically, the pre-trial fixation cross was displayed for a duration ranging from 500 to 1500 ms. The stimuli remained on the screen for up to 3000 ms for participant selection, and the outcome feedback was presented after a 1750 ms delay, lasting for 1750 ms.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for RewProc

Condition

Description

green

Subject selected the green pattern

left

Selected pattern was in the left side of the screen

minus_10

Lost 10 units of reward as a result of the selection

minus_20

Lost 20 units of reward as a result of the selection

plus_10

Gained 10 units of reward as a result of the selection

plus_20

Gained 20 units of reward as a result of the selection

purple

Subject selected the purple pattern

right

Selected pattern was in the right side of the screen

stay

Selected pattern was the same as the one selected in previous trial

switch

Selected pattern was different from the one selected in previous trial

Contrasts for RewProc

Contrast

Description

gain

gained 20 or 10 units of reward

gain-loss

gained vs lost 20 or 10 units of reward

green-purple

green vs purple pattern selected

left-right

selected pattern on the left vs right side

loss

lost 20 or 10 units of reward

loss-gain

lost vs gained 20 or 10 units of reward

minus_10

lost 10 units of reward

minus_20

lost 20 units of reward

plus_10

gained 10 units of reward

plus_20

gained 20 units of reward

purple-green

purple vs green pattern selected

right-left

selected pattern on the right vs left side

stay

selected the same pattern as in the previous trial

stay-switch

selected the same vs different pattern

stim

appearance of the cue images

switch

selected a different pattern than previous trial

switch-stay

selected a different vs same pattern

NARPS#

risk_aversion reward_valuation reward_processing loss_aversion reward_anticipation

Implementation

  • Software: Psychtoolbox-3 (Octave 5.2.0)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • Repository

  • See demo

This protocol is more commonly known as the mixed gambles task and was adapted from the Neuroimaging Analysis Replication and Prediction Study (NARPS) (Botvinik-Nezer et al., 2019), which aimed to estimate the variability of neuroscientific results across analysis teams. The mixed gambles task itself originates from (Tom et al., 2007), which studied the neural basis of loss aversion, the phenomenon whereby people tend to be more sensitive to losses than to equal-sized gains. The study therefore investigated whether potential losses elicit negative emotions that drive loss aversion, or whether the same neural systems that encode subjective value respond asymmetrically to losses compared to gains.

In each trial, participants were presented with a mixed gamble in which they had a 50% chance of either gaining one amount of symbolic money or losing another amount. The possible gains and losses both ranged between 5 and 20 units (equal range condition), in increments of 1 unit, and all 256 possible combinations of gains and losses were presented to each subject in the same sequence. The stimulus consisted of a circle presented on a gray screen and divided into two halves: on one side the gain amount was presented in green with a plus (+) sign before the number, and on the other side the loss amount was presented in red with a minus (-) sign before the number. Subjects were then asked to decide whether or not they would like to accept the gamble presented to them, with four possible responses for each gamble: strongly accept, weakly accept, weakly reject or strongly reject. The data was acquired in four runs during one scanning session. Each run comprised 64 trials. The gamble was presented on the screen until the participant responded or four seconds had passed, followed by a grey screen until the onset of the next trial. In the aforementioned NARPS study, the same amount of data was also acquired for an equal indifference condition, in which the possible gains ranged between 10 and 40 units while losses ranged between 5 and 20 units. This was not done for the IBC implementation, as no significant differences were observed between the two task designs in the NARPS study.
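
For reference, the equal-range design space can be enumerated directly. The sketch below builds the 256 gain/loss pairs and the four response options; the fixed-seed shuffle stands in for the single sequence shown to all subjects (an assumption for illustration, since the actual sequence is not reproduced here).

```python
import itertools
import random

def make_gamble_set(seed=0):
    """All 256 gain/loss combinations of the equal-range condition
    (gains and losses both 5-20 units, in steps of 1), in one fixed
    pseudo-random order to be split over 4 runs of 64 trials."""
    gambles = list(itertools.product(range(5, 21), range(5, 21)))
    random.Random(seed).shuffle(gambles)
    return gambles  # list of (gain, loss) pairs

RESPONSES = ["strongly_accept", "weakly_accept",
             "weakly_reject", "strongly_reject"]
```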

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for NARPS

Condition

Description

gain

Significant parametric increase in BOLD signal to increasing potential gains

loss

Significant parametric decrease in BOLD signal to increasing potential losses

stim

Mixed gamble stimulus with given units of potential gain and loss (amounts could vary between 5-20)

strongly_accept

Subject accepted the gamble with high confidence

strongly_reject

Subject rejected the gamble with high confidence

weakly_accept

Subject accepted the gamble with low confidence

weakly_reject

Subject rejected the gamble with low confidence

Contrasts for NARPS

Contrast

Description

accept-reject

gambles accepted vs gambles rejected

gain

potential gains during stim events

loss

potential losses during stim events

reject-accept

gambles rejected vs gambles accepted

strongly_accept

accept the gamble with high confidence

strongly_reject

reject the gamble with high confidence

weakly_accept

accept the gamble with low confidence

weakly_reject

reject the gamble with low confidence

FaceBody#

visual_object_recognition updating visual_pseudoword_recognition working_memory face_maintenance

Implementation

  • Software: Psychtoolbox-3 (Octave 5.2.0)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • Repository

  • See demo

This protocol was adapted from (Stigliani et al., 2015), where it was used to define category-selective cortical regions that respond preferentially to different categories. A detailed description and code for the original protocol is available here. In the IBC implementation, participants were presented with images of the following categories: faces, places, bodies, objects and characters. Each of the five stimulus categories was associated with two related subcategories with 144 images per subcategory, see the conditions table. The protocol used a mini-block design in which 12 stimuli of the same subcategory were presented in each block. The sequence of blocks was randomized over the ten subcategories and a blank baseline condition, and each subject was presented with the same sequence. To ensure that the subjects remained alert throughout the experiment, they were asked to press a button when an image was repeated as a mirrored image (flipped 1-back task). Data were acquired in four runs during one scanning session. Each run comprised 76 blocks, each associated with one of the conditions given in this table, all equally represented. Each block consisted of 12 images and was 6 seconds long (500 ms/image).
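
As a minimal sketch of how one mini-block with a flipped 1-back target could be assembled: the names and the repeat probability below are assumptions for illustration, since the actual number and placement of mirrored repeats in the published design may differ.

```python
import random

def make_miniblock(images, n_stim=12, p_mirror_repeat=0.5, rng=random):
    """Draw 12 images from one subcategory and, with probability
    `p_mirror_repeat` (an illustrative value), replace one image with a
    horizontally mirrored repeat of the previous image, i.e. the target
    of the flipped 1-back task."""
    block = [{"image": img, "mirrored": False}
             for img in rng.sample(images, n_stim)]
    if rng.random() < p_mirror_repeat:
        pos = rng.randrange(1, n_stim)
        block[pos] = {"image": block[pos - 1]["image"], "mirrored": True}
    return block  # 12 stimuli at 500 ms each -> one 6-second block
```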

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for FaceBody

Condition

Description

bodies_body

Images of body parts (category) with full bodies without faces (subcategory)

bodies_limb

Images of body parts (category) with just limbs (subcategory)

characters_number

Images of printed characters (category) with just numbers (subcategory)

characters_word

Images of printed characters (category) with just words (subcategory)

faces_adult

Images of faces (category) of adults (subcategory)

faces_child

Images of faces (category) of children (subcategory)

objects_car

Images of objects (category) with just cars (subcategory)

objects_instrument

Images of objects (category) with just musical instruments (subcategory)

places_corridor

Images of places (category) with just corridors (subcategory)

places_house

Images of places (category) with just houses (subcategory)

Contrasts for FaceBody

Contrast

Description

bodies-others

body image 1-back task vs. rest of categories

bodies_body

body image 1-back task vs. fixation

bodies_limb

body image 1-back task vs. fixation

characters-others

character images 1-back vs. rest of categories

characters_number

character images 1-back vs fixation

characters_word

word images 1-back vs fixation

faces-others

face image 1-back task vs. rest of categories

faces_adult

face image 1-back task vs. fixation

faces_child

face image 1-back task vs. fixation

objects-others

object image 1-back task vs. rest of categories

objects_car

object image 1-back task vs. fixation

objects_instrument

object image 1-back task vs. fixation

places-others

place image 1-back task vs. rest of categories

places_corridor

place image 1-back task vs. fixation

places_house

place image 1-back task vs. fixation

Scene#

lower-right_vision oddball_detection spatial_attention lower-left_vision upper-right_vision

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • See demo

This protocol was adapted from (Douglas et al., 2017) and was designed to identify how the brain combines spatial elements to form a coherent percept. To this end, participants judged whether Escher-like scenes were possible or impossible. Fifty-six scenes were designed so that they appeared spatially incoherent when viewed from a particular angle; these were termed impossible scenes. A possible counterpart was created for each impossible scene, and these were termed possible scenes. For comparison, baseline non-scene images were created by scrambling the scenes and matching them for low-level visual properties. A partially transparent circle was overlaid at a pseudo-random location on each of the scrambled scenes, such that half of these circles fell on the left and half on the right of the baseline scrambled images. On these scrambled-image dot trials, participants indicated the left/right location of the dot. There were easy and hard versions, depending on the transparency of the overlaid circle. The data were acquired in four runs during one scanning session.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Scene

Condition

Description

dot_easy_left

More opaque dot on left

dot_easy_right

More opaque dot on right

dot_hard_left

More transparent dot on left

dot_hard_right

More transparent dot on right

scene_impossible_correct

Impossible scene trial that the subject identified correctly

scene_impossible_incorrect

Impossible scene trial that the subject identified incorrectly

scene_possible_correct

Possible scene trial that the subject identified correctly

scene_possible_incorrect

Possible scene trial that the subject identified incorrectly

Contrasts for Scene

Contrast

Description

dot_easy_left

looking for a salience dot in left visual field

dot_easy_right

looking for a salience dot in right visual field

dot_hard-easy

looking for low-salience vs high-salience dot

dot_hard_left

looking for a low-salience dot in left visual field

dot_hard_right

looking for a low-salience dot in right visual field

dot_left-right

looking for a dot in left vs right visual field

scene_correct-dot_correct

assessing scenes vs detecting a dot

scene_impossible_correct

successful identification of an impossible scene

scene_impossible_incorrect

failed identification of an impossible scene

scene_possible_correct

successful identification of a possible scene

scene_possible_correct-scene_impossible_correct

successful identification of a possible vs impossible scene

BreathHolding#

breath_holding self_monitoring

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This task was a part of the Function Biomedical Informatics Research Network (FBIRN) (Keator et al., 2016) battery of protocols designed to, among other goals, assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods. All modifications were done taking care to not alter the psychological state that the original tasks were designed to capture. The BreathHolding task was designed to measure vascular response. In a block design, the participant alternated between breathing normally for 20 s and holding their breath for 16 s. They were given a warning 2 s before the hold breath signal was given, so they could prepare to hold their breath. This cycle was repeated 10 times. No response was required in this task.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for BreathHolding

Condition

Description

breathe

Breathe normally

get_ready

Prepare to hold breath

hold_breath

Hold breath

Contrasts for BreathHolding

Contrast

Description

breathe

breathe normally

breathe-hold

breathe normally vs hold breath

hold-breathe

hold breath vs breathe normally

hold_breath

hold breath

Checkerboard#

visual_perception preattentive_processing central_fixation

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This task was a part of the Function Biomedical Informatics Research Network (FBIRN) (Keator et al., 2016) battery of protocols designed to, among other goals, assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods. All modifications were done taking care to not alter the psychological state that the original tasks were designed to capture. The Checkerboard task is a block-design sensorimotor task with alternating 16-second blocks of rest and visual stimulation with a checkerboard stimulus. In the checkerboard blocks, a checkerboard filling the visual field was presented for 200 ms at random intervals (avg. ISI = 762 ms, range: 500-1000 ms), and the subject pressed a button each time the checkerboard appeared on screen. The run starts and ends with fixation blocks, and 11 blocks of checkerboard stimulation are presented.
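
The random stimulation schedule within a checkerboard block can be sketched as follows; the onset-generation rule is an assumption consistent with the stated ISI range, not the original E-Prime script.

```python
import random

def checkerboard_onsets(block_dur=16.0, stim_dur=0.2,
                        isi_range=(0.5, 1.0), rng=random):
    """Illustrative onset schedule for one 16-second checkerboard block:
    200-ms flashes separated by inter-stimulus intervals drawn uniformly
    from 500-1000 ms (the published average ISI is 762 ms)."""
    onsets, t = [], 0.0
    while True:
        t += rng.uniform(*isi_range)      # wait a random ISI
        if t + stim_dur > block_dur:      # no room left for another flash
            break
        onsets.append(round(t, 3))
        t += stim_dur                     # flash duration
    return onsets
```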

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Checkerboard

Condition

Description

checkerboard

Checkerboard block

fixation

Fixation block

Contrasts for Checkerboard

Contrast

Description

checkerboard

checkerboard

checkerboard-fixation

checkerboard vs baseline

fixation

period in between checkerboards

FingerTap#

preattentive_processing central_fixation

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This task was a part of the Function Biomedical Informatics Research Network (FBIRN) (Keator et al., 2016) battery of protocols designed to, among other goals, assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods. All modifications were done taking care to not alter the psychological state that the original tasks were designed to capture. The FingerTap task is a block-design reaction-time task in which subjects press one of the four keypad buttons when they see the corresponding visual cue (‘1’ for button 1, ‘2’ for button 2 and so on). The stimuli appear at 1 s intervals and subjects have 2 s to make their response. The run starts and ends with task blocks, with 4 task blocks per run and 64 trials per task block. The task blocks are interleaved with rest blocks lasting 15 s.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for FingerTap

Condition

Description

fingertap

Press button corresponding to visual stimulus

rest

Rest block

Contrasts for FingerTap

Contrast

Description

fingertap

button press in response to a cue

fingertap-rest

button press vs rest

rest

rest period

ItemRecognition#

visual_number_recognition visual_attention spatial_working_memory numerosity

Implemented using proprietary software

  • Software: E-Prime 2.0 Professional (Psychological Software Tools, Inc.)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • See demo

This task was a part of the Function Biomedical Informatics Research Network (FBIRN) (Keator et al., 2016) battery of protocols designed to, among other goals, assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods. All modifications were done taking care to not alter the psychological state that the original tasks were designed to capture. The Item Recognition task is a working memory (WM) task with loads 1, 3 and 5. There were four conditions in this task; in three of them, participants were shown a series of either one, three or five targets (digits), displayed in red, and were asked to memorize them. They were then presented with probes (also digits) displayed in green, and were required to indicate whether or not the probe matched one of the targets. In the fourth condition, participants were shown a series of arrows and were asked to indicate the direction of the arrows (left or right). This task followed a block-design format with 2 blocks for each of the 3 working memory conditions, along with 2 blocks for the arrow condition.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for ItemRecognition

Condition

Description

arrow_left

Leftward pointing arrow

arrow_right

Rightward pointing arrow

encode1

Encode digit of load 1 blocks

encode3

Encode digit of load 3 blocks

encode5

Encode digit of load 5 blocks

load1_instr

Instruction signaling start of load 1 blocks

load3_instr

Instruction signaling start of load 3 blocks

load5_instr

Instruction signaling start of load 5 blocks

probe1_mem

Probe digit that was encoded at the start of load 1 blocks

probe1_new

Probe digit that is new for load 1 blocks

probe3_mem

Probe digit that was encoded at the start of load 3 blocks

probe3_new

Probe digit that is new for load 3 blocks

probe5_mem

Probe digit that was encoded at the start of load 5 blocks

probe5_new

Probe digit that is new for load 5 blocks

Contrasts for ItemRecognition

Contrast

Description

arrow_left

leftward pointing arrow

arrow_left-arrow_right

identifying a left vs right pointing arrow

arrow_right

rightward pointing arrow

encode

encoding 1, 3 and 5 items

encode1

memorize 1 digit

encode3

memorize 3 digits

encode5

memorize 5 digits

encode5-encode1

encoding 5 vs 1 item

prob-arrow

probing digits vs trials of pointing arrows

probe1_mem

probe encoded digit from load 1

probe1_new

probe new digit from load 1

probe3_mem

probe encoded digit from load 3

probe3_new

probe new digit from load 3

probe5_mem

probe encoded digit from load 5

probe5_mem-probe1_mem

probing an encoded digit in a load of 5 vs 1

probe5_new

probe new digit from load 5

probe5_new-probe1_new

probing a new digit in a load of 5 vs 1

VisualSearch#

visual_pattern_recognition visual_form_discrimination visual_search visual_attention visual_working_memory

Implementation

  • Software: Expyriment 0.10.0 (Python 3.8.5)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • Repository

  • See demo

The Visual search, Working memory protocol was adapted from (Kuo BC et al., 2016). It aimed to elucidate the neurophysiological mechanisms underlying the spatially specific activation of sensory codes while searching for a visual or remembered target. A set of eight stimulus items was selected from a set of 100 novel and difficult-to-verbalize closed-shape contours previously developed by (Endo N et al., 2003), both in the original and in the IBC implementation of the study. Each run of the protocol involved two kinds of trials: visual search and working memory search. In visual search trials, the participants were first shown an abstract item (sample item) and then had to search for that item in a set of two or four items (search array). In working memory search trials, the participants were first shown a set of two or four items (memory array) and then had to indicate whether a subsequently shown item (probe item) was present in the previously shown set. Thus, in addition to the type of search (visual or working memory) and the search response (target present or absent), the array load (two or four items) was also varied in each trial.

The data was acquired in four runs during one scanning session. Each run comprised forty-eight trials. In the original study, the participants also performed a separate session for a visual localizer task, in which they viewed the stimuli passively without making any responses. This session was excluded from the IBC implementation of the protocol. Furthermore, the response period was increased from 1000 ms to 2000 ms and the stimulus size from 1.72 to 1.80 degrees of visual angle, following feedback from the pilot sessions. Apart from these changes, the rest of the task design was similar to that of the original study.
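
For clarity, the factorial structure of a run (search type x array load x target presence) can be written out as a balanced trial list. The sketch below is an assumption about how such a list could be generated; the balancing and randomization scheme is not taken from the original scripts.

```python
import itertools
import random

def make_visualsearch_run(n_trials=48, seed=None):
    """Balanced trial list for one run: search type (visual / working
    memory) x array load (2 or 4 items) x target presence, i.e. 8 cells
    repeated 6 times = 48 trials, shuffled."""
    cells = list(itertools.product(["visual", "wm"], [2, 4],
                                   ["present", "absent"]))
    trials = cells * (n_trials // len(cells))
    random.Random(seed).shuffle(trials)
    return [{"search_type": s, "load": load, "target": t}
            for s, load, t in trials]
```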

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for VisualSearch

Condition

Description

delay_vis

Delay period between sample item and search array in visual search trials

delay_wm

Delay period between memory array and probe item in working memory trials

memory_array_four

Array of four items, with or without the item to search for (probe item) - in working memory trials

memory_array_two

Array of two items, with or without the item to search for (probe item) - in working memory trials

probe_item_four_absent

Item to search for but was absent in memory array of four items - in working memory trials

probe_item_four_present

Item to search for and was present in memory array of four items - in working memory trials

probe_item_two_absent

Item to search for but was absent in memory array of two items - in working memory trials

probe_item_two_present

Item to search for and was present in memory array of two items - in working memory trials

response_hit

Subject responded correctly

response_miss

Subject responded incorrectly

sample_item

Item to search for in an array of two or four items (search array) - in visual search trials

search_array_four_absent

Array of four items, without sample item - in visual search trials

search_array_four_present

Array of four items, with sample item - in visual search trials

search_array_two_absent

Array of two items, without sample item - in visual search trials

search_array_two_present

Array of two items, with sample item - in visual search trials

Contrasts for VisualSearch

Contrast

Description

delay_vis

delay period on visual search

delay_vis-delay_wm

delay period on visual search vs on working memory

delay_wm

delay period on working memory

memory_array_four

array of four items with or without the item to search for

memory_array_two

array of two items with or without the item to search for

probe_item

probing an item absent or present

probe_item_absent

probing an absent item from array of two or four

probe_item_absent-probe_item_present

probing an absent vs present item

probe_item_four

probing an absent or present item from array of four

probe_item_four-probe_item_two

probing an item from an array of four vs two

probe_item_four_absent

probing an absent item from array of four

probe_item_four_present

probing a present item from array of four

probe_item_present

probing a present item from array of two or four

probe_item_two

probing an absent or present item from array of two

probe_item_two_absent

probing an absent item from array of two

probe_item_two_present

probing a present item from array of two

response_hit

subject’s correct response

response_miss

subject’s incorrect response

sample_item

item to search for in an array of two or four items

search_array

array of two or four items

search_array_absent

array of two or four items without sample item

search_array_absent-search_array_present

array of items without vs with sample item

search_array_four

array of four items with or without the sample item

search_array_four-search_array_two

array of four vs two items

search_array_four_absent

array of four items without sample item

search_array_four_present

array of four items with sample item

search_array_present

array of two or four items with sample item

search_array_two

array of two items with or without the sample item

search_array_two_absent

array of two items without the sample item

search_array_two_present

array of two items with the sample item

MonkeyKingdom#

Implementation

  • Software: Expyriment 0.9.0 (Python 2.7)

  • Audio device: MRConfon MKII

The movie Monkey Kingdom task (in French, Au royaume des singes) was adapted from a study done in Wim Vanduffel’s Laboratory for Neuro- and Psychophysiology at KU Leuven, dedicated to investigating the correspondence between monkey and human brains using naturalistic stimuli. The task consists in watching (viewing and listening to) the whole Disney movie “Monkey Kingdom”. The original, 81-minute movie was cut into 15-minute segments corresponding to the segments used in the original study, resulting in a total of 5 segments. The acquisition was conducted in one session. Note: there was some lag between the onset of each run and the start of the stimulus (movie), which may vary between runs and subjects. This lag should be taken into account when analyzing the data. Find more details in the section Lags in MonkeyKingdom movie.

Color#

color_perception working_memory

Implementation

  • Software: Psychopy 2021.1.3 (Python 3.8.5)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • Repository

This protocol was adapted from (McKeefry et al., 1997), which aimed at exploring the position and variability of the color center in the human brain. The protocol used a mini-block design, in which 12 stimuli of the same type (either chromatic or achromatic) were presented consecutively. These stimuli were Mondrian patterns, abstract images with no recognizable objects, each composed of 20 circular blobs of different isoluminant colors. Each run consisted of two kinds of blocks: chromatic and achromatic. During chromatic blocks, colored Mondrian patterns were presented, while during achromatic blocks, grayscale versions of those patterns were presented. Both conditions were equally represented in each run and the same randomized sequence of these conditions, alternating with a baseline fixation cross, was presented to each subject. To ensure that the subjects remained alert throughout the experiment, they were asked to press a button when an image repeated (1-back task). The data was acquired in four runs during one scanning session. Each run comprised 36 blocks. Each block consisted of 12 images, was 7.2 seconds long (500 ms/image + 100 ms delay after each image) and was followed by an inter-block fixation cross that stayed on screen for 5 seconds. The images presented spanned 16 x 16 degrees of visual angle.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Color

Condition

Description

achromatic

Achromatic Mondrian patterns

chromatic

Chromatic Mondrian patterns

response

Subject’s response to 1-back task i.e. when the same color pattern was presented twice consecutively

Contrasts for Color

Contrast

Description

achromatic

attending to achromatic mondrian patterns

chromatic

attending to chromatic mondrian patterns

chromatic-achromatic

chromatic vs achromatic mondrian patterns

response

response to repeated mondrian patterns

Motion#

random_motion lower-left_vision visual_awareness upper-left_vision coherent_motion

Implementation

  • Software: Psychopy 2021.1.3. (Python 3.8.5)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • Repository

This protocol was adapted from (Helfrich et al., 2013), which aimed at delineating areas of the visual cortex that responded to coherent visual motion under conditions of controlled attention and fixation. In this protocol, the stimulus was composed of a rectangular random dot pattern with white dots on a dark background. Each run consisted of trials with three different conditions: stationary, coherent and incoherent motion. In the coherent motion condition, the motion direction was the same for all dots, while in the incoherent motion condition, the dots moved independently in all possible directions. For both motion conditions, the motion direction changed every 2 seconds in steps of 60 degrees. The coherent motion condition was therefore further divided into two types: one in which the motion direction changed clockwise and the other in which it changed anti-clockwise. During the stationary condition, which served as the baseline, the random dot pattern was presented with a limited dot lifetime of 1000 ms, as in the motion conditions. In addition to the motion conditions, the field of presentation of the stimuli was also varied during the experiment: some of the stimuli in a run were presented only in the right visual field, others only in the left, and the rest on the full screen. During all runs the subjects were asked to maintain fixation on the central fixation point. This fixation point changed color at a rate of 2 Hz, with the color selected randomly out of six (red, yellow, blue, green, magenta, white). To ensure that the subjects remained alert throughout the experiment, they were asked to press a button when the fixation point turned blue.

The conditions were counterbalanced and were presented in the same randomized sequence to each subject. The randomized sequence of the changing colors of the fixation point was also the same for each subject. The data was acquired in four runs during one scanning session. Each run comprised 32 trials. Each trial was 12 seconds long, with changes in motion direction (in the motion conditions only) every 2 seconds in steps of 60 degrees. Each trial was followed by an inter-trial fixation cross that stayed on the screen for 2 seconds. The fixation point remained on the screen throughout each trial and changed color randomly at a rate of 2 Hz (i.e. every 500 ms). The stimuli extended 40 degrees in the horizontal and 20 degrees in the vertical direction. The central visual area of 3 x 3 degrees was not stimulated. Each dot (including the fixation dot) had a diameter of 8.6 arcmin and moved at 6 degrees/sec. All dots had a limited lifetime of 1000 ms and the dot density was 6 dots/degree^2 throughout all trials.
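
The stimulus parameters and the direction-stepping rule above can be summarized in a short sketch. The parameter names, the sign convention for clockwise motion and the function layout are assumptions for illustration, not the original stimulation code.

```python
# Stimulus parameters as listed above, gathered in one place for reference.
MOTION_PARAMS = {
    "field_deg": (40, 20),               # horizontal x vertical extent
    "unstimulated_center_deg": (3, 3),   # central area left blank
    "dot_diameter_arcmin": 8.6,
    "dot_speed_deg_per_s": 6,
    "dot_lifetime_ms": 1000,
    "dot_density_per_deg2": 6,
    "fixation_colour_rate_hz": 2,
}

def direction_schedule(start_deg, clockwise, trial_dur=12,
                       step_every=2, step_deg=60):
    """Motion directions over one 12-second coherent trial: the direction
    changes every 2 s in 60-degree steps, clockwise or anti-clockwise
    (the sign convention here is an assumption)."""
    sign = -1 if clockwise else 1
    return [(start_deg + sign * step_deg * i) % 360
            for i in range(trial_dur // step_every)]
```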

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Motion

Condition

Description

anti

Trials with direction of coherent motion changing in the anti-clockwise direction

clock

Trials with direction of coherent motion changing in the clockwise direction

coherent

Motion condition when dots were moving coherently in the same direction

incoherent

Motion condition when dots were moving incoherently in random directions

left

Trials where dot pattern was presented only in the left visual field

right

Trials where dot pattern was presented only in the right visual field

stationary

Motion condition when dots stayed stationary but each dot was respawned in a different location after 1 sec

Contrasts for Motion

Contrast

Description

anti

anti-clockwise motion

clock

motion in clockwise direction

clock-anti

clockwise vs anti-clockwise motion

coherent

dots moving coherently

coherent-incoherent

dots moving coherently vs incoherently

coherent-stationary

dots moving coherently vs staying stationary

incoherent

dots moving incoherently

incoherent-stationary

dots moving incoherently vs staying stationary

left-right

dot pattern in left vs right visual field

response

fixation point turning blue

stationary

stationary dots appearing in different locations

OptimismBias#

episodic_future_thinking self-reference_effect future_time past_time temporal_categorization

Implementation

  • Software: Psychopy 2021.1.3. (Python 3.8.5)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • Repository

This protocol was adapted from (Sharot et al., 2007), which aimed at examining the neurobiological basis of optimism. The subjects were presented with a series of events as text, each describing a life episode, along with the word “past” or “future” to indicate whether the subjects had to think of the given event as having occurred in the past or as possibly occurring in the future. They were instructed to press a button once the memory or projection of that event was beginning to form in their mind. Following that, they had to rate the memory or projection for how emotionally arousing the event was (very, a little or not at all) and for its valence (negative or positive). Each event was displayed for 14 seconds on the screen and subjects had 2 seconds for each rating (emotional arousal and valence). In the original study, 80 unique events were presented over 4 runs (20 events in each run). For IBC, we added a fifth run in which the events were picked randomly out of the given 80 and the past and future contingencies were reversed. Each run was 10 minutes and 2 seconds long. Each trial was labeled with one of the conditions given in this table based on the ratings received for emotional arousal and valence. Trials were labeled negative when they received a high (“very”) or medium (“a little”) arousal rating and a negative valence. Similarly, they were labeled positive when they received a high (“very”) or medium (“a little”) arousal rating and a positive valence. For all other combinations of responses, trials were labeled neutral, and in the absence of either or both responses they were labeled inconclusive. The past or future part of the label depended on whether the presented event was a past or a future one.
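
The post-hoc labeling rule described above maps directly onto the condition names in the table below; here is a minimal sketch of that rule, with hypothetical argument names.

```python
def label_trial(tense, arousal, valence):
    """Label a trial from its ratings, following the rule described above.
    `tense` is 'past' or 'future'; `arousal` is 'very', 'a little',
    'not at all' or None; `valence` is 'negative', 'positive' or None
    (None marks a missing response)."""
    if arousal is None or valence is None:
        return "inconclusive"
    if arousal in ("very", "a little") and valence == "negative":
        return f"{tense}_negative"
    if arousal in ("very", "a little") and valence == "positive":
        return f"{tense}_positive"
    return f"{tense}_neutral"
```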

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for OptimismBias

Condition

Description

future_negative

Future, negative valence and very or a little arousing event

future_neutral

Future, negative or positive valence and not at all arousing event

future_positive

Future, positive valence and very or a little arousing event

inconclusive

Absence of either or both responses

past_negative

Past, negative valence and very or a little arousing event

past_neutral

Past, negative or positive valence and not at all arousing event

past_positive

Past, positive valence and very or a little arousing event

Contrasts for OptimismBias

Contrast

Description

all_events

all events

future_positive_vs_negative

future positive vs negative

future_vs_past

future vs past events

interaction

interaction of (future vs past) and (positive vs negative)

optimism_bias

future negative vs other events

past_positive_vs_negative

past positive vs negative

positive_vs_negative

positive vs negative events

MovieAomic#

Implementation

  • Software: Expyriment 0.9.0 (Python 2.7)

  • Audio device: MRConfon MKII

This was a passive movie-watching task. The movie clip presented was about 11 minutes long and consisted of a continuous compilation of 22 natural scenes taken from the movie Koyaanisqatsi (Reggio G. Koyaanisqatsi, 1982), with music composed by Philip Glass. As mentioned in (Snoek et al., 2021): “the scenes were selected because they broadly sample a set of visual parameters (textures and objects with different sizes and different rates of movement). Importantly, the focus on variation of visual parameters means, in this case, that the movie lacks a narrative and thus may be inappropriate to investigate semantic or other high-level processes”. The resolution was adjusted to subtend 16 degrees of visual angle (as in the original study) for the IBC setup.

HarririAomic#

visual_orientation shape_recognition emotional_expression emotional_face_recognition face_perception

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This protocol is a part of the AOMIC (Amsterdam Open MRI Collection) battery and is published in (Snoek et al., 2021). HarririAomic explores the processes related to (facial) emotion processing. In each trial, the subjects were shown three images positioned in the form of a triangle: one on the top and two on the bottom. Their task was to indicate which of the two bottom images matched the top one and respond accordingly. During a shape-condition trial, they had to match the shape of the images, i.e. whether the oval shape was vertically or horizontally oriented, while during an emotion-condition trial, they had to match the emotion/facial expression (either fear or anger) in the images. The stimulus disappeared after 4.8 seconds or as soon as the subject responded, and a new trial always started 5 seconds after the onset of the previous one. This task was acquired over 2 runs, and the trials were presented in a block design with alternating shape and emotion blocks, each consisting of six 5-second stimuli. There were four blocks for each condition, making each run 270 seconds long.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for HarririAomic

Condition

Description

emotion

When the presented trial was an emotion trial

index_response

When subject responded with index finger, meaning the image on left matched with image on top

middle_response

When subject responded with middle finger, meaning the image on right matched with image on top

shape

Viewing a shape

Contrasts for HarririAomic

Contrast

Description

emotion

match the facial expression

emotion-shape

match facial expression vs the shape of image

index_response

matching left image to top cue

middle_response

matching right image to top cue

shape

match the shape of images

FacesAomic#

emotional_expression negative_emotion feature_integration face_perception visual_face_recognition

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This protocol is a part of the AOMIC (Amsterdam Open MRI Collection) battery and is published in (Snoek et al., 2021). FacesAomic explores the processes related to (emotional) facial perception. The stimuli are videos of people’s facial expressions, with male or female models of northern European or Mediterranean origin expressing a certain emotion (pride, contempt, anger, joy or no expression). For IBC, this protocol was implemented slightly differently from what is described in Snoek et al., 2021: the run duration was extended from about 4 minutes to 6 minutes, and an additional 6-minute run was acquired by adding more of the said video stimuli from the Amsterdam Dynamic Facial Expression Set (ADFES) (Schalk et al., 2011). More specifically, in addition to the female models, we also used the videos with male models, and we added a post-acquisition task to control for attention after each run, instead of just passive viewing as in the original study. The subjects were instructed to try to remember the faces as well as the expressions they had seen during the acquisition run and then to indicate, post-acquisition, whether a given video had been presented before. Each video was 4 seconds long, with a 5-second inter-trial interval and 8 videos in each run. Each video was associated with three factors: emotion, sex and ethnicity, which were counterbalanced within and across runs.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for FacesAomic

Condition

Description

anger

Video of a face expressing anger

contempt

Video of a face expressing contempt

european

Video with a European ethnicity model expressing some emotion

female

Video with female face expressing some emotion

joy

Video of a face expressing joy

male

Video with male face expressing some emotion

mediterranean

Video with a Mediterranean ethnicity model expressing some emotion

neutral

Baseline, when no emotion was expressed

pride

Video of a face expressing pride

Contrasts for FacesAomic

Contrast

Description

all-neutral

attending to expressive vs neutral faces

anger

attending to face expressing anger

anger-neutral

attending to angry vs neutral face

contempt

attending to face expressing contempt

contempt-neutral

attending to contempt vs neutral face

european-mediterranean

attending to european vs mediterranean ethnicity face

female-male

attending to female vs male face

joy

attending to face expressing joy

joy-neutral

attending to joyful vs neutral face

male-female

attending to male vs female face

mediterranean-european

attending to mediterranean vs european ethnicity face

neutral

attending to neutral face

pride

attending to face expressing pride

pride-neutral

attending to pride vs neutral face

StroopAomic#

conflict_detection visual_word_recognition face_perception gender_perception

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This protocol is part of the AOMIC (Amsterdam Open MRI Collection) battery and is published in (Snoek et al., 2021). StroopAomic explores the processes related to cognitive conflict and control. The subjects were presented with greyscale images of male and female faces, with a word associated with either sex overlaid in red on top of each image. The words used were the French words for “man”, “sir”, “woman” and “lady”, in either lower or upper case. The task was to indicate whether the image showed a male or a female model while ignoring the overlaid word. Everything was implemented as in the original study, except for the face images, which were not available and were therefore taken from another stimulus set used in Morrison2017. In addition, two runs were acquired instead of the single run of the original study. Each face-word composite was presented for 0.5 seconds in an event-related design and was either congruent (same sex for face and word) or incongruent (different sex for face and word). A total of 96 such stimuli were presented per run, making each run 270 seconds long. The congruent and incongruent conditions were counterbalanced within each run. A response condition was inserted for each trial post-run, based on the subject’s responses, to make the contrast definition easier.
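
As an illustration of the congruency manipulation, the sketch below builds a 96-trial list with half congruent and half incongruent face-word composites of 0.5 s each; the event labels mirror the condition names below, but the generator itself (`build_trials`) is a hypothetical stand-in for the actual Presentation code.

```python
# Hypothetical StroopAomic-style trial list: 96 face-word composites,
# half congruent, half incongruent, each shown for 0.5 s.
import random

N_TRIALS = 96
STIM_DUR = 0.5

def build_trials(seed=0):
    rng = random.Random(seed)
    trials = []
    for congruent in [True] * (N_TRIALS // 2) + [False] * (N_TRIALS // 2):
        face = rng.choice(["male", "female"])
        word = face if congruent else ("female" if face == "male" else "male")
        trials.append({
            "face": f"face_{face}",
            "word": f"word_{word}",
            "condition": "congruent" if congruent else "incongruent",
            "duration": STIM_DUR,
        })
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    print(build_trials()[:3])
```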

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for StroopAomic

Condition

Description

congruent

The presented word matches the face shown

face_female

Female face shown

face_male

Male face shown

incongruent

The presented word does not match the face shown

word_female

The presented word corresponds to a female

word_male

The presented word corresponds to a male

Contrasts for StroopAomic

Contrast

Description

congruent-incongruent

word and face matched vs did not match

congruent_word_female_face_female

attending to female face while reading ‘female’

congruent_word_male_face_male

attending to male face while reading ‘male’

face_male-face_female

male vs female face

incongruent-congruent

word and face did not match vs matched

incongruent_word_female_face_male

attending to male face while reading ‘female’

incongruent_word_male_face_female

attending to female face while reading ‘male’

index-middle

indicate the face is of male vs of female

index_response

identifying a male face

middle-index

indicate the face is of female vs of male

middle_response

identifying a female face

word_male-word_female

word ‘male’ vs ‘female’

WMAomic#

visual_attention visual_orientation visual_working_memory

Implemented using proprietary software

  • Software: Presentation (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This protocol is part of the AOMIC (Amsterdam Open MRI Collection) battery and is published in (Snoek et al., 2021). WMAomic explores the processes related to visual working memory. The trials were presented in a fixed event-related design, and each consisted of six phases: an alert phase (1 second), an encoding phase (1 second), a retention phase (2 seconds), a test phase (1 second), a response phase (1 second) and an inter-stimulus interval (0-4 seconds). In the encoding phase, the subjects were shown a set of six white bars arranged in a circle around a fixation cross. Each of these bars had a random orientation (either 0, 45, 90, or 135 degrees). In the test phase, one of these six bars appeared again, either with the same orientation or a different one. The subject’s task was to indicate, during the response phase, whether or not the bar had the same orientation. Each trial was associated with one of three conditions: active_change, active_no_change or passive. In total, there were 8 passive trials and 16 active_change and active_no_change trials, in addition to 20 null trials of 6 seconds (equivalent to an additional inter-stimulus interval of 6 seconds). Each run was 324 seconds long and there were two runs of this task.
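
The following sketch lays out one trial from the phase durations quoted above and shows how active_change, active_no_change and passive trials could differ. It is illustrative Python, not the original Presentation script, and the discrete 0-4 s ISI values as well as the function name `make_trial` are assumptions.

```python
# Hypothetical single-trial timeline for a WMAomic-style trial.
import random

PHASES = [("alert", 1.0), ("encoding", 1.0), ("retention", 2.0),
          ("test", 1.0), ("response", 1.0)]
ORIENTATIONS = [0, 45, 90, 135]          # degrees, from the description above

def make_trial(kind, seed=0):
    """kind: 'active_change', 'active_no_change' or 'passive'."""
    rng = random.Random(seed)
    if kind == "passive":
        array, probe = None, None        # no bars shown on passive trials
    else:
        array = [rng.choice(ORIENTATIONS) for _ in range(6)]   # six bars at encoding
        idx = rng.randrange(6)
        if kind == "active_change":
            probe = rng.choice([o for o in ORIENTATIONS if o != array[idx]])
        else:                            # active_no_change
            probe = array[idx]
    timeline, onset = [], 0.0
    for name, dur in PHASES + [("isi", rng.choice([0, 1, 2, 3, 4]))]:
        timeline.append((name, onset, dur))
        onset += dur
    return {"condition": kind, "array": array, "probe": probe, "timeline": timeline}

if __name__ == "__main__":
    print(make_trial("active_change"))
```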

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for WMAomic

Condition

Description

active_change

The probe had a different orientation than it had on the array

active_no_change

The probe had the same orientation as it had in the array

passive

Passive trials, the bars were not displayed

Contrasts for WMAomic

Contrast

Description

active-passive

assess probe orientation vs null event

active_change

probe did not match previous orientation

active_change-active_no_change

probe did not match vs matched orientation

active_no_change

probe matched previous orientation

passive

null event

AbstractionLocalizer#

visual_object_recognition visual_word_recognition face_perception visual_pseudoword_recognition visual_attention

Implementation

  • Software: Psychtoolbox-3 (MATLAB 2021b)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This protocol was adapted from an ongoing study by our colleagues at Neurospin, CEA Saclay, France. The goal of the study is to understand the neural representations of real-world things from different semantic categories at various levels of abstraction/rendering. To that end, a dedicated localizer run was needed to identify the regions specific to the different categories before presenting them at different levels of abstraction. The localizer differed from the main task runs in that the images came from eight different categories: faces, human bodies, words, nonsense words, numbers, places, objects and checkerboards. Each category was presented in a block of 6 seconds, with each image displayed for 100 ms followed by a 200 ms inter-stimulus interval.
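
A short sketch of the block timing arithmetic described above: a 6 s block with 100 ms images and 200 ms gaps accommodates 20 images. The helper `block_onsets` and the millisecond bookkeeping are illustrative assumptions.

```python
# Block timing arithmetic for the AbstractionLocalizer, in milliseconds.
BLOCK_DUR_MS = 6000   # one category block
IMG_DUR_MS = 100      # image presentation
ISI_MS = 200          # gap between images

def block_onsets(block_start_ms=0):
    n_images = BLOCK_DUR_MS // (IMG_DUR_MS + ISI_MS)   # 20 images per block
    return [block_start_ms + i * (IMG_DUR_MS + ISI_MS) for i in range(n_images)]

if __name__ == "__main__":
    onsets = block_onsets()
    print(len(onsets), "images per block, first onsets (ms):", onsets[:3])
```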

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for AbstractionLocalizer

Condition

Description

localizer_checkerboards

Checkerboard images

localizer_faces

Face images

localizer_humanbody

Body images

localizer_nonsensewords

Nonsense word images

localizer_numbers

Number images

localizer_objects

Object images

localizer_places

Place images

localizer_words

Word images

response

Subject’s button press when they saw a star

Contrasts for AbstractionLocalizer

Contrast

Description

localizer_checkerboards

localizer for checkerboards

localizer_checkerboards-other

checkerboards vs other categories

localizer_faces

localizer for human faces

localizer_faces-other

human faces vs other categories

localizer_humanbody

localizer for human bodies

localizer_humanbody-other

human bodies vs other categories

localizer_nonsensewords

localizer for nonsense words

localizer_nonsensewords-other

nonsense words vs other categories

localizer_numbers

localizer for numbers

localizer_numbers-other

numbers vs other categories

localizer_objects

localizer for objects

localizer_objects-other

objects vs other categories

localizer_places

localizer for places

localizer_places-other

places vs other categories

localizer_words

localizer for words

localizer_words-other

words vs other categories

response

response to star image as control

Abstraction#

visual_representation visual_object_recognition edge_detection Naturalistic_Scenes face_perception

Implementation

  • Software: Psychtoolbox-3 (MATLAB 2021b)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This protocol was adapted from an ongoing study by our colleagues at Neurospin, CEA Saclay, France. The goal of the study is to understand the neural representations of real-world things from different semantic categories at various levels of abstraction/rendering. To that end, the subjects were presented with images belonging to six semantic categories (human bodies, animals, faces, flora, objects and places), each rendered at three levels of detail: geometry, edges and photos, in ascending order of detail. To control for attention, five images of a star were included and the subjects were required to press a button whenever they saw one. There were four exemplars of each category, making a total of 77 images (6 categories x 4 exemplars x 3 renderings = 72, plus 5 star probes). Each image was presented twice, for 300 ms, with variable inter-stimulus intervals of 4, 6 or 8 seconds. There were 8 such runs and a localizer. The localizer differed from the main task runs in that the images came from eight different categories: faces, human bodies, words, nonsense words, numbers, places, objects and checkerboards. Each category in the localizer was presented in a block of 6 seconds, with each image displayed for 100 ms followed by a 200 ms inter-stimulus interval. Each category block was presented 5 times (8 categories x 5 = 40 blocks) and the inter-block intervals were jittered between 4, 6 and 8 seconds (mean = 6 seconds).
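
The stimulus count can be reconstructed directly from the condition table below; the sketch enumerates the 72 category-exemplar-render conditions and the 5 star probes. The exemplar names are taken from the table; everything else (variable names, the enumeration itself) is illustrative.

```python
# Enumerate the Abstraction stimulus set and check the arithmetic:
# 6 categories x 4 exemplars x 3 renders = 72, plus 5 star probes = 77.
import itertools

CATEGORIES = {
    "animals": ["bird", "butterfly", "fish", "giraffe"],
    "faces": ["cat", "eyes", "face", "face2"],
    "flora": ["carrot", "cherry", "flower", "tree"],
    "humanbody": ["hand", "legs", "standing", "walking"],
    "objects": ["camera", "key", "truck", "watch"],
    "places": ["house", "mountain", "road", "windmill"],
}
RENDERS = ["geometry", "edge", "photo"]
N_STAR_PROBES = 5

conditions = [f"{cat}_{exemplar}_{render}"
              for cat, exemplars in CATEGORIES.items()
              for exemplar, render in itertools.product(exemplars, RENDERS)]

assert len(conditions) == 72
print(len(conditions) + N_STAR_PROBES, "stimuli in total")   # 77
```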

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Abstraction

Condition

Description

animals_bird_edge

Images of a bird presented with the edge render

animals_bird_geometry

Images of a bird presented with the geometry render

animals_bird_photo

Images of a bird presented with the photo render

animals_butterfly_edge

Images of a butterfly presented with the edge render

animals_butterfly_geometry

Images of a butterfly presented with the geometry render

animals_butterfly_photo

Images of a butterfly presented with the photo render

animals_fish_edge

Images of a fish presented with the edge render

animals_fish_geometry

Images of a fish presented with the geometry render

animals_fish_photo

Images of a fish presented with the photo render

animals_giraffe_edge

Images of a giraffe presented with the edge render

animals_giraffe_geometry

Images of a giraffe presented with the geometry render

animals_giraffe_photo

Images of a giraffe presented with the photo render

faces_cat_edge

Image of a cat face presented with the edge render

faces_cat_geometry

Image of a cat face presented with the geometry render

faces_cat_photo

Image of a cat face presented with the photo render

faces_eyes_edge

Image of eyes presented with the edge render

faces_eyes_geometry

Image of eyes presented with the geometry render

faces_eyes_photo

Image of eyes presented with the photo render

faces_face2_edge

Image of a different face presented with the edge render

faces_face2_geometry

Image of a different face presented with the geometry render

faces_face2_photo

Image of a different face presented with the photo render

faces_face_edge

Image of a face presented with the edge render

faces_face_geometry

Image of a face presented with the geometry render

faces_face_photo

Image of a face presented with the photo render

flora_carrot_edge

Image of a carrot presented with the edge render

flora_carrot_geometry

Image of a carrot presented with the geometry render

flora_carrot_photo

Image of a carrot presented with the photo render

flora_cherry_edge

Image of a cherry presented with the edge render

flora_cherry_geometry

Image of a cherry presented with the geometry render

flora_cherry_photo

Image of a cherry presented with the photo render

flora_flower_edge

Image of a flower presented with the edge render

flora_flower_geometry

Image of a flower presented with the geometry render

flora_flower_photo

Image of a flower presented with the photo render

flora_tree_edge

Image of a tree presented with the edge render

flora_tree_geometry

Image of a tree presented with the geometry render

flora_tree_photo

Image of a tree presented with the photo render

humanbody_hand_edge

Edge rendering of hands

humanbody_hand_geometry

Geometry rendering of hands

humanbody_hand_photo

Photo rendering of hands

humanbody_legs_edge

Edge rendering of legs

humanbody_legs_geometry

Geometry rendering of legs

humanbody_legs_photo

Photo rendering of legs

humanbody_standing_edge

Edge rendering of standing human

humanbody_standing_geometry

Geometry rendering of standing human

humanbody_standing_photo

Photo rendering of standing human

humanbody_walking_edge

Edge rendering of walking human

humanbody_walking_geometry

Geometry rendering of walking human

humanbody_walking_photo

Photo rendering of walking human

objects_camera_edge

Image of a camera presented with the edge render

objects_camera_geometry

Image of a camera presented with the geometry render

objects_camera_photo

Image of a camera presented with the photo render

objects_key_edge

Image of a key presented with the edge render

objects_key_geometry

Image of a key presented with the geometry render

objects_key_photo

Image of a key presented with the photo render

objects_truck_edge

Image of a truck presented with the edge render

objects_truck_geometry

Image of a truck presented with the geometry render

objects_truck_photo

Image of a truck presented with the photo render

objects_watch_edge

Image of a watch presented with the edge render

objects_watch_geometry

Image of a watch presented with the geometry render

objects_watch_photo

Image of a watch presented with the photo render

places_house_edge

Image of a house presented with the edge render

places_house_geometry

Image of a house presented with the geometry render

places_house_photo

Image of a house presented with the photo render

places_mountain_edge

Image of a mountain presented with the edge render

places_mountain_geometry

Image of a mountain presented with the geometry render

places_mountain_photo

Image of a mountain presented with the photo render

places_road_edge

Image of a road presented with the edge render

places_road_geometry

Image of a road presented with the geometry render

places_road_photo

Image of a road presented with the photo render

places_windmill_edge

Image of a windmill presented with the edge render

places_windmill_geometry

Image of a windmill presented with the geometry render

places_windmill_photo

Image of a windmill presented with the photo render

response

Subject’s button press when they saw a star

Contrasts for Abstraction

Contrast

Description

animals-other

renders of animals vs of rest of categories

animals_edge

edge renders of animals

animals_edge-animals_other

edge vs geometry and photo render of animals

animals_geometry

geometry renders of animals

animals_geometry-animals_other

geometry vs edge and photo render of animals

animals_photo

photos of animals

animals_photo-animals_other

photo vs geometry and edge render of animals

edge-other

edge vs geometry and photo render

faces-other

renders of faces vs of rest of categories

faces_edge

edge renders of human faces

faces_edge-faces_other

edge vs geometry and photo render of faces

faces_geometry

geometry renders of human faces

faces_geometry-faces_other

geometry vs edge and photo render of faces

faces_photo

photos of human faces

faces_photo-faces_other

photo vs geometry and edge render of faces

flora-other

renders of flora vs of rest of categories

flora_edge

edge renders of flora

flora_edge-flora_other

edge vs geometry and photo render of flora

flora_geometry

geometry renders of flora

flora_geometry-flora_other

geometry vs edge and photo render of flora

flora_photo

photos of flora

flora_photo-flora_other

photo vs geometry and edge render of flora

geometry-other

geometry vs edge and photo render

humanbody-other

renders of human bodies vs of rest of categories

humanbody_edge

edge renders of human bodies

humanbody_edge-humanbody_other

edge vs geometry and photo render of human bodies

humanbody_geometry

geometry renders of human bodies

humanbody_geometry-humanbody_other

geometry vs edge and photo render of human bodies

humanbody_photo

photos of human bodies

humanbody_photo-humanbody_other

photo vs geometry and edge render of human bodies

objects-other

renders of objects vs of rest of categories

objects_edge

edge renders of objects

objects_edge-objects_other

edge vs geometry and photo render of objects

objects_geometry

geometry renders of objects

objects_geometry-objects_other

geometry vs edge and photo render of objects

objects_photo

photos of objects

objects_photo-objects_other

photo vs geometry and edge render of objects

photo-other

photo vs geometry and edge render

places-other

renders of places vs of rest of categories

places_edge

edge renders of places

places_edge-places_other

edge vs geometry and photo render of places

places_geometry

geometry renders of places

places_geometry-places_other

geometry vs edge and photo render of places

places_photo

photos of places

places_photo-places_other

photo vs geometry and edge render of places

response

button press to star

MDTB#

procedural_memory action_perception combinatorial_semantics object_maintenance visual_sentence_comprehension

Implementation

  • Software: Psychopy 2021.1.3. (Python 3.8.5)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

The Multi-Domain Task Battery was adapted from a study conducted by (King et al., 2019), which aimed to investigate the functional organization of the cerebellar cortex with an fMRI study comprising a collection of more than 20 tasks. The authors made the paradigm code and parameters for 9 of those tasks openly available at the time here, which allowed us to integrate them into the IBC project. The implementation differed from usual: we presented all 9 tasks within a single run, instead of dedicating a separate run to each task.

The protocol consisted of a short training session outside the scanner and 4 runs inside the scanner. In every run, each task was performed twice in blocks of 35 seconds. At the beginning of each block, the instructions were displayed for 5 seconds so that subjects remembered the instructions and the expected actions. Immediately after, the task was performed continuously for 30 seconds; each run therefore lasted around 10 minutes and 30 seconds. If a task required a response from the subjects, they received feedback on their performance in the form of a green check mark or a red cross, for correct or incorrect answers respectively. At the end of each run, the success rates for each task were displayed, followed by a video of a knot being tied, as part of an attention control for the action observation task (described below).
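
A quick back-of-the-envelope check of the run duration, using only the values stated above (this is arithmetic, not the task code):

```python
# MDTB run duration: 9 tasks, each presented twice per run,
# in 35 s blocks (5 s instruction + 30 s of continuous task).
N_TASKS = 9
BLOCKS_PER_TASK = 2
INSTRUCTION_DUR = 5   # seconds
TASK_DUR = 30         # seconds

run_duration = N_TASKS * BLOCKS_PER_TASK * (INSTRUCTION_DUR + TASK_DUR)
print(run_duration, "seconds ~", run_duration / 60, "minutes")   # 630 s ~ 10.5 min
```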

Following are detailed descriptions for each task:

1) Visual search: Several ‘L’-shaped characters rotated at different angles were shown on each trial, and subjects were asked to search for the standard (correct) orientation and press with their index finger if it was present, or with their middle finger if it was not. This task was performed twice in each run, each time with 12 trials, half of them being True (the correct ‘L’ shape was present). The order of True and False trials was randomized for each block in each run.

2) Action observation: Videos of knots being tied were displayed along with their name tags, and subjects were asked to remember each knot and its name. Two different knots were presented per run, and at the end of each run another video of a knot was shown, this time without the name tag. We then asked subjects whether this particular knot had been displayed during the run and, if so, to report its name. Only in run 3 was the knot displayed at the end one that had actually been presented during the run.

3) Flexion - extension: Alternating cues with the words ‘Extension’ and ‘Flexion’ were presented, instructing the participants to extend or flex their toes accordingly.

4) Finger sequence: A sequence of 6 digits from 1 to 4 was displayed and subjects were asked to press the keys corresponding to the numbers in the shown sequence, with the mapping going from index finger (1) to pinky (4). Each block consisted of 8 trials and two blocks were presented during each run. The trials could be either simple or complex: simple trials involved one or two consecutive fingers, while complex trials involved three or four fingers, not necessarily consecutive (a hypothetical sequence generator is sketched after this list). As the subject pressed the buttons, the digits turned green if the correct key was pressed or red if not. At the end of each trial, if the entire sequence had been followed accurately, a green check appeared as feedback; if one or more presses were incorrect, a red cross appeared. Each trial lasted 3.5 seconds; if the subject did not complete the sequence before the end of the trial, it was counted as incorrect and the red cross appeared.

5) Theory of mind: The subject was presented with a short paragraph narrating a story, followed by a related statement. Subjects had to decide whether the statement was true based on the initial paragraph, pressing with their index finger for true or their middle finger for false. Four trials in total were performed per run, half of them being true. If the subject answered correctly, a green check appeared; otherwise, a red cross appeared. Each trial lasted 14 seconds; if the subject did not reply during that period, the trial was counted as a mistake and the negative feedback appeared.

6) 2-back: Several images were presented, one after another. For each presented image, participants had to press with their index finger if it was the same as the image presented 2 images before, or with their middle finger if it was not. The trials were divided into easy and hard: easy trials were those where the current image had not been displayed two images before, and hard trials were those where it had. There were 12 trials per block, 7 of the easy type and 5 of the hard type. As with the rest of the tasks, this was performed twice, leading to 24 trials in total per run. Each image was displayed for 2 seconds, followed by the feedback, once again a green check or a red cross.

7) Semantic prediction: Words from a sentence were shown, one at a time. Subjects had to decide whether the last word fit into the sentence or not, by pressing with their index or middle finger, respectively. There were 4 trials per block, leading to 8 trials per run. Each block consisted of 2 ‘True’ and 2 ‘False’ trials, and the order of appearance was randomized. Each trial could be either easy or hard, depending on the ambiguity of the sentence, and there were 2 easy and 2 hard trials per block. The subjects received feedback after their response, a green check or a red cross, consistent with the tasks described above.

8) Romance movie watching: A 30-second clip from the 2009 Disney Pixar movie ‘Up’ was presented without any sound. Subjects were instructed to watch passively. Two such clips were presented in each run, and no clip was repeated within or across runs.

9) Rest: A short resting-state period; a fixation cross was displayed and subjects were asked to fixate on it and not move.
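
As referenced in the finger sequence description above, here is a hypothetical sketch of how simple and complex 6-digit sequences could be generated from the stated constraints; the generator actually used in the MDTB code may well differ.

```python
# Illustrative generator for MDTB-style finger sequences (digits 1-4).
import random

def make_sequence(kind, length=6, seed=None):
    """kind: 'simple' (one or two consecutive fingers) or 'complex' (three or four)."""
    rng = random.Random(seed)
    if kind == "simple":
        fingers = sorted(rng.sample(range(1, 5), rng.choice([1, 2])))
        # re-draw until the two fingers are adjacent ("consecutive")
        while len(fingers) == 2 and abs(fingers[0] - fingers[1]) != 1:
            fingers = sorted(rng.sample(range(1, 5), 2))
    else:  # "complex": three or four fingers, not necessarily adjacent
        fingers = rng.sample(range(1, 5), rng.choice([3, 4]))
    seq = [rng.choice(fingers) for _ in range(length)]
    for i, finger in enumerate(fingers):   # ensure every chosen finger appears
        seq[i] = finger
    rng.shuffle(seq)
    return seq

print("simple :", make_sequence("simple", seed=1))
print("complex:", make_sequence("complex", seed=1))
```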

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for MDTB

Condition

Description

2back_easy

Easy 2-back trial, it is easy to remember whether the image was shown 2 images ago.

2back_hard

Hard 2-back trial, it is hard to remember whether the image was shown 2 images ago.

action_action

Watching a pair of hands make a specific knot

action_control

The resulting knot is shown from different angles

finger_complex

Sequence of button presses that is hard to complete (with no consecutive or repeated fingers)

finger_simple

Sequence of button presses that is easy to complete (using consecutive or repeated fingers)

flexion_extension

Continuous flexion and extension of toes

search_easy

It is easy to judge whether there is a right-oriented ‘L’ shape present on the array

search_hard

It is hard to judge whether there is a right-oriented ‘L’ shape present on the array

semantic_easy

Easy to decide whether the last word fits in the sentence, natural sequence

semantic_hard

Hard to decide whether the last word fits in the sentence, ambiguous sequence

tom_belief

The statement presented relates to thoughts or beliefs that the characters from the paragraph might have

tom_photo

The statement presented relates to facts described in the paragraph

Contrasts for MDTB

Contrast

Description

2back_easy

easy 2-back

2back_hard

hard 2-back

2back_hard-easy

hard vs easy 2-back

action_action

watching hands make a specific knot

action_action-control

hands making specific knot vs resulting knot

action_control

resulting knot shown from different angles

finger_complex

hard sequence of button presses

finger_complex-simple

hard vs easy button sequence

finger_simple

easy sequence of button presses

flexion_extension

continuous toes flexion-extension

search_easy

easy to look for the right-oriented shape

search_hard

hard to look for the right-oriented shape

search_hard-easy

hard vs easy to look for the correct shape

semantic_easy

easy to decide whether the last word fits in a sentence

semantic_hard

hard to decide whether the last word fits in a sentence

semantic_hard-easy

ambiguous vs natural sequence

tom_belief

statement relates to characters’ beliefs

tom_belief-photo

statement relates to beliefs vs facts

tom_photo

statement relates to facts from paragraph

Emotion#

visual_perception negative_emotion visual_scene_perception emotional_self-evaluation

Implementation

  • Software: Psychtoolbox-3 (MATLAB 2021b)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

This task was adapted from (Favre et al., 2021). The protocol aimed to examine emotional processing and the regions engaged in it. The subjects were presented with a series of pictures divided into two categories: neutral and negative images. The scenes depicted were mainly social contexts, for instance people chatting or eating in the neutral blocks, and people suffering or fighting in the negative blocks. The task consisted of two runs and a short training session before the acquisition. Each run consisted of 12 blocks of 10 images, alternating between neutral and negative blocks. Every picture was displayed for 2 seconds, and the subjects were instructed to press with their index finger if the scene occurred indoors, either inside a building or a car. The inter-block interval lasted 2 seconds, during which a fixation cross was shown. In the middle and at the end of the run, the subjects were presented with two questions, “How do you feel?” and “How nervous do you feel?”, along with a scale to answer, ranging from not well to extremely well for the former and from not nervous to extremely nervous for the latter. The subjects used their index and middle fingers to slide along the scale and had 7 seconds to give their answer.
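
The run structure above can be summarized in a few lines of code. This is a hedged sketch only: the alternation order, the single rating event standing in for the two questions, and the helper name `build_run` are assumptions, not the actual stimulation script.

```python
# Hypothetical schedule for one Emotion run: 12 alternating blocks of
# 10 images (2 s each), 2 s inter-block fixation, and a rating probe
# ("echelle_valence") in the middle and at the end of the run.
IMG_DUR, IMAGES_PER_BLOCK, IBI, RATING_DUR = 2.0, 10, 2.0, 7.0
N_BLOCKS = 12

def build_run():
    events, onset = [], 0.0
    for block in range(N_BLOCKS):
        condition = "neutral_image" if block % 2 == 0 else "negative_image"
        events.append((condition, onset, IMAGES_PER_BLOCK * IMG_DUR))
        onset += IMAGES_PER_BLOCK * IMG_DUR + IBI
        if block in (N_BLOCKS // 2 - 1, N_BLOCKS - 1):   # mid-run and end-of-run ratings
            events.append(("echelle_valence", onset, RATING_DUR))
            onset += RATING_DUR
    return events

if __name__ == "__main__":
    for event in build_run()[:4]:
        print(event)
```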

The images used as stimuli were taken from different databases: the International Affective Picture System (IAPS) (Lang et al., 2008), the Geneva Affective Picture Database (GAPED) (Dan-Glauser and Scherer, 2011), the Socio-Moral Image Database (SMID) (Crone et al., 2018), the Complex Affective Scene Set (COMPASS) (Weierich et al., 2019), the Besançon Affective Picture Set-Adolescents (BAPS-Ado) (Szymanska et al., 2015) and the EmoMadrid database (Carretié et al., 2019). The training session was performed inside the scanner before running the experiment, in order to familiarize the subject with the task and with the slider used to answer. The training consisted of 3 blocks (neutral, negative and neutral images), followed by the two questions. We therefore had three main conditions for the task: neutral, negative and assessment.

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Emotion

Condition

Description

echelle_valence

Subject’s rating of emotional state

negative_image

Block of negative images

neutral_image

Block of neutral images

Contrasts for Emotion

Contrast

Description

echelle_valence

assessment of emotional state

negative-neutral

images with negative vs neutral valence

negative_image

images with negative valence

neutral_image

images with neutral valence

MultiModal#

visual_perception tactile_working_memory face_perception visual_face_recognition auditory_recognition

Implementation

  • Software: Psychopy 2021.1.3. (Python 3.8.5)

  • Response device: Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4)

  • Audio device: MRConfon MKII

  • Hardware: LabJack-U3, custom-made computer-controlled pneumatic system

This protocol was derived from work by colleagues at the Laboratory for Neuro- and Psychophysiology of the KU Leuven Medical School, who aimed to compare evoked responses to the same sensory stimulation across two different cohorts of human and non-human primates. Three categories of stimuli were used: visual, tactile and auditory. Visual stimuli consisted of grey-scale pictures of ten classes: monkey and human faces, monkey and human bodies (without the head), four-legged mammals, birds, man-made objects that looked like either a human or a monkey body (e.g. guitar or kettle), fruits/vegetables and body-like sculptures. We presented 10 pictures per class, giving a total of 100 images, which were presented superimposed onto a pink-noise background that filled the entire display. Tactile stimuli consisted of compressed air puffs delivered on both the left and right side of the subjects’ face at three different locations: above the upper lip, around the cheek area or middle lip, and beneath the lower lip. The air puffs were delivered through 6 plastic pipes, one per target location, with an intensity of 0.5 bar, at a distance of approximately 5 mm from the face, without touching it. The plastic pipes were connected to a custom-made, computer-controlled pneumatic system in the console room. Auditory stimuli consisted of 1-second clips of natural sounds from six classes: human speech, human non-speech (e.g. baby crying, cough), monkey calls, animal sounds (e.g. horse), tool sounds and musical instruments (e.g. scissors, piano), and sounds from nature (e.g. rain, thunder). There were 10 different sounds per class, thus 60 different sound clips in total. MR-compatible headphones were used.

To be consistent with the study from our colleagues, the auditory stimuli needed to be presented during silent periods, i.e. with no scanner noise, to ensure they were clearly audible and distinguishable (Erb et al., 2018). To achieve this, the repetition time (TR) for this protocol was set to 2.6 seconds, comprising a silent period (no data acquired, no scanner noise) of 1.2 seconds for stimulus presentation and an acquisition time (TA) of 1.4 seconds. To ensure uniformity across the experiment, all three types of stimuli were presented during the silent period. Due to the change in TR and TA, some parameters were also updated to maintain a sufficient spatial resolution. This table contains the final set of acquisition parameters used for this protocol.
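
The sparse-sampling arithmetic above (TR = silent gap + TA) can be illustrated with a few lines; the helper `silent_window_onsets` and the rounding are assumptions made for readability, not part of the acquisition software.

```python
# Sparse-sampling timing for MultiModal: each 2.6 s TR starts with a 1.2 s
# silent gap (stimulus presentation) followed by 1.4 s of acquisition.
TR = 2.6
SILENT_GAP = 1.2
TA = 1.4
assert abs(SILENT_GAP + TA - TR) < 1e-9

def silent_window_onsets(n_volumes):
    """Onset of the silent (stimulation) window for each volume, in seconds."""
    return [round(vol * TR, 3) for vol in range(n_volumes)]

# e.g. a 1-second clip fits entirely inside each 1.2 s silent gap
print(silent_window_onsets(5))   # [0.0, 2.6, 5.2, 7.8, 10.4]
```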

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for MultiModal

Condition

Description

audio_animal

Animal sounds different from monkeys used as audio stimulus

audio_monkey

Monkey sounds used as audio stimulus

audio_nature

Nature sounds (e.g. rain) used as audio stimulus

audio_silence

Control condition for both audio and visual stimuli, no sound played and no image displayed

audio_speech

Speech sounds used as audio stimulus

audio_tools

Tools sounds used as audio stimulus

audio_voice

Human sounds (e.g. laugh, cough) used as audio stimulus

image_animals

Image of an animal different from monkeys and birds used as visual stimulus

image_birds

Image of a bird used as visual stimulus

image_fruits

Image of fruits used as visual stimulus

image_human_body

Image of a human body (without the head) used as visual stimulus

image_human_face

Image of a human face used as visual stimulus

image_human_object

Image of an object used by humans (e.g. guitar) used as visual stimulus

image_monkey_body

Image of a monkey body (without the head) used as visual stimulus

image_monkey_face

Image of a monkey face used as visual stimulus

image_monkey_object

Image of an object used by monkeys (e.g. drinker) used as visual stimulus

image_sculpture

Image of a sculpture used as visual stimulus

tactile_bottom

Air puffs delivered beneath the lower lips as tactile stimulus

tactile_middle

Air puffs delivered by the middle lips as tactile stimulus

tactile_novalve

Control condition for tactile stimulus, air is sent to the pair of pipes placed outside the coil, so it doesn’t touch the subject

tactile_top

Air puffs delivered above the upper lips as tactile stimulus

Contrasts for MultiModal

Contrast

Description

animate-inanimate

images of all faces, bodies or animals vs rest of visual stimuli

audio

all audio stimuli

audio-control

all audio stimuli vs silence

audio-tactile

audio vs tactile stimuli

audio-visual

audio vs visual stimuli

audio_animal

different animal sounds (except for monkey)

audio_monkey

monkey sounds

audio_nature

sound made by nature

audio_silence

no visual or audio stimuli

audio_speech

speech sounds

audio_tools

sound of noise made by tools

audio_voice

sounds made by humans (laugh, cough)

body-non_face

images of human or monkey faces or bodies vs rest of visual stimuli

body-other

images of human or monkey bodies vs rest of visual stimuli

face-other

images of human or monkey faces vs rest of visual stimuli

image_animals

images of animals (no monkeys or birds)

image_birds

images of birds

image_fruits

images of fruits

image_human_body

images of head-less human bodies

image_human_face

images of human faces

image_human_object

images of vertical objects

image_monkey_body

images of head-less monkey bodies

image_monkey_face

images of monkey faces

image_monkey_object

images of round objects

image_sculpture

images of sculptures

monkey_speech-other

monkey sounds vs rest of audio stimuli

speech+voice-other

speech or human sounds vs rest of audio stimuli

speech-other

speech sounds vs rest of audio stimuli

tactile

all tactile stimuli

tactile-audio

tactile vs audio stimuli

tactile-control

all tactile stimuli vs no stimuli

tactile-visual

tactile vs visual stimuli

tactile_bottom

air puff on bottom lip

tactile_middle

air puff on middle lip

tactile_top

air puff on upper lip

visual

all visual stimuli

visual-audio

visual vs audio stimuli

visual-control

all visual stimuli vs pink-noise

visual-tactile

visual vs tactile stimuli

Mario#

spatial_attention strategy loss defensive_aggression motion_detection

Implementation

  • Software: Psychopy 2021.1.3. (Python 3.8.5)

  • Response device: MR-compatible video game controller

  • Audio device: MRConfon MKII

This task involves a video game protocol where participants played Super Mario Bros. We adapted the implementation from our colleagues at the Courtois-Neuromod project, who used it with their own cohort, based on the premise that video game playing engages various cognitive domains, such as constant reward processing, strategic planning, environmental monitoring, and action-taking (Bellec and Boyle, 2019). Therefore, monitoring brain activity during video game play provides an intriguing window into the interaction of these cognitive processes. Our colleagues at the Courtois-Neuromod team also designed an MRI-compatible video game controller, which closely resembles the shape and feel of commercial controllers, ensuring a familiar gaming experience. We replicated this controller for the IBC project; for more details, refer to (Harel et al., 2023). This implementation was created using OpenAI’s Gym Retro package.

The game consisted of eight different worlds, each with three levels. Participants were instructed to play freely and complete as many levels as possible within the session, resulting in varying time spent on each level by each participant. None of the participants completed the entire game, but the majority reached the last world. Participants had unlimited lives but were allowed only three attempts per level, meaning that if they lost twice consecutively, they would return to the last checkpoint in the current level, and losing a third time would restart the level and reset the count. This task was conducted over two sessions, each consisting of six runs lasting 10 minutes each. Each session began anew, while subsequent runs within a session picked up where the previous one left off: for example, if a player was halfway through a level when an acquisition run ended, they would resume from the same point in the next run.
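
For readers who want to reproduce the basic game loop, here is a minimal Gym Retro sketch, assuming the (non-bundled) Super Mario Bros. ROM has already been imported (e.g. with `python -m retro.import`) into the "SuperMarioBros-Nes" integration and that the classic Gym step API is in use. The actual IBC/Courtois-Neuromod wrapper adds controller input handling, logging and save-state management on top of this and is not shown here.

```python
# Minimal Gym Retro loop with random button presses, for illustration only.
import retro

env = retro.make(game="SuperMarioBros-Nes")
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()        # random controller input for the demo
    obs, reward, done, info = env.step(action)
    if done:                                  # episode over: restart the game
        obs = env.reset()
env.close()
```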

The conditions for this task are described in this table and the main contrasts derived from those conditions are described in this table.

Conditions for Mario

Condition

Description

action_jump

Player jumps by pressing a special key

action_leftrun

Player runs to the left (backwards) constrained by the current field of view

action_leftwalk

Player walks to the left (backwards) constrained by the current field of view

action_rightrun

Player runs to the right, advancing into the current world and level

action_rightwalk

Player walks to the right, advancing into the current world and level

loss_dying

Player loses size or dies

onscreen_enemy

Enemy appears in the field of view

reward_coin

Player earns coins, either visible or hidden coins

reward_enemykill_impact

The enemy is defeated by the player in various ways: by smashing the brick underneath the enemy’s position, by activating a lethal element directed towards the enemy, or by temporarily gaining the power to eliminate the enemy upon contact.

reward_enemykill_kick

Player kicks the enemy, making it fall or die

reward_enemykill_stomp

Player kills the enemy by stomping

reward_powerup_taken

Player gets a shot of life and size by catching a specified element (mushroom)

Contrasts for Mario

Contrast

Description

action

player jumps, runs or walks

action_jump

player jumps

action_leftrun

player runs left

action_leftwalk

player walks left

action_rightrun

player runs right

action_rightwalk

player walks right

loss

losing power or dying

loss_dying

losing power or dying

onscreen_enemy

enemy appearance on screen

reward

getting a reward

reward-loss

getting a reward vs losing

reward_coin

getting coins

reward_enemykill-others

getting a reward for killing enemy vs other rewards

reward_enemykill_impact

killing an enemy by impact

reward_enemykill_kick

killing an enemy by kick

reward_enemykill_stomp

killing an enemy by stomp

reward_powerup_taken

gaining size or power