English version
___________________________________________________________________________________________________________________
                                  MONTREAL IMAGING STRESS TEST (no port info)
___________________________________________________________________________________________________________________

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 06-16-2015
Last updated: 04-18-2025 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC
Script Copyright © 04-18-2025 Millisecond Software
___________________________________________________________________________________________________________________
BACKGROUND INFO
___________________________________________________________________________________________________________________
This script implements an Inquisit version of the Montreal Imaging Stress Test as described in:

Dedovic, K.; Renwick, R.; Khalili Mahani, N.; Engert, V.; Lupien, S.J. & Pruessner, J.C. (2005).
The Montreal Imaging Stress Task: using functional imaging to investigate the effects of perceiving
and processing psychosocial stress in the human brain. J Psychiatry Neurosci, 30, 319-325.

The Montreal Imaging Stress Task was developed as a test to be used under imaging conditions (e.g. fMRI, PET)
in which participants might have limited access to keyboards/touchscreens.
This script is a best-guess effort by Millisecond Software based on the published information.
___________________________________________________________________________________________________________________
TASK DESCRIPTION
___________________________________________________________________________________________________________________
Participants are asked to solve arithmetic problems of 5 different difficulty levels under 3 different
stress conditions.

In the high-stress condition (experimental condition), the participant's performance is manipulated to be
relatively low by
a) timing performance (e.g. if too many responses are correct, less time is allocated)
b) comparing the participant's performance to the "average" performance (80-90% correct) by means of a
   "performance bar"

In the control condition, participants are asked to solve the same problems, but without overt timing and
without the performance bar.

In the rest condition, participants are simply asked to rest.

In all conditions, participants control the selection of the solution by "dialing" via mouse buttons:
left button: dials counterclockwise; right button: dials clockwise; middle button: submits the response.
___________________________________________________________________________________________________________________
DURATION
___________________________________________________________________________________________________________________
The default set-up of the script takes approximately 40 minutes to complete.
___________________________________________________________________________________________________________________
DATA OUTPUT DICTIONARY
___________________________________________________________________________________________________________________
The fields in the data files are:

(1) Raw data file: 'montrealstresstest_raw*.iqdat' (a separate file for each participant)

build: the specific Inquisit version (the 'build') that was run
computer.platform: the platform the script was run on (win/mac/ios/android)
date, time: date and time the script was run
subject, group: the current subject/group number
session: the current session id
blockCode, blockNum: the name and number of the current block (built-in Inquisit variables)
trialCode, trialNum: the name and number of the currently recorded trial (built-in Inquisit variables)
Note: trialNum is a built-in Inquisit variable; it counts all trials run, even those that do not store data
to the data file, such as feedback trials. Thus, trialNum may not reflect the number of main trials run per block.
order: order of the Rest, Experimental and Control conditions
condition: 1 = training; 2 = experimental; 3 = control; 4 = rest
level: the level of the currently presented problem
problem: the currently presented problem
solution: the solution to the currently presented problem
dialPosition: the currently 'highlighted' dial
response: the participant's response
acc: 0 = timeout; 1 = correct response; 2 = error response
error: 1 = error (erroneous response or timeout); 0 = correct response
latency: the response latency (in ms) of the current trial
rtComplete: the response latency in ms of the current complete training/control/experimental problem
trialTimeout: the max. duration in ms of the current complete training/control/experimental task
currentTimeoutExp: the max. duration in ms of the presented segment of the current experimental task
(Note: each dial press starts a new trial segment, so the trial timeout needs to be continuously adjusted)
increaseTimeout: 1 = the last 3 experimental responses were all incorrect or timeouts; 0 = otherwise
decreaseTimeout: 1 = the last 3 experimental responses were all correct; 0 = otherwise
meanLast3Errors: for the experimental condition only: the mean error performance during the last 3 trials
(set to -1 if there are fewer than 3 trials to report on)
currentMeanRT: the mean RT of the last three responses

(2) Summary data file: 'montrealstresstest_summary*.iqdat' (a separate file for each participant)

inquisit.version: Inquisit version run
computer.platform: the platform the script was run on (win/mac/ios/android)
startDate: date the script was run
startTime: time the script was started
subjectId: assigned subject id number
groupId: assigned group id number
sessionId: assigned session id number
elapsedTime: time it took to run the script (in ms); measured from onset to offset of the script
completed: 0 = script was not completed (prematurely aborted); 1 = script was completed (all conditions run)
meanRTComplete1: the mean response time in ms of level 1 training trials (not corrected for accuracy)
meanRTComplete2: the mean response time in ms of level 2 training trials (not corrected for accuracy)
meanRTComplete3: the mean response time in ms of level 3 training trials (not corrected for accuracy)
meanRTComplete4: the mean response time in ms of level 4 training trials (not corrected for accuracy)
meanRTComplete5: the mean response time in ms of level 5 training trials (not corrected for accuracy)
percentCorrectExp: percent correct of experimental trials across levels
percentCorrectExp1: percent correct of level 1 experimental trials
percentCorrectExp2: percent correct of level 2 experimental trials
percentCorrectExp3: percent correct of level 3 experimental trials
percentCorrectExp4: percent correct of level 4 experimental trials
percentCorrectExp5: percent correct of level 5 experimental trials
meanrtExp: mean correct response latency in ms of experimental trials across levels
meanrtExp1: mean correct response latency in ms of level 1 experimental trials
meanrtExp2: mean correct response latency in ms of level 2 experimental trials
meanrtExp3: mean correct response latency in ms of level 3 experimental trials
meanrtExp4: mean correct response latency in ms of level 4 experimental trials
meanrtExp5: mean correct response latency in ms of level 5 experimental trials
percentCorrectCtrl: percent correct of control trials across levels
percentCorrectCtrl1: percent correct of level 1 control trials
percentCorrectCtrl2: percent correct of level 2 control trials
percentCorrectCtrl3: percent correct of level 3 control trials
percentCorrectCtrl4: percent correct of level 4 control trials
percentCorrectCtrl5: percent correct of level 5 control trials
meanrtCtrl: mean correct response latency in ms of control trials across levels
meanrtCtrl1: mean correct response latency in ms of level 1 control trials
meanrtCtrl2: mean correct response latency in ms of level 2 control trials
meanrtCtrl3: mean correct response latency in ms of level 3 control trials
meanrtCtrl4: mean correct response latency in ms of level 4 control trials
meanrtCtrl5: mean correct response latency in ms of level 5 control trials
countExp1: the number of concluded level 1 experimental trials
countExp2: the number of concluded level 2 experimental trials
countExp3: the number of concluded level 3 experimental trials
countExp4: the number of concluded level 4 experimental trials
countExp5: the number of concluded level 5 experimental trials
countCtrl1: the number of concluded level 1 control trials
countCtrl2: the number of concluded level 2 control trials
countCtrl3: the number of concluded level 3 control trials
countCtrl4: the number of concluded level 4 control trials
countCtrl5: the number of concluded level 5 control trials
___________________________________________________________________________________________________________________
EXPERIMENTAL SET-UP
___________________________________________________________________________________________________________________
(A) 5 levels of difficulty: tested in blocked format

In this script version, the levels are defined as follows:
1: 2 one-digit integers (0-9); only + or -; solution 0-9 (1 digit)
2: 3 one-digit integers (0-9); only +, - (no repeated operations, order of operations random)
3: 3 integers, 1-2 of them double-digit (0-99); +, -, * (no repeated operations, selection and order of
   operations random); solution 0-9 (1 digit)
4: 4 integers, 1-2 of them double-digit (0-99); +, -, * (no repeated operations, selection and order of
   operations random); solution 0-9 (1 digit)
5: 4 integers, 1-4 of them double-digit (0-99); * and / are a must, then either + or - (order of operations
   random); solution 0-9 (1 digit)

Note: the problems are pregenerated for this script (see also Dedovic et al, 2005, p.320)

(B) 4 different conditions:

1) Training:
- administered outside of the imaging equipment
- uses the mouse to control the selection of the solution (the 'dials') -> Dedovic et al (2005) used keyboard
  input for training
- minimum suggested duration: 2 minutes for the entire training block (this translates to a minimum of 24s
  for each level of difficulty);
  Note: the default in this script is set to 5 minutes (= 1 minute per level of difficulty)
- presents a random order of the 5 difficulty-level blocks
- calculates the average time participants used to respond to each problem (regardless of accuracy) for each
  level of difficulty.
  Note: the uncorrected latencies are used in this script to ensure that initial time limits can be calculated.
- does not put a time limit on performance
- does not give overall performance feedback in the form of a performance bar
- presents feedback (correct, incorrect) for each individual problem for 500ms (editable parameter)
Note: training ends with a slide that says "Please Wait" (to continue, the experimenter has to press the
spacebar - the participant cannot move on from here with the mouse)

2) Experimental:
- minimum suggested duration: 2 minutes per level of difficulty
  Note: this time limit is based on our interpretation of "individual runs" (p.321, right column)
- presents a random order of the 5 difficulty-level blocks
- puts a time limit on performance:
  a) at the start of a block, uses the training mean latency for the currently tested level of difficulty,
     shortened by 10% (= 90% of the training mean response time)
  b) continuously tracks the accuracy and response times of the last 3 trials:
     => if the last 3 trials were all correct: adjusts the time limit to the average response duration of the
        last 3 trials, shortened by 10%
     => if the last 3 trials were all errors (or timeouts): lengthens the current time limit by 10%
  c) presents a timer on screen that counts down the seconds (Dedovic et al, 2005, used a progress bar)
- presents a performance bar that shows the "average" performance as being in the green (good) region and the
  participant's performance as being (likely) in the red (bad) region (the displayed average performance is
  calculated across experimental difficulty levels)
- presents feedback (correct, incorrect, timeout) for each individual problem for 500ms (editable parameter)
- adds an intertrial interval (ITI): the duration of the ITI is based on parameters.itiTest (editable
  parameter), adjusted by the difference between the average response latency for the tested level (based on
  training performance) and the current response latency. If the current response latency was faster, the ITI
  increases. If the current response latency was slower (unlikely, as a timeout is imposed), the ITI decreases.
  Note: the ITI was added so that control and experimental trials could (roughly) be matched in frequency.
  (Dedovic et al, 2005, p.321: "To match the frequency of mental arithmetic tasks [in experimental and control
  condition], the time between tasks is varied as a function of the time limit imposed during the experimental
  condition, so that the total number of tasks presented per condition is identical.")
  The idea is that if the response time equals the training response time (which was collected under conditions
  similar to the control condition), the ITI is parameters.itiTest. If it is shorter (because of the timeout),
  the ITI gets adjusted up - otherwise down. This works in theory AS LONG AS there is still a non-negative ITI
  (>= 0ms) left at the end. If a control trial (or experimental trial, though that is less likely) is longer
  than the average response time by more than parameters.itiTest, the number of trials in the experimental and
  control conditions might differ.
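The script itself is written in Inquisit; as a rough illustration only (not the script's actual code), the adaptive time-limit and ITI rules described above can be sketched in Python. All function and variable names here are hypothetical:

```python
# Illustrative sketch (Python, not Inquisit) of the adaptive timing rules
# described above. Defaults mirror the text: the initial limit is 90% of the
# training mean RT; 3 correct trials in a row tighten the limit, 3 errors or
# timeouts in a row relax it.

def initial_timeout(training_mean_rt_ms):
    # Start of a block: training mean latency for this level, shortened by 10%
    return 0.9 * training_mean_rt_ms

def next_timeout(current_timeout_ms, last3):
    """Adjust the per-problem time limit from the last 3 trials.

    last3: list of (correct: bool, latency_ms: float) for the last 3 trials.
    """
    if len(last3) < 3:
        return current_timeout_ms          # not enough history: keep the limit
    if all(correct for correct, _ in last3):
        # all correct: mean latency of the last 3 trials, shortened by 10%
        mean_rt = sum(rt for _, rt in last3) / 3
        return 0.9 * mean_rt
    if all(not correct for correct, _ in last3):
        # all errors/timeouts: lengthen the current time limit by 10%
        return 1.1 * current_timeout_ms
    return current_timeout_ms              # mixed outcomes: no adjustment

def next_iti(iti_test_ms, training_mean_rt_ms, current_rt_ms):
    # Base ITI plus (training mean RT - current RT): faster responses lengthen
    # the ITI so trial frequency roughly matches the control condition; the
    # ITI cannot drop below 0ms.
    return max(0, iti_test_ms + (training_mean_rt_ms - current_rt_ms))
```

For example, with a 4000ms training mean a block starts with a 3600ms limit; three consecutive correct responses averaging 3000ms would lower it to 2700ms, while three consecutive errors at a 1000ms limit would raise it to 1100ms.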
Note: each experimental block ends with a slide that informs participants of their average performance,
followed by a slide that says "Please Wait" (to move on, the experimenter has to press the spacebar -
participants cannot move on from here with the mouse)

3) Control:
- minimum suggested duration: 2 minutes per level of difficulty
- presents a random order of the 5 difficulty-level blocks
- does not put a time limit on performance
- does not present a performance bar
- presents feedback (correct, incorrect, timeout) for each individual problem for 500ms (editable parameter)
- adds an intertrial interval (ITI): the duration of the ITI is based on parameters.itiTest (editable
  parameter), adjusted by the difference between the average response latency for the tested level (based on
  training performance) and the current response latency. If the current response latency was faster, the ITI
  increases. If the current response latency was slower (unlikely, as a timeout is imposed), the ITI decreases.
  Note: the ITI was added so that control and experimental trials could (roughly) be matched in frequency.
  (Dedovic et al, 2005, p.321: "To match the frequency of mental arithmetic tasks [in experimental and control
  condition], the time between tasks is varied as a function of the time limit imposed during the experimental
  condition, so that the total number of tasks presented per condition is identical.")
Note: each control block ends with a slide that says "Please Wait"

4) Rest: inside the imaging equipment
- presents the user interface with a message to take a break and not move the mouse until told to do so
Note: the rest block ends with a slide that says "Please Wait" (to move on, the experimenter has to press the
spacebar - the participant cannot move on from here with the mouse)

The order of the 3 test conditions (experimental, control, rest) is counterbalanced by group number
(6 different group numbers run the 6 possible orders).
To change the experimental procedure (e.g. if no rest condition should be run), go to section EXPERIMENT and
delete the blocks that should not run.

groupnumber 1: exp, ctrl, rest
groupnumber 2: exp, rest, ctrl
groupnumber 3: ctrl, exp, rest
groupnumber 4: ctrl, rest, exp
groupnumber 5: rest, exp, ctrl
groupnumber 6: rest, ctrl, exp
___________________________________________________________________________________________________________________
STIMULI
___________________________________________________________________________________________________________________
This script uses pregenerated problem sequences for the 5 levels of difficulty.
The problems used can be edited under section "Editable Lists".
___________________________________________________________________________________________________________________
INSTRUCTIONS
___________________________________________________________________________________________________________________
This script uses instructions that are not original to Dedovic et al (2005).
They can be edited in script montrealstresstest_instructions_inc.iqjs (language-dependent).
___________________________________________________________________________________________________________________
EDITABLE CODE
___________________________________________________________________________________________________________________
Check below for (relatively) easily editable parameters, stimuli, instructions, etc.
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code to
further customize your experiment.

The parameters you can change are:

/trainingTimeout: sets the timeout for the training block (per difficulty level) in ms
(default: 60000ms => 1 min per difficulty level)
/experimentalBlockTimeout: sets the timeout of each individual experimental block (per difficulty level) in ms
(default: 120000ms => 2 min per difficulty level)
/feedbackDuration: sets the feedback duration in ms (default: 500ms)
/restDuration: sets the duration of the rest period in ms (default: 60000ms)
/itiTest: the default length of the intertrial interval in ms in the experimental and control tasks; this is
the ITI that runs if the response latency of the currently solved problem equals the one determined during
training (default: 1000ms)

Definition of the performance bar:
Note: the performance bar appears red up to parameters.redPerformance; it appears white up to
parameters.whitePerformance; it appears green above parameters.whitePerformance.
Default: for 0-60% the bar appears red; for 60-80% the bar appears white; above 80% the bar appears green.
/whitePerformance: sets the performance proportion up to which the performance bar appears white (default: 80%)
Note: performance > 80% appears green
/redPerformance: sets the performance proportion up to which the performance bar appears red (default: 60%)
/averagePerformance: sets the average performance that the participant is compared to (default: 85%)
-> used for the average-performance triangle
/inactiveDialButtonColor: the color of the digit buttons (during the experiment) when not active (default: blue)
/activeDialButtonColor: the color of the digit buttons (during the experiment) when active (default: orange)
Note: the dial colors on the instruction pages need to be updated manually.
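As an illustration of the default performance-bar thresholds above (a sketch, not the script's own rendering code), the color rule can be expressed as follows. The function name is hypothetical, and the treatment of scores exactly at a threshold is an assumption, since the text only says the bar is red/white "until" the respective parameter:

```python
# Illustrative sketch of the default performance-bar thresholds described
# above: red up to /redPerformance (60%), white up to /whitePerformance (80%),
# green above. Boundary handling (<=) is an assumption, not confirmed by the
# script documentation.

def performance_bar_color(percent_correct,
                          red_performance=60,
                          white_performance=80):
    """Map a percent-correct score to the displayed bar color."""
    if percent_correct <= red_performance:
        return "red"
    if percent_correct <= white_performance:
        return "white"
    return "green"
```

With these defaults, the /averagePerformance marker of 85% lands in the green region, which is how the "average" performance is displayed as good relative to the participant's own bar.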