User Manual: Inquisit Montreal Imaging Stress Task


___________________________________________________________________________________________________________________	

								MONTREAL IMAGING STRESS TEST (no port info)
___________________________________________________________________________________________________________________	

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 06-16-2015
last updated:  02-25-2022 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 02-25-2022 Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________
This script implements an Inquisit version of the Montreal Imaging Stress Test as described in:

Dedovic, K.; Renwick, R.; Khalili Mahani, N.; Engert, V.; Lupien, S.J. & Pruessner, J.C. (2005).
The Montreal Imaging Stress Task: using functional imaging to investigate the effects of perceiving and
processing psychosocial stress in the human brain. J Psychiatry Neurosci, 30, 319-325.

The Montreal Imaging Stress Task was developed as a test to be used under imaging conditions (e.g. fMRI, PET)
where participants might have limited access to keyboards/touchscreens.
This script is a best guess effort of Millisecond Software based on published information.

___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________	
Participants are asked to solve arithmetic problems of 5 different difficulty levels 
under 3 different stress conditions. In the high stress condition (experimental condition), the participant's
performance is manipulated to be relatively low by
a) timing performance (e.g. if too many are correct, less time is allocated)
b) comparing the participant's performance to the "average" performance (80-90% correct) by means of a "performance bar"
In the control condition, the participants are asked to solve the same problems but without 
overt timing and without presenting the performance bar.
In the rest condition, participants are simply asked to rest.
In all conditions the participants are asked to control the selection of the solution by "dialing" via
mouse selections: left button (dials counterclockwise), right button (dials clockwise), middle button (submits the response).
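The dial mechanics described above can be sketched as follows (an illustrative sketch, not the actual Inquisit code; it assumes 10 dial positions for the digits 0-9):

```python
# Illustrative sketch of the mouse-controlled dial described above
# (not the script's actual code). Assumes 10 dial positions, digits 0-9.

N_POSITIONS = 10  # digits 0-9

def dial(position, button):
    """Return the new dial position for a mouse-button press."""
    if button == "left":    # dials counterclockwise
        return (position - 1) % N_POSITIONS
    if button == "right":   # dials clockwise
        return (position + 1) % N_POSITIONS
    if button == "middle":  # submits the currently highlighted position
        return position
    raise ValueError(f"unknown button: {button}")
```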

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 40 minutes to complete.

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________
The default data stored in the data files are:

(1) Raw data file: 'montrealstresstest_raw*.iqdat' (a separate file for each participant)

build:							The specific Inquisit version used (the 'build') that was run
computer.platform:				the platform the script was run on (win/mac/ios/android)
date, time: 					date and time script was run 
subject, group: 				the current subject and group number
session:						the current session id

blockcode, blocknum:			the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 			the name and number of the currently recorded trial (built-in Inquisit variable)
										Note: trialnum is a built-in Inquisit variable; it counts all trials run; even those
										that do not store data to the data file such as feedback trials. Thus, trialnum 
										may not reflect the number of main trials run per block.
																		
order:							order of Rest, Experimental and Control Condition	
								
condition:						1 = training; 
								2 = experimental; 
								3 = control; 
								4 = rest
									
level:							stores the level of the presently presented problem
problem:						stores the presently presented problem 
solution:						stores the solution to the presently presented problem
dialposition:					the currently 'highlighted' dial

response:						the participant's response

ACC:							0 = Timeout; 
								1 = correct response; 
								2 = error response
									
error: 							1 = error: erroneous response or timeout; 
								0 = correct response
									
latency: 						the response latency (in ms) of the current trial
RT_complete:					stores the response latency in ms of the current complete training/control/experimental problem
trialtimeout:					stores the max. duration in ms of the currently complete training/control/experimental task
currenttimeout_experimental:	stores the max. duration in ms of the presented segment of the current experimental task
										(Note: each dial press starts a new trial segment, so trialtimeout needs to be continuously adjusted)
									
increasetimeout:				1 = the last 3 experimental responses were incorrect or timeouts; 
								0 = otherwise
									
decreasetimeout:				1 = the last 3 experimental responses were all correct; 
								0 = otherwise

								
(2) Summary data file: 'montrealstresstest_summary*.iqdat' (a separate file for each participant)

inquisit.version:				Inquisit version run
computer.platform:				the platform the script was run on (win/mac/ios/android)
startDate:						date script was run
startTime:						time script was started
subjectid:						assigned subject id number
groupid:						assigned group id number
sessionid:						assigned session id number
elapsedTime:					time it took to run script (in ms); measured from onset to offset of script
completed:						0 = script was not completed (prematurely aborted); 
								1 = script was completed (all conditions run)
									
meanRT_complete1:				stores the mean response time in ms of level 1 training trials (not corrected for accuracy)
meanRT_complete2:				stores the mean response time in ms of level 2 training trials (not corrected for accuracy)
meanRT_complete3:				stores the mean response time in ms of level 3 training trials (not corrected for accuracy)
meanRT_complete4:				stores the mean response time in ms of level 4 training trials (not corrected for accuracy)
meanRT_complete5:				stores the mean response time in ms of level 5 training trials (not corrected for accuracy)

percentcorrect_exp:				percent correct of experimental trials across levels
percentcorrect_exp1:			percent correct of level 1 experimental trials
percentcorrect_exp2:			percent correct of level 2 experimental trials
percentcorrect_exp3:			percent correct of level 3 experimental trials
percentcorrect_exp4:			percent correct of level 4 experimental trials
percentcorrect_exp5:			percent correct of level 5 experimental trials

meanrt_exp:						mean correct response latency in ms of experimental trials across levels
meanrt_exp1:					mean correct response latency in ms of level 1 experimental trials
meanrt_exp2:					mean correct response latency in ms of level 2 experimental trials
meanrt_exp3:					mean correct response latency in ms of level 3 experimental trials
meanrt_exp4:					mean correct response latency in ms of level 4 experimental trials
meanrt_exp5:					mean correct response latency in ms of level 5 experimental trials

percentcorrect_ctrl:			percent correct of control trials across levels
percentcorrect_ctrl1:			percent correct of level 1 control trials
percentcorrect_ctrl2:			percent correct of level 2 control trials
percentcorrect_ctrl3:			percent correct of level 3 control trials
percentcorrect_ctrl4:			percent correct of level 4 control trials
percentcorrect_ctrl5:			percent correct of level 5 control trials

meanrt_ctrl:					mean correct response latency in ms of control trials across levels
meanrt_ctrl1:					mean correct response latency in ms of level 1 control trials
meanrt_ctrl2:					mean correct response latency in ms of level 2 control trials
meanrt_ctrl3:					mean correct response latency in ms of level 3 control trials
meanrt_ctrl4:					mean correct response latency in ms of level 4 control trials
meanrt_ctrl5:					mean correct response latency in ms of level 5 control trials

count_exp1:						stores the number of level 1 concluded experimental trials
count_exp2:						stores the number of level 2 concluded experimental trials
count_exp3:						stores the number of level 3 concluded experimental trials
count_exp4:						stores the number of level 4 concluded experimental trials
count_exp5:						stores the number of level 5 concluded experimental trials

count_ctrl1:					stores the number of level 1 concluded control trials
count_ctrl2:					stores the number of level 2 concluded control trials
count_ctrl3:					stores the number of level 3 concluded control trials
count_ctrl4:					stores the number of level 4 concluded control trials
count_ctrl5:					stores the number of level 5 concluded control trials

___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________	

(A) 5 levels of difficulty: tested in blocked format

In this script version, the levels are defined as follows: 
1: 2 1-digit integers (0-9), only + or -, solution 0-9 (1 digit)
2: 3 1-digit integers (0-9), only + or - (no repeated operations, order of operations random)
3: 3 integers, 1-2 integers are double digits (0-99), +,-,* (no repeated operations, selection and order of operations random), solution 0-9 (1 digit)
4: 4 integers, 1-2 integers are double digits (0-99), +,-,* (no repeated operations, selection and order of operations random), solution 0-9 (1 digit)
5: 4 integers, 1-4 integers are double digits (0-99), both * and / are required, plus either + or - (order of operations random), solution 0-9 (1 digit)	

Note: the problems are pregenerated for this script (see also Dedovic et al, p.320)
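Since the problems are pregenerated, the following is only a hypothetical illustration of the level-1 constraints (two 1-digit integers, + or - only, solution 0-9); the function name and the rejection-sampling approach are assumptions, not the method used to build the shipped lists:

```python
# Hypothetical generator illustrating the level-1 constraints listed above
# (the script itself uses pregenerated problem lists; see "Editable Lists").
import random

def make_level1_problem(rng=random):
    """Draw two 1-digit integers and + or -, keeping only 1-digit solutions."""
    while True:
        a, b = rng.randint(0, 9), rng.randint(0, 9)
        op = rng.choice(["+", "-"])
        solution = a + b if op == "+" else a - b
        if 0 <= solution <= 9:  # solution must itself be a single digit
            return f"{a} {op} {b}", solution
```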

(B) 4 different conditions:
1) Training: 
- administered outside of imaging equipment
- uses the mouse to control the selection of the solution (the 'dials') -> Dedovic et al (2005) use keyboard input for training
- minimum suggested duration: 2 minutes for entire training block (this translates to Min = 24s for each level of difficulty);
	Note: the default in this script is set to 5 minutes (= 1 minute/level of difficulty)
- presents a random order of the 5 difficulty level blocks
- calculates average time participants used to respond to each problem (regardless of accuracy) for each level of difficulty.
	Note: The uncorrected latencies are used in this script to ensure that initial timelimits can be calculated.
- does not put a time limit on performance
- does not give overall performance feedback in the form of a performance bar
- presents feedback (correct, incorrect) for each individual problem for 500ms (editable parameter)
	Note: training ends with a slide that says "Please Wait" (to continue the experimenter has to press the Spacebar - participant cannot move from here with the mouse)

2) Experimental: 
- minimum suggested duration: 2 minutes per level of difficulty
	Note: this time limit is based on our interpretation of the "individual runs" description in Dedovic et al (2005), p.321, right column
- presents a random order of the 5 difficulty level blocks
- puts a time limit on performance
	a) at the start of a block uses training mean latency for the currently tested level of difficulty but shortens it by 10% (= 90% of the training mean response times)
	b) continuously tracks the performance and response time of the last 3 trials
	=> if the last 3 trials are all correct: adjust time limit by using the average response duration of the last 3 trials but shorten it by 10%
	=> if the last 3 trials are all errors (or timeouts): adjust time limit by lengthening the current timeout by 10%
	c) presents a timer on screen that counts down the seconds (Dedovic et al, 2005 used a progress bar)

- presents a performance bar that shows the "average" performance as being in the green (good) region and the participant's
performance as being (likely) in the red (bad) region (the displayed average performance is calculated across experimental difficulty levels)

- presents feedback (correct, incorrect, timeout) for individual problems for 500ms (editable parameter)

- adds an intertrial interval (ITI): the duration of the intertrial interval is based on parameters.ITI_test (editable parameter), which gets
adjusted by the difference between the average response latency for the tested level (based on training performance) and the current response latency.
If the current response latency was faster, the ITI increases; if it was slower (unlikely, as a timeout is imposed), the ITI decreases.
	Note: the ITI was added so that control and experimental trials could (roughly) be matched in frequency.
			(Dedovic et al, 2005, p.321: "To match the frequency of mental arithmetic tasks [in experimental and control condition], the time between tasks is varied as a function of the time limit 
			imposed during the experimental condition, so that the total number of tasks presented per condition is identical.")
			The idea is that if the response time is equal to the training response time (which is run under similar conditions to the control condition)
			the ITI is parameters.iti_test. If it's shorter (because of the timeout) then the ITI gets adjusted up - otherwise down.
			This works theoretically AS LONG AS there is still a positive ITI (>=0ms) left at the end. If a control trial (or experimental trial, though that is less likely)
			is longer than the average response time by more than parameters.ITI_test, then the number of trials in the experimental and control conditions might differ.

	Note:  each experimental block ends with a slide that informs participants of their average performance; followed by a slide that says "Please Wait"
			(to move on the experimenter has to press the Spacebar - participants cannot move on from here with the mouse)
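The adaptive time-limit rules in (a) and (b) above can be sketched as follows (a minimal sketch; function and variable names are illustrative, not the script's own):

```python
# Sketch of the adaptive time limit described above (names are assumptions).

def initial_timeout(training_mean_rt):
    # a) start each block at 90% of the training mean response time
    #    for the currently tested level of difficulty
    return 0.9 * training_mean_rt

def adjust_timeout(current_timeout, last3_correct, last3_rts):
    # b) continuously track the last 3 trials
    if all(last3_correct):
        # all 3 correct: use their average response duration, shortened by 10%
        return 0.9 * (sum(last3_rts) / 3)
    if not any(last3_correct):
        # all 3 errors/timeouts: lengthen the current timeout by 10%
        return 1.1 * current_timeout
    return current_timeout  # mixed performance: leave the time limit unchanged
```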

3) Control:

- minimum suggested duration: 2 minutes per level of difficulty
- presents a random order of the 5 difficulty level blocks
- does not put a time limit on performance
- does not present a performance bar 
- presents feedback (correct, incorrect, timeout) for individual problems for 500ms (editable parameter)
- adds an intertrial interval (ITI): the duration of the intertrial interval is based on parameters.ITI_test (editable parameter), which gets
adjusted by the difference between the average response latency for the tested level (based on training performance) and the current response latency.
If the current response latency was faster, the ITI increases; if it was slower, the ITI decreases.
	Note: the ITI was added so that control and experimental trials could (roughly) be matched in frequency.
			(Dedovic et al, 2005, p.321: "To match the frequency of mental arithmetic tasks [in experimental and control condition], the time between tasks is varied as a function of the time limit 
			imposed during the experimental condition, so that the total number of tasks presented per condition is identical.")

	Note:  each control block ends with a slide that says "Please Wait"
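The ITI adjustment described for the experimental and control conditions can be sketched as follows (a minimal sketch; the clamping at 0 ms and the parameter names are assumptions based on the description above):

```python
# Sketch of the ITI rule described above: start from parameters.ITI_test and
# add the difference between the training mean latency for this level and the
# current response latency. Clamping at 0 ms is an assumption.

def intertrial_interval(iti_test, training_mean_rt, current_rt):
    # faster-than-training responses lengthen the ITI; slower ones shorten it
    return max(0, iti_test + (training_mean_rt - current_rt))
```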

4) Rest: inside imaging equipment

- presents the user interface with a message to take a break and not move the mouse until told to do so
	Note:  the rest block ends with a slide that says "Please Wait" (to move on the experimenter has to press the Spacebar - participant cannot move from here with the mouse)

The order of the 3 test conditions (experimental, control, rest) is counterbalanced by groupnumber (6 different groupnumbers run the 6 possible orders)
To change the experimental procedure (e.g. if no rest condition should be run), go to section EXPERIMENT and delete the blocks
that should not run.

groupnumber1: exp, ctrl, rest
groupnumber2: exp, rest, ctrl
groupnumber3: ctrl, exp, rest
groupnumber4: ctrl, rest, exp
groupnumber5: rest, exp, ctrl
groupnumber6: rest, ctrl, exp
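The counterbalancing table above, expressed as a simple lookup (illustrative only; the script itself assigns orders via Inquisit group numbers):

```python
# The 6 counterbalanced condition orders listed above, keyed by group number.

CONDITION_ORDERS = {
    1: ("exp", "ctrl", "rest"),
    2: ("exp", "rest", "ctrl"),
    3: ("ctrl", "exp", "rest"),
    4: ("ctrl", "rest", "exp"),
    5: ("rest", "exp", "ctrl"),
    6: ("rest", "ctrl", "exp"),
}

def condition_order(groupnumber):
    """Return the order of the 3 test conditions for group numbers 1-6."""
    return CONDITION_ORDERS[groupnumber]
```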

___________________________________________________________________________________________________________________
STIMULI
___________________________________________________________________________________________________________________
This script uses pregenerated sequences for the 5 levels of difficulty. The problems used can be edited under
section "Editable Lists"

___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________	
This script uses instructions that are not original to Dedovic et al (2005). They can be edited under
section "Editable Instructions"

___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions etc. 
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code 
to further customize your experiment.

The parameters you can change are:

/trainingtimeout:						sets the timeout for the training block (per difficulty level) in ms (default: 60000ms => 1 min per difficulty level)
/experimentalBlockTimeout:					sets the timeout of the individual experimental block (per difficulty level) in ms (default: 120000ms => 2 min per difficulty level)
/feedbackduration:						sets the feedback duration in ms (default: 500ms)
/restduration:							sets the duration of the rest period in ms (default: 60000ms)
/iti_test:								the default length of the intertrial interval in ms in experimental and control task that would run if
										the response latency of the currently solved problem was equal to the one determined during training 
										(default: 1000ms)

Definition of the Performance Bar:
Note: the performance bar appears Red until parameters.redperformance; it appears White until parameters.whiteperformance; it appears Green above parameters.whiteperformance
Default: for 0-60% the bar appears Red; for 60%-80% the bar appears White; above 80% the bar appears Green

/whiteperformance:						sets the performance proportion of the performance bar that appears white (default: 80%)
										Note: performance > 80% appears green
										
/redperformance:						sets the performance proportion of the performance bar that appears red (default: 60%)
/averageperformance:					sets the average performance that the participant's performance is compared to (default: 85%) -> used for the average performance triangle
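The default performance-bar color bands above can be sketched as follows (how the exact 60% and 80% boundary values are classified is an assumption, since the manual lists 60% in both the red and white ranges):

```python
# Sketch of the default performance-bar bands: red up to redperformance,
# white up to whiteperformance, green above. Treating the exact boundary
# values as belonging to the lower band is an assumption.

def bar_color(percent_correct, redperformance=60, whiteperformance=80):
    if percent_correct <= redperformance:
        return "red"
    if percent_correct <= whiteperformance:
        return "white"
    return "green"
```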

/inactiveDialButtonColor:				the color of the digit buttons (during experiment) when not active (default: blue)
/activeDialButtonColor:					the color of the digit buttons (during experiment) when active (default: orange)
											Note: the dial colors on the instruction pages need to be updated manually