											MONTREAL IMAGING STRESS TEST 

SCRIPT INFO

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 06-16-2015
last updated:  08-05-2016 by K.Borchert for Millisecond Software LLC

Script Copyright ©  08-05-2016 Millisecond Software


BACKGROUND INFO

											*Purpose*
This script implements an Inquisit version of the Montreal Imaging Stress Test as described in:

Dedovic, K.; Renwick, R.; Khalili Mahani, N.; Engert, V.; Lupien, S.J. & Pruessner, J.C. (2005).
The Montreal Imaging Stress Task: using functional imaging to investigate the effects of perceiving and
processing psychosocial stress in the human brain. J Psychiatry Neurosci, 30, 319-325.

The Montreal Imaging Stress Task was developed as a test to be used under imaging conditions (e.g. fMRI, PET)
where participants might have limited access to keyboards/touchscreens.
This script is a best-guess implementation by Millisecond Software based on the published information.


											  *Task*
Participants are asked to solve arithmetic problems at 5 different difficulty levels 
under 3 different stress conditions. In the high stress condition (experimental condition), the participant's
performance is manipulated to be relatively low by
a) timing performance (e.g. if too many are correct, less time is allocated)
b) comparing the participant's performance to the "average" performance (80-90% correct) by means of a "performance bar"
In the control condition, the participants are asked to solve the same problems but without 
overt timing and without presenting the performance bar.
In the rest condition, participants are simply asked to rest.
In all conditions, participants control the selection of the solution by "dialing" via
mouse buttons: the left button dials counterclockwise, the right button dials clockwise, and the middle button submits the response.



DATA FILE INFORMATION: 
The default data stored in the data files are:

(1) Raw data file: 'MontrealStressTest_raw*.iqdat' (a separate file for each participant)

build:							Inquisit build
date, time, subject, group:		date and time script was run with the current subject/groupnumber 
blockcode, blocknum:			the name and number of the current block
trialcode, trialnum: 			the name and number of the currently recorded trial
									(Note: not all trials that are run might record data) 
/condition:						1 = training; 2 = experimental; 3 = control; 4 = rest
/level:							stores the level of the presently presented problem
/problem:						stores the presently presented problem 
/solution:						stores the solution to the presently presented problem
/dialposition:					the currently 'highlighted' dial
response:						the participant's response
/correct:						0 = Timeout; 1 = correct response; 2 = error response
latency: 						the response latency (in ms) of the current trial
/RT_complete:					stores the response latency in ms of the current complete training/control/experimental task
/currenttimeout_experimental:	stores the max. duration in ms of the currently complete training/control/experimental task
/trialtimeout:					stores the max. duration in ms of the presented segment of the current experimental task
									(Note: each dial press starts a new trial segment, so trialtimeout needs to be continuously adjusted)
/increasetimeout:				1 = the last 3 experimental responses were incorrect or timeouts; 0 = otherwise
/decreasetimeout:				1 = the last 3 experimental responses were all correct; 0 = otherwise
/conditionMarker:				marker that contains condition information; sent on onset of stimuli
								in general: digit1 = experimental (1)/control (2)/rest (3); digit2 = difficulty (0-5)
								Examples:
									11 = experimental condition (1) with difficulty 1
									25 = control condition (2) with difficulty 5
									30 = rest condition (3) with 0 difficulty

/feedbackMarker:				marker that contains feedback information, sent on onset of feedback
								in general: digit1 = experimental (1)/control (2); digit2 = difficulty (1-5); digit3 = accuracy (0 = correct; 1 = error response; 2 = timeout)
								Examples:
									110 = experimental (1), difficulty 1, correct response (0)
									221 = control (2), difficulty 2, error response (1)
									152 = experimental (1), difficulty 5, timeout (2)
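The two marker codes above are simple positional digit encodings. The following Python sketch (function names are ours, not the script's) illustrates how they are composed:

```python
# Hypothetical helper functions illustrating the marker encodings documented above.

def condition_marker(condition, difficulty):
    """digit1 = condition (1=experimental, 2=control, 3=rest);
    digit2 = difficulty (0-5; 0 is used for rest)."""
    return condition * 10 + difficulty

def feedback_marker(condition, difficulty, accuracy):
    """digit1 = condition (1=experimental, 2=control);
    digit2 = difficulty (1-5);
    digit3 = accuracy (0=correct, 1=error, 2=timeout)."""
    return condition * 100 + difficulty * 10 + accuracy

print(condition_marker(1, 1))    # 11: experimental, difficulty 1
print(condition_marker(3, 0))    # 30: rest condition
print(feedback_marker(2, 2, 1))  # 221: control, difficulty 2, error response
```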


(2) Summary data file: 'MontrealStressTest_summary*.iqdat' (a separate file for each participant)

script.startdate:				date script was run
script.starttime:				time script was started
script.subjectid:				subject id number
script.groupid:					group id number
script.elapsedtime:				time it took to run script (in ms)
/completed:						0 = script was not completed (prematurely aborted); 1 = script was completed (all conditions run)
/meanRT_complete1:				stores the mean response time in ms of level 1 training trials (not corrected for accuracy)
/meanRT_complete2:				stores the mean response time in ms of level 2 training trials (not corrected for accuracy)
/meanRT_complete3:				stores the mean response time in ms of level 3 training trials (not corrected for accuracy)
/meanRT_complete4:				stores the mean response time in ms of level 4 training trials (not corrected for accuracy)
/meanRT_complete5:				stores the mean response time in ms of level 5 training trials (not corrected for accuracy)
/percentcorrect_exp:			percent correct of experimental trials across levels
/percentcorrect_exp1:			percent correct of level 1 experimental trials
/percentcorrect_exp2:			percent correct of level 2 experimental trials
/percentcorrect_exp3:			percent correct of level 3 experimental trials
/percentcorrect_exp4:			percent correct of level 4 experimental trials
/percentcorrect_exp5:			percent correct of level 5 experimental trials
/meanrt_exp:					mean correct response latency in ms of experimental trials across levels
/meanrt_exp1:					mean correct response latency in ms of level 1 experimental trials
/meanrt_exp2:					mean correct response latency in ms of level 2 experimental trials
/meanrt_exp3:					mean correct response latency in ms of level 3 experimental trials
/meanrt_exp4:					mean correct response latency in ms of level 4 experimental trials
/meanrt_exp5:					mean correct response latency in ms of level 5 experimental trials
/percentcorrect_ctrl:			percent correct of control trials across levels
/percentcorrect_ctrl1:			percent correct of level 1 control trials
/percentcorrect_ctrl2:			percent correct of level 2 control trials
/percentcorrect_ctrl3:			percent correct of level 3 control trials
/percentcorrect_ctrl4:			percent correct of level 4 control trials
/percentcorrect_ctrl5:			percent correct of level 5 control trials
/meanrt_ctrl:					mean correct response latency in ms of control trials across levels
/meanrt_ctrl1:					mean correct response latency in ms of level 1 control trials
/meanrt_ctrl2:					mean correct response latency in ms of level 2 control trials
/meanrt_ctrl3:					mean correct response latency in ms of level 3 control trials
/meanrt_ctrl4:					mean correct response latency in ms of level 4 control trials
/meanrt_ctrl5:					mean correct response latency in ms of level 5 control trials

/count_exp1:					stores the number of level 1 concluded experimental trials
/count_exp2:					stores the number of level 2 concluded experimental trials
/count_exp3:					stores the number of level 3 concluded experimental trials
/count_exp4:					stores the number of level 4 concluded experimental trials
/count_exp5:					stores the number of level 5 concluded experimental trials
/count_ctrl1:					stores the number of level 1 concluded control trials
/count_ctrl2:					stores the number of level 2 concluded control trials
/count_ctrl3:					stores the number of level 3 concluded control trials
/count_ctrl4:					stores the number of level 4 concluded control trials
/count_ctrl5:					stores the number of level 5 concluded control trials


EXPERIMENTAL SET-UP

(A) 5 levels of difficulty: tested in blocked format

In this script version, the levels are defined as follows: 
1: 2 1-digit integers (0-9), only + or -, solution 0-9 (1 digit)
2: 3 1-digit integers (0-9), only +, - (no repeated operations, order of operations random)
3: 3 integers, 1-2 integers are double digits (0-99), +,-,* (no repeated operations, selection and order of operations random), solution 0-9 (1 digit)
4: 4 integers, 1-2 integers are double digits (0-99), +,-,* (no repeated operations, selection and order of operations random), solution 0-9 (1 digit)
5: 4 integers, 1-4 integers are double digits (0-99), * and / are required, plus either + or - (order of operations random), solution 0-9 (1 digit)	

Note: the problems are pregenerated for this script (see also Dedovic et al, p.320)

(B) 4 different conditions:
1) Training: 
- administered outside of imaging equipment
- uses the mouse to control the selection of the solution (the 'dials') -> Dedovic et al (2005) use keyboard input for training
- minimum suggested duration: 2 minutes for entire training block (this translates to Min = 24s for each level of difficulty);
	Note: the default in this script is set to 5 minutes (= 1 minute/level of difficulty)
- presents a random order of the 5 difficulty level blocks
- calculates average time participants used to respond to each problem (regardless of accuracy) for each level of difficulty.
	Note: The uncorrected latencies are used in this script to ensure that initial timelimits can be calculated.
- does not put a time limit on performance
- does not give overall performance feedback in the form of a performance bar
- presents feedback (correct, incorrect) for individual problem for 500ms (editable parameter)			
	Note: training ends with a slide that says "Please Wait" (to continue, the experimenter has to press the spacebar; the participant cannot advance with the mouse)

2) Experimental: inside imaging equipment
- minimum suggested duration: 2 minutes per level of difficulty
	Note: this time limit is based on our interpretation of the description of "individual runs" (p.321, right column)
- presents a random order of the 5 difficulty level blocks
- puts a time limit on performance
	a) at the start of a block uses training mean latency for the currently tested level of difficulty but shortens it by 10% (= 90% of the training mean response times)
	b) continuously tracks the performance and response time of the last 3 trials
	=> if the last 3 trials are all correct: adjust time limit by using the average response duration of the last 3 trials but shorten it by 10%
	=> if the last 3 trials are all errors (or timeouts): adjust time limit by lengthening the current timeout by 10%
	c) presents a timer on screen that counts down the seconds (Dedovic et al, 2005 used a progress bar)
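The adaptive time limit in a) and b) can be summarized in a short Python sketch (variable and function names are ours, not the script's):

```python
# Hedged sketch of the adaptive timeout rule described above.

def initial_timeout(training_mean_rt):
    # a) at block start: 90% of the training mean response time for this level
    return 0.9 * training_mean_rt

def adjusted_timeout(current_timeout, last3_correct, last3_rts):
    # b) last 3 trials all correct -> shorten: 90% of their mean response time
    if all(last3_correct):
        return 0.9 * (sum(last3_rts) / 3)
    # last 3 trials all errors/timeouts -> lengthen the current limit by 10%
    if not any(last3_correct):
        return 1.1 * current_timeout
    # mixed outcomes -> leave the time limit unchanged
    return current_timeout
```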

- presents a performance bar that shows the "average" performance as being in the green (good) region and the participant's
performance as being (likely) in the red (bad) region (the displayed average performance is calculated across experimental difficulty levels)

- presents feedback (correct, incorrect, timeout) for individual problems for 500ms (editable parameter)

- adds an intertrial interval (ITI): the duration of the ITI is based on parameters.iti_test (editable parameter), which is
adjusted by the difference between the average response latency for the tested level (based on training performance) and the current response latency.
If the current response latency was faster, the ITI increases; if it was slower (unlikely, as a timeout is imposed), the ITI decreases.
	Note: the ITI was added so that control and experimental trials could (roughly) be matched in frequency.
			(Dedovic et al, 2005, p.321: "To match the frequency of mental arithmetic tasks [in experimental and control condition], the time between tasks is varied as a function of the time limit 
			imposed during the experimental condition, so that the total number of tasks presented per condition is identical.")
			The idea is that if the response time is equal to the training response time (which is run under similar conditions to the control condition)
			the ITI is parameters.iti_test. If it's shorter (because of the timeout) then the ITI gets adjusted up - otherwise down.
			This works theoretically AS LONG AS there is still a positive ITI (>= 0 ms) left at the end. If a control trial (or experimental trial, though that is less likely) 
			is longer than the average response time by more than parameters.iti_test, the number of trials in the experimental and control conditions might differ. 
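The ITI rule described above reduces to a single clamped expression; this Python sketch (names are ours) makes the adjustment explicit:

```python
# Hedged sketch: ITI = parameters.iti_test plus (training mean RT for this
# level minus the current response latency), floored at 0 ms.

def intertrial_interval(iti_test, training_mean_rt, current_rt):
    return max(0, iti_test + (training_mean_rt - current_rt))

# Faster than the training mean -> longer ITI; slower -> shorter ITI (down to 0).
print(intertrial_interval(1000, 4000, 3000))  # 2000
print(intertrial_interval(1000, 4000, 5500))  # 0 (clamped at zero)
```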

	Note: each experimental block ends with a slide that informs participants of their average performance, followed by a slide that says "Please Wait"
			(to move on, the experimenter has to press the spacebar; participants cannot advance with the mouse)

3) Control: inside imaging equipment

- minimum suggested duration: 2 minutes per level of difficulty
- presents a random order of the 5 difficulty level blocks
- does not put a time limit on performance
- does not present a performance bar 
- presents feedback (correct, incorrect, timeout) for individual problems for 500ms (editable parameter)
- adds an intertrial interval (ITI): the duration of the ITI is based on parameters.iti_test (editable parameter), which is
adjusted by the difference between the average response latency for the tested level (based on training performance) and the current response latency.
If the current response latency was faster, the ITI increases; if it was slower (unlikely, as a timeout is imposed), the ITI decreases.
	Note: the ITI was added so that control and experimental trials could (roughly) be matched in frequency.
			(Dedovic et al, 2005, p.321: "To match the frequency of mental arithmetic tasks [in experimental and control condition], the time between tasks is varied as a function of the time limit 
			imposed during the experimental condition, so that the total number of tasks presented per condition is identical.")

	Note: each control block ends with a slide that says "Please Wait"

4) Rest: inside imaging equipment

- presents the user interface with a message to take a break and not move the mouse until told to do so
	Note: the rest block ends with a slide that says "Please Wait" (to move on, the experimenter has to press the spacebar; the participant cannot advance with the mouse)

The order of the 3 test conditions (experimental, control, rest) is counterbalanced by group number (the 6 group numbers run the 6 possible orders)
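One plausible mapping from group number to condition order (the script's actual assignment may differ) can be sketched in Python as:

```python
# Hedged sketch: 6 group numbers cover the 6 permutations of the 3 conditions.
from itertools import permutations

ORDERS = list(permutations(["experimental", "control", "rest"]))  # 6 orders

def condition_order(groupnumber):
    """Assumed mapping: group numbers cycle through the 6 possible orders."""
    return ORDERS[(groupnumber - 1) % 6]
```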


STIMULI:
This script uses pregenerated problem sequences for the 5 levels of difficulty. The problems can be edited under
section "Editable Lists".

INSTRUCTIONS
This script uses instructions that are not original to Dedovic et al (2005). They can be edited under
section "Editable Instructions".

EDITABLE CODE:
Check below for (relatively) easily editable parameters, stimuli, instructions, etc.
Keep in mind that you can use this script as a template and modify the entire code to further customize your experiment.

The parameters you can change are:

/trainingtimeout:						sets the timeout for the training block (per difficulty level) in ms (default: 60000ms => 1 min per difficulty level)
/experimentaltimeout:					sets the timeout of the individual experimental block (per difficulty level) in ms (default: 120000ms => 2 min per difficulty level)
/feedbackduration:						sets the feedback duration in ms (default: 500ms)
/restduration:							sets the duration of the rest period in ms (default: 60000)
/iti_test:								the default length of the intertrial interval in ms in the experimental and control tasks; this ITI runs if
										the response latency of the currently solved problem equals the one determined during training (default: 1000ms)

Definition of the Performance Bar:
Note: the performance bar appears Red until parameters.redperformance; it appears White until parameters.whiteperformance; it appears Green above parameters.whiteperformance
Default: for 0-60% the bar appears Red; for 60%-80% the bar appears White; above 80% the bar appears Green

/whiteperformance:						sets the performance proportion of the performance bar that appears white (default: 80%)
										Note: performance > 80% appears green
/redperformance:						sets the performance proportion of the performance bar that appears red (default: 60%)
/averageperformance:					sets the average performance that the participant is compared to (default: 85%) -> used for the average performance triangle
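The color thresholds above amount to a simple three-way split; this Python sketch illustrates it with the listed defaults (boundary handling at exactly 60% and 80% is our assumption, as the description leaves it open):

```python
# Hedged sketch of the performance bar coloring with the documented defaults.

def bar_color(percent_correct, redperformance=60, whiteperformance=80):
    """Red up to redperformance, white up to whiteperformance, green above."""
    if percent_correct <= redperformance:   # boundary inclusion is assumed
        return "red"
    if percent_correct <= whiteperformance:
        return "white"
    return "green"

print(bar_color(50))  # red
print(bar_color(70))  # white
print(bar_color(85))  # green (the displayed "average" performance of 85%)
```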


Copyright © Millisecond Software. All rights reserved.