User Manual: Inquisit Affective Shift Task


___________________________________________________________________________________________________________________	

								Affective Shift Task (AST)
___________________________________________________________________________________________________________________	


Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 10-13-2022
last updated:  10-23-2023 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 10-23-2023 Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________
This script implements Millisecond Software's version of the Affective Shift Task (AST), a paradigm to study
executive functioning, as measured by inhibition and set shifting, in response to emotional and non-emotional 
face stimuli. 

Because the keyboard response keys must spatially map onto the four quadrants of a 2x2 matrix,
the script should be run only on devices with external keyboards. If the script cannot detect an
external keyboard, it terminates after a brief notification message.
By default, the script runs with absolutely sized stimuli - see section Editable Parameters for more
information.

DISCLAIMER: The implemented procedure is based on the description of the task by De Lissnyder et al. (2010)
and represents a best-guess effort based on the information provided.

Reference:											
De Lissnyder, E., Koster, E. H. W., Derakshan, N., & De Raedt, R. (2010). 
The association between depressive symptoms and executive control impairments in response to emotional 
and non-emotional information. Cognition and Emotion, 24(2), 264–280. 
https://doi.org/10.1080/02699930903378354

___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________

Participants are asked to find the 'odd-one-out' face in 2x2 arrays of faces based on a cued dimension
(gender, color, or emotion). Three of the four faces always share one particular variation on each
of these three dimensions and one face is always the 'odd-one-out'.

For example, three of the faces might be colored light gray and one face dark gray - if the cue is 
'COLOR', the target would be the dark gray face. At the same time, three faces in the array share the same gender -
if the cue is 'GENDER', the sole face of the other gender would be the target. Likewise, three faces in the
array share the same emotion - if the cue is 'EMOTION', the target would be the only face with the opposite emotion.
For each of the three dimensions (gender, color, or emotion), the target face is always a different one,
so participants have to pay attention to the 'cue'.

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 25 minutes to complete.

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________
The default data stored in the data files are:

(1) Raw data file: 'affectiveshifttask_raw*.iqdat' (a separate file for each participant)

build:						the specific Inquisit version (the 'build') that was run
computer.platform:			the platform the script was run on (win/mac/ios/android)
date, time: 				date and time the script was run 
subject:					the current subject id
group: 						the current group id
session:					the current session id


//Screen Setup:
(parameter) runAbsoluteSizes:		true (1) = runs an absolutely sized canvas (see parameters- canvasHeight_inmm)
									false (0) = runs a proportionally sized canvas (uses width = 4/3 * screenHeight)
								
canvasAdjustments:					NA: not applicable => parameters- runAbsoluteSizes was set to 'false'
									0: parameters- runAbsoluteSizes was set to 'true' and the screen size was large enough
									1: parameters- runAbsoluteSizes was set to 'true' BUT the screen size was too small and 
									adjustments had to be made

activeCanvasHeight_inmm:			the height of the active canvas in mm 
activeCanvasWidth_inmm:				the width of the active canvas in mm 
display.canvasHeight:				the height of the active canvas in pixels
display.canvasWidth:				the width of the active canvas in pixels

px_per_mm:							the conversion factor to convert pixel data into mm results for the current monitor
									(Note: the higher the resolution of the current monitor, 
									the more pixels cover the same absolute screen distance)
									This factor is needed to convert pixel data into absolute mm data or vice versa


blockcode, blocknum:		the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 		the name and number of the currently recorded trial (built-in Inquisit variable)
								Note: trialnum is a built-in Inquisit variable; it counts all trials run; even those
								that do not store data to the data file. 
																
practiceBlockCounter:		counter to track the number of practice blocks run
practicePass:				1 = practice was passed with the minimum propCorrect
							0 = otherwise

trialCounter:				counter to track the trialcount per block
							Note: a test trial sequence spans 2 (R Trials) to 3 (I, C, U) actual trials
							
trialCounter_I:				counter to track the Inhibitory (I) trials run
trialCounter_C:				counter to track the Control (C) trials run
trialCounter_U:				counter to track the Unclassified (U) trials run
trialCounter_R:				counter to track the Repeat (R) trials run				

trialtype:					I(nhibitory), C(ontrol), U(nclassified), R(epeat) (only relevant for test trials)
							I: the last cue in the 3-face display sequence repeats the first one (a-b-a);
							the second one is always different
							C: all three cues from the 3-face display sequence are different from each other (a-b-c)
							U: the last cue in the 3-face display sequence is different from the first and second cue,
							but the second cue repeats the first (a-a-b)
							R: 2-face display sequence: the second cue repeats the first (a-a)

cueOrder:					the order in which the cues should run across a test trial sequence	
							as well as the target to find
							Examples: 
							"mda" -> 
							'm' => cue word given: GENDER, the target to find will be the singular male 
							face amongst three female faces ('m' for male target).
							'd' => cue word given: COLOR, the target to find will be the singular dark gray face
							amongst three light gray faces ('d' for dark target)
							'a' => cue word given: EMOTION, the target to find will be the singular angry face
							amongst three happy faces ('a' for angry)
			
cueNumber:					the number of cues run for the current test trial sequence
							(I, C, U: 3; R: 2)

countCues:					a counter that tracks the number of cues run in the current test trial sequence

targetTrial:				1 = the current trial is the last and thus target trial of the current test trial sequence
							0 = otherwise

cue:						stores the cue (with target information) for the current face display
							'f'/'m' = Gender cues, target face will be female/male
							'd'/'l' = Color cues, target face will be dark/light gray
							'a'/'h' = Emotion cues, target face will be angry/happy
							
cueWord:					The cue word presented for the current face display 
							GENDER, COLOR or EMOTION
			
switch:						1 = the current cue repeats the previous cue; 0 = otherwise
							(Note: this does not necessarily depend on the actual trialtype)
							

targetQuadrant:				the target to find is located in:
							1 = top left; 2 = top right; 3 = bottom right; 4 = bottom left (clockwise from top left to bottom left)	
			
correctresponse:			stores the correct key press for the current face display

response:					the participant's response (scancode of the response key)
responseText:				the response key pressed
correct:					correctness of response (1 = correct, 0 = error)
latency:					response latency (in ms); measured from: onset of face display

targetPic:					the actual target image presented
foil1Pic:					foil1 image
foil2Pic:					foil2 image
foil3Pic:					foil3 image
targetQuadrant:				the quadrant the target was randomly assigned to (see above)
foil1Quadrant:				the quadrant foil1 was randomly assigned to
foil2Quadrant:				the quadrant foil2 was randomly assigned to
foil3Quadrant:				the quadrant foil3 was randomly assigned to

//debug (x/y pixel coordinates of the four quadrant positions, in px):
expressions.x1_inpx:
expressions.y1_inpx:
expressions.x2_inpx:
expressions.y2_inpx:
expressions.x3_inpx:
expressions.y3_inpx:
expressions.x4_inpx:
expressions.y4_inpx:								


(2) Summary data file: 'affectiveshifttask_summary*.iqdat' (a separate file for each participant)

inquisit.version:			Inquisit version run
computer.platform:			the platform the script was run on (win/mac/ios/android)
startDate:					date script was run
startTime:					time script was started
subjectid:					assigned subject id number
groupid:					assigned group id number
sessionid:					assigned session id number
elapsedTime:				time it took to run script (in ms); measured from onset to offset of script
completed:					0 = script was not completed (prematurely aborted); 
							1 = script was completed (all conditions run)	
							
//Screen Setup:
(parameter) runAbsoluteSizes:		true (1) = runs an absolutely sized canvas (see parameters- canvasHeight_inmm)
									false (0) = runs a proportionally sized canvas (uses width = 4/3 * screenHeight)
								
canvasAdjustments:					NA: not applicable => parameters- runAbsoluteSizes was set to 'false'
									0: parameters- runAbsoluteSizes was set to 'true' and the screen size was large enough
									1: parameters- runAbsoluteSizes was set to 'true' BUT the screen size was too small and 
									adjustments had to be made

activeCanvasHeight_inmm:			the height of the active canvas in mm 
activeCanvasWidth_inmm:				the width of the active canvas in mm 
display.canvasHeight:				the height of the active canvas in pixels
display.canvasWidth:				the width of the active canvas in pixels

px_per_mm:							the conversion factor to convert pixel data into mm results for the current monitor
									(Note: the higher the resolution of the current monitor, 
									the more pixels cover the same absolute screen distance)
									This factor is needed to convert pixel data into absolute mm data or vice versa
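
For illustration, here is a minimal Python sketch (not part of the Inquisit script) of how the logged
px_per_mm factor can be used to convert between pixel and millimeter measurements; the numeric value
below is hypothetical:

	px_per_mm = 3.78                      # hypothetical example; read the actual factor from the data file

	def px_to_mm(pixels, px_per_mm):
	    # convert a distance in pixels to millimeters
	    return pixels / px_per_mm

	def mm_to_px(mm, px_per_mm):
	    # convert a distance in millimeters to pixels
	    return mm * px_per_mm

	print(px_to_mm(600, px_per_mm))       # e.g. the width of a 600 px face image in mm (~158.7 mm)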
													
NOTES: 
1) All summary calculations are based on the responses to the last (target) trial of each sequence.
2) Summary RTs are based on CORRECT responses only.

//overall:
propCorrect_overall:		proportion correct responses across all test trial sequences
meanCorrRT_overall:			mean response time (in ms) of correct responses across all test trial sequences

//Executive Function Measures
Inhibition: 				calculated as 'RT to Inhibitory trials - RT to Control trials'	
							Interpretation: smaller difference scores reflect better inhibitory control (executive ability)
							
SetShifting:				calculated as '([RT to Control + RT to Unclassified]/2) - RT to Repeat trials'
							Interpretation: smaller difference scores reflect better set shifting (executive ability)
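
To illustrate the two difference scores, here is a minimal Python sketch (not part of the script) that
computes them from the mean correct RTs per trial type; the numeric values are hypothetical stand-ins
for meanCorrRT_I/C/U/R from the summary file:

	meanCorrRT_I = 1450.0   # Inhibitory (hypothetical value)
	meanCorrRT_C = 1300.0   # Control (hypothetical value)
	meanCorrRT_U = 1350.0   # Unclassified (hypothetical value)
	meanCorrRT_R = 1200.0   # Repeat (hypothetical value)

	# Inhibition: RT(Inhibitory) - RT(Control); smaller difference = better inhibitory control
	inhibition = meanCorrRT_I - meanCorrRT_C

	# Set shifting: mean of RT(Control) and RT(Unclassified) minus RT(Repeat);
	# smaller difference = better set shifting
	set_shifting = (meanCorrRT_C + meanCorrRT_U) / 2 - meanCorrRT_R

	print(inhibition, set_shifting)   # 150.0 125.0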

//Response Time by TrialTypes
meanCorrRT_I: 				mean response time (in ms) of correct responses across all I(nhibitory) trial sequences
meanCorrRT_C: 				mean response time (in ms) of correct responses across all C(ontrol) trial sequences
meanCorrRT_U: 				mean response time (in ms) of correct responses across all U(nclassified) trial sequences 
meanCorrRT_R: 				mean response time (in ms) of correct responses across all R(epeat) trial sequences

//Response Time by Trialtypes x Cue category
meanCorrRT_I_gender: 		mean response time (in ms) of correct responses across I(nhibitory) trial sequences with GENDER cues
meanCorrRT_I_color: 		mean response time (in ms) of correct responses across I(nhibitory) trial sequences with COLOR cues 
meanCorrRT_I_emotion: 		mean response time (in ms) of correct responses across I(nhibitory) trial sequences with EMOTION cues
(same for all remaining combinations)

//PropCorrect
The same variables that are calculated for the response times are also calculated for proportions correct
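
The per-trialtype and per-cue summary values can also be reproduced from the raw data file. Below is a
hedged Python/pandas sketch (pandas is not part of Inquisit; the file name is hypothetical, the raw
.iqdat file is assumed to be tab-delimited, and the column names are assumed to match the raw data
variables listed above):

	import pandas as pd

	# read the tab-delimited raw data file (file name is hypothetical)
	raw = pd.read_csv("affectiveshifttask_raw_1.iqdat", sep="\t")

	# keep only the target trials (the last face display of each trial sequence)
	targets = raw[raw["targetTrial"] == 1]

	# proportion correct per trial type (I, C, U, R)
	prop_correct = targets.groupby("trialtype")["correct"].mean()

	# mean correct RT per trial type x cue word (GENDER / COLOR / EMOTION)
	mean_rt = (targets[targets["correct"] == 1]
	           .groupby(["trialtype", "cueWord"])["latency"]
	           .mean())

	print(prop_correct)
	print(mean_rt)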
							
___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________	

(1) INTRO

(2) PRACTICE
- each practice block runs 12 trials
- each dimension is run four times (with each of its two variations used as the 'odd-one-out' twice)
Example: the dimension 'GENDER' is run four times, twice with a female face as the 'odd-one-out'
and twice with a male face as the 'odd-one-out'
- the order of the cues is randomly determined
- the practice block is repeated up to four times (see Editable Parameters) if performance is lower than
80% correct (see Editable Parameters)
- after a maximum of 4 practice blocks, everyone advances to the test block regardless of practice performance
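
The practice logic described above can be summarized by the following Python-style sketch (purely
illustrative; run_practice_block is a hypothetical placeholder for running one 12-trial practice block
and returning its proportion correct, and the maximum of 4 blocks mirrors the default described above):

	MAX_PRACTICE_BLOCKS = 4      # editable parameter (default)
	MIN_PROP_CORRECT = 0.80      # editable parameter (default)

	def run_practice_phase(run_practice_block):
	    # returns True if practice was passed (practicePass = 1), False otherwise
	    for block in range(1, MAX_PRACTICE_BLOCKS + 1):
	        prop_correct = run_practice_block()
	        if prop_correct >= MIN_PROP_CORRECT:
	            return True
	    return False             # after the last block, the test phase starts regardless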

(3) TEST

General Design: 4 trialtypes (inhibitory, control, unclassified, repeat) X 
3 cues (emotion: angry/happy, gender: female/male, color: dark/light)

The design runs 216 trial sequences in total (see list.trialtypes and list.cueOrder for details); the order is randomly determined

////Shifting trial sequences:////
48 Inhibitory trial sequences: 
	-trial sequence that runs 3 cue-face display trials where the last cue is different from the second cue;
	but repeats the first one (a-b-a)
	Note: there are 48 different possible ways to arrange the 6 variations of each possible
	dimension as cues for Inhibitory trial sequences (see list.cueOrder for details)
	
48 Control trial sequences: 
	- trial sequence that runs 3 cue-face display trials where all 
	three cues are different from each other (a-b-c)
	Note: there are 48 different possible ways to arrange the 6 variations of each possible
	dimension as cues for Control trial sequences (see list.cueOrder for details)	

48 Unclassified trial sequences:
	- trial sequence that runs 3 cue-face display trials where the second cue repeats the first,
	but the last cue is always different (a-a-b)
	Note: there are 48 different possible ways to arrange the 6 variations of each possible
	dimension as cues for Unclassified trial sequences (see list.cueOrder for details)	

////Repeat trial sequences:////
72 Repeat trial sequences:
	- trial sequence that runs 2 cue-face display trials where the second cue always repeats the first one
	Note: there are 12 different possible ways to arrange the 6 variations of each possible
	dimension as cues for Repeat trial sequences; each of those runs 6 times (see list.cueOrder for details)	

IMPORTANT: The target trials in each trial sequence are always the LAST cue-face display.
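
For clarity, a short Python sketch (not part of the script) that classifies a cue sequence into one of
the four trial types, based only on whether successive cues address the same dimension:

	# each cue is reduced to its dimension: 'a'/'h' -> EMOTION, 'f'/'m' -> GENDER, 'd'/'l' -> COLOR
	def classify(dimensions):
	    if len(dimensions) == 2:
	        return "R"                                    # Repeat: a-a
	    a, b, c = dimensions
	    if a == c and a != b:
	        return "I"                                    # Inhibitory: a-b-a
	    if a == b and b != c:
	        return "U"                                    # Unclassified: a-a-b
	    if len({a, b, c}) == 3:
	        return "C"                                    # Control: a-b-c
	    return "?"                                        # pattern not used in this task

	print(classify(["EMOTION", "GENDER", "EMOTION"]))     # I
	print(classify(["EMOTION", "EMOTION", "GENDER"]))     # U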

/////////////////////////////////////////////////////////////////////
////////////////////////Trial Sequence///////////////////////////////
/////////////////////////////////////////////////////////////////////
Example: I(nhibitory) trial sequence

The cue order is sampled beforehand. Example: 'afh'

Cue1 (a -> EMOTION -> the 'angry' face will be 'odd-one-out'): 500ms
Face Display of four faces arranged in a 2x2 matrix*: until response
ITI: 100ms

Cue2 (f -> GENDER -> the female face will be 'odd-one-out'): 500ms
Face Display of four faces arranged in a 2x2 matrix*: until response
ITI: 100ms

Cue3 (h -> EMOTION -> the 'happy' face will be 'odd-one-out'): 500ms
Face Display of four faces arranged in a 2x2 matrix*: until response (TARGET TRIAL)
ITI: 100ms

* all face displays are randomly determined

/////////////////////////////////////////////////////////////////////
////////////////////////Face Display Generation//////////////////////
/////////////////////////////////////////////////////////////////////

=> see expressions.generateFoilsAndTargets

Target Face:
- the target face is constrained by the particular 'cue' sampled
Example: 'a' -> the target face needs to be 'angry'; the foil faces need to be 'happy'

- the script randomly determines the remaining variations of the target face:
color: randomly (without replacement) selects dark vs. light
gender: randomly (without replacement) selects female vs. male

- Based on the composition of the target face, 
the required foil face variations are constructed.
Example: 
target face: angry-light-male
foil1: happy-light-male
foil2: happy-light-female
foil3: happy-dark-male

- the script samples randomly (without replacement) from the available image pool for each
face (target face, foil1, foil2, foil3).
Example: the composition of the target face is angry-light-male.
The script randomly (without replacement) selects the next available 'angry-light-male' face
(Note: this script only provides 2 exemplars for each combination, so faces will repeat often -
however, the same face should not be selected on consecutive draws).
Similarly, the script selects the next available 'happy-light-male' image as foil1.
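
A minimal Python sketch of this construction logic (purely illustrative, not the actual Inquisit code;
it only builds the four face specifications, not the image sampling):

	import random

	# opposite value on each dimension
	OPPOSITE = {"angry": "happy", "happy": "angry",      # emotion
	            "female": "male", "male": "female",      # gender
	            "dark": "light",  "light": "dark"}       # color
	PAIRS = {"emotion": ("angry", "happy"),
	         "gender":  ("female", "male"),
	         "color":   ("dark", "light")}

	def build_display(cued_dim, target_value):
	    # target: fixed on the cued dimension, random on the other two dimensions
	    other = [d for d in ("emotion", "gender", "color") if d != cued_dim]
	    target = {cued_dim: target_value}
	    for d in other:
	        target[d] = random.choice(PAIRS[d])

	    # all foils take the opposite value on the cued dimension
	    base = dict(target, **{cued_dim: OPPOSITE[target_value]})
	    foil1 = dict(base)                                              # differs from the target on the cued dimension only
	    foil2 = dict(base, **{other[0]: OPPOSITE[target[other[0]]]})    # additionally flips the 1st non-cued dimension
	    foil3 = dict(base, **{other[1]: OPPOSITE[target[other[1]]]})    # additionally flips the 2nd non-cued dimension
	    return target, foil1, foil2, foil3

	print(build_display("emotion", "angry"))   # e.g. angry-light-male target with three happy foils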

/////Quadrant Assignment:

- the script randomly (without replacement) selects the quadrant in which the target should be
presented for each face display. Across the 576 face displays (576 = 48*3 + 48*3 + 48*3 + 72*2),
the target should be presented in each quadrant 144 times.
- the foils are randomly distributed amongst the remaining three quadrants
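
A Python sketch of this counterbalanced 'without replacement' assignment (illustrative only; the actual
script presumably implements the same logic via Inquisit list elements):

	import random

	def make_target_quadrants(n_displays=576):
	    # each quadrant serves as the target location n_displays/4 times (144 for 576 displays),
	    # drawn without replacement
	    pool = [1, 2, 3, 4] * (n_displays // 4)
	    random.shuffle(pool)
	    return pool

	def assign_quadrants(target_quadrant):
	    # foils are randomly distributed among the remaining three quadrants
	    foils = [q for q in (1, 2, 3, 4) if q != target_quadrant]
	    random.shuffle(foils)
	    return {"target": target_quadrant, "foil1": foils[0], "foil2": foils[1], "foil3": foils[2]}

	targets = make_target_quadrants()
	print(assign_quadrants(targets[0]))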

/////Response Keys:
The current response keys are E (quadrant1), I (quadrant2), M (quadrant3), and C (quadrant4).
The spatial locations of the keys (on a US QWERTY keyboard) map directly onto the four quadrants of the 
2x2 face matrix.
Because the default Inquisit response buttons on touchscreens (run without external keyboards) would be 
presented in a single row, the spatial mapping between response keys and matrix positions would be lost. 
The script therefore checks at start-up whether an external keyboard can be detected on the current device.
If not, the script terminates with a brief notification message.
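
The intended key-to-quadrant mapping can be summarized as follows (illustrative Python sketch; quadrant
numbering follows the data dictionary above: 1 = top left, 2 = top right, 3 = bottom right, 4 = bottom left):

	KEY_TO_QUADRANT = {"E": 1,   # top left
	                   "I": 2,   # top right
	                   "M": 3,   # bottom right
	                   "C": 4}   # bottom left

	def is_correct(pressed_key, target_quadrant):
	    # True if the pressed key corresponds to the quadrant of the target face
	    return KEY_TO_QUADRANT.get(pressed_key.upper()) == target_quadrant

	print(is_correct("m", 3))   # True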

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________

provided by Millisecond Software - can be edited under section 'Editable Stimuli'

Faces: 
● Original study (De Lissnyder et al., 2010)
The original face stimuli were taken from the Karolinska Directed Emotional Faces database 
(KDEF; https://www.kdef.se/). All faces were edited to remove interfering background
stimuli (hair) and were resized to the same dimensions.
The images were taken based on the results from a validation study of this picture set 
(Goeleven, De Raedt, Leyman, & Verschuere, 2008).

● Millisecond Software script:
The Millisecond Software script provides PLACEHOLDER stimuli from the NimStim database:
http://www.macbrain.org/resources.htm

8 female faces and 8 male faces were selected. Each face was selected with an
angry, open-mouth expression and a happy, open-mouth expression.
All faces are of young adult Caucasian actors.
The faces were edited to remove interfering background stimuli (hair - as much as possible)
and were resized to the same dimensions (600px X 600px).
Color variations were created by adjusting the brightness of the images up or down.
All image edits were made in Paint.net.

These face images can easily be replaced/edited by others under section Editable Stimuli, 
editing item.female_dark_angry to item.male_light_happy_faces.

The script ensures that a different actor is selected for each of the four quadrants.
___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________

provided by Millisecond Software - can be edited under section 'Editable Instructions'.
They are not the originals.
___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template and therefore freely edit the entire code 
to further customize your experiment.

The parameters you can change are: