User Manual: Inquisit Probabilistic Reward Task

															
___________________________________________________________________________________________________________________	

										*PROBABILISTIC REWARD TASK*
___________________________________________________________________________________________________________________

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 09-22-2015
last updated:  01-19-2024 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 01-19-2024 Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________	
This script implements a Probabilistic Reward Task, a simple categorization task using a differential 
reinforcement schedule of monetary reward. It can be used as a measure of reward responsiveness (hedonic capacity).

The setup of this script allows for absolute sizing of the stimuli. By default, the stimuli are NOT
absolutely sized (the script uses the largest 4:3 portion of the current screen for stimulus presentation).
However, you can change the sizing parameter settings under section Editable Parameters to
turn on absolute sizing. If you run the script with absolute sizing and your participant's screen isn't big enough,
the script falls back to the largest 4:3 portion of the current screen (e.g. a smartphone screen) that it can find.


The implemented procedure is based on:

Pizzagalli, D.A., Jahn, A.L, & O'Shea, J.P. (2005). Toward an Objective Characterization of an Anhedonic
Phenotype: A Signal-Detection Approach. Biol Psychiatry, 57(4), 319–327.

___________________________________________________________________________________________________________________
TASK DESCRIPTION
___________________________________________________________________________________________________________________	
Participants are asked to categorize faces as having "short" or "long" mouths. Correct responses are 
intermittently rewarded with an asymmetric reinforcement schedule for short and long mouths.
For half the participants, short mouths are reinforced about 60% of the time ("frequent reward"), 
whereas long mouths are reinforced only about 20% of the time ("infrequent reward"); 
for the other half of the participants, the reverse is true.

Response keys are counterbalanced within groups.
Assignment to the 4 experimental conditions (2 reinforcement schedules x 2 response key assignments) is done by
groupnumber.

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 15 minutes to complete.

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________	
The default data stored in the data files are:

(1) Raw data file: 'probabilisticrewardtask_raw*.iqdat' (a separate file for each participant)

build:							the specific Inquisit version ('build') that was run
computer.platform:				the platform the script was run on (win/mac/ios/android)
date, time: 					date and time the script was run 
subject, group: 				the current subject and group number
session:						the current session id

//Screen Setup:
(parameter) runAbsoluteSizes:	true (1) = should run absolutely sized canvas (see parameters- canvasHeight_inmm)
								false (0) = should use proportionally sized canvas 
								(uses width = 4/3 * screenHeight, i.e. the largest 4:3 area of the screen)
								
canvasAdjustments:				NA: not applicable => parameters- runAbsoluteSizes was set to 'false'
								0: parameters- runAbsoluteSizes was set to 'true' and screen size was large enough
								1: parameters- runAbsoluteSizes was set to 'true' BUT screen size was too small and 
								adjustments had to be made

activeCanvasHeight_inmm:		the height of the active canvas (by default: the lightGray area) in mm 
activeCanvasWidth_inmm:			the width of the active canvas in mm 
display.canvasHeight:			the height of the active canvas in pixels
display.canvasWidth:			the width of the active canvas in pixels

px_per_mm:						the conversion factor to convert pixel data into mm results for the current monitor
								(Note: the higher the resolution of the current monitor, 
								the more pixels cover the same absolute screen distance)
								This factor is needed if you want to convert pixel data into absolute mm data 
								or the other way around (e.g. distance_in_mm = distance_in_px / px_per_mm)


blockcode, blocknum:			the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 			the name and number of the currently recorded trial (built-in Inquisit variable)
									Note: trialnum is a built-in Inquisit variable; it counts all trials run, even those
									that do not store data to the data file (such as feedback trials). Thus, trialnum 
									may not reflect the number of main trials run per block. 
									


expgroup:						1 = short mouth is frequently rewarded; 
								2 = long mouth is frequently rewarded
									
responsekeyassignment:			1 = short mouth left/long mouth right; 
								2 = short mouth right/long mouth left
									
blockcount:						counts the blocks

reward_short:					0 = short mouth trial is not supposed to be rewarded; 
								1 = short mouth trial is supposed to be rewarded 
									(if response is correct; otherwise the next short trial that is correct is rewarded)
								
new_reward_short:				0 = no new reward_short 
								(this happens if the last short mouth trial was supposed to be rewarded but response was incorrect)
								1 = a new reward_short needs to be determined
								
reward_long:					0 = long mouth trial is not supposed to be rewarded; 
								1 = long mouth trial is supposed to be rewarded 
								(if response is correct; otherwise the next long trial that is correct is rewarded;
								see the sketch after this raw-data listing for an illustration of this carry-over rule)

new_reward_long:				0 = no new reward_long 
								(this happens if the last long mouth trial was supposed to be rewarded but response was incorrect)
								1 = a new reward_long needs to be determined

stimulusitem:					the presented stimuli in order of trial presentation
image:							the target image presented

response:						the scancode of the participant's response key:
								18 = E
								23 = I
										
responseCat:					the interpreted key response: "short" vs. "long"										
										
correct:						the correctness of the response (1 = correct; 0 = incorrect)

latency: 						the response latency (in ms); measured from onset of target
countrewardtrials:				counts the number of rewards given out (across test blocks)
total:							stores the current total of cents won (across test blocks)
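
The reward_short/new_reward_short and reward_long/new_reward_long variables implement a carry-over rule: a trial
that was scheduled to be rewarded but answered incorrectly passes its reward on to the next correct trial of the
same type. Below is a minimal Python sketch of this logic (an illustration only, not the script's Inquisit code;
the 5-cent reward amount and the simulated accuracy are assumptions):

import random

def simulate_one_type(reward_schedule, responses_correct, reward_cents=5):
    # reward_schedule:   e.g. a shuffled list of 30 True / 20 False for the frequent type
    #                    (10 True / 40 False for the infrequent type)
    # responses_correct: one True/False per trial (stand-in for the participant)
    # reward_cents:      illustrative reward amount (assumption, not taken from the script)
    schedule = list(reward_schedule)
    total = 0
    reward_pending = False                    # cf. reward_* = 1 while a reward awaits delivery
    for correct in responses_correct:
        if not reward_pending and schedule:   # cf. new_reward_* = 1: draw the next
            reward_pending = schedule.pop(0)  # scheduled reward decision
        if correct and reward_pending:
            total += reward_cents             # rewards are delivered only on correct responses
            reward_pending = False
        # after an incorrect response the pending reward carries over to the
        # next correct trial of the same type
    return total

# usage sketch: one block's worth of frequently rewarded trials
schedule = [True] * 30 + [False] * 20
random.shuffle(schedule)
responses = [random.random() < 0.85 for _ in range(50)]
print(simulate_one_type(schedule, responses))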


(2) Summary data file: 'probabilisticrewardtask_summary*.iqdat' (a separate file for each participant)

inquisit.version:				Inquisit version run
computer.platform:				the platform the script was run on (win/mac/ios/android)
startDate:						date script was run
startTime:						time script was started
subjectid:						assigned subject id number
groupid:						assigned group id number
sessionid:						assigned session id number
elapsedTime:					time it took to run script (in ms); measured from onset to offset of script
completed:						0 = script was not completed (prematurely aborted); 
								1 = script was completed (all conditions run)
//Screen Setup:
(parameter) runAbsoluteSizes:	true (1) = should run absolutely sized canvas (see parameters- canvasHeight_inmm)
								false (0) = should use proportionally sized canvas 
								(uses width = 4/3 * screenHeight, i.e. the largest 4:3 area of the screen)
								
canvasAdjustments:				NA: not applicable => parameters- runAbsoluteSizes was set to 'false'
								0: parameters- runAbsoluteSizes was set to 'true' and screen size was large enough
								1: parameters- runAbsoluteSizes was set to 'true' BUT screen size was too small and 
								adjustments had to be made

activeCanvasHeight_inmm:		the height of the active canvas (by default: the lightGray area) in mm 
activeCanvasWidth_inmm:			the width of the active canvas in mm 
display.canvasHeight:			the height of the active canvas in pixels
display.canvasWidth:			the width of the active canvas in pixels

px_per_mm:						the conversion factor to convert pixel data into mm results for the current monitor
								(Note: the higher the resolution of the current monitor, 
								the more pixels cover the same absolute screen distance)
								This factor is needed if you want to convert pixel data into absolute mm data 
								or the other way around (e.g. distance_in_mm = distance_in_px / px_per_mm)

expgroup:						1 = short mouth is frequently rewarded; 
								2 = long mouth is frequently rewarded
									
responsekeyassignment:			1 = short mouth left/long mouth right; 
								2 = short mouth right/long mouth left

countrewardtrials:				counts the number of rewards given out (across test blocks)
total:							stores the current total of cents won (across test blocks)

Notes: 
responses with latencies < 150 ms OR latencies > 2500 ms are removed from the summary statistics
(see the sketch after the list of summary measures below)

propcorrect:					overall proportion correct (across all test trials)
meanRT:							overall mean response latency (in ms) of correct responses (across all test trials)
propcorrect_frequent:			proportion correct on frequently rewarded mouth trials
meanRT_frequent:				mean latency (in ms) of correct responses on frequently rewarded mouth trials
propcorrect_infrequent:			proportion correct on infrequently rewarded mouth trials
meanRT_infrequent:				mean latency (in ms) of correct responses on infrequently rewarded mouth trials
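
A minimal Python sketch of how these summary measures can be derived from the raw trial data (an illustration
under assumed field names, not the script's actual code; the 150-2500 ms window is the exclusion rule noted above):

def summarize(trials):
    # trials: list of dicts with keys 'frequent' (True/False), 'correct' (0/1), 'latency' (ms)
    # responses with latencies outside 150-2500 ms are excluded, as noted above
    valid = [t for t in trials if 150 <= t['latency'] <= 2500]

    def prop_correct(subset):
        return sum(t['correct'] for t in subset) / len(subset) if subset else None

    def mean_rt(subset):
        rts = [t['latency'] for t in subset if t['correct']]
        return sum(rts) / len(rts) if rts else None

    frequent = [t for t in valid if t['frequent']]
    infrequent = [t for t in valid if not t['frequent']]
    return {
        'propcorrect': prop_correct(valid),
        'meanRT': mean_rt(valid),
        'propcorrect_frequent': prop_correct(frequent),
        'meanRT_frequent': mean_rt(frequent),
        'propcorrect_infrequent': prop_correct(infrequent),
        'meanRT_infrequent': mean_rt(infrequent),
    }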


Note regarding logD/logB calculations: 
The counts of correct responses need to be adjusted if accuracy for the frequent (infrequent) category
is either at ceiling (propCorrect = 1) or at floor (propCorrect = 0).
In this script, the counts get adjusted by +/- 0.0005.
Without this correction, logD or logB cannot be calculated in these cases (the formula would require
taking the logarithm of zero or dividing by zero).


logD:						Measure of Discriminability (see Pizzagalli et al., 2005, p. 5), across all test blocks.
							logD is a non-parametric alternative to the traditional d' measure
							of a signal detection framework. 

logB:						Measure of Response Bias (see Pizzagalli et al., 2005, p. 5), across all test blocks.
							logB is a non-parametric alternative to the criterion measure 
							of a signal detection framework.

									
and separate measures per test block:
logD_1 (block1),						
logB_1 (block1),
logD_2 (block2),						
logB_2 (block2),
logD_3 (block3),						
logB_3 (block3)
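
For reference, a Python sketch of these two measures as computed in the signal-detection framework of
Pizzagalli et al. (2005), from correct/incorrect counts per category; the zero-count handling below
approximates the +/- 0.0005 adjustment described above (the script's exact bookkeeping may differ):

import math

def log_b_and_log_d(freq_correct, freq_incorrect, infreq_correct, infreq_incorrect):
    # response counts per category (frequently vs. infrequently rewarded mouth),
    # across all test blocks or per block (logB_1/logD_1 etc.)
    counts = [freq_correct, freq_incorrect, infreq_correct, infreq_incorrect]
    # nudge empty cells so the logarithm stays defined (cf. the +/- 0.0005 adjustment above)
    fc, fi, ic, ii = [c if c > 0 else 0.0005 for c in counts]
    log_b = 0.5 * math.log10((fc * ii) / (fi * ic))   # response bias
    log_d = 0.5 * math.log10((fc * ic) / (fi * ii))   # discriminability
    return log_b, log_d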

___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________

2 reward frequency schedules x 2 response key assignments: Assignment to the 4 experimental groups is done by groupnumber

group 1 (odd groupnumbers) -> short mouth faces are frequently rewarded if response is correct (potentially 30 out of 50 trials); 
long mouth faces are infrequently rewarded if correct (potentially 10 out of 50 trials)

group 2 (even groupnumbers) -> long mouth faces are frequently rewarded if response is correct (potentially 30 out of 50 trials); 
short mouth faces are infrequently rewarded if correct (potentially 10 out of 50 trials)

Within those groups, response key assignments are counterbalanced (see the sketch below).
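
A minimal Python sketch of the odd/even groupnumber split described above (the mapping of groupnumbers to the
two response key assignments within each schedule group is handled by the script itself and is not restated here):

def reinforcement_schedule(groupnumber):
    # odd groupnumbers  -> short mouth is the frequently rewarded category
    # even groupnumbers -> long mouth is the frequently rewarded category
    return "short mouth frequent" if groupnumber % 2 == 1 else "long mouth frequent"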


1. Practice Block: 
* a short demonstration of 2 trials
* 2 trials: 1 with a short mouth face and 1 with a long mouth face (same durations as test trials)

2. Test Blocks: 
* 3 Blocks, each block runs 100 trials (50 short mouth trials, 50 long mouth trials)
* order of short and long mouth trials is randomly determined with the constraint that 
no more than 3 consecutive trials are of the same trial type (see the sketch after this list)
* frequently rewarded mouth trials: 30 (30/50->60%); infrequently rewarded mouth trials: 10 (10/50->20%)
	-> per block there are 40 potential reward trials 
	-> the reinforcement schedules for each trial type are randomized (see section Editable Lists for more info)
	-> if a trial has been randomly determined to be a rewarded trial but the response is incorrect, the next
	correct trial of the same trial type is rewarded instead
* there is a forced 30s (default) rest period between test blocks; the task continues automatically
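
A Python sketch of one way to generate a block's trial order under the constraint above (an illustration only,
not the script's actual randomization):

import random

def block_trial_order(n_short=50, n_long=50, max_run=3):
    # build the order trial by trial, never allowing more than max_run
    # consecutive trials of the same type
    while True:
        remaining = {"short": n_short, "long": n_long}
        order, run_type, run_len = [], None, 0
        while sum(remaining.values()) > 0:
            options = [t for t, n in remaining.items()
                       if n > 0 and not (t == run_type and run_len >= max_run)]
            if not options:   # dead end (only one type left mid-run); start over
                break
            choice = random.choices(options, weights=[remaining[t] for t in options])[0]
            remaining[choice] -= 1
            order.append(choice)
            run_len = run_len + 1 if choice == run_type else 1
            run_type = choice
        if len(order) == n_short + n_long:
            return order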

Trial Sequence:
fixation (500 ms) -> No Mouth Face (500 ms) -> Mouth Face (100 ms) -> No Mouth Face until response 
(response latency is measured from onset of the Mouth Face)

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________	
Face stimuli are not original to Pizzagalli et al. (2005). They can be edited under section Editable Stimuli.
(The default short mouth is about 88% of the length of the long mouth.)
Sizes of stimuli on screen are proportional to the monitor/canvas size; they can be adjusted under
section Editable Parameters.

___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________	
Instructions are not original to Pizzagalli et al (2005). They can be edited under section Editable Instructions.
__________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template; you can always edit the entire code to 
further customize your experiment.

The parameters you can change are: