User Manual: Inquisit Behavioral Pattern Separation Task - Object Version - Set D


___________________________________________________________________________________________________________________	

						BEHAVIORAL PATTERN SEPARATION TASK - OBJECT VERSION (BPS-O)
						    renamed to: "Mnemonic Similarity Task" (MST) in 2015
							
									- uses Set D images -
___________________________________________________________________________________________________________________	

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 01-28-2014
last updated:  02-17-2022 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 02-17-2022 Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________
This script implements the Behavioral Pattern Separation Task - Object Version (BPS-O)* with sets of 192 stimuli
as described in:

Stark, S.M., Yassa, M.A., Lacy, J.W., & Stark, C.E.L. (2013). A task to assess behavioral pattern separation (BPS)
in humans: Data from healthy aging and mild cognitive impairment. Neuropsychologia, 51, 2442–2449.

From the article (Stark et al., 2013):
"[...] the BPS-O task provides a sensitive measure for observing changes in memory performance across the life span 
and may be useful for the early detection of memory impairments that may provide an early signal of later development 
to mild cognitive impairment."

* the task was renamed in 2015 to "Mnemonic Similarity Task (MST)"


Background information, the original programs (Win/Mac), and stimuli/instructions are freely available via:
http://faculty.sites.uci.edu/starklab/mnemonic-similarity-task-mst/
Two more stimulus sets (E, F) have since been added to the original task.

						Millisecond Software thanks Dr. Stark for sharing the new site with us!

NOTE: Unlike the original program, this script provides NO parameter for choosing a randomization seed.
The seed used in this script is randomized on each run.


___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________	
Participants complete a 2-part experiment to assess recognition memory.
The first part presents 128 pictures (default) of everyday items, and participants have to 
decide whether each item is an OUTDOOR or an INDOOR item. The second part presents 64 of the 
previously seen pictures (targets), 64 items very similar to the other previously seen pictures (lures), 
and 64 new items (foils). 
Participants are asked to categorize the items as old, new, or similar within 2.5s (default).

Stark et al. (2013) categorized the lures into 5 lure bins:
"[...] the stimuli were divided into 5 lure bins, with the more mnemonically similar lures in lure bin 1 (L1) and
the least mnemonically similar lure items in lure bin 5 (L5)" (p. 2446)

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approx. 12 minutes to complete.

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________
The default data stored in the data files are:

(1) Raw data file: 'bps_o_setd_raw*.iqdat' (a separate file for each participant)

build:								The specific Inquisit version used (the 'build') that was run
computer.platform:					the platform the script was run on (win/mac/ios/android)
date, time: 						date and time script was run 
subject, group: 					the current subject and group numbers
session:							the current session id

									
set:								"D" = picture set D is run by this script
										
setsize:							number of items per trial type (targets/foils/lures)

nr_subsetImages:					number of items per trial type condition as well as subset (default: "64")	
									Choose from:
									"64" => no subsets possible 
									"20-1", "20-2", "20-3" => sets are separated into 3 test sets for repeated measures
									"32-1", "32-2" => sets are separated into 2 test sets for repeated measures
										Note: if something other than these options is chosen, the script defaults to "64"
																	
blockcode, blocknum:				the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 				the name and number of the currently recorded trial (built-in Inquisit variable)
										Note: trialnum is a built-in Inquisit variable; it counts all trials run; even those
										that do not store data to the data file such as feedback trials. Thus, trialnum 
										may not reflect the number of main trials run per block.
										
stimulus:							stores the currently presented picture
stimulusselect:						the itemnumber of the currently presented picture

trialtype:							the current trial type 
									1 = target
									2 = foil
									3 = lure
								
lurebin:							the lure bin of current lure (1-5); 0 for targets and foils

response:							the scancode of the response key
									part1: 23 = I (indoor); 24 = O (outdoor); 
									part2: 47 = V (old); 49 = N (new); 48 = B (similar)
									
responseCategory:					the 'interpreted response': 
									part1; "indoor" vs. "outdoor"; 
									part2: "old", "new", "similar"
									
correct:							the correctness of the response (1 = correct; 0 = incorrect)
										Note: only relevant for Part 2 (accuracy of responses in Part 1 is not assessed in this script);
										see the sketch after this list for the presumed scoring rule
										
latency: 							the response latency in ms
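
As an illustration of the presumed Part 2 scoring rule (targets should be called "old", foils "new", and lures "similar"),
here is a minimal Python sketch; it is not the actual Inquisit code, and the trialtype codes are the ones listed above:

	# trialtype codes (see above): 1 = target, 2 = foil, 3 = lure
	EXPECTED = {1: "old", 2: "new", 3: "similar"}

	def score(trialtype, responseCategory):
	    # returns 1 = correct, 0 = incorrect (a missing response simply fails to match)
	    return 1 if responseCategory == EXPECTED.get(trialtype) else 0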


(2) Summary data file: 'bps_o_setd_summary*.iqdat' (a separate file for each participant)

inquisit.version:			Inquisit version run
computer.platform:			the platform the script was run on (win/mac/ios/android)
startDate:					date script was run
startTime:					time script was started
subjectid:					assigned subject id number
groupid:					assigned group id number
sessionid:					assigned session id number
elapsedTime:				time it took to run script (in ms); measured from onset to offset of script
completed:					0 = script was not completed (prematurely aborted); 
							1 = script was completed (all conditions run)
								

showresponsekeyreminder:	true (1) = a text reminder is presented onscreen (while pictures are presented) that reminds participants of response keys
							false (0) = no response key reminder is presented (default)

set:						set "D" 

setsize:					number of trials per trial type condition in part 2 (default: 64)						

nr_subsetImages:			number of items per trial type condition as well as subset number (default: "64")		
									Choose from:
									"64", 
									"20-1", "20-2", "20-3" => sets are separated into 3 test sets for repeated measures
									"32-1", "32-2" => sets are separated into 2 test sets for repeated measures
									Note: if something other than these options is chosen, the script defaults to "64"

selfpaced:					true = the task is self-paced => the pictures are still presented for the pre-determined duration
							but participants have as much time as needed to complete the respective tasks
							before the next stimulus shows up
							false = task is not self-paced (default)

stimulusduration:			stimulus presentation time (default: 2000ms)
ISI:						interstimulus interval (default: 500ms)

countl1-
countl5:							helper variables that keep track of how many lures of each of the 5 categories have been sorted into list.lures
minlurefreq:						the minimum number of lures of each lure bin

count_oldtargets:					counts all "old" responses to targets
count_simtargets:					counts all "similar" responses to targets
count_newtargets:					counts all "new" responses to targets
count_targets:						counts all target trials
count_targets_corr:					counts all target trials that were responded to => all target trials minus those with no responses
(same for lures and foils)

rawrate_oldtargets:					proportion of "old" responses to all target trials (includes no responses)
rawrate_simtargets:					proportion of "similar" responses to all target trials  (includes no responses)
rawrate_newtargets:					proportion of "new" responses to all target trials  (includes no responses)
(same for lures and foils)

corrrate_oldtargets:				adjusted proportion of "old" responses to target trials that were responded to (no responses excluded)
corrrate_simtargets:				adjusted proportion of "similar" responses to target trials that were responded to (no responses excluded)
corrrate_newtargets:				adjusted proportion of "new" responses to target trials that were responded to (no responses excluded)
(same for lures and foils; see the worked sketch at the end of this section)

Lure Bins Summary Statistics:
L1-L5:								counts how often lures of category 1-5 were run (raw count)

L1_NR-
L5_NR:								counts 'no responses' (NR) to all 5 lure bin categories

L1O - 
L5O:								counts how often lures of category 1-5 were categorized as "old" (raw count)

L1S-             			
L5S:       							counts how often lures of category 1-5 were categorized as "similar" (raw count)

L1N-							
L5N:								counts how often lures of category 1-5 were categorized as "new" (raw count)

rawpercentcorrect:					overall percent correct rate (takes all responses into account; includes no responses)
corrpercentcorrect:					adjusted overall percent correct rate (takes only those trials with a response into account; no responses are excluded)

BPS:								Behavioral Pattern Separation Score (BPS): rates corrected for no responses
									=> "hit" rate for lures (=proportion of similar responses to lure objects) - "false alarm" rate for foils (=proportion of similar responses to foil objects)

TRS:								Traditional Recognition Score (TRS): rates corrected for no responses
									=> hit rate for targets (=proportion of old responses to old objects) - false alarm rate for foils (=proportion of old responses to foil objects)
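
To make the raw vs. corrected rates and the two summary scores concrete, here is a minimal worked sketch in Python
(illustration only, not Inquisit code; all counts and the lure/foil rate names are hypothetical stand-ins for the variables above):

	# hypothetical counts for the target condition
	count_targets      = 64    # all target trials run
	count_oldtargets   = 50    # "old" responses to targets
	count_targets_corr = 60    # trials with any response (64 trials minus 4 no-responses)

	rawrate_oldtargets  = count_oldtargets / count_targets        # 50/64 ~ 0.78 (no responses count against the rate)
	corrrate_oldtargets = count_oldtargets / count_targets_corr   # 50/60 ~ 0.83 (no responses excluded)

	# BPS and TRS are built from the corrected rates (hypothetical values):
	corrrate_simlures = 0.55   # "similar" responses to lures
	corrrate_simfoils = 0.10   # "similar" responses to foils
	corrrate_oldfoils = 0.15   # "old" responses to foils

	BPS = corrrate_simlures - corrrate_simfoils     # 0.45: lure "hit" rate minus foil "false alarm" rate
	TRS = corrrate_oldtargets - corrrate_oldfoils   # ~0.68: target hit rate minus foil false alarm rate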
								
___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________
Block Sequence: List Generations -> Part 1 -> Part 2

Lure, Target, Foil, and Stimulus Presentation (Part 1) List Generation:
* the stimuli are randomly divided into lures, targets, and foils lists depending on  
values.set (C or D) and selected parameters.nr_subsetImages (editable parameter)
* list.stimuluspresentation (part 1) contains the itemnumbers selected for the lures and targets
(Note: lure lists are assembled with the constraint that each of the 5 lure bins has at least expressions.minlurefreq members)
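
The selection logic described above can be sketched roughly as follows (a hedged Python approximation of the constraint,
not the actual Inquisit list code; the item-to-bin assignment passed in is a placeholder):

	import random

	def draw_lists(items_by_bin, minlurefreq, setsize=64):
	    # items_by_bin: dict mapping lure bin (1-5) to the item numbers belonging to that bin
	    all_items = [i for bin_items in items_by_bin.values() for i in bin_items]
	    while True:
	        shuffled = random.sample(all_items, len(all_items))
	        targets = shuffled[0:setsize]
	        lures   = shuffled[setsize:2 * setsize]
	        foils   = shuffled[2 * setsize:3 * setsize]
	        # keep only splits in which every lure bin contributes at least minlurefreq lures
	        lure_set = set(lures)
	        if all(sum(1 for i in items if i in lure_set) >= minlurefreq
	               for items in items_by_bin.values()):
	            # part 1 presents the items later used as targets and lures
	            stimuluspresentation = targets + lures
	            return stimuluspresentation, targets, lures, foils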

Part 1: Stimulus Presentation (default: 128 trials => can be set via Editable Values, parameters.nr_subsetImages)
* stimuli are randomly selected from list.stimuluspresentation
* stimuli are presented for 2s (default, can be set via Editable Values) 
with 0.5s interstimulus interval (default, can be set via Editable Values) => response window 2.5s
* in non-selfpaced mode (parameters.selfpaced == false): the response must be made within the response window (stimulusduration + ISI = 2.5s by default)
* in selfpaced mode (parameters.selfpaced == true): the picture is presented for 2s (default) but the participant has unlimited time to respond

NOTE: The task is to categorize the items as Indoor vs. Outdoor items
-> accuracy of response is NOT assessed

Part 2: Recognition Test (default:  192 trials, 64 targets, 64 lures, 64 foils => can be set via Editable Values, parameters.nr_subsetImages)
* 64 stimuli are Old stimuli from Part 1 (the same picture is presented) => Targets
* the remaining stimuli from Part 1 are Similar stimuli (a similar picture of the item is presented) => Lures
* 64 stimuli are New stimuli (not previously presented in Part 1, still from same set) => Foils
* order of stimuli is random
* stimuli are presented for 2s (default, can be set via Editable Values) 
with 0.5s interstimulus interval (default, can be set via Editable Values) 
* in non-selfpaced mode (parameters.selfpaced == false): the response must be made within the response window (stimulusduration + ISI = 2.5s by default)
* in selfpaced mode (parameters.selfpaced == true): the picture is presented for 2s (default) but the participant has unlimited time to respond

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________
There are two sets of stimuli: set C and set D. This script runs set D.
The sets were downloaded via http://darwin.bio.uci.edu/~cestark/BPSO/bpso.html (link no longer active)
(now: http://faculty.sites.uci.edu/starklab/mnemonic-similarity-task-mst/)

Each set comes with 'a'/'b' variants of each item:
The 'a' variants are used for part1 and for targets and foils in part2. (-> item.Stimuli/item.stimuliD)
The 'b' variants are used for lures in part2. (-> item.Stimuli_lures/item.stimuliD_lures)

___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________
Instructions are adapted from the originals downloaded via http://darwin.bio.uci.edu/~cestark/BPSO/bpso.html (link no longer active)
(now: http://faculty.sites.uci.edu/starklab/mnemonic-similarity-task-mst/)
They can be easily customized under EDITABLE CODE -> Editable Instructions
	
___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code 
to further customize your experiment.

The parameters you can change are:


/showresponsekeyreminder:						true = a text reminder is presented onscreen (while pictures are presented) that 
															reminds participants of response keys
												false = no response key reminder is presented (default)

/nr_subsetImages:								number of items per trial type condition as well as the subset number (default: "64")	
													Choose from:
													"64" => no subsets possible 
													"20-1", "20-2", "20-3" => sets are separated into 3 test sets for repeated measures
													"32-1", "32-2" => sets are separated into 2 test sets for repeated measures
													Note: if something other than these options is chosen, the script defaults to "64"

													=> sets of 32: in this script the original set(s) of 192 items was randomly divided into 2 sets of 96 items
													(ensuring that each set received a minimum number of 13 lures per lure bins)
													=> sets of 20: in this script the original set(s) of 192 items was randomly divided into 3 sets of 60 items
													(ensuring that each set received a minimum number of 9 lures per lure bins)
													All these sub sets are the same across participants (not generated dynamically for each participant)


/selfpaced:										true = the task is self-paced => the pictures are still presented for the pre-determined duration
													but participants have as much time as needed to complete the respective tasks
													before the next stimulus shows up
												false = task is not self-paced (default)

Duration Variables:
/stimulusduration:								stimulus presentation time (default: 2000ms)
/ISI:											interstimulus interval (default: 500ms)
												Note: the combined duration of stimulusduration and ISI = response window
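
For example, with the defaults the window works out as follows (plain Python arithmetic; the names mirror the parameters
above but are not the Inquisit identifiers):

	stimulusduration = 2000                    # ms the picture stays on screen
	ISI = 500                                  # ms interstimulus interval

	# non-selfpaced mode: a response must arrive within this window
	response_window = stimulusduration + ISI   # 2500 ms

	# selfpaced mode: the picture still disappears after stimulusduration,
	# but the script waits indefinitely for a response before the next trial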