						
										KEEP TRACK TASK
SCRIPT INFO

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 12-07-2017
last updated: 02-14-2018 by K.Borchert (katjab@millisecond.com) for Millisecond Software LLC

Script Copyright © 02-14-2018 Millisecond Software


BACKGROUND INFO 

											*Purpose*
											
This script implements a 'Keep Track Task' similar to the one outlined by: 

Friedman, N.P., Miyake, A., Young, S.E., DeFries, J.C., Corley, R.P., & Hewitt, J.K. (2008).
Individual Differences in Executive Functions Are Almost Entirely Genetic
in Origin. Journal of Experimental Psychology: General, 137, 201-225.

The Friedman et al. (2008) task is in turn based on:

Yntema, D. B. (1963). Keeping track of several things at once. Human Factors, 5, 7–17.



											  *Task*
Participants watch a sequence of 15 words drawn from 6 different categories and must keep track of
the most recently presented word for each of several target categories. Before the presentation, participants are 
told the specific categories to keep track of and these target categories are displayed on screen throughout the presentation.
The number of target categories to keep track of (of the 6 possible) varies from round to round (default in this script: 2-4).
At the end of each round, participants are asked to enter the last item presented for each of the target categories.


DATA FILE INFORMATION: 
The default data stored in the data files are:

(1) Raw data file: 'keeptracktask_raw*.iqdat' (a separate file for each participant)*

build:							Inquisit build
computer.platform:				the platform the script was run on
date, time, subject, group:		date and time the script was run, plus the current subject and group number 
blockcode, blocknum:			the name and number of the current block
									Note: each round (aka "trial") is run as a block element in this script
/countPracticeSessions:			running total of practice sessions requested									
/roundCount:					running total of the trials/rounds run; resets after each practice session
/difficulty:					level of difficulty ( = number of categories to keep track of)
trialcode, trialnum: 			the name and number of the currently recorded trial element
									(Note: not every trial that is run necessarily records data; by default data is collected unless /recorddata = false is set for a particular trial/block) 
stimulusitem:					the presented stimuli in order of trial presentation
/currentTargetCategory:			stores the currently presented target category as a digit (1-6)
/targetCategory1:				stores the label of the randomly selected target category1
/Category1_last:				stores the last item presented for target category1
/targetCategory2:				stores the label of the randomly selected target category2
/Category2_last:				stores the last item presented for target category2
/targetCategory3:				stores the label of the randomly selected target category3
/Category3_last:				stores the last item presented for target category3
/targetCategory4:				stores the label of the randomly selected target category4
/Category4_last:				stores the last item presented for target category4
/targetCategory5:				stores the label of the randomly selected target category5
/Category5_last:				stores the last item presented for target category5
/targetCategory6:				stores the label of the randomly selected target category6
/Category6_last:				stores the last item presented for target category6
response:						the participant's response
latency: 						the response latency (in ms); 
								recall trials: measured from the onset of the recall trial until all textbox responses are 
								submitted via the 'submit' button
/countCorrect:					counts the number of correctly recalled items per round (across all target categories)
/propCorrect:					stores the proportion of correctly recalled items per round (= values.countCorrect/values.difficulty)

/correctCategory1:				1 = last item of target category 1 was correctly recalled; 0 = otherwise
/correctCategory2:				1 = last item of target category 2 was correctly recalled; 0 = otherwise
/correctCategory3:				1 = last item of target category 3 was correctly recalled; 0 = otherwise
/correctCategory4:				1 = last item of target category 4 was correctly recalled; 0 = otherwise
/correctCategory5:				1 = last item of target category 5 was correctly recalled; 0 = otherwise
/correctCategory6:				1 = last item of target category 6 was correctly recalled; 0 = otherwise
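
Note: the per-round accuracy variables above relate to each other in a straightforward way. The following is an
illustrative Python sketch of that bookkeeping (the actual script computes these values in Inquisit; the function
and variable names here are hypothetical, not the names used in the script; response normalization is described
under "Note on Accuracy Checks" further below):

# per-round scoring as described above (illustrative only)
def score_round(targets, responses, last_items):
    # targets: list of target category labels for this round (length = difficulty)
    # responses: dict mapping category label -> participant's typed answer
    # last_items: dict mapping category label -> last exemplar actually presented
    correct_flags = {cat: int(responses.get(cat, "") == last_items[cat]) for cat in targets}  # ~ /correctCategoryN
    count_correct = sum(correct_flags.values())       # ~ /countCorrect
    prop_correct = count_correct / len(targets)       # ~ /propCorrect = countCorrect / difficulty
    return correct_flags, count_correct, prop_correct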

(2) Summary data file: 'keeptracktask_summary*.iqdat' (a separate file for each participant)*

script.startdate:				date script was run
script.starttime:				time script was started
script.subjectid:				subject id number
script.groupid:					group id number
script.elapsedtime:				time it took to run script (in ms)
computer.platform:				the platform the script was run on
/completed:						0 = script was not completed (prematurely aborted); 1 = script was completed (all conditions run)
/countPracticeSessions:			running total of practice sessions requested		

/roundCount:					final count of test rounds run

/TotalCorrect:					stores the number of correctly recalled items across all test rounds
/TotalWordsRecalled:			stores the total number of words that needed to be recalled across all test rounds
/propCorrect:					the proportion correct across all test round responses
								(= number of correct responses across all test rounds / total number of required responses = X/36 in this script)
								
/meanPropCorrect: 				mean proportion correct per round; based on values.propCorrect for each round
								(Example: 0.25 => on average, participant got 25% of all responses correct per test round, regardless of level of difficulty)

/meanPropCorrect1:				mean proportion correct for level 1 trials
/meanPropCorrect2:				mean proportion correct for level 2 trials
/meanPropCorrect3:				mean proportion correct for level 3 trials
/meanPropCorrect4:				mean proportion correct for level 4 trials
/meanPropCorrect5:				mean proportion correct for level 5 trials
/meanPropCorrect6:				mean proportion correct for level 6 trials
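
The summary measures combine the per-round values described for the raw data file. A rough Python illustration
(hypothetical names; the actual computation happens in Inquisit):

# rounds: list of (difficulty, count_correct, prop_correct) tuples, one per test round
def summarize(rounds):
    total_correct = sum(c for _, c, _ in rounds)            # ~ /TotalCorrect
    total_words = sum(d for d, _, _ in rounds)              # ~ /TotalWordsRecalled (2x4 + 3x4 + 4x4 = 36 by default)
    prop_correct = total_correct / total_words              # ~ /propCorrect
    mean_prop_correct = sum(p for _, _, p in rounds) / len(rounds)   # ~ /meanPropCorrect
    mean_prop_by_level = {}                                  # ~ /meanPropCorrect1 ... /meanPropCorrect6
    for level in range(1, 7):
        level_props = [p for d, _, p in rounds if d == level]
        if level_props:
            mean_prop_by_level[level] = sum(level_props) / len(level_props)
    return total_correct, total_words, prop_correct, mean_prop_correct, mean_prop_by_level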

* separate data files: to change to one data file for all participants (on Inquisit Lab only), go to section
"DATA" and follow further instructions


EXPERIMENTAL SET-UP

1. Practice Session
* by default, the practice session runs 3 rounds with the difficulty level increasing from 2 to 4
=> number of rounds as well as their difficulty level can be adjusted by editing list.difficulty_practice 
under section Editable Lists
* per round (see the sketch at the end of this section for an illustration of the sampling logic): 
	* target categories are sampled randomly for each round (no balancing across rounds)
	* each category is presented at least twice and at most three times within the 15 word presentations 
	(it is randomly determined for each round which categories are presented three times - no balancing across rounds; 
	with 6 categories and 15 presentations, three of the categories appear a third time); 
	order of category presentation is randomized
	* the particular exemplars presented for each category are sampled at random from the 6 provided options 
	(constraint: no repeats within the same round)
* after recall, participants receive detailed feedback of their responses
* by default, practice session can be repeated if no more than parameters.maxNumberOfPracticeSessions (default: 2) have been run yet
(change settings under section Editable Parameters)
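
For illustration, the per-round sampling constraints described above (which apply to both practice and test rounds)
amount to the following logic. This is a hedged Python sketch, not the Inquisit code the script actually uses;
category and exemplar names are placeholders (the real ones are defined under Editable Stimuli):

import random

CATEGORIES = ["category1", "category2", "category3", "category4", "category5", "category6"]   # placeholder labels
EXEMPLARS = {cat: [cat + "_exemplar" + str(i) for i in range(1, 7)] for cat in CATEGORIES}     # 6 placeholders each

def build_round(difficulty):
    # target categories are sampled fresh each round, no balancing across rounds
    targets = random.sample(CATEGORIES, difficulty)
    # every category appears at least twice; three randomly chosen categories appear a third time (2*6 + 3 = 15)
    counts = {cat: 2 for cat in CATEGORIES}
    for cat in random.sample(CATEGORIES, 3):
        counts[cat] += 1
    category_sequence = [cat for cat, n in counts.items() for _ in range(n)]
    random.shuffle(category_sequence)                        # presentation order is randomized
    # exemplars are drawn per category without repeats within the round
    drawn = {cat: random.sample(EXEMPLARS[cat], counts[cat]) for cat in CATEGORIES}
    words = [drawn[cat].pop() for cat in category_sequence]
    return targets, category_sequence, words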


2. Test Session
* by default, the test session runs 12 rounds with difficulty levels 2, 3, 4 (each difficulty level is run 4 times; 
the order of levels is randomized) 
=> Total words that need to be recalled: 2x4 + 3x4 + 4x4 = 36
=> number of rounds as well as their difficulty levels can be adjusted by editing list.difficulty_test 
under section Editable Lists
* per round: 
	* target categories are sampled randomly for each round (no balancing across rounds)
	* each category is presented at least twice and at most three times within the 15 word presentations 
	(it is randomly determined for each round which categories are presented three times - no balancing across rounds); 
	order of category presentation is randomized
	* the particular exemplars presented for each category are sampled at random from the 6 provided options (see Editable Stimuli)
	(constraint: no repeats within the same round)
* after recall, participants receive detailed feedback of their responses by default. However, feedback can easily be
turned off by setting parameters.skipTestFeedback to 'true' (default setting is 'false', see section Editable Parameters)

Note on Accuracy Checks of entered Responses: 
1) all entered responses as well as target items (e.g. India) are converted to lower-case letters for comparisons
Example: presented item: India; entered item: india (evaluated as correct)
Example: presented item: bear; entered item: BEAR (evaluated as correct)
2) space (blank) characters are removed from all entered responses before comparisons
Example: presented item: 'brother'; entered item 'brother ' (evaluated as correct)
Example: presented item: 'gold'; entered item '    g     o ld   ' (evaluated as correct)
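
Put differently, the two accuracy checks amount to the following comparison (an illustrative Python sketch; the
script itself performs these checks with Inquisit string expressions):

def is_correct(entered, presented):
    # both sides are lower-cased; space characters are additionally stripped from the entered response
    return entered.lower().replace(" ", "") == presented.lower()

# the examples above:
assert is_correct("india", "India")
assert is_correct("BEAR", "bear")
assert is_correct("brother ", "brother")
assert is_correct("    g     o ld   ", "gold")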

Trial/Round Sequence (default settings):

presentation of target categories until spacebar is hit -> 500ms delay ->
word presentation 1 (1500ms) -> isi (0ms) -> word presentation 2 (1500ms) -> etc. ->
word presentation 15 (1500ms) -> isi (0ms) -> recall delay (0ms) ->
recall until 'submit' button is pressed -> iti (default: 1000ms)
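
With these defaults, the timed portion of a round (everything except the self-paced category screen and the
self-paced recall screen) adds up as follows (a small illustrative calculation; the variable names simply mirror
the Editable Parameters listed further below):

# rough per-round timing with default settings (self-paced parts excluded)
stim_delay, stim_duration, stim_isi, recall_delay, iti = 500, 1500, 0, 0, 1000
n_words = 15
presentation_ms = stim_delay + n_words * (stim_duration + stim_isi) + recall_delay
print(presentation_ms)        # 23000 ms of timed presentation per round
print(presentation_ms + iti)  # 24000 ms including the default intertrial interval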


Note: this script provides the code to run any difficulty level between 1 and 6.
To change the number of rounds run and/or the difficulty levels run, simply
change list.difficulty_practice and/or list.difficulty_test under section Editable Lists


STIMULI
categories: Friedman et al. (2008)
exemplars: provided by Millisecond Software

By default, this script runs with 6 exemplars per category. That reduces the chance of guessing 
the correct exemplar for a given target category at the end of a round to p = 1/6 ≈ 0.17.

specific categories as well as exemplars can be edited under section "Editable Stimuli"

INSTRUCTIONS
provided by Millisecond Software - can be edited under section Editable Instructions

EDITABLE CODE:
check below for (relatively) easily editable parameters, stimuli, instructions etc. 
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code to further customize your experiment.

The parameters you can change are:

/exemplarSize:				proportional (to canvas height) size of exemplars (default: 8%)

/stimDelay:					the delay (in ms) of the first exemplar presented after hitting spacebar (default: 500ms)
/stimDuration:				duration (in ms) of exemplars on screen (default: 1500ms)
/stimISI:					the duration (in ms) of a blank screen presented after each stimulus and before the next (default: 0ms)
/recallDelay:				additional delay (on top of stimISI) (in ms) of the recall trial after the last exemplar is presented (default: 0ms)
/iti:						intertrial interval (in ms) in between each round (default: 1000ms)

/maxNumberOfPracticeSessions: maximum number of times participants can repeat the practice session if they choose to do so (default: 2)
								Note: the script will run at least 1 practice session regardless of parameter setting

/skipTestFeedback:			true(1): participants only receive performance feedback after each round during practice (but not the test)
							false(0): participants receive performance feedback after each round during practice AND test (default)
							
/debugmode:					true(1): the script is run in debugmode; a stimulus with all correct responses is presented with the textboxes
							during each recall trial
							false (0): the script is NOT run in debugmode (default)
							