User Manual: Inquisit Spatial Paired Associates Learning Task


___________________________________________________________________________________________________________________	

								SPATIAL PAIRED ASSOCIATES LEARNING TASK (spatial PALT)
___________________________________________________________________________________________________________________	


Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 04-11-2022
last updated:  05-03-2022 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 05-03-2022 Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________
This script implements Millisecond Software's version of the Spatial Paired Associates Learning Task (PALT), 
a test of visual working memory and spatial learning that is often used in research with elderly populations.
This implementation is based on the procedure published by Trewartha et al. (2014).

The task was developed for touchscreens but adapts to mouse input on non-touchscreen devices.

Researchers can choose to run the task with an absolute screen size to ensure that distances
stay the same across devices. The default settings are optimized for iPad touchscreens.
See section Editable Parameters for more info.

Reference:	

Trewartha, K.M., Garcia, A., Wolpert, D.M., & Flanagan, J.R. (2014).
Fast But Fleeting: Adaptive Motor Learning Processes Associated with Aging and Cognitive Decline.
The Journal of Neuroscience, 34(40), 13411–13421.

___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________

The Spatial PALT presents 6 boxes on screen, arranged in a circle around the screen center.
The boxes are opened in random order, each revealing either an empty interior or a shape,
before they close again and the next box opens.

After all boxes have been opened, the uncovered shapes appear one by one (in random order)
in the center of the screen, and participants have to click on (or touch) the box
in which they think the presented shape was originally located. No performance feedback is provided by default.
The number of shapes hidden in the boxes depends on the current set size tested.
Testing begins at setSize 1 and can go up to 6.
Participants get up to 10 attempts per setSize before the test is terminated.
Note: Repeated setSizes reuse the same shapes and shape locations but may open the boxes
in a different random order.

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 8 minutes to complete.

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________
The default data stored in the data files are:

(1) Raw data file: 'pairedassociateslearningtask_raw*.iqdat' (a separate file for each participant)

build:						the specific Inquisit version ('build') that was run
computer.platform:			the platform the script was run on (win/mac/ios/android)
date, time: 				date and time the script was run 
subject:					the current subject id
group: 						the current group id
session:					the current session id

//Play Setup:
(parameter) runAbsoluteSizes:	true (1) = should run absolutely sized canvas (see parameters- canvasHeight_inmm)
								false (0) = should use proportionally sized canvas (uses width = 43*screenHeight)
								
canvasAdjustments:				NA: not applicable => parameters- runAbsoluteSize was set to 'false'
								0: parameters- runAbsoluteSize was set to 'true' and screen size was large enough
								1: parameters- runAbsoluteSize was set to 'true' BUT screen size was too small and 
								adjustments had to be made

playareaHeight_inmm:			the height of the play area in mm 
playareaWidth_inmm:				the width of the play area in mm 
display.canvasHeight:			the height of the active canvas ('playarea') in pixels
display.canvasWidth:			the width of the active canvas ('playarea') in pixels

px_per_mm:						the conversion factor to convert pixel data into mm results for the current monitor
								(Note: the higher the resolution of the current monitor, 
								the more pixels cover the same absolute screen distance)
								This factor is needed if you want to convert pixel data into absolute mm data or the other way around
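For illustration only (this snippet is not part of the Inquisit script, and the factor value below is made up), converting between pixel and mm data is a single multiplication or division by px_per_mm:

```python
# Hypothetical example: read the actual px_per_mm factor from your data file.
px_per_mm = 5.2                        # example conversion factor (made up)

distance_px = 260                      # a distance measured in pixels
distance_mm = distance_px / px_per_mm  # pixels -> mm (about 50.0 mm here)

length_mm = 12.5                       # a distance measured in mm
length_px = length_mm * px_per_mm      # mm -> pixels (about 65.0 px here)
```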


blockcode, blocknum:		the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 		the name and number of the currently recorded trial (built-in Inquisit variable)
								Note: trialnum is a built-in Inquisit variable; it counts all trials run, even those
								that do not store data to the data file. 
								
setSize:					the current setSize tested
roundCounter:				the number of rounds started for this setSize								
phase:						"learning" vs. "test" 
trialCounter:				the trialCounter per phase  (resets after learning and after test)

currentBox:					learning phase only: the current box that is 'opened'
targetPresent:				learning phase only: 0 = an empty box was opened; otherwise it contains the stimulus itemnumber of the presented stimulus 

targetStim:					test phase only: the currently presented target pattern (image itemnumber)							
corrRsp:					test phase only: stores the correct box that needs to be selected

//DVs:
response:					stores the current response made
							test trials: stores the selected box
 
correct:					1 = selection was correct; 0 = otherwise 
list.ACC_round.mean:		the current proportion correct across the test trials

latency:					the response time (in ms); measured from onset of targetStim							


box1_stim to box6_stim:		store the pattern itemnumbers stored in each box
							Note: 1 = empty box


(2) Summary data file: 'pairedassociateslearningtask_summary*.iqdat' (a separate file for each participant)

inquisit.version:			Inquisit version run
computer.platform:			the platform the script was run on (win/mac/ios/android)
startDate:					date script was run
startTime:					time script was started
subjectid:					assigned subject id number
groupid:					assigned group id number
sessionid:					assigned session id number
elapsedTime:				time it took to run script (in ms); measured from onset to offset of script
completed:					0 = script was not completed (prematurely aborted); 
							1 = script was completed (all conditions run)
							
//Play Setup:
(parameter) runAbsoluteSizes:	true (1) = should run absolutely sized canvas (see parameters- canvasHeight_inmm)
								false (0) = should use proportionally sized canvas (uses width = 43*screenHeight)
								
canvasAdjustments:				NA: not applicable => parameters- runAbsoluteSize was set to 'false'
								0: parameters- runAbsoluteSize was set to 'true' and screen size was large enough
								1: parameters- runAbsoluteSize was set to 'true' BUT screen size was too small and 
								adjustments had to be made

playareaHeight_inmm:			the height of the play area in mm 
playareaWidth_inmm:				the width of the play area in mm 
display.canvasHeight:			the height of the active canvas ('playarea') in pixels
display.canvasWidth:			the width of the active canvas ('playarea') in pixels

px_per_mm:						the conversion factor to convert pixel data into mm results for the current monitor
								(Note: the higher the resolution of the current monitor, 
								the more pixels cover the same absolute screen distance)
								This factor is needed if you want to convert pixel data into absolute mm data or the other way around							
							
														
//Summary Data:							
propCorrect: 				the proportion correct responses across all test trials							
meanRT: 					the mean box selection time across all correct and incorrect test trials							
							
maxSetSize: 				the largest setSize successfully tested at the end of the script	

totalAttempts:				the total number of attempts needed to reach the maxSetSize	
							Note: participants who end up with a maxSetSize < 6 will have run an additional
							10 attempts on the next higher level but were ultimately unsuccessful (or did not complete the script)
							
MS_PALTScore:				Note: this score is added by Millisecond Software as an attempt to provide a single number for comparing participants
							that takes into account the maxSetSize reached as well as the number of attempts needed to get there.
							The higher the score, the fewer attempts were needed to reach the maxSetSize.
							calculation: PALTScore = maxSetSize + [1 - (totalAttempts across all levels up to maxSetSize)/(max possible attempts across all levels up to maxSetSize)]
							Examples: 
							a) level 5 with (1+1+3+4+5 =) 14 attempts to get there: PALTScore = 5 + [1 - (14/50)] = 5.72
							(Level 5 was completed with 1 (level1)+1+3+4+5 (level5) attempts; Level 6 was failed even after 10 attempts)

							b) level 5 with (1+3+6+7+9 =) 26 attempts to get there: PALTScore = 5 + [1 - (26/50)] = 5.48
							(Level 5 was completed with 1 (level1)+3+6+7+9 (level5) attempts; Level 6 was failed even after 10 attempts)

	
							Level 6: scores range from 6-6.9 (a score of 6.0 means participant used up all 60 attempts (10 per level); a score of 6.9 means participant used only 6 attempts (1 per level) total to get to the end)
							Level 5: scores range from 5-5.9
							Level 4: scores range from 4-4.9
							Level 3: scores range from 3-3.9
							Level 2: scores range from 2-2.9
							Level 1: scores range from 1-1.9
							Level 0: 0
							
							Note: if you know of better PALT measures let us know :)
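The scoring rule above can be sketched as a small function (an illustration only, not part of the Inquisit script; the function name is ours). The two worked examples above serve as checks:

```python
def ms_palt_score(max_set_size, total_attempts):
    """Sketch of MS_PALTScore = maxSetSize + [1 - totalAttempts / maxPossibleAttempts],
    where maxPossibleAttempts = 10 attempts per level up to maxSetSize."""
    if max_set_size == 0:               # Level 0 scores 0 by definition
        return 0.0
    max_possible = 10 * max_set_size    # 10 attempts allowed per level
    return max_set_size + (1 - total_attempts / max_possible)

# Example a) from above: level 5 reached in 1+1+3+4+5 = 14 attempts -> 5.72
# Example b) from above: level 5 reached in 1+3+6+7+9 = 26 attempts -> 5.48
```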
							
//attempts per level//
attempts_setSize1: 			the number of trials needed to complete setSize1
attempts_setSize2:			the number of trials needed to complete setSize2 
attempts_setSize3:			the number of trials needed to complete setSize3
attempts_setSize4:			the number of trials needed to complete setSize4 
attempts_setSize5:			the number of trials needed to complete setSize5 
attempts_setSize6:			the number of trials needed to complete setSize6
							
___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________	

(1) Instructions with demos (can be repeated upon request)

(2) Test:
Set size testing begins at setSize = 1 and can continue up to setSize = 6.
For each new setSize, the script randomly selects the relevant number of stimuli from 10 possible ones 
and randomly assigns these stimuli to the 6 possible locations.

The test is divided into 2 phases:
(a) Learning/Encoding Phase:
- each of the 6 boxes is opened to reveal its 'content'
- by default boxes are opened with a SOA of 2s and each box stays open for 1s
- the order in which the boxes are opened is randomly determined

(b) Test/Retrieval Phase:
- the presented stimuli appear one-by-one in the middle of the screen (order of stimuli is randomly determined)
- participants have to select the box that they think the stimulus was originally presented in
- no performance feedback is provided by default (see section Editable Parameters)

//SetSize Adjustments://
- participants have to get the location of every presented stimulus correct to move on to the next setSize
- if an error is made, the setSize is repeated up to 10 times. Repeated setSizes reuse the same stimuli
and locations, but the opening sequence is randomized again
- testing stops if a participant cannot get all pairings (stimulus/box) correct within 10 tries, or once
setSize 6 has been completed successfully
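In rough pseudocode, the set-size progression described above might look like this (a sketch only, not the actual Inquisit implementation; run_round is a hypothetical stand-in for one learning + retrieval round):

```python
def run_test(run_round, max_set_size=6, max_attempts=10):
    """Sketch of the setSize progression: advance on a perfect round,
    repeat the level on an error (same stimuli/locations, new opening order),
    and stop once max_attempts rounds at one level have all failed."""
    results = {}                          # attempts needed per completed setSize
    for set_size in range(1, max_set_size + 1):
        for attempt in range(1, max_attempts + 1):
            if run_round(set_size):       # True = every stimulus placed correctly
                results[set_size] = attempt
                break
        else:                             # all attempts failed: testing stops
            return results
    return results
```

Here run_round(set_size) would present the learning phase and collect the retrieval responses, returning True only if all box selections were correct.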

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________

provided by Millisecond Software - can be edited under section 'Editable Stimuli'
The stimulus images are based on those published by Trewartha et al. (2014).
___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________

provided by Millisecond Software - see section 'Editable Instructions'
___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code 
to further customize your experiment.

The parameters you can change are: