User Manual: Inquisit Tower of Hanoi


___________________________________________________________________________________________________________________	

										Tower of Hanoi Task (TOH)
										(German version)
___________________________________________________________________________________________________________________	

last updated:  01-09-2023 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC
Script Copyright © 01-09-2023 Millisecond Software

German translation provided by K. Borchert for Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________	
This script implements a computerized version of the Tower of Hanoi Task (TOH), a disk-transfer task with 3
equally sized pegs, as described by Humes et al. (1997). The TOH is considered a test of executive functioning 
with a focus on planning abilities.

The default setup for the Millisecond Software TOH task is optimized for touchscreen devices 
sized like an iPad but adapts to mouse use on non-touchscreen devices. By default, the stimuli are 
absolutely sized if the current screen is big enough; if not, the script uses the 
largest 4:3 portion of the current screen (e.g. on a smartphone screen) that it can find.
Absolute sizing of stimuli can easily be turned off or fine-tuned via parameter settings.
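
For illustration, here is a minimal sketch (in Python, not the script's actual Inquisit code) of the sizing
logic described above; the function name, the 150 mm default height, and the px_per_mm argument are
illustrative assumptions only:

    def choose_canvas(screen_w_px, screen_h_px, px_per_mm,
                      run_absolute_sizes=True, canvas_height_mm=150):
        # absolute sizing: request a canvas with a fixed physical height and a 4:3 width
        if run_absolute_sizes:
            h = canvas_height_mm * px_per_mm
            w = h * 4 / 3
            if w <= screen_w_px and h <= screen_h_px:
                return int(w), int(h)          # screen is large enough: keep the absolute size
            # screen too small: fall back to the largest 4:3 rectangle that fits
            h = min(screen_h_px, screen_w_px * 3 / 4)
            return int(h * 4 / 3), int(h)
        # proportional sizing: canvas width tied to the screen height (4:3)
        return int(screen_h_px * 4 / 3), int(screen_h_px)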


Reference:
  Humes, G. E., Welsh, M. C., Retzlaff, P., & Cookson, N. (1997). Towers of Hanoi and London: 
  Reliability and Validity of Two Executive Function Tasks. 
  Assessment, 4(3), 249–257. 
  https://doi.org/10.1177/107319119700400305
		
___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________	
Participants are asked to arrange up to five disks of varying sizes on three different pegs
into a specific goal pattern in as few moves as possible while observing two movement rules:
"only the top disk of a peg may be moved" and "a bigger disk may not be placed on top of a smaller disk".

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 10 minutes to complete.

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________
The default data stored in the data files are:

(1) Raw data file: 'towerofhanoi_german_raw*.iqdat' (a separate file for each participant)

build:							The specific Inquisit version used (the 'build') that was run
computer.platform:				the platform the script was run on (win/mac/ios/android)
date, time: 					date and time script was run 
subject, group: 				the current subject and group number
session:						the current session id

//Screen Setup:
(parameter) runAbsoluteSizes:	true (1) = should run absolutely sized canvas (see parameters- canvasHeight_inmm)
								false (0) = should use proportionally sized canvas (uses width = 4/3 * screenHeight)
								
canvasAdjustments:				NA: not applicable => parameters- runAbsoluteSizes was set to 'false'
								0: parameters- runAbsoluteSizes was set to 'true' and the screen size was large enough
								1: parameters- runAbsoluteSizes was set to 'true' BUT the screen size was too small and 
								adjustments had to be made

activeCanvasHeight_inmm:		the height of the active canvas (by default: the lightGray area) in mm 
activeCanvasWidth_inmm:			the width of the active canvas in mm 
display.canvasHeight:			the height of the active canvas in pixels
display.canvasWidth:			the width of the active canvas in pixels

px_per_mm:						the conversion factor used to turn pixel data into mm results for the current monitor
								(Note: the higher the resolution of the current monitor, 
								the more pixels cover the same absolute screen distance.)
								This factor is needed if you want to convert pixel data into absolute mm data or the other way around.
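
								For illustration (Python, values assumed for this example only): dividing a pixel
								measure by px_per_mm yields millimetres, multiplying converts back.

    px_per_mm = 3.78                             # illustrative value (roughly a 96 dpi monitor)
    distance_px = 500
    distance_mm = distance_px / px_per_mm        # pixels -> millimetres (about 132 mm)
    distance_px_again = distance_mm * px_per_mm  # millimetres -> pixels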


runPractice (parameter):		1 = a practice session was run; 0 = otherwise								

blockcode, blocknum:			the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 			the name and number of the currently recorded trial (built-in Inquisit variable)
									Note: trialnum is a built-in Inquisit variable; it counts all trials run, even those
									that do not store data to the data file (such as feedback trials). Thus, trialnum 
									may not reflect the number of main trials run per block.
																		
problemCount:					tracks the number of problems run																			
problemnumber					Current problem number (by default: 1-12)	
goalstate_category:				"tower" vs. "flat"							
N_disks:						stores the number of disks present in the current problem									
targetmoves						Number of minimal moves to solve the current problem

targetachieved					Returns 1 as soon as the subject has successfully reached a given problem's target / goal state. 
								Otherwise 0.

subjectmoves					Number of subject-performed moves for the current problem.
								Note: each rule violation counts as one extra move in this script 
									
excessmoves						Returns the difference between the number of moves performed by the subject ('subjectmoves')
								and the number of target moves for a given problem.
								
consecutiveCorrect:				counts the number of consecutive correct solutions for the same problem

stopTask:						1 = the task should be terminated at this point (values.success = 0 at end of all attempts for the current problem)
								0 = otherwise
								
success:						1 = the current problem was solved correctly according to the definition of success
								(default for test problems: the problem was solved correctly in two consecutive attempts)
								0 = the current problem has not (yet) been successfully solved
																
achievementScore				Score awarded for solving the current test problem if values.success = 1. 
								see expressions.scoring
																	
totalscore:						Score achieved across the whole set of test problems.
								In this script: Max is 72
																	
violation:						0 = no rule was violated with the currently recorded move
								1 = rule 1 was violated ('no larger disk onto smaller disk')
								2 = rule 2 was violated ('only top disk can be moved')
								(If rule 1 AND rule 2 are violated by the same move, only rule violation 2 is noted;
								the combined violation counts as a single violation in this script.)
									
countViolations_problem:		counts the number of rule violations per problem

firstmovetime					Returns the time (in ms) elapsed between the initial presentation of 
								the goal configuration and the initiation of the subject's first valid
								move. Sometimes also referred to as "planning time" or simply
								"latency". Note: Measure is computed separately for each problem attempt.
										
solutiontime					Returns the time (in ms) elapsed between initial presentation of
								the goal configuration and a subject's successful solution or problem termination.
								Note: Measure is computed separately for each problem attempt.										
										
executiontime					Computed as solutiontime - firstmovetime. Note: Measure is 
								computed separately for each problem attempt.
										
										
t_choicestart					Absolute start time for trial.choice in ms. May be used to derive
								additional measures during data analysis (e.g. mean move time).
									
t_choiceend						Absolute end time for trial.choice in ms. May be used to derive
								additional measures during data analysis (e.g. mean move time).
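
								For illustration, a short analysis sketch (Python/pandas, not part of the script) that
								derives a mean move time per problem from these two columns; the file name is a placeholder
								and the sketch assumes the raw file is the usual tab-delimited text file:

    import pandas as pd

    raw = pd.read_csv("towerofhanoi_german_raw.iqdat", sep="\t")   # placeholder file name
    moves = raw.dropna(subset=["t_choicestart", "t_choiceend"]).copy()
    moves["movetime_ms"] = moves["t_choiceend"] - moves["t_choicestart"]
    # mean time per move, separately for each participant and problem
    mean_move_time = moves.groupby(["subject", "problemnumber"])["movetime_ms"].mean()
    print(mean_move_time)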
									
TotalCompletionTime:			cumulative solution times across all problem sets attempted										

latency:						the latency of the current response in ms (or if no response: trialduration)									
response:						response made (the peg that was moved to)

trial.choice.lastdropsource:	the last moved disk (1, 2, 3, 4, or 5)
trial.choice.lastdroptarget:    the last peg that a disk was moved to (apeg, bpeg, cpeg)

movestring						Text string containing a record of the performed moves. E.g. "1apeg,"
								indicates that disk 1 (the smallest) was moved to peg A (the left one).
								A rule violation is expressed as "rule1violation (2bpeg)" => disk 2 was placed onto a smaller disk on peg B.
									
top_in_a						Returns the disk number (1, 2, 3, 4, or 5) currently inhabiting the top position 
								on peg 1 (left). Returns 'none' if peg is empty. Used to determine
								valid responses in 'trial.choice'.
									
top_in_b						Returns the disk number (1, 2, 3, 4, or 5) currently inhabiting the top position 
								on peg 2 (center). Returns 'none' if peg is empty. Used to determine
								valid responses in 'trial.choice'.
									
top_in_c 						Returns the disk number (1, 2, 3, 4, or 5) currently inhabiting the top position 
								on peg 3 (right). Returns 'none' if peg is empty. Used to determine
								valid responses in 'trial.choice'.									
									
a_count							The number of disks currently placed on peg 1 (left).
b_count							The number of disks currently placed on peg 2 (center).
c_count							The number of disks currently placed on peg 3 (right).									

cumMoves:						cumulative number of moves made across all test problem sets attempted
cumMoves_cS:					cumulative number of moves made across all test problem sets that were solved
cumOptimalMoves:				cumulative number of optimal moves across those test problem sets that were attempted
cumOptimalMoves_cS:				cumulative number of optimal moves across those test problem sets that were solved

							
(2) Summary data file: 'towerofhanoi_german_summary*.iqdat' (a separate file for each participant)

inquisit.version:				Inquisit version run
computer.platform:				the platform the script was run on (win/mac/ios/android)
startDate:						date script was run
startTime:						time script was started
subjectid:						assigned subject id number
groupid:						assigned group id number
sessionid:						assigned session id number
elapsedTime:					time it took to run script (in ms); measured from onset to offset of script
completed:						0 = script was not completed (prematurely aborted); 
								1 = script was completed (all conditions run)	
								
//Screen Setup:
(parameter) runAbsoluteSizes:	true (1) = should run absolutely sized canvas (see parameters- canvasHeight_inmm)
								false (0) = should use proportionally sized canvas (uses width = 4/3 * screenHeight)
								
canvasAdjustments:				NA: not applicable => parameters- runAbsoluteSizes was set to 'false'
								0: parameters- runAbsoluteSizes was set to 'true' and the screen size was large enough
								1: parameters- runAbsoluteSizes was set to 'true' BUT the screen size was too small and 
								adjustments had to be made

activeCanvasHeight_inmm:		the height of the active canvas (by default: the lightGray area) in mm 
activeCanvasWidth_inmm:			the width of the active canvas in mm 
display.canvasHeight:			the height of the active canvas in pixels
display.canvasWidth:			the width of the active canvas in pixels

px_per_mm:						the conversion factor used to turn pixel data into mm results for the current monitor
								(Note: the higher the resolution of the current monitor, 
								the more pixels cover the same absolute screen distance.)
								This factor is needed if you want to convert pixel data into absolute mm data or the other way around.

runPractice (parameter):		1 = a practice session was run; 0 = otherwise	
								
problemsStarted:				lists all test problemnumbers that were started in order
problemsSolved:					lists all test problemnumbers that were solved (in order)	
															
totalscore:						Score achieved across the whole set of test problems.
								In this script: Max is 72	

cumMoves:						cumulative number of moves made across all test problem sets attempted
cumMoves_cS:					cumulative number of moves made across all test problem sets that were solved
cumOptimalMoves:				cumulative number of optimal moves across those test problem sets that were attempted
cumOptimalMoves_cS:				cumulative number of optimal moves across those test problem sets that were solved
								
									
meanFirstMoveTime:				mean first move time (in ms); based on all recorded first move-times

timePerMoveRatio:				mean amount of time (in ms) spent on each move
								(totalCompletionTime divided by the number of moves made)
									
moveAccRatio:					the number of moves made in relation to the number of optimal moves
								1 = the participant made exactly the optimal number of moves (but may NOT have solved the problems)
									
moveAccRatio_cS:				the number of moves made in relation to the number of optimal moves
								(only calculated for solved problems)
								1 = the participant made exactly the optimal number of moves for the problems solved
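
								For illustration, the likely relationships between these ratios and the cumulative counters
								listed above (a Python sketch with assumed example values; the exact formulas live in the
								script itself):

    # assumed example values for one participant
    totalCompletionTime, cumMoves, cumOptimalMoves = 480000, 150, 120
    timePerMoveRatio = totalCompletionTime / cumMoves   # 3200.0 ms per move
    moveAccRatio = cumMoves / cumOptimalMoves           # 1.25 = 25% more moves than optimal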
																																	
countViolations_total:			counts the number of rule violations across problems									

_________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________	

The default setup of the test session is based on Humes et al. (1997). 
However: 
- you can add/edit an OPTIONAL practice session
- you can change the number of test problems run (see section Editable Lists)
- you can change the placements of the disks (see section Editable Lists)
  Note: changing the disk placements will change the problems
  (the maximum number of disks that can be used in this script is 5)
- you can change the goal state images under section Editable Stimuli
- you can change the scoring algorithm under expressions.scoring (see section Editable Parameters)
- you can change the number of attempts needed to move on to the next problem (see section Editable Parameters)
- you can change the number of allowed movements per attempt (see section Editable Parameters)

Note: if you change the design of the test, you may have to update your 
instructions (see section Editable Instructions) accordingly



PRACTICE SETUP (OPTIONAL):
Note: the practice session is optional and can be turned on/off under section Editable Parameters.
Humes et al. (1997) did not run a practice session.
This script runs one practice problem: a 2-disk, 2-move tower problem.
Participants are allowed 20 moves to solve the practice problem (see section Editable Parameters).
In this script, all participants move on to the test regardless of practice performance.


TEST SETUP:
* 12 problems alternating between tower and flat goal configurations; the minimum number of moves required increases from 5 to 15
(problems 1-6: 3-disk problems, problems 7-12: 4-disk problems)

* problems are self-paced
* participants get up to 6 attempts per problem, with a maximum of 20 moves per attempt (Note: in this script, rule violations are counted as moves)
* each problem needs to be solved twice in a row to advance to the next problem.
If a problem is not solved twice in a row within the allowed attempts, the task terminates prematurely

/////scoring////
See Humes et al. (1997, p. 251)
If a problem is solved successfully twice in a row within the allowable attempts,
the computer assigns the following scores:
- solved in attempts 1&2: 6 points
- solved in attempts 2&3: 5 points
- solved in attempts 3&4: 4 points
- solved in attempts 4&5: 3 points
- solved in attempts 5&6: 2 points
- else: no points

Maximum range: 0-72 points (12 test problems x up to 6 points each)
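
For illustration, a minimal sketch (Python, not the script's actual scoring code in expressions.scoring) of
this scoring rule; the function name and the list representation are assumptions for this example only:

    def problem_score(solved_per_attempt):
        """Score one test problem from a list of per-attempt outcomes (True = solved).
        Returns 6..2 points depending on which consecutive pair of attempts was solved,
        or 0 if the problem was never solved twice in a row (max. 6 attempts)."""
        for n, (first, second) in enumerate(zip(solved_per_attempt, solved_per_attempt[1:]), start=1):
            if first and second:
                return 7 - n        # attempts 1&2 -> 6, 2&3 -> 5, ..., 5&6 -> 2
        return 0

    # Example: attempt 1 failed, attempts 2 and 3 solved -> 5 points
    print(problem_score([False, True, True]))
    # The total score is the sum of the problem scores across the 12 test problems (0-72).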


////Rule Reminders provided///
*for the first violation of each rule: a reminder of the rule is provided, and the previous set-up is restored
*for subsequent violations: only a 'Violation' reminder is provided, and the previous set-up is restored
	
___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________	
* provided by Millisecond Software

*see section Editable Stimuli: different base/peg/disk images can be used (in that case, the positions of 
the base/pegs/disks may have to be adjusted)
*the start and goal states of each disk can be edited under section Editable Lists

___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________	
Instructions are provided by Millisecond Software.
They can be adjusted under section Editable Instructions.

___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template and can therefore always "mess" with the entire code 
to further customize your experiment.

The parameters you can change are: