User Manual: Inquisit Preference Ranking Task for the Touchscreen


___________________________________________________________________________________________________________________	

								Preference Ranking on Touchscreens
								(suitable for research with children)
___________________________________________________________________________________________________________________	


Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 12-01-2022
last updated:  01-03-2022 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 01-03-2022 Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________
This script implements Millisecond Software's version of a computerized preference ranking test for
touchscreens that requires participants to rank 4 images by moving them into position from best to worst
(or worst to best, see Editable Parameters). 

The script is based on the Meidenbauer et al (2019) study of environmental preferences (urban vs. nature)
in adults and children. The original study was run on Android touchscreens.

The Inquisit script allows researchers to run the task on computers (with mouse use)
or on touchscreens (Windows, Mac, Android, iOS). The default version of this script is sized for iPads
IF the screen is large enough. If the screen is not large enough, the script attempts to find the
biggest 4:3 area of the screen and notes the dimensions of the used screen canvas in the data file.
Check section Editable Parameters for more info on this topic.
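
For illustration only, here is a minimal sketch (in Python, not Inquisit) of how the largest 4:3 canvas
can be derived from the screen dimensions. The function name, rounding, and example values are
assumptions made for this sketch; the actual computation happens inside the script.

def largest_4_to_3_canvas(screen_width_px, screen_height_px):
    # Return (canvas_width_px, canvas_height_px) of the largest 4:3 rectangle
    # that fits on the given screen.
    if screen_width_px / screen_height_px >= 4 / 3:
        # screen is wider than 4:3 -> height is the limiting dimension
        canvas_height = screen_height_px
        canvas_width = round(canvas_height * 4 / 3)
    else:
        # screen is narrower than 4:3 -> width is the limiting dimension
        canvas_width = screen_width_px
        canvas_height = round(canvas_width * 3 / 4)
    return canvas_width, canvas_height

# Example: a 1920 x 1200 screen yields a 1600 x 1200 canvas
print(largest_4_to_3_canvas(1920, 1200))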

DISCLAIMER: Millisecond Software attempts to replicate the general task as described by 
Meidenbauer et al (2019) but differences between the implementations will exist.
Any problems that this script may contain are Millisecond's alone.


Reference:											

Meidenbauer, K. L., Stenfors, C. U. D., Young, J., Layden, E. A., Schertz, K. E., Kardan, O., Decety, J.,
& Berman, M. G. (2019). The gradual development of the preference for natural environments.
Journal of Environmental Psychology, 65, 101328. https://doi.org/10.1016/j.jenvp.2019.101328

article at: https://psyarxiv.com/7hw83/

more info about the original study, as well as information about how to run the original study on
Android devices: https://osf.io/xj3pk/

___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________

Participants run through 10 ranking trials. For each trial, they see four pictures 
(here: images of urban and natural environments) and are asked to
rank them by moving them into order from worst to best (or the other way around) using their fingers
or the computer mouse.

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approx. 5 minutes to complete

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________
The default data stored in the data files are:

(1) Raw data file: 'preferenceranking_touchscreen_raw*.iqdat' (a separate file for each participant)

build:						The specific Inquisit version used (the 'build') that was run
computer.platform:			the platform the script was run on (win/mac/ios/android)
date, time: 				date and time script was run 
subject:					the current subject id
group: 						the current group id
session:					the current session id


//Screen Setup:
(parameter) runAbsoluteSizes:	true (1) = should run absolutely sized canvas (see parameters- canvasHeight_inmm)
								false (0) = should use proportionally sized canvas (uses width = 4/3 * screenHeight; 4:3 aspect ratio)
								
canvasAdjustments:				NA: not applicable => parameters- runAbsoluteSize was set to 'false'
								0: parameters- runAbsoluteSize was set to 'true' and screen size was large enough
								1: parameters- runAbsoluteSize was set to 'true' BUT screen size was too small and 
								adjustments had to be made

activeCanvasHeight_inmm:		the height of the active canvas (by default: lightGray area) in mm
activeCanvasWidth_inmm:			the width of the active canvas in mm
display.canvasHeight:			the height of the active canvas in pixels
display.canvasWidth:			the width of the active canvas in pixels


//built-in Inquisit variables:
								
blockcode, blocknum:		the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 		the name and number of the currently recorded trial (built-in Inquisit variable)
								Note: trialnum is a built-in Inquisit variable; it counts all trials run, even those
								that do not store data to the data file.

response:					the response of participant during current trial
latency:					response latency (in ms)

//custom variables
useDefaultSequence:			0 = a valid trialsequence could be generated within the allotted timeframe
							1 = no trialsequence could be generated within the allotted timeframe and the
							default sequence was used instead.

trialCounter: 				tracks the number of trials

trialimages: 				a string variable that stores the presented stimuli by itemnumber
							Example: 03100609
							trial presents: itemnumber 03 (picA), 10 (picB), 06 (picC), 09 (picD)

RT_ranking: 				stores the time in ms that it took participant to rank the four images 
							
rankingOrder:				stores the order of the ranked stimuli from worst to best
							(the ranking order goes from worst to best regardless of instructions)
							Example: ACDB
							=> picA holds rank1 (least liked) and picB holds rank4 (most liked)
							(trialimages and rankingOrder are decoded in the sketch after this list)

//individual images (Note: location of picA/picB/picC/picD is randomly determined at trial onset)
picA_image: 				stores the presented image for picA
picA_itemnumber: 			stores the itemnumber of picA
picA_cat:					stores the category of picA 
								1 = attractive nature
								2 = attractive urban
								3 = unattractive nature
								4 = unattractive urban
								5 = highly attractive nature
								6 = very unattractive urban

picA_rank:					stores the assigned rank of picA (1 = worst to 4 = best)

(same for picB/picC/picD)
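
For convenience during analysis, here is a minimal Python sketch (not part of the Inquisit script)
that decodes the 'trialimages' and 'rankingOrder' fields described above. The field names match the
data file; the function name and example values are hypothetical.

def decode_trial(trialimages, ranking_order):
    # trialimages packs four 2-digit itemnumbers in the order picA..picD
    itemnumbers = {pic: trialimages[i * 2:i * 2 + 2] for i, pic in enumerate("ABCD")}
    # rankingOrder lists the pictures from worst (rank 1) to best (rank 4)
    ranks = {pic: ranking_order.index(pic) + 1 for pic in "ABCD"}
    # map each picture position to (itemnumber, rank)
    return {pic: (itemnumbers[pic], ranks[pic]) for pic in "ABCD"}

# Example from the documentation: picA is item 03 and holds rank 1 (least liked),
# picB is item 10 and holds rank 4 (most liked)
print(decode_trial("03100609", "ACDB"))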



(2) Summary data file: 'preferenceranking_touchscreen_summary*.iqdat' (a separate file for each participant)

inquisit.version:			Inquisit version run
computer.platform:			the platform the script was run on (win/mac/ios/android)
startDate:					date script was run
startTime:					time script was started
subjectid:					assigned subject id number
groupid:					assigned group id number
sessionid:					assigned session id number
elapsedTime:				time it took to run script (in ms); measured from onset to offset of script
completed:					0 = script was not completed (prematurely aborted); 
							1 = script was completed (all conditions run)

//Screen Setup:
(parameter) runAbsoluteSizes:	true (1) = should run absolutely sized canvas (see parameters- canvasHeight_inmm)
								false (0) = should use proportionally sized canvas (uses width = 4/3 * screenHeight; 4:3 aspect ratio)
								
canvasAdjustments:				NA: not applicable => parameters- runAbsoluteSize was set to 'false'
								0: parameters- runAbsoluteSize was set to 'true' and screen size was large enough
								1: parameters- runAbsoluteSize was set to 'true' BUT screen size was too small and 
								adjustments had to be made

activeCanvasHeight_inmm:		the height of the active canvas (by default: lightGray area) in mm
activeCanvasWidth_inmm:			the width of the active canvas in mm
display.canvasHeight:			the height of the active canvas in pixels
display.canvasWidth:			the width of the active canvas in pixels


useDefaultSequence:				0 = a valid trialsequence could be generated within the allotted timeframe
								1 = no trialsequence could be generated within the allotted timeframe and the
								default sequence was used instead.
							
finalTrialSequence:				stores the itemnumbers presented in each of the 10 trials

//////////Summary Variables:

/////by images:
image1:							contains the image file for image1
image1_cat:						contains the category of image 1
								1 = attractive nature
								2 = attractive urban
								3 = unattractive nature
								4 = unattractive urban
								5 = highly attractive nature
								6 = very unattractive urban

meanRating_image1:				the mean rating of the image with itemnumber 1 (1 to 4, with 4 being the most preferred)
(same for images 2 - 10)

/////by rank1 - 10
meanRating1:					the mean rating of the image in rank1 (lowest rank - least liked)
-
meanRating10:					the mean rating of the image in rank10 (highest rank - most liked)

//the ranked itemnumbers from rank1 (least liked) to rank10 (most liked)
//Notes: 
//- if manual ranking is required for a subset of images, a note is left in the data file
//- see section 'Experimental Set-up' below for information on how items were ranked

rank1:							the image/itemnumber in rank1 (the least liked image)
-
rank10:							the image/itemnumber in rank10 (the most liked image)						

//////additional information about pairwise comparisons based on the first time
//////an item pair was presented

pair0102:			stores the 'winner' (higher ranked - more liked) image when image1 and image2 were presented together for the first time
...
pair0910:			stores the 'winner' (higher ranked - more liked) image when image9 and image10 were presented together for the first time

/////individual counts
image1_count - image10_count:  	number of times each image was presented (should be 4 for each)						
___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________	

(1) Trial Sequence Generator:
The Trial Sequence Generator (code in helper script trialsequencegenerator.iqx) generates a random sequence of 
10 trials that presents 4 random images each (out of 10 possible ones) with the following constraints:
- no repeats of image files within the same trial
- each stimulus is presented exactly 4 times across the 10 trials
- each of the 10 items is presented at least once with each of the other items within the same trial
(= each possible image pair is presented at least once within the same trial)

If the script cannot find such a sequence within 500 attempts, the algorithm
reverts to using a default sequence and leaves a note in the data file.
Note: the time to create this sequence will vary from script run to script run.
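
To make the constraints concrete, below is a minimal generate-and-test sketch in Python. It is NOT the
code in trialsequencegenerator.iqx (which is written in Inquisit); the fallback sequence is a placeholder
and all names are assumptions made for this sketch.

import random
from itertools import combinations

DEFAULT_SEQUENCE = []   # placeholder; the actual script ships its own default sequence

def generate_sequence(n_items=10, n_trials=10, per_trial=4, max_attempts=500):
    items = list(range(1, n_items + 1))
    required_pairs = {frozenset(p) for p in combinations(items, 2)}  # every pair must co-occur
    for _ in range(max_attempts):
        # each item appears exactly n_trials*per_trial/n_items (= 4) times by construction
        pool = items * (n_trials * per_trial // n_items)
        random.shuffle(pool)
        trials = [pool[i * per_trial:(i + 1) * per_trial] for i in range(n_trials)]
        if any(len(set(t)) < per_trial for t in trials):   # reject repeats within a trial
            continue
        seen_pairs = {frozenset(p) for t in trials for p in combinations(t, 2)}
        if required_pairs <= seen_pairs:                   # every pair co-occurs at least once
            return trials, 0                               # useDefaultSequence = 0
    return DEFAULT_SEQUENCE, 1                             # useDefaultSequence = 1

trials, use_default = generate_sequence()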

(2) Intro/Practice
By default, this script runs 2 practice trials (see Editable Parameters) in which
participants rank colored squares.

(3) Test
The test block runs 10 trials randomly selecting one of the 10 trial sequences generated by the 
'trial sequence generator' at the beginning of the script.
- Each trial sequence contains the itemnumbers of the four images to run
- The start location of each image onscreen is randomly determined
- Participants are asked to move the presented four images from best to worst (or worst to best; see Editable
Parameters)
- To move on to the next trial, participants need to press the continue button twice
- At the end of each trial, the script notes the final ranking order of the 4 stimuli
!IMPORTANT! Regardless of instructions, the final ranking order is *always* recorded from 
'worst to best' (see Meidenbauer et al, 2019) with rank1 being the least liked and rank4 
being the most liked image.
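
As a small illustration of that convention (assumed logic in Python, not the script's actual Inquisit code):
if the instructions ask for a best-to-worst arrangement, the recorded order is simply the reverse of the
onscreen order.

def normalize_ranking(onscreen_order, instructions_best_to_worst):
    # return the stored rankingOrder string, always worst-to-best (rank1 = least liked)
    return onscreen_order[::-1] if instructions_best_to_worst else onscreen_order

print(normalize_ranking("DACB", True))   # stored as "BCAD"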

///////Image Ranking Algorithm///////
At the end of the session, the script ranks the 10 image files from
worst (rank1) to best (rank10).

Steps:
- the script calculates the mean rating for each individual image and ranks them
- if two images share the same mean rating, the script checks the trial in which
both images were presented together for the first time. The image with the higher
rating (the one that is liked better) will receive the higher rank
(see Meidenbauer et al, 2019)
- if three or more images share the same mean rating, it is up to researchers to determine
the final rankings of these items (the script will store 'requires manual ranking').
To help with this ranking, the data file stores the 'winning' (= better liked) image
for all possible image pairs (based on the first trial in which they were presented together).
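
The steps above can be summarized in a short Python sketch (an illustration under the assumptions stated
here, not the script's Inquisit code). mean_ratings maps itemnumbers to their mean rank (1-4);
first_pair_winner maps each item pair to the better-liked item from the first trial the pair shared.

def rank_images(mean_ratings, first_pair_winner):
    # sort items by mean rating, lowest first (rank1 = least liked)
    ranked = sorted(mean_ratings, key=lambda item: mean_ratings[item])
    for i in range(len(ranked) - 1):
        a, b = ranked[i], ranked[i + 1]
        if mean_ratings[a] == mean_ratings[b]:
            tied = [x for x in ranked if mean_ratings[x] == mean_ratings[a]]
            if len(tied) > 2:
                return "requires manual ranking"           # three-way (or larger) tie
            if first_pair_winner[frozenset((a, b))] == a:  # winner gets the higher rank
                ranked[i], ranked[i + 1] = b, a
    return ranked                                          # rank1 (worst) ... rank10 (best)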

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________

This script runs with the two original stimulus sets provided by Meidenbauer et al: https://osf.io/xj3pk/
By default, the script selects stimulus set 1 - this can be changed under section Editable Parameters.
___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________

Instructions provided by Millisecond Software - can be edited under section 'Editable Instructions'
___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template and can therefore always "mess" with the entire code 
to further customize your experiment.

The parameters you can change are: