Preference Ranking Task - Touchscreen - Version 2

Technical Manual

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com), Millisecond

Created: January 01, 2022

Last Modified: January 10, 2025 by K. Borchert (katjab@millisecond.com), Millisecond

Script Copyright © Millisecond Software, LLC

Background

This script implements Millisecond's version of a computerized preference ranking test for touchscreens that requires participants to rank 4 images by moving them into order from best to worst (or worst to best; see Editable Parameters).

The script is based on the Meidenbauer et al (2019) study of environmental preferences (urban vs. nature) in adults and children. The original study was run on Android touchscreens.

The Inquisit script allows researchers to run the task on computers (with mouse) or on touchscreens (Windows, Mac, Android, iOS). An absolute screen size can be set under the Defaults section; by default, the task uses proportional sizing.

DISCLAIMER: Millisecond attempts to replicate the general task as described by Meidenbauer et al. (2019), but differences between the implementations will exist. Any problems that this script may contain are Millisecond's alone.

References

Meidenbauer, K. L., Stenfors, C. U. D., Young, J., Layden, E. A., Schertz, K. E., Kardan, O., Decety, J., & Berman, M. G. (2019). The gradual development of the preference for natural environments. Journal of Environmental Psychology, 65, 101328. https://doi.org/10.1016/j.jenvp.2019.101328

Article at: https://psyarxiv.com/7hw83/

Stimuli at: https://osf.io/xj3pk/

More information about the original study, as well as how to run the original study on Androids: https://osf.io/xj3pk/

Duration

5 minutes

Description

Participants run through 10 rating trials. For each trial, they see four pictures (here images of urban and natural environments) and are asked to rank them by moving them into the order from worst to best (or the other way around) using their fingers or the computer mouse.

Procedure

(1) Trial Sequence Generator:
The Trial Sequence Generator (code in helper script trialSequenceGenerator.iqjs) generates a random sequence of
10 trials that presents 4 random images each (out of 10 possible ones) with the following constraints:
- no repeats of image files within the same trial
- each stimulus is presented exactly 4 times across the 10 trials
- each of the 10 items is presented at least once with each of the other items within the same trial
(= each possible image pair is presented at least once within the same trial)

If the script cannot find such a sequence within 500 attempts, the algorithm
reverts to a default sequence and leaves a note in the data file.
The time needed to create this sequence will vary from script run to script run.
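The constraints above can be illustrated with a simple rejection-sampling sketch in Python. This is a hypothetical illustration, not the implementation in trialSequenceGenerator.iqjs; the actual helper script may use a different strategy, and the fixed fallback sequence is a placeholder here.

```python
import random
from itertools import combinations

def generate_trial_sequence(n_items=10, n_trials=10, per_trial=4,
                            max_attempts=500, seed=None):
    """Rejection sampling: draw random trial sets until all constraints
    hold, or fall back to a default sequence after max_attempts."""
    rng = random.Random(seed)
    all_pairs = set(combinations(range(n_items), 2))
    for _ in range(max_attempts):
        # Pool with each item exactly 4 times, dealt into 10 trials of 4.
        pool = [i for i in range(n_items) for _ in range(per_trial)]
        rng.shuffle(pool)
        trials = [pool[t * per_trial:(t + 1) * per_trial]
                  for t in range(n_trials)]
        # Constraint: no repeated item within a trial.
        if any(len(set(t)) < per_trial for t in trials):
            continue
        # Constraint: every possible item pair co-occurs in some trial.
        seen = set()
        for t in trials:
            seen.update(combinations(sorted(t), 2))
        if seen >= all_pairs:
            return trials, False      # valid sequence found
    # Fallback: a placeholder default sequence; a note would be logged.
    default = [sorted(rng.sample(range(n_items), per_trial))
               for _ in range(n_trials)]
    return default, True
```

The second return value mirrors the useDefaultSequence flag in the data file.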

(2) Intro/Practice
By default, this script runs 2 practice trials (see Editable Parameters)
in which participants rank colored squares.

(3) Test
The test block runs 10 trials, each randomly selecting one of the 10 trial sequences
generated by the trial sequence generator at the beginning of the script.
- Each trial sequence contains the ItemNumbers of the four images to run.
- The start location of each image onscreen is randomly determined.
- Participants are asked to order the four presented images from best to worst (or worst to best; see
Editable Parameters).
- To move on to the next trial, participants need to press the continue button twice.
- At the end of each trial, the script records the final ranking order of the 4 stimuli.
!IMPORTANT! Regardless of instructions, the final ranking order is *always* recorded from
'worst to best' (see Meidenbauer et al, 2019) with rank1 being the least liked and rank4
being the most liked image.
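The recording convention above can be sketched in a few lines of Python. This is an illustrative, hypothetical helper (the function and argument names are not taken from the script): it normalizes the final onscreen order to the always-recorded 'worst to best' convention.

```python
def record_ranking(final_order, ranking_instruction):
    """Return the recorded ranking, always 'worst to best'.

    final_order:         picture labels in their final onscreen order,
                         first slot to last.
    ranking_instruction: 1 = participants ordered best to worst,
                         2 = participants ordered worst to best
                         (mirrors the rankingOrder parameter).
    """
    if ranking_instruction == 1:
        # Best-first onscreen: reverse so rank1 is the least liked.
        return list(reversed(final_order))
    return list(final_order)  # already worst to best

# e.g. onscreen best-to-worst D, A, C, B is recorded as B, C, A, D
```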

Stimuli

This script runs with the 2 original stimulus sets provided by Meidenbauer et al.: https://osf.io/xj3pk/
The set is selected by the respective stimulus script (e.g. stimuli_set1_inc.iqjs)

Instructions

Instructions are provided by Millisecond and can be edited in script
"preferenceranking_instructions_inc.iqjs"

Scoring

Rank1 - Rank10
- If manual ranking of a subset of images is required, a note is left in the data file.

Image Ranking Algorithm
At the end of the script, the script ranks the 10 image files from worst (rank1) to best (rank10).

Steps:
- The script calculates the mean rating for each individual image and ranks the images by that mean.
- If two images share the same mean rating, the script checks the trial in which
both images were first presented together. The image with the higher
rating in that trial (the one that is liked better) receives the higher rank
(see Meidenbauer et al., 2019).
- If three or more images share the same mean rating, it is up to researchers to determine
the final rankings of these items (the script stores 'requires manual ranking').
To help with the ranking, the data file stores the 'winning' (= better liked) image
for all possible image pairs (based on the first trial in which they were presented together).
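The ranking steps above can be sketched as follows. This is a hypothetical Python sketch under assumed data shapes (per-image rating lists and a precomputed pairwise-winner table); the script's actual internals may differ.

```python
from collections import defaultdict

def rank_images(ratings, pair_winners):
    """Rank items from rank1 (worst) to rank10 (best).

    ratings:      {item: [per-trial ratings, 1..4]}  (assumed shape)
    pair_winners: {(a, b): winner} for items a < b, taken from the first
                  trial in which both appeared together (assumed shape)
    Returns the ranked item list, or the note 'requires manual ranking'
    when three or more items tie on the mean rating.
    """
    means = {item: sum(r) / len(r) for item, r in ratings.items()}
    # Group items by mean rating to detect ties.
    groups = defaultdict(list)
    for item, m in means.items():
        groups[m].append(item)
    ranked = []
    for m in sorted(groups):              # ascending: worst mean first
        tied = groups[m]
        if len(tied) == 1:
            ranked.extend(tied)
        elif len(tied) == 2:
            # Two-way tie: first-co-occurrence winner gets the higher rank.
            a, b = sorted(tied)
            winner = pair_winners[(a, b)]
            loser = b if winner == a else a
            ranked.extend([loser, winner])
        else:
            return "requires manual ranking"
    return ranked
```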

Summary Data

File Name: preferenceranking_touchscreen_summary*.iqdat

Data Fields

Name | Description
inquisit.version Inquisit version number
computer.platform Device platform: win | mac | ios | android
computer.touch 0 = device has no touchscreen capabilities; 1 = device has touchscreen capabilities
computer.hasKeyboard 0 = no external keyboard detected; 1 = external keyboard detected
startDate Date the session was run
startTime Time the session was run
subjectId Participant ID
groupId Group number
sessionId Session number
elapsedTime Session duration in ms
completed 0 = Test was not completed
1 = Test was completed
useDefaultSequence 0 = a valid trialSequence could be generated within the allotted timeframe
1 = no trialSequence could be generated within the allotted timeframe and the
default sequence was used instead.
finalTrialSequence Stores the ItemNumbers presented in each of the 10 trials
Summary Performance Metrics By Images - Explained Only For Image0
image0 Contains the image file for image0
image0Cat Contains the category of image0
1 = attractive nature
2 = attractive urban
3 = unattractive nature
4 = unattractive urban
5 = highly attractive nature
6 = very unattractive urban
meanRatingImage0 The mean rating of the image with ItemNumber 0 (1 to 4, with 4 being the most preferred)
Summary Performance Metrics By Rank1 - Explained Only For Rank1
meanRating1 The mean rating of the image in rank1 (lowest rank - least liked)
rank1 The image/ItemNumber in rank1 (the least liked image); image itemnumbers 0-9
Summary Performance Metrics By Pairwise Comparisons Based On The First Time An Item Pair Was Presented; Explained Only For Pair01
pair01 Stores the 'winner' (higher ranked - more liked) image when image0 and image1 were presented together for the first time
Individual Counts - Explained Only For Image0count
image0Count Number of times each image was presented (should be 4 for each)

Raw Data

File Name: preferenceranking_touchscreenRaw*.iqdat

Data Fields

Name | Description
build Inquisit version number
computer.platform Device platform: win | mac | ios | android
computer.touch 0 = device has no touchscreen capabilities; 1 = device has touchscreen capabilities
computer.hasKeyboard 0 = no external keyboard detected; 1 = external keyboard detected
date Date the session was run
time Time the session was run
subject Participant ID
group Group number
session Session number
blockcode The name of the current block (built-in Inquisit variable)
blocknum The number of the current block (built-in Inquisit variable)
trialcode The name of the currently recorded trial (built-in Inquisit variable)
trialnum The number of the currently recorded trial (built-in Inquisit variable);
counts all trials run, even those that do not store data to the data file.
Inquisit Built-In DVs
response The participant's response during the current trial
latency Response latency (in ms)
Custom Variables
useDefaultSequence 0 = a valid trialSequence could be generated within the allotted timeframe
1 = no trialSequence could be generated within the allotted timeframe and the
default sequence was used instead.
trialCounter Tracks the number of trials
trialImages A string variable that stores the presented stimuli by ItemNumber
Example: 2958 = the trial presents ItemNumber 2 (picA), 9 (picB), 5 (picC), and 8 (picD); ItemNumbers start at 0
rtRanking Stores the time in ms that it took the participant to rank the four images
rankingOrder Stores the order of the ranked stimuli from worst to best
(the ranking order always goes from worst to best, regardless of instructions)
Example: ACDB = A is rank1 (least liked) and B is rank4 (most liked)
Individual Images (Note: Location Of picA/picB/picC/picD Is Randomly Determined At Trial Onset); Explained Only For picA
picAImage Stores the presented image file name for picA
picAItemNumber Stores the ItemNumber of picA (ItemNumbers 0-9)
picACat Stores the category of picA
1 = attractive nature
2 = attractive urban
3 = unattractive nature
4 = unattractive urban
5 = highly attractive nature
6 = very unattractive urban
picARank Stores the assigned rank of picA (1 = worst to 4 = best)

Parameters

The procedure can be adjusted by setting the following parameters.

Name | Description | Default
Color Parameters
canvasColor | Display color of the actively used portion of the screen (the 'canvas'). If set to a color other than screenColor, the active canvas appears 'anchored' on the screen regardless of monitor size. | black
screenColor | Color of the screen not used by the canvas ('inactive screen') | black
defaultTextColor | Default color of text items presented on the active canvas | white
Design
rankingOrder | 1 = positive to negative; 2 = negative to positive | 1
practiceTrials | Number of practice trials to run | 2