Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com), Millisecond
Created: January 01, 2022
Last Modified: January 10, 2025 by K. Borchert (katjab@millisecond.com), Millisecond
Script Copyright © Millisecond Software, LLC
This script implements Millisecond's version of a computerized preference ranking test for touchscreens that requires participants to rank 4 images by moving them into position from best to worst (or worst to best; see Editable Parameters).
The script is based on the Meidenbauer et al. (2019) study of environmental preferences (urban vs. nature) in adults and children. The original study was run on Android touchscreens.
The Inquisit script allows researchers to run the task on a desktop computer (with mouse) or on touchscreens (Windows, Mac, Android, iOS). An absolute screen size can be set under section Defaults; by default, the task uses proportional sizing.
DISCLAIMER: Millisecond attempts to replicate the general task as described by Meidenbauer et al. (2019), but differences between the implementations will exist. Any problems that this script may contain are Millisecond's alone.
Meidenbauer, K. L., Stenfors, C. U. D., Young, J., Layden, E. A., Schertz, K. E., Kardan, O., Decety, J., & Berman, M. G. (2019). The gradual development of the preference for natural environments. Journal of Environmental Psychology, 65, 101328. ISSN 0272-4944. https://doi.org/10.1016/j.jenvp.2019.101328
Article at: https://psyarxiv.com/7hw83/
Stimuli at: https://osf.io/xj3pk/
More info about the original study as well as information about how to run the original study on Androids: https://osf.io/xj3pk/
Duration: approximately 5 minutes
Participants run through 10 rating trials. In each trial, they see four pictures (here, images of urban and natural environments) and are asked to rank them by moving them into order from worst to best (or the other way around) using their fingers or the computer mouse.
This script runs with the 2 original stimuli sets provided by Meidenbauer et al.: https://osf.io/xj3pk/
The stimulus set used is selected by the respective stimuli script (e.g., stimuli_set1_inc.iqjs).
Instructions are provided by Millisecond and can be edited in the script
"preferenceranking_instructions_inc.iqjs".
File Name: preferenceranking_touchscreen_summary*.iqdat
| Name | Description |
|---|---|
| inquisit.version | Inquisit version number |
| computer.platform | Device platform: win, mac, ios, or android |
| computer.touch | 0 = device has no touchscreen capabilities; 1 = device has touchscreen capabilities |
| computer.hasKeyboard | 0 = no external keyboard detected; 1 = external keyboard detected |
| startDate | Date the session was run |
| startTime | Time the session was run |
| subjectId | Participant ID |
| groupId | Group number |
| sessionId | Session number |
| elapsedTime | Session duration in ms |
| completed | 0 = test was not completed; 1 = test was completed |
| useDefaultSequence | 0 = a valid trialSequence could be generated within the allotted timeframe; 1 = no trialSequence could be generated within the allotted timeframe and the default sequence was used instead |
| finalTrialSequence | Stores the ItemNumbers presented in each of the 10 trials |
| **Summary Performance Metrics by Image (explained only for image0)** | |
| image0 | Contains the image file for image0 |
| image0Cat | Contains the category of image0: 1 = attractive nature; 2 = attractive urban; 3 = unattractive nature; 4 = unattractive urban; 5 = highly attractive nature; 6 = very unattractive urban |
| meanRatingImage0 | The mean rating of the image with ItemNumber 0 (1 to 4, with 4 being the most preferred) |
| **Summary Performance Metrics by Rank (explained only for rank1)** | |
| meanRating1 | The mean rating of the image in rank 1 (lowest rank, least liked) |
| rank1 | The image/ItemNumber in rank 1 (the least liked image); ItemNumbers 0-9 |
| **Summary Performance Metrics by Pairwise Comparison, based on the first time an item pair was presented (explained only for pair01)** | |
| pair01 | Stores the 'winner' (higher-ranked, more liked) image the first time image0 and image1 were presented together |
| **Individual Counts (explained only for image0Count)** | |
| image0Count | Number of times image0 was presented (should be 4 for each image) |
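The mean-rating metrics above can be recomputed from per-trial rank data during post-processing. Below is a minimal Python sketch (not part of the Inquisit script; `mean_ratings` and its input format are hypothetical names for illustration) that averages each image's assigned rank (1 = least preferred, 4 = most preferred) across the trials in which it appeared:

```python
from collections import defaultdict

def mean_ratings(trial_ranks: list[dict[int, int]]) -> dict[int, float]:
    """Average each image's per-trial rank across all trials it appeared in.

    trial_ranks: one dict per trial mapping ItemNumber -> assigned rank (1-4).
    Returns ItemNumber -> mean rating (higher = more preferred).
    """
    totals: dict[int, int] = defaultdict(int)
    counts: dict[int, int] = defaultdict(int)
    for ranks in trial_ranks:
        for item, rank in ranks.items():
            totals[item] += rank
            counts[item] += 1
    return {item: totals[item] / counts[item] for item in totals}

# Two toy trials: image 0 ranked 1 then 3, image 1 ranked 4 then 2
means = mean_ratings([{0: 1, 1: 4}, {0: 3, 1: 2}])
```

With the full data set, each image contributes 4 ranks (10 trials, 4 images per trial, 10 images).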
File Name: preferenceranking_touchscreenRaw*.iqdat
| Name | Description |
|---|---|
| build | Inquisit version number |
| computer.platform | Device platform: win, mac, ios, or android |
| computer.touch | 0 = device has no touchscreen capabilities; 1 = device has touchscreen capabilities |
| computer.hasKeyboard | 0 = no external keyboard detected; 1 = external keyboard detected |
| date | Date the session was run |
| time | Time the session was run |
| subject | Participant ID |
| group | Group number |
| session | Session number |
| blockcode | The name of the current block (built-in Inquisit variable) |
| blocknum | The number of the current block (built-in Inquisit variable) |
| trialcode | The name of the currently recorded trial (built-in Inquisit variable) |
| trialnum | The number of the currently recorded trial (built-in Inquisit variable); counts all trials run, including those that do not store data to the data file |
| **Inquisit Built-In DVs** | |
| response | The participant's response during the current trial |
| latency | Response latency (in ms) |
| **Custom Variables** | |
| useDefaultSequence | 0 = a valid trialSequence could be generated within the allotted timeframe; 1 = no trialSequence could be generated within the allotted timeframe and the default sequence was used instead |
| trialCounter | Tracks the number of trials |
| trialImages | A string variable that stores the presented stimuli by ItemNumber. Example: '2958' means the trial presents ItemNumber 2 (picA), 9 (picB), 5 (picC), and 8 (picD); ItemNumbers start at 0 |
| rtRanking | Stores the time (in ms) it took the participant to rank the four images |
| rankingOrder | The order of the ranked stimuli from worst to best (rankingOrder goes from worst to best regardless of instructions). Example: 'ACDB' means picA = rank 1 (least liked) through picB = rank 4 (most liked) |
| **Individual Images (explained only for picA)**; the locations of picA/picB/picC/picD are randomly determined at trial onset | |
| picAImage | Stores the presented image file name for picA |
| picAItemNumber | Stores the ItemNumber of picA (ItemNumbers 0-9) |
| picACat | Stores the category of picA: 1 = attractive nature; 2 = attractive urban; 3 = unattractive nature; 4 = unattractive urban; 5 = highly attractive nature; 6 = very unattractive urban |
| picARank | Stores the assigned rank of picA (1 = worst to 4 = best) |
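To illustrate how the `trialImages` and `rankingOrder` strings in the raw data relate to each other, here is a short Python sketch for post-processing outside Inquisit (`decode_trial` is a hypothetical helper, not part of the script) that maps each presented ItemNumber to its assigned rank:

```python
def decode_trial(trial_images: str, ranking_order: str) -> dict[int, int]:
    """Map each ItemNumber in a trial to its assigned rank (1 = worst, 4 = best).

    trial_images:  e.g. "2958" -> picA = 2, picB = 9, picC = 5, picD = 8
    ranking_order: e.g. "ACDB" -> rank 1 = picA, rank 2 = picC,
                                  rank 3 = picD, rank 4 = picB
    """
    # Positions A-D map onto the digits of trialImages in order
    pics = dict(zip("ABCD", (int(ch) for ch in trial_images)))
    # rankingOrder lists pic letters from worst (rank 1) to best (rank 4)
    return {pics[letter]: rank for rank, letter in enumerate(ranking_order, start=1)}

# ItemNumber 2 (picA) was least liked; ItemNumber 9 (picB) was most liked
ranks = decode_trial("2958", "ACDB")
```

Per-trial dictionaries produced this way can then be aggregated across trials to reproduce the summary metrics.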
The procedure can be adjusted by setting the following parameters.
| Name | Description | Default |
|---|---|---|
| **Color Parameters** | | |
| canvasColor | Display color of the actively used portion of the screen (the 'canvas'); if set to a color other than screenColor, the active canvas appears 'anchored' on the screen regardless of monitor size | black |
| screenColor | Color of the screen not used by the canvas ('inactive screen') | black |
| defaultTextColor | Default color of text items presented on active canvas | white |
| **Design** | | |
| rankingOrder | 1 = positive to negative; 2 = negative to positive | 1 |
| practiceTrials | Number of practice trials to run | 2 |