___________________________________________________________________________________________________________________

            Preference Ranking on Touchscreens (suitable for research with children)
___________________________________________________________________________________________________________________

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 12-01-2022
Last updated: 01-02-2025 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC
Script Copyright © 01-02-2025 Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO
___________________________________________________________________________________________________________________

This script implements Millisecond Software's version of a computerized preference ranking test for
touchscreens that requires participants to rank 4 images by moving them into position from best to worst
(or worst to best; see Editable Parameters).

The script is based on the Meidenbauer et al (2019) study of environmental preferences (urban vs. nature)
in adults and children. The original study was run on Android touchscreens. The Inquisit script allows
researchers to run the task on the computer (with mouse use) or on touchscreens (Windows, Mac, Android, iOS).

Screen size can be set to an absolute screen size under section Defaults. By default, the task uses
proportional sizing.

DISCLAIMER: Millisecond Software attempts to replicate the general task as described by Meidenbauer et al
(2019), but differences between the implementations will exist. Any problems that this script may contain
are Millisecond's alone.

Reference:
Meidenbauer, K. L., Stenfors, C. U. D., Young, J., Layden, E. A., Schertz, K. E., Kardan, O., Decety, J.,
& Berman, M. G. (2019). The gradual development of the preference for natural environments.
Journal of Environmental Psychology, 65, 101328, ISSN 0272-4944.
https://doi.org/10.1016/j.jenvp.2019.101328

Article at: https://psyarxiv.com/7hw83/

More info about the original study, as well as information about how to run the original study on
Androids: https://osf.io/xj3pk/

___________________________________________________________________________________________________________________
TASK DESCRIPTION
___________________________________________________________________________________________________________________

Participants run through 10 rating trials. On each trial, they see four pictures (here, images of urban
and natural environments) and are asked to rank them by moving them into order from worst to best (or the
other way around) using their fingers or the computer mouse.
___________________________________________________________________________________________________________________
DURATION
___________________________________________________________________________________________________________________

The default set-up of the script takes approximately 5 minutes to complete.

___________________________________________________________________________________________________________________
DATA OUTPUT DICTIONARY
___________________________________________________________________________________________________________________

The fields in the data files are:

(1) Raw data file: 'preferenceranking_touchscreenRaw*.iqdat' (a separate file for each participant)

build:                the specific Inquisit version (the 'build') that was run
computer.platform:    the platform the script was run on (win/mac/ios/android)
date, time:           date and time the script was run
subject:              the current subject id
group:                the current group id
session:              the current session id

//built-in Inquisit variables:
blockCode, blockNum:  the name and number of the current block
trialCode, trialNum:  the name and number of the currently recorded trial
                      Note: trialNum is a built-in Inquisit variable; it counts all trials run, even
                      those that do not store data to the data file.
response:             the response of the participant during the current trial
latency:              response latency (in ms)

//custom variables:
useDefaultSequence:   0 = a valid trialSequence could be generated within the allotted timeframe
                      1 = no trialSequence could be generated within the allotted timeframe and the
                      default sequence was used instead
trialCounter:         tracks the number of trials
trialImages:          a string variable that stores the presented stimuli by ItemNumber
                      Example: 2958 = the trial presents ItemNumber 2 (picA), 9 (picB), 5 (picC), 8 (picD)
                      (Note: ItemNumbers start with 0)
rtRanking:            stores the time (in ms) that it took the participant to rank the four images
rankingOrder:         presents the order of the ranked stimuli from worst to best (e.g. BCAD);
                      rankingOrder always goes from worst to best regardless of instructions
                      Example: ACDB = picA holds rank1 (least liked) through picB in rank4 (most liked)
                      (a decoding sketch follows the raw data fields below)

//individual images (Note: the locations of picA/picB/picC/picD are randomly determined at trial onset):
picAImage:            stores the presented image file name for picA
picAItemNumber:       stores the ItemNumber of picA (Note: ItemNumbers run from 0-9)
picACat:              stores the category of picA
                      1 = attractive nature
                      2 = attractive urban
                      3 = unattractive nature
                      4 = unattractive urban
                      5 = highly attractive nature
                      6 = very unattractive urban
picARank:             stores the assigned rank of picA (1 = worst to 4 = best)
(same for picB/picC/picD)
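The following is a minimal Python sketch (not part of the script; the helper name is illustrative)
showing how the raw-data fields trialImages and rankingOrder combine to recover which ItemNumber was
assigned to each rank in a single trial:

    def decode_trial(trial_images, ranking_order):
        # trial_images, e.g. '2958': the ItemNumbers presented as picA, picB, picC, picD
        # ranking_order, e.g. 'ACDB': picture letters from rank1 (least liked)
        # to rank4 (most liked)
        item_by_pic = dict(zip("ABCD", (int(ch) for ch in trial_images)))
        return {rank: item_by_pic[letter]
                for rank, letter in enumerate(ranking_order, start=1)}

    print(decode_trial("2958", "ACDB"))  # -> {1: 2, 2: 5, 3: 8, 4: 9}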
(2) Summary data file: 'preferenceranking_touchscreen_summary*.iqdat' (a separate file for each participant)

inquisit.version:     Inquisit version run
computer.platform:    the platform the script was run on (win/mac/ios/android)
startDate:            date the script was run
startTime:            time the script was started
subjectId:            assigned subject id number
groupId:              assigned group id number
sessionId:            assigned session id number
elapsedTime:          time it took to run the script (in ms); measured from onset to offset of the script
completed:            0 = script was not completed (prematurely aborted)
                      1 = script was completed (all conditions run)
useDefaultSequence:   0 = a valid trialSequence could be generated within the allotted timeframe
                      1 = no trialSequence could be generated within the allotted timeframe and the
                      default sequence was used instead
finalTrialSequence:   stores the ItemNumbers presented in each of the 10 trials

//////////Summary Variables:

/////by image:
image0:               contains the image file for image0
image0Cat:            contains the category of image0
                      1 = attractive nature
                      2 = attractive urban
                      3 = unattractive nature
                      4 = unattractive urban
                      5 = highly attractive nature
                      6 = very unattractive urban
meanRatingImage0:     the mean rating of the image with ItemNumber 0 (1 to 4, with 4 being the most preferred)
(same for images 1-9)

/////by rank (1-10):
meanRating1:          the mean rating of the image in rank1 (lowest rank - least liked)
...
meanRating10:         the mean rating of the image in rank10 (highest rank - most liked)

//the ranked ItemNumbers from rank1 (least liked) to rank10 (most liked)
//Notes:
//- if manual ranking of a subset of images is required, a note is left in the data file
//- information on how items were ranked can be found under section 'EXPERIMENTAL SET-UP' below
rank1:                the image/ItemNumber in rank1 (the least liked image) !!!Note: ItemNumbers from 0-9
...
rank10:               the image/ItemNumber in rank10 (the most liked image) !!!Note: ItemNumbers from 0-9

//////additional information about pairwise comparisons, based on the first time an item pair was presented:
pair01:               stores the 'winner' (higher ranked - more liked) image when image0 and image1 were
                      presented together for the first time
...
pair89:               stores the 'winner' (higher ranked - more liked) image when image8 and image9 were
                      presented together for the first time

/////individual counts:
image0Count - image9Count:  the number of times each image was presented (should be 4 for each)

___________________________________________________________________________________________________________________
EXPERIMENTAL SET-UP
___________________________________________________________________________________________________________________

(1) Trial Sequence Generator:
The Trial Sequence Generator (code in helper script trialSequenceGenerator.iqjs) generates a random
sequence of 10 trials, each presenting 4 random images (out of 10 possible ones), with the following
constraints:
- no repeats of image files within the same trial
- each stimulus is presented exactly 4 times across the 10 trials
- each of the 10 items is presented at least once with each of the other items within the same trial
  (= each possible image pair is presented at least once within the same trial)

If the script cannot find such a sequence within 500 attempts, the algorithm reverts to a default
sequence and leaves a note in the data file (a minimal sketch of such a generator appears below, after
the Test description).
Note: the time to create this sequence will vary from script run to script run.

(2) Intro/Practice:
By default, this script runs 2 practice trials (see Editable Parameters) ranking colored squares.

(3) Test:
The test block runs 10 trials, randomly selecting one of the 10 trial sequences generated by the
'trial sequence generator' at the beginning of the script.
- Each trial sequence contains the ItemNumbers of the four images to run.
- The start location of each image onscreen is randomly determined.
- Participants are asked to move the presented four images from best to worst (or worst to best; see
  Editable Parameters).
- To move on to the next trial, participants need to press the continue button twice.
- At the end of each trial, the script notes the final ranking order of the 4 stimuli.

!IMPORTANT! Regardless of instructions, the final ranking order is *always* recorded from 'worst to best'
(see Meidenbauer et al, 2019), with rank1 being the least liked and rank4 being the most liked image.
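As referenced under (1) above, the following is a minimal Python sketch of a rejection-sampling sequence
generator that enforces the three constraints with the same 500-attempt cap. It is illustrative only; the
actual algorithm lives in trialSequenceGenerator.iqjs and may differ in detail:

    import random
    from itertools import combinations

    def generate_trial_sequence(n_items=10, n_trials=10, per_trial=4, max_attempts=500):
        all_pairs = {frozenset(p) for p in combinations(range(n_items), 2)}
        for _ in range(max_attempts):
            # 10 trials x 4 slots = 40 = 10 items x 4 presentations each, so a
            # shuffled pool with 4 copies of every item fills the design exactly
            pool = [item for item in range(n_items) for _ in range(per_trial)]
            random.shuffle(pool)
            trials = [pool[i:i + per_trial] for i in range(0, len(pool), per_trial)]
            # constraint: no repeated image within a trial
            if any(len(set(trial)) < per_trial for trial in trials):
                continue
            # constraint: every possible image pair co-occurs in at least one trial
            covered = {frozenset(p) for trial in trials for p in combinations(trial, 2)}
            if covered == all_pairs:
                return trials
        return None  # no valid sequence found; the script reverts to a default

    sequence = generate_trial_sequence()
    print(sequence if sequence is not None else "using default sequence")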
///////Image Ranking Algorithm///////

At the end of the script, the script ranks the 10 image files from worst (rank1) to best (rank10).

Steps:
- The script calculates the mean rating for each individual image and ranks the images accordingly.
- If two images share the same mean rating, the script checks the trial in which both images were
  presented together for the first time. The image with the higher rating in that trial (the one that is
  liked better) receives the higher rank (see Meidenbauer et al, 2019).
- If three or more images share the same mean rating, it is up to researchers to determine the final
  rankings of these items (the script will store 'requires manual ranking'). To help with the ranking,
  the data file stores the 'winning' (= better liked) image for all possible image pairs (based on the
  first trial in which they were presented together).
(A minimal sketch of this ranking logic appears at the end of this header.)

___________________________________________________________________________________________________________________
STIMULI
___________________________________________________________________________________________________________________

This script runs with the 2 original stimuli sets provided by Meidenbauer et al: https://osf.io/xj3pk/
By default, the script selects stimuli set 1 - change under section Editable Parameters.

___________________________________________________________________________________________________________________
INSTRUCTIONS
___________________________________________________________________________________________________________________

Instructions are provided by Millisecond Software and can be edited under section 'Editable Instructions'.

___________________________________________________________________________________________________________________
EDITABLE CODE
___________________________________________________________________________________________________________________

Check below for (relatively) easily editable parameters, stimuli, instructions, etc. Keep in mind that
you can use this script as a template and therefore always "mess" with the entire code to further
customize your experiment.

The parameters you can change are:

//color parameters:
/ canvasColor = black           //display color of the actively used portion of the screen (the 'canvas')
                                //Note: if set to a color other than the screenColor, the active canvas
                                //appears 'anchored' on the screen regardless of monitor size
/ screenColor = black           //color of the screen not used by the canvas ('inactive screen')
/ defaultTextColor = white      //default color of text items presented on the active canvas

/////DESIGN:
/ stimuliSet = 1                //1 = use set1 stimuli
                                //2 = use set2 stimuli
/ rankingOrder = 1              //1 = positive to negative
                                //2 = negative to positive
/ practiceTrials = 2            //number of practice trials to run
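As referenced under 'Image Ranking Algorithm' above, the following is a minimal Python sketch of the
final ranking logic, assuming the per-trial ranks (1-4) and the first-co-occurrence pair winners
(cf. pair01-pair89) have already been extracted from the raw data file. All names are illustrative and
not part of the script:

    from statistics import mean

    def rank_images(ratings, pair_winners):
        # ratings: {ItemNumber: [assigned ranks 1-4 across that item's 4 trials]}
        # pair_winners: {(a, b): winner} with a < b, the better-liked item the
        # first time items a and b appeared in the same trial
        means = {item: mean(r) for item, r in ratings.items()}
        tied_groups = {}
        for item, m in means.items():
            tied_groups.setdefault(round(m, 6), []).append(item)
        ordered = []  # from rank1 (least liked) to rank10 (most liked)
        for m in sorted(tied_groups):  # lower mean rating = less liked
            tied = tied_groups[m]
            if len(tied) == 1:
                ordered.extend(tied)
            elif len(tied) == 2:
                a, b = sorted(tied)
                winner = pair_winners[(a, b)]
                ordered.extend([b if winner == a else a, winner])  # loser gets the lower rank
            else:
                # 3+ images with the same mean rating: left to the researcher,
                # mirroring the script's 'requires manual ranking' note
                ordered.extend("requires manual ranking ({0})".format(item) for item in tied)
        return ordered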