___________________________________________________________________________________________________________________

										Deck Choice Effort Task
___________________________________________________________________________________________________________________

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 11-07-2023
last updated: 11-10-2023 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 11-10-2023 Millisecond Software

Millisecond Software thanks Joshua Di Vincenzo for his assistance in creating this script!
___________________________________________________________________________________________________________________
										BACKGROUND INFO
___________________________________________________________________________________________________________________

This script implements Millisecond Software's version of the Deck Choice Effort Task, a paradigm for studying
cognitive effort-based decision making using two tasks of varying cognitive difficulty while keeping physical
effort the same.

The Inquisit script is based on the paper by Reddy et al (2018) and the original E-Prime script of the task.

Reference:
Reddy, L. F., Horan, W. P., Barch, D. M., Buchanan, R. W., Gold, J. M., Marder, S. R., Wynn, J. K., Young, J.,
& Green, M. F. (2018). Understanding the Association Between Negative Symptoms and Performance on Effort-Based
Decision-Making Tasks: The Importance of Defeatist Performance Beliefs. Schizophrenia Bulletin, 44(6), 1217–1226.
https://doi.org/10.1093/schbul/sbx156
___________________________________________________________________________________________________________________
										TASK DESCRIPTION
___________________________________________________________________________________________________________________

Participants are instructed to choose between 2 decks of cards that represent an Easy (cognitive) task vs. a
Hard (cognitive) task.
The Easy deck contains only cards of the same color.
The Hard deck contains cards of alternating colors.

The colors are tied to specific tasks to perform:
blue -> do a parity task ('is this number odd or even?')
yellow -> do a magnitude task ('is this number less than or greater than 5?')

Choosing to work on the Hard deck (with the alternating tasks) can earn more money.
Specifically, three levels of reward for the Hard deck are tested (the Easy deck offers the same amount throughout).

The script runs 3 different experimental phases:
1. Practice of tasks: learn to associate each color with the task to perform
2. Introduction of the Easy vs. Hard card decks
3. Test

Note: the script offers the choice to run in an "in person" vs. a "remote" setting (default is remote).
in person:	the test administrator can select each phase separately via a selection screen
remote:		the task flow is automatic
___________________________________________________________________________________________________________________
										DURATION
___________________________________________________________________________________________________________________

The default set-up of the script takes approx. 20 minutes to complete:
practice1: ~3 min per block
practice2: ~1 min per block
test: ~1 min per block
___________________________________________________________________________________________________________________
										DATA FILE INFORMATION
___________________________________________________________________________________________________________________

The default data stored in the data files are:

(1) Raw data file: 'deckchoiceefforttask_raw*.iqdat' (a separate file for each participant)

build:						the specific Inquisit version used (the 'build') that was run
computer.platform:			the platform the script was run on (win/mac/ios/android)
date, time:					date and time the script was run
subject:					the current subject id
group:						the current group id
session:					the current session id
blockcode, blocknum:		the name and number of the current block (built-in Inquisit variables)
trialcode, trialnum:		the name and number of the currently recorded trial (built-in Inquisit variables)
							Note: trialnum is a built-in Inquisit variable; it counts all trials run,
							even those that do not store data to the data file.
phase:						1 = practice with feedback (training of the individual tasks)
							2 = practice w/o feedback (introduction of the decks)
							3 = test
blockCounterPerPhase:		tracks the number of blocks run during the current phase
trialCounterPerBlock:		tracks the number of trials run during the current block
difficulty:					1 = easy task; 2 = hard task
currentRewardEasy:			the currently offered reward for the easy task
currentRewardHard:			the currently offered reward for the hard task
rewardLevelHard:			1, 2, or 3 (depending on reward amount)
currentReward:				the offered reward for the current task (tied to selection)
currentColor:				the color of the current card
currentTask:				the current task to perform (tied to color)
number:						the currently presented number (1-9; 5 excluded)
correctResp:				"1" or "2" (refers to the correct key to press)
selectFeedback:				1 = correct; 2 = incorrect; 3 = no response
winnings:					the current win amount
total:						sum of all win amounts across all test blocks
response:					the response of the participant (scancode of the response button)
							Note: scancodes can be confusing for number keys (example: scancode 2 refers to key "1")
							57 -> spacebar press
responseText:				the label of the key pressed (note: appears empty for a spacebar press)
correct:					correctness of the response (1 = correct, 0 = error)
latency:					response latency (in ms); measured from onset of the card
list.ACC_block.mean:		running proportion of correct responses during the current block
list.RT_block.mean:			running mean response time (in ms) during the current block
							(based on correct AND incorrect responses as well as no responses)
practiceSuccess:			1 = practice1 session was finished successfully (only applies to 'remote' testing settings)
							0 = otherwise

(2) Summary data file: 'deckchoiceefforttask_summary*.iqdat' (a separate file for each participant)

inquisit.version:			Inquisit version run
computer.platform:			the platform the script was run on (win/mac/ios/android)
startDate:					date the script was run
startTime:					time the script was started
subjectid:					assigned subject id number
groupid:					assigned group id number
sessionid:					assigned session id number
elapsedTime:				time it took to run the script (in ms); measured from onset to offset of the script
completed:					0 = script was not completed (prematurely aborted)
							1 = script was completed (all conditions run)
propHardChoices:			proportion of Hard choices across all test blocks
propHardChoices1:			proportion of Hard choices when the hard choice offers reward1 (lowest reward)
propHardChoices2:			proportion of Hard choices when the hard choice offers reward2 (medium reward)
propHardChoices3:			proportion of Hard choices when the hard choice offers reward3 (highest reward)
effortScore:				difference score (propHardChoices3 - propHardChoices1)
							Reddy et al (2018): "Higher scores indicate greater willingness to exert effort
							for large versus small rewards."
							Note: this score is only calculated if 0 < propHardChoices < 100; otherwise "NA"
							is noted in the data file (thus excluding participants who always chose Hard
							and those who never chose Hard)
propCorrectEasy:			proportion of correct Easy task responses across all test trials
meanCorrRTEasy:				mean response time (in ms) for correct Easy task responses (across all test trials)
propCorrectHard:			proportion of correct Hard task responses across all test trials
meanCorrRTHard:				mean response time (in ms) for correct Hard task responses (across all test trials)
___________________________________________________________________________________________________________________
										EXPERIMENTAL SET-UP
___________________________________________________________________________________________________________________

This task provides the option to use an 'in person' set-up or a 'remote' set-up:

in person:	the script always returns to a task selection menu after each experimental phase.
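As a worked illustration of how the effortScore summary value follows from the choice proportions, here is a minimal Python sketch. It is not the script's actual code; the function and argument names are hypothetical, and proportions are expressed in percent to match the 0-100 range used in the note above.

```python
# Hypothetical sketch of the effortScore computation; NOT the Inquisit code.
# hard_choices_by_level maps reward level (1, 2, 3) to a list of per-block
# deck choices (True = Hard deck chosen, False = Easy deck chosen).

def effort_score(hard_choices_by_level):
    all_choices = [c for choices in hard_choices_by_level.values() for c in choices]
    prop_hard = 100 * sum(all_choices) / len(all_choices)
    # the score is only defined if the participant sometimes, but not always,
    # chose the Hard deck (0 < propHardChoices < 100); otherwise "NA"
    if not 0 < prop_hard < 100:
        return "NA"
    def prop(level):
        choices = hard_choices_by_level[level]
        return 100 * sum(choices) / len(choices)
    # effortScore = propHardChoices3 - propHardChoices1
    return prop(3) - prop(1)
```

For example, a participant who chose Hard on 1 of 4 lowest-reward blocks and 3 of 4 highest-reward blocks would get an effortScore of 75 - 25 = 50.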
			This set-up allows test administrators to select which phase should be run next (if any).
remote:		this set-up automatically calls the different experimental phases in sequence.

The default setting of this script is 'remote'.

(1) Practice1 (task learning phase with performance feedback after each response)
- 20 practice trials per block
- colors/tasks are selected randomly WITHOUT replacement => 50% parity tasks, 50% magnitude tasks
- numbers are selected randomly WITH replacement from list.numbers (see below)
- each card is presented for 3000 ms (editable parameter)
- at the end of each practice1 block, performance feedback is provided

"remote" only:		the script checks performance after each practice1 block. If performance is lower than
					70% correct (editable parameter), the practice block is repeated. If the maximum number of
					practice1 blocks has been run without success (default: 3, editable parameter), the script
					terminates prematurely.
"in person" only:	at the end of the block (when performance feedback is presented), participants are asked to
					fetch the test administrator. The test administrator can use the performance feedback to decide
					whether the block needs to be repeated. Pressing key 'C' calls the task selection menu screen
					(hidden from participants).

(2) Practice2 (introduction of the Easy/Hard card decks; no response feedback provided)
- 2 blocks of 10 trials each (aka '10' cards per deck played): Easy block vs. Hard block
- the order of decks/blocks is randomly determined
- each card is presented for 3000 ms (editable parameter)

(3) Test
- 12 blocks of 10 trials each (editable parameters; note: the original E-Prime script runs 36 blocks)
- equal number of blocks that offer hardReward1, hardReward2, hardReward3
  (all blocks offer the same amount for the Easy deck)
- colors selected for Easy tasks: a color is selected randomly with the constraint that roughly half of the
  selected Easy tasks are parity tasks
- start colors selected for Hard tasks: a starting color is selected randomly with the constraint that roughly
  half of the first presented cards are blue
- at the end of each block, the participant receives feedback about money won
  => if at least 90% of responses (editable parameter) are correct, the money is won and added to the total
  (otherwise no money is won)

////////////////////////
Test Trial Sequence:
////////////////////////

-> task choice (until response)
-> task feedback (until spacebar response)
-> ISI (500 ms)
-> card trial (duration: 3000 ms; card removed after response)
   (regardless of response => easy and hard tasks take the same amount of time)
-> ISI (500 ms)
....
-> feedback (3000 ms)
___________________________________________________________________________________________________________________
										STIMULI
___________________________________________________________________________________________________________________

provided by Millisecond Software - can be edited under section 'Editable Stimuli'
___________________________________________________________________________________________________________________
										INSTRUCTIONS
___________________________________________________________________________________________________________________

provided by Millisecond Software - can be edited under section 'Editable Instructions'.
Partly based on the original E-Prime script.
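The card-task rules and the end-of-block reward criterion described above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual Inquisit implementation; in particular, which response key stands for 'odd'/'even' and 'less'/'greater' is an assumption made here for illustration.

```python
# Illustrative sketch of the card-task rules and the end-of-block reward
# criterion; NOT the Inquisit code. The key assignments ("1"/"2") below are
# assumptions; the script defines the actual mapping.

def correct_response(color, number):
    """Return the assumed correct key ("1" or "2") for a card."""
    if number == 5 or not 1 <= number <= 9:
        raise ValueError("card numbers run from 1-9, excluding 5")
    if color == "blue":        # parity task: is this number odd or even?
        return "1" if number % 2 == 1 else "2"
    if color == "yellow":      # magnitude task: less or greater than 5?
        return "1" if number < 5 else "2"
    raise ValueError(f"unknown card color: {color}")

def block_winnings(responses_correct, current_reward, min_performance=0.9):
    """The block's reward is only won if at least min_performance
    (default 90%, cf. minTestPerformanceForReward) of responses were correct."""
    prop_correct = sum(responses_correct) / len(responses_correct)
    return current_reward if prop_correct >= min_performance else 0.0
```

For example, 9 of 10 correct responses in a Hard block offering $0.40 meets the 90% criterion, so `block_winnings([True]*9 + [False], 0.4)` returns 0.4, while 8 of 10 correct returns 0.0.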
___________________________________________________________________________________________________________________
										EDITABLE CODE
___________________________________________________________________________________________________________________

Check below for (relatively) easily editable parameters, stimuli, instructions, etc.
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code
to further customize your experiment.

The parameters you can change are:

##design

taskAdministration = "remote"			choose from: "in person", "remote"
										"in person" = provides a menu to choose (1) practice with feedback;
										(2) practice without feedback; (3) test; (4) Exit
										"remote" = runs the blocks in sequence and implements a learning
										criterion for the first practice block

###### only applies for taskAdministration = "remote"

minPracticeACC = 0.7					implements a learning criterion for the 'color-task' association;
										reruns the practice1 block (up to 'maxPracticeBlocks' times) if
										performance is lower than 70% correct.
										If minPracticeACC is not achieved, the task terminates prematurely.
maxPracticeBlocks = 3					maximum number of practice1 blocks run.
										If minPracticeACC is not achieved, the task terminates prematurely.
######/

numberPractice1TrialsPerBlock = 20		number of practice1 trials per learning block (note: should be an even number)
numberPractice2TrialsPerBlock = 10		number of practice2 trials per block (aka 'number of cards per deck')
										Note: practice2 runs 2 blocks (one Easy, one Hard)
numberTestBlocks = 12					number of test blocks to run (note: needs to be divisible by 3 because
										of the three different levels of hard rewards)
										Note: the original E-Prime script runs 36 blocks
numberTestTrialsPerBlock = 10			number of test trials per test block (aka 'number of cards per deck')
minTestPerformanceForReward = 0.9		minimum proportion of correct responses during a test block to get rewarded

##reward selection

easyReward = 0.1						the reward amount offered for selection of the Easy deck
hardReward1 = 0.1						the lowest reward amount offered for selection of the Hard deck
hardReward2 = 0.2						the medium reward amount offered for selection of the Hard deck
hardReward3 = 0.4						the highest reward amount offered for selection of the Hard deck
monetaryUnit = "$"						the monetary unit used to present amounts
monetaryUnitPlacement = "B"				choose from "B" (before) or "A" (after) the monetary amount

##color selection

parityTaskColor = "blue"				color used for the parity (odd/even) task
										-> use the English term for the selected color (used in the actual code)
magnitudeTaskColor = "yellow"			color used for the magnitude (less/more) task
										-> use the English term for the selected color (used in the actual code)
										Change the actual instructions in section 'Editable Instructions'

##timing parameters

numberDuration_inms = 3000				the duration (in ms) that the number is presented (also the response timeout)
isi_inms = 500							the interstimulus interval (in ms) before/after a number is presented
practiceFeedbackDuration_inms = 2000	the duration (in ms) of the feedback reported during practice1
testFeedbackDuration_inms = 3000		the duration (in ms) of the feedback reported after each test block

##sizing parameters

cardHeight_inpct = 40%					the height of the cards, proportional to canvas height
numberFontHeight_inpct = 10%			the font size of the numbers on the card, proportional to canvas height
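The "remote" learning criterion controlled by minPracticeACC and maxPracticeBlocks can be sketched as follows. This is a hypothetical Python sketch, not the Inquisit flow itself; run_practice1_block stands in for running one practice1 block and returning its proportion of correct responses.

```python
# Hypothetical sketch of the "remote" practice1 flow; NOT the Inquisit code.
# Defaults mirror the editable parameters minPracticeACC and maxPracticeBlocks.

def remote_practice1(run_practice1_block, min_practice_acc=0.7, max_practice_blocks=3):
    """Return True (cf. practiceSuccess = 1) if the accuracy criterion is met
    within max_practice_blocks blocks; False means the script would terminate
    prematurely."""
    for _ in range(max_practice_blocks):
        if run_practice1_block() >= min_practice_acc:
            return True     # criterion met; continue to practice2
    return False            # criterion never met; premature termination
```

For example, a participant scoring 50% on the first block and 80% on the repeated block passes; one who never reaches 70% within three blocks does not.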