Cognitive Effort Discounting Task (with speech task)
Script Author: Katja Borchert, Ph.D. (firstname.lastname@example.org) for Millisecond Software LLC
last updated: 04-12-2021 by K. Borchert (email@example.com) for Millisecond Software, LLC
Script Copyright © 04-12-2021 Millisecond Software
Millisecond Software thanks Jenny Crawford and the Braver lab at Washington University in St. Louis
for collaborating on this script!
This script implements a variant of the Cognitive Effort Discounting Procedure to establish
indifference points (minimum price points) at which people start to discount ("devalue") monetary rewards
paid for higher effort tasks in favor of less paid lower effort tasks.
The implemented procedure in this script mirrors the n-back Cognitive Effort Discounting Procedure
(see Inquisit script cognitiveeffortdiscountingtask.iqx) but uses a speech task as the cognitive task
(McLaughlin et al, 2020).
Reference for the speech-task Cognitive Effort Discounting Task:
McLaughlin, D.J., Braver, T.S., & Peelle, J.E. (2020).
Measuring the Subjective Cost of Listening Effort Using a Discounting Task.
Reference for the original (n-back) Cognitive Effort Discounting Task:
Westbrook, A., Kester, D., & Braver, T.S. (2013). What Is the Subjective Cost of Cognitive Effort? Load,
Trait, and Aging Effects Revealed by Economic Preference.
Pessiglione M, ed. PLoS ONE 8: e68210.
Participants are repeatedly asked to choose between a more difficult cognitive task with
a higher earning potential or an easier cognitive task with less earning potential.
Through a series of questions the participants' cognitive effort indifference point for the pair
of tasks is determined (aka the price point at which participants start to devalue the potential
higher pay for a challenging cognitive task in favor of less pay for a less challenging one).
In this script, the cognitive task chosen is a 'Speech' task (McLaughlin et al, 2020).
For the Speech task, participants are given sentences distorted by different signal-to-noise ratios (SNR)
and have to repeat the sentences. This script uses 4 levels of difficulty for the Speech task
(SNR0, SNR-4, SNR-8, SNR-12)
The procedure is divided into 3 phases:
1. Participants get familiar with the Speech task procedure and work through each level (from the easiest to the hardest).
Each level presents 15 sentences.
2. Participants work through the indifference point estimation procedure (without actually performing speech tasks)
for the 3 higher levels of N (1 vs. 2; 1 vs. 3; 1 vs. 4)
=> 3 indifference points are estimated for each level of N tested
3. One of the participant's choices is randomly selected, the participant works through 20 more trials of the
Speech task at the selected level for the promised reward.
The reward will be multiplied by parameters.phase3Runs (default: 5) to be comparable to the n-back version of the script.
The default set-up of the script takes approx. 20 minutes to complete.
DATA FILE INFORMATION
The default data stored in the data files are:
(1) Raw data file: 'cognitiveeffortdiscountingtask_speechtask_raw.iqdat' (a separate file for each participant)
build: The specific Inquisit version used (the 'build') that was run
computer.platform: the platform the script was run on (win/mac/ios/android)
date, time: date and time the script was run
subject, group: the current subject/group number
script.sessionid: with the current session id
blockcode, blocknum: the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: the name and number of the currently recorded trial (built-in Inquisit variable)
Note: trialnum is a built-in Inquisit variable; it counts all trials run; even those
that do not store data to the data file such as feedback trials. Thus, trialnum
may not reflect the number of main trials run per block.
phase1_sentenceList_level1: the randomly assigned sentencelist (1-5) used for level1 (SNR0) during phase1
phase1_sentenceList_level2: the randomly assigned sentencelist (1-5) used for level2 (SNR-4) during phase1
phase1_sentenceList_level3: the randomly assigned sentencelist (1-5) used for level3 (SNR-8) during phase1
phase1_sentenceList_level4: the randomly assigned sentencelist (1-5) used for level4 (SNR-12) during phase1
phase1BlockCount: the total number of 'speech' training blocks run (phase 1; default = 4 => each level is run once)
phase3BlockCount: the total number of phase 3 'speech' blocks run (phase 3; default = 1)
phase: 1 = phase 1 (practice/training)
2 = indifference point assessment
3 = phase 3 (final speech round)
N: the level of difficulty currently tested (Note: there are four levels run in this script)
N = 1 => black task (SNR0) (easiest level)
N = 2 => red task (SNR-4)
N = 3 => blue task (SNR-8)
N = 4 => purple task (SNR-12) (hardest level)
sentNum: for speech task: the number of the current sentence run
sentenceSoundFile: the actual sentence file run
sentence: the sentence run
correctResponse: the words that are accepted as correct responses
(Note: the official correct responses contain the keys words in singular form)
comparisonResponse: the 'cleaned' entered response (e.g. all lower caps, commas removed, only singular words etc.)
that is compared to the correctResponse
Note: as the implemented 'cleaning' algorithm will not catch all irregularities (e.g. spelling mistakes),
when in doubt, the responses may have to be manually checked
response: the response of the participant (scancode of response button)
speech trials: the entered sentence (as entered)
16 = Q (left box selected during Choice Trials)
25 = P (right box selected during Choice Trials)
57 = spacebar
0 = no response
correct: the correctness of the response (1 = correct; 0 = otherwise)
Note: has no meaning for phase 2 trials
Note: training phase: the trial is recorded as 1 if values.correctResponse = values.comparisonResponse.
HOWEVER: the script also tracks the number of correct words per trial (see values.countCorrectWords)
correctWord1-correctWord4: retrieves the 4 correct words from values.correctResponse and stores them in four individual variables
countCorrectWords: counts the number of correct speech words entered
(Note: script retrieves each of the four correct speech words and checks whether
it can be found in values.comparisonResponse)
latency: how fast a participant responded within the given timeframe (in ms)
the following variables have only meaning for phase2:
N1_x: 25pct = the N1 reward was located in the left box
75pct = the N1 reward was located in the right box
level1Reward: Phase 2: the currently offered reward for choosing the easiest black speech task
levelNReward: Phase 2: the currently offered reward for choosing the more difficult speech task
the following variables have only meaning for phase3:
RoundWinAmount: Phase 3: the amount won after phase 3 round
TotalWinAmount: Phase 3: the total amount won during phase 3
(Note: only one round is run during phase3; the reward will be multiplied by parameters.phase3Runs)
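The countCorrectWords scoring described above (checking each of the four key words against values.comparisonResponse) can be sketched in Python. This is a simplified illustration, not the actual Inquisit expression; the function name is hypothetical:

```python
# Simplified sketch of the countCorrectWords logic (hypothetical helper,
# not the actual Inquisit expression): each key word from the correct
# response is looked up among the words of the cleaned participant response.
def count_correct_words(correct_response: str, comparison_response: str) -> int:
    key_words = correct_response.split()
    entered = comparison_response.split()
    return sum(1 for word in key_words if word in entered)
```

For example, with correct response 'white boy ran fast' and cleaned input 'boy ran fast away', three of the four key words match.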
current NASA-TLX summary variables: on a scale of 1 (very low) to 21 (very high)
(2) Summary data file: 'cognitiveeffortdiscountingtask_speechtask_summary*.iqdat' (a separate file for each participant)
inquisit.build: the inquisit build/version
computer.platform: the platform the script was run on (win/mac/ios/android)
script.startdate: date script was run
script.starttime: time script was started
script.subjectid: assigned subject id number
script.groupid: assigned group id number
script.sessionid: assigned session id number
script.elapsedtime: time it took to run script (in ms); measured from onset to offset of script
script.completed: 0 = script was not completed (prematurely aborted);
1 = script was completed (all conditions run)
IP12-IP14: indifference points
(here: values.level1Reward after the 6th choice for the currently tested level of N)
TotalWinAmount: the total win amount from phase 3
expressions.percentCorrect_N1: the average percent correctly recalled words per level of N during training (Example: 75% => on average participant recalled 3 out of 4 words per speech trial)
expressions.percentCorrect: the average percent correctly recalled words during phase3 (Example: 75% => on average participant recalled 3 out of 4 words per speech trial)
expressions.meanCorrectSpeechWords_N1: the mean number correctly entered speech words per trial during training (Example: 3 => on average, participant entered 3 (out of 4) correct words)
expressions.meanCorrectSpeechWords: the mean number correctly entered speech words per trial during phase3 (Example: 3 => on average, participant entered 3 (out of 4) correct words)
NASA-TLX summary variables (per difficulty level run during phase 1): on a scale of 1 (very low) to 21 (very high)
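The relationship between the two summary expressions above can be illustrated with a small sketch (hypothetical function names, assuming 4 key words per sentence): the percent measure is simply the mean word count scaled by the number of key words.

```python
# Hypothetical sketch relating the two summary measures: percent correct
# is the mean number of correctly recalled key words per trial, scaled
# by the 4 key words each sentence contains.
def mean_correct_words(word_counts: list[int]) -> float:
    return sum(word_counts) / len(word_counts)

def percent_correct(word_counts: list[int], words_per_sentence: int = 4) -> float:
    return 100 * mean_correct_words(word_counts) / words_per_sentence
```

A participant who recalls 3 of 4 words on every trial scores 75%, matching the example in the description above.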
(1) Phase 1: Speech Task Practice
Speech task: participants are given sentences distorted by progressively decreasing signal-to-noise ratios (SNR; i.e. increasing noise)
and have to repeat each sentence by entering the sentence they heard into a textbox.
- by default, this script runs through four levels of the speech task in order of difficulty (= increased noise levels)
- each level presents 15 sentences
N = 1 (level1) => Black task (SNR0)
N = 2 (level2) => Red task (SNR-4)
N = 3 (level3) => Blue task (SNR-8)
N = 4 (level4) => Purple task (SNR-12)
- at the end of each block, participants receive feedback
- Accuracy Determination (see expressions.responseCleaning for how the entered response is cleaned up for comparison):
The script uses the 'correct responses' declared in sentencelist_1.iqx to sentencelist_repeat.iqx.
Each correct response should contain the important key words in singular form and in lower case letters.
The script cleans up the entered response by
- turning all letters to lowercase
- removing 'the' , 'a', and 'an'
- removing ",", ".", ":", ";" and newlines
- removing '-' and joining hyphenated words into one (e.g. boy-friend)
- removing trailing 's' from individual words
- correcting spellings of a few individual words
(e.g. 'familie' is corrected to 'family', 'grey' is corrected to 'gray', 'men' is corrected to 'man')
- comparing the 'cleaned-up' input to the correct response
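The cleaning steps above can be sketched as follows. This is an illustrative Python approximation, not the actual Inquisit expression, and the spelling-correction table contains only the examples mentioned above:

```python
import re

# Illustrative approximation of the response-cleaning steps (not the
# actual Inquisit expression); the correction table is only the subset
# of spellings mentioned in the description.
CORRECTIONS = {"familie": "family", "grey": "gray", "men": "man"}

def clean_response(text: str) -> str:
    text = text.lower()                      # lowercase everything
    text = re.sub(r"[,.:;\n]", " ", text)    # remove punctuation and newlines
    text = text.replace("-", "")             # join hyphenated words (boy-friend -> boyfriend)
    words = []
    for word in text.split():
        if word in ("the", "a", "an"):       # drop articles
            continue
        if len(word) > 1 and word.endswith("s"):
            word = word[:-1]                 # strip trailing plural 's'
        words.append(CORRECTIONS.get(word, word))
    return " ".join(words)
```

For example, "The boy-friend saw two Families, grey cats." cleans to "boyfriend saw two family gray cat". As noted above, such a crude algorithm will not catch all irregularities (e.g. spelling mistakes), so responses may need manual checking.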
Note: after each level, participants are asked to self-report on the 6 rating scales of the NASA Task Load Index (NASA-TLX) survey.
(2) Phase 2: Indifference Point Estimations
- 3 Indifference Points per level of N tested (i.e. 3 different standard amounts are offered for each level of N)
- by default this script runs 54 trials = 3 levels of N (2, 3, 4) x 3 IPs per N level x 6 trials
- the trials are presented in random order in a mixed design
- level of speech tasks are referred to only by color
- the position (right/left) of the standard amount is randomly determined:
for half the trials the standard amount appears on the right
- participants have 9s to make a choice (after 9s the black task is automatically assigned as the default choice)
Indifference Point Algorithm:
- the harder level N always offers a higher (fixed) reward (see section Editable Parameters),
- the possible reward for the easier black speech task starts at 1 but gets adjusted according to the choices made:
- if the harder choice was previously selected; the reward goes up (by half the previous adjustment)
- if the easier choice was previously selected; the reward goes down (by half the previous adjustment)
- Indifference Points: the adjusted reward for level 1 after the last choice (aka: the reward that would be offered for level 1 on the next choice)
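The adjustment rule above amounts to a simple titration staircase. A minimal sketch (hypothetical function, assuming a starting reward of 1 and a first adjustment of half that amount):

```python
# Minimal sketch of the titration described above (hypothetical helper,
# not Millisecond's code): the easy-task reward starts at 1 and, after
# each choice, moves up (harder task chosen) or down (easier task chosen)
# by half the previous adjustment.
def titrate(choices: list[str], start: float = 1.0) -> float:
    reward = start
    step = start / 2                 # first adjustment: half the starting reward
    for choice in choices:
        reward += step if choice == "hard" else -step
        step /= 2                    # each adjustment is half the previous one
    return reward                    # after the last choice: the indifference point
```

For example, after the choice sequence hard, easy, hard the adjusted reward is 1 + 0.5 - 0.25 + 0.125 = 1.375.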
(3) Phase 3: Actual Money Earning Phase (can be skipped; see parameters.phase3Runs for more info)
- one of the 3 (level of N: 2-4) * 6 (trials per level of N) * 3 indifference points per N level = 54 choice trials
from Phase 2 is randomly selected by the computer
- the chosen difficulty level N (= chosen color task) during that trial as well as the promised reward is used
for one more block of 20 speech trials. Participants are told that they may have to work through 5 more blocks of these
but the reward is multiplied by 5 after this single block.
This script does not provide any behavioral measure of actual 'effort' during phase 3
* soundfiles provided by McLaughlin et al (2020): https://osf.io/8jpnx/files/.
Instructions are provided by Millisecond Software as *.htm files.
To change instructions, edit the htm files directly (e.g. in TextEdit for Macs or Notepad for Windows).
Instructions have been generously shared by Jenny Crawford and her lab.
Check below for (relatively) easily editable parameters, stimuli, instructions etc.
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code
to further customize your experiment.
The parameters you can change are: