Internal vs. External Attention Task with LSL

Technical Manual

Script Author: David Nitz (dave@millisecond.com), Millisecond

Last Modified: January 17, 2024 by K. Borchert (katjab@millisecond.com), Millisecond

Script Copyright © Millisecond Software, LLC

Background

This script implements the Internal-External Attention Task, a probe-detection paradigm that measures internal attention (e.g. focusing on bodily sensations) vs. external attention (e.g. focusing on visual stimuli on the screen).

For details on the procedure implemented by this script refer to

Mansell, W., Clark, D. M., & Ehlers, A. (2003). Internal versus external attention in social anxiety: an investigation using a novel paradigm. Behaviour Research and Therapy, 41, 555–572.

NOTE: this script uses the Lab Streaming Layer (LSL) framework to synchronize streaming data across devices.

Duration

8 minutes

Description

Participants are asked to react to a series of probes as quickly as possible while looking at a series of pictures on a computer screen, each displayed for approximately 25 s. Pictures include happy, angry, and neutral facial expressions of males and females as well as objects. Two types of probes are used: participants are asked to press the Spacebar whenever they feel a slight vibration, claimed to reflect changes in their physiology (internal) as measured by a sensor, and whenever they see an "E" flashed onto the screen (external).

Procedure

Practice: 8 practice trials (1 practice image with 4 external and 4 internal probe trials; order is fixed)

Test:
• four test rounds are run (32 trials per test round => 128 trials total)
• each test round runs four image category blocks (angry, happy, neutral faces, objects); the image categories run in a blocked design
• each image category block runs 8 trials (4 external, 4 internal) => each image category runs 32 trials across the four test rounds (16 internal, 16 external)
• there are 4 different probe sequences: each image category block selects one of the sequences randomly, with the constraint that across the four test rounds each image category runs each probe sequence once
• there are 4 different probe SOA patterns: each image category block selects one of the four patterns randomly (see list.x1, list.x2, list.y1, list.y2 for more information), with the constraint that each image category runs each of the four patterns once across the four test rounds
• the individual pictures of each image category block are selected randomly (e.g. the four angry pictures are randomly assigned to testround1-testround4)


• the four test rounds differ in the order of the four image category blocks (order is determined by a Latin Square)
• four groupnumbers run 4 different sequences of the four testrounds:
Groupnumber 1: runs "angry, happy, neutral, object" (round1), "happy, neutral, object, angry" (round2), "neutral, object, angry, happy" (round3), "object, angry, happy, neutral" (round4)
Groupnumber 2: runs "happy, neutral, object, angry" (round1), "neutral, object, angry, happy" (round2), "object, angry, happy, neutral" (round3), "angry, happy, neutral, object" (round4)
Groupnumber 3: runs "neutral, object, angry, happy" (round1), "object, angry, happy, neutral" (round2), "angry, happy, neutral, object" (round3), "happy, neutral, object, angry" (round4)
Groupnumber 4: runs "object, angry, happy, neutral" (round1), "angry, happy, neutral, object" (round2), "happy, neutral, object, angry" (round3), "neutral, object, angry, happy" (round4)
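The counterbalancing above can be sketched in a few lines of Python. This is an illustrative model, not the script's actual Inquisit code: `round_order` reproduces the Latin-square rotation of category blocks per groupnumber and test round, and `assign_once_each` models the without-replacement constraint used for probe sequences and SOA patterns (the `x1`/`x2`/`y1`/`y2` labels mirror the script's list names).

```python
import random

# Base category order (groupnumber 1, test round 1)
CATEGORIES = ["angry", "happy", "neutral", "object"]

def round_order(group, test_round):
    """Category order for groupnumber 1-4 in test round 1-4.

    Each test round advances the cyclic (Latin-square) rotation by one
    position; each groupnumber starts one rotation later than the previous.
    """
    shift = (group - 1 + test_round - 1) % 4
    return CATEGORIES[shift:] + CATEGORIES[:shift]

def assign_once_each(options, rng=random):
    """Randomly assign four options to the four test rounds, each used
    exactly once (the without-replacement constraint)."""
    picks = list(options)
    rng.shuffle(picks)
    return picks

# Per-category assignment of the four probe SOA patterns across rounds
soa_patterns = ["x1", "x2", "y1", "y2"]
pattern_by_round = {cat: assign_once_each(soa_patterns) for cat in CATEGORIES}
```

For example, `round_order(1, 2)` yields `["happy", "neutral", "object", "angry"]`, matching Groupnumber 1's round 2 above.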


Trial Sequence:
Stimulus (e.g. object) displayed for the assigned SOA -> probe (for 100ms) -> response (measured from onset of probe; max response time: 3000ms)
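The trial timing can be laid out as event onsets relative to image onset. This is a schematic sketch (not Inquisit code); `probe_soa_ms` stands for whatever value the selected SOA pattern supplies:

```python
def trial_timeline(probe_soa_ms, probe_duration_ms=100, response_timeout_ms=3000):
    """Event onsets (ms from image onset) for a single trial."""
    return {
        "image_on": 0,                                  # picture appears
        "probe_on": probe_soa_ms,                       # probe after the assigned SOA
        "probe_off": probe_soa_ms + probe_duration_ms,  # probe shown for 100 ms
        # latency is measured from probe onset; responses after 3000 ms are errors
        "response_deadline": probe_soa_ms + response_timeout_ms,
    }
```

For example, with a 2000 ms SOA the probe offsets at 2100 ms and the response window closes at 5000 ms after image onset.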

Stimuli

The stimuli used in this script are not the same as in Mansell et al. (2003).
Original face and object images are not in the public domain and/or may not be
redistributed.

Face images courtesy of the Face-Place Face Database Project
(https://www.face-place.org/).
Copyright 2008, Michael J. Tarr, Center for the Neural Basis of
Cognition, Carnegie Mellon University
(https://www.tarrlab.org/).
Funding provided by NSF award 0339122.

Face stimuli released under the Creative Commons Attribution Share Alike license
(https://creativecommons.org/licenses/by-sa/3.0/).

Object images courtesy of the Object Data Bank.
Copyright 1996, Brown University, Providence, RI.
All Rights Reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose other than its incorporation into a
commercial product is hereby granted without fee, provided that the above
copyright notice appear in all copies and that both that copyright notice
and this permission notice appear in supporting documentation, and that the
name of Brown University not be used in advertising or publicity pertaining
to distribution of the software without specific, written prior permission.
Images produced by this software are also copyright Brown University and
may not be used for any commercial purpose.

BROWN UNIVERSITY DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY
PURPOSE. IN NO EVENT SHALL BROWN UNIVERSITY BE LIABLE FOR ANY SPECIAL,
INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.

Instructions

see section Editable Instructions

Summary Data

File Name: ieat_summary*.iqdat

Data Fields

Name Description
inquisit.version Inquisit version number
computer.platform Device platform: win | mac | ios | android
startDate Date the session was run
startTime Time the session was run
subjectId Participant ID
groupId Group number
sessionId Session number
elapsedTime Session duration in ms
completed 0 = Test was not completed
1 = Test was completed
propCorrectInt Proportion of spacebar presses with latencies <= responseTimeout (here: 3000ms) for internal probes
propCorrectExt Proportion of spacebar presses with latencies <= responseTimeout (here: 3000ms) for external probes
meanRTInt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes
meanRTExt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes
ab Attention bias (difference: int-ext)
=> positive: participant was faster to attend to external than internal probes
=> negative: participant was faster to attend to internal than external probes
meanRTAngryInt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes on angry faces
meanRTAngryExt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes on angry faces
abAngry Attention bias (difference: int-ext) for angry faces
=> positive: participant was faster to attend to external than internal probes for angry faces
=> negative: participant was faster to attend to internal than external probes for angry faces
meanRTHappyInt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes on happy faces
meanRTHappyExt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes on happy faces
abHappy Attention bias (difference: int-ext) for happy faces
=> positive: participant was faster to attend to external than internal probes for happy faces
=> negative: participant was faster to attend to internal than external probes for happy faces
meanRTNeutralInt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes on neutral faces
meanRTNeutralExt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes on neutral faces
abNeutral Attention bias (difference: int-ext) for neutral faces
=> positive: participant was faster to attend to external than internal probes for neutral faces
=> negative: participant was faster to attend to internal than external probes for neutral faces
meanRTObjectInt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes on objects
meanRTObjectExt Mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes on objects
abObject Attention bias (difference: int-ext) for objects
=> positive: participant was faster to attend to external than internal probes for objects
=> negative: participant was faster to attend to internal than external probes for objects
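All of the bias scores above follow one recipe: mean internal-probe RT minus mean external-probe RT, computed over valid spacebar responses (latency at or below the response timeout). A minimal sketch of that computation (a hypothetical helper, not the script's actual Inquisit expressions):

```python
from statistics import mean

def attention_bias(int_latencies_ms, ext_latencies_ms, timeout_ms=3000):
    """ab = mean RT (internal) - mean RT (external), using only responses
    at or below the response timeout. A positive value means the participant
    was faster to attend to external probes; negative means faster to internal.
    """
    valid_int = [t for t in int_latencies_ms if t <= timeout_ms]
    valid_ext = [t for t in ext_latencies_ms if t <= timeout_ms]
    return mean(valid_int) - mean(valid_ext)
```

For example, `attention_bias([500, 700], [400, 600])` returns 100 (faster to external probes); a latency above 3000 ms is excluded before averaging.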

Raw Data

File Name: ieat_raw*.iqdat

Data Fields

Name Description
build Inquisit version number
computer.platform Device platform: win | mac | ios | android
date Date the session was run
time Time the session was run
subject, group The current subject and group number
session Session number
blockCode Name of the current block
blockNum Number of the current block
trialCode Name of the current trial
trialNum Number of the current trial
testRound Counts the testrounds (1-4)
blockCount Counts the test blocks (1-4) per test round
trialCount Counts the trials per test block (1-8)
imageCategory The image category tested in the current block
angry, happy, neutral, object
probeType 1 = internal probe; 2 = external probe
probeOnset Probe stimulus onset asynchrony in ms (onset of probe after image onset)
picture.currentpic.currentItem Stores the currently presented picture
picType The type of the current picture (female-angry, male-angry, female-happy, male-happy etc.)
response The participant's response (scancode of response buttons)
57 = spacebar press
0 = no response
correct Accuracy of response
1 = correct response (spacebar press within 3000ms of presentation of probe)
0 = otherwise
latency The response latency (in ms); measured from onset of probe
nProbes Counts the number of probes per pictype in a block
nProbesTestRound Counts the number of total probes run in a test round
probeseq Stores the current fixed probe sequence (one of 4 possible)
probepattern Stores the current probe timing pattern
1 designates 'x1' (see list.x1)
2 designates 'x2' (see list.x2)
3 designates 'y1' (see list.y1)
4 designates 'y2' (see list.y2)

Parameters

The procedure can be adjusted by setting the following parameters.

Name Description Default
responseTimeout The response timeout in ms;
after 3000ms the response will be coded as an error response
3000ms