___________________________________________________________________________________________________________________	

								Internal–External Attention Task
___________________________________________________________________________________________________________________

Script Author: David Nitz (dave@millisecond.com) for Millisecond Software, LLC
last updated:  06-30-2020 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 06-30-2020 Millisecond Software

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________
	This script implements the Internal-External Attention Task, which uses a probe-detection paradigm
	to measure internal attention (e.g., to bodily sensations) vs. external attention
	(e.g., to visual stimuli on the screen).
											
	For details on the procedure implemented by this script, refer to:

	Mansell, W., Clark, D. M., & Ehlers, A. (2003). Internal versus external attention in 
	social anxiety: an investigation using a novel paradigm. Behaviour Research and 
	Therapy, 41, 555–572.

___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________
Participants are asked to react to a series of probes as quickly as possible while looking at a series of pictures 
on a computer screen, each displayed for approximately 25 s. The pictures include male and female faces with happy, 
angry, and neutral expressions, as well as objects. 
Two types of probes are used: participants are asked to press the spacebar whenever they 
feel a slight vibration, ostensibly caused by changes in their physiology detected by a sensor (internal probe), 
and whenever they see an "E" flashed onto the screen (external probe).

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 8 minutes to complete.

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________
The default data stored in the data files are:

(1) Raw data file: 'ieat_raw*.iqdat' (a separate file for each participant)

build:							the specific Inquisit version ('build') that was run
computer.platform:				the platform the script was run on (win/mac/ios/android)
date, time: 					date and time the script was run 
subject, group: 				the current subject and group number
session:						the current session id

blockcode, blocknum:			the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 			the name and number of the currently recorded trial (built-in Inquisit variable)
									Note: trialnum is a built-in Inquisit variable; it counts all trials run, 
									including those that do not store data to the data file, such as feedback trials
									
									
values.testround:				counts the test rounds (1-4)	
values.blockCount:				counts the test blocks (1-4) per test round
values.trialcount:				counts the trials per test block (1-8)

values.imageCategory:			the image category tested in the current block:
								angry, happy, neutral, object
									
values.probetype:				1 = internal probe; 2 = external probe									
									
values.probeOnset:				probe stimulus onset asynchrony in ms (onset of probe after image onset)
picture.currentpic.currentitem: stores the currently presented picture
values.pictype:					the type of the current picture (female-angry, male-angry, female-happy, male-happy, etc.)									
																				
response:						the participant's response (scancode of response buttons)
								57 = spacebar press
								0 = no response
										
correct:						accuracy of response: 
								1 = correct response (spacebar press within 3000ms of presentation of probe); 
								0 = otherwise (see the coding sketch at the end of this list)
										
latency: 						the response latency (in ms); measured from onset of probe


values.nprobes:					counts the number of probes per pictype in a block
values.nProbesTestround:		counts the total number of probes run in a test round

values.probeseq:				stores the current fixed probe sequence (one of 4 possible)

values.probepattern: 			stores the current probe timing pattern:
								1 designates 'x1' (see list.x1)
								2 designates 'x2' (see list.x2)
								3 designates 'y1' (see list.y1)
								4 designates 'y2' (see list.y2)
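
As an illustration of the accuracy coding above, here is a minimal Python sketch (not the script's own code;
the column names follow the raw data file described in this list):

	# Hedged sketch: recode 'correct' from the raw columns 'response' and 'latency'.
	RESPONSE_TIMEOUT_MS = 3000  # default responseTimeOut
	SPACEBAR = 57               # scancode of the spacebar

	def code_correct(response, latency):
	    """1 = spacebar press within the response timeout, 0 = otherwise."""
	    return int(response == SPACEBAR and latency <= RESPONSE_TIMEOUT_MS)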

(2) Summary data file: 'ieat_summary*.iqdat' (a separate file for each participant)

computer.platform:				the platform the script was run on (win/mac/ios/android)
script.startdate:				date script was run
script.starttime:				time script was started
script.subjectid:				assigned subject id number
script.groupid:					assigned group id number
script.sessionid:				assigned session id number
script.elapsedtime:				time it took to run script (in ms); measured from onset to offset of script
script.completed:				0 = script was not completed (prematurely aborted); 
								1 = script was completed (all conditions run)
																		
									
expressions.propCorrect_Int:	proportion of spacebar presses with latencies <= responseTimeout (here: 3000ms) for internal probes
expressions.propCorrect_Ext:	proportion of spacebar presses with latencies <= responseTimeout (here: 3000ms) for external probes

expressions.meanRT_Int:			mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes
expressions.meanRT_Ext:			mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes
expressions.AB:					attention bias (difference: int - ext; see the sketch at the end of this list)
									=> positive: participant was faster to attend to external than internal probes
									=> negative: participant was faster to attend to internal than external probes
			
expressions.meanRT_angry_int:	mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes on angry faces
expressions.meanRT_angry_ext:	mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes on angry faces
expressions.AB_angry:				attention bias (difference: int-ext) for angry faces
									=> positive: participant was faster to attend to external than internal probes for angry faces
									=> negative: participant was faster to attend to internal than external probes for angry faces
										
expressions.meanRT_happy_int:	mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes on happy faces
expressions.meanRT_happy_ext:	mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes on happy faces
expressions.AB_happy:			attention bias (difference: int-ext) for happy faces
									=> positive: participant was faster to attend to external than internal probes for happy faces
									=> negative: participant was faster to attend to internal than external probes for happy faces
										
expressions.meanRT_neutral_int:	mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes on neutral faces
expressions.meanRT_neutral_ext:	mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes on neutral faces
expressions.AB_neutral:			attention bias (difference: int-ext) for neutral faces
									=> positive: participant was faster to attend to external than internal probes for neutral faces
									=> negative: participant was faster to attend to internal than external probes for neutral faces
										
expressions.meanRT_object_int:	mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for internal probes on objects
expressions.meanRT_object_ext:	mean latency (in ms) of pressing spacebar with latencies <= responseTimeout for external probes on objects
expressions.AB_object:			attention bias (difference: int-ext) for objects
									=> positive: participant was faster to attend to external than internal probes for objects
									=> negative: participant was faster to attend to internal than external probes for objects
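
As an illustration, the summary measures above can be reproduced from the raw data file along these lines
(a hedged Python sketch, not the script's own code; the keys follow the raw data dictionary, and filtering
rows by values.imageCategory first yields the per-category variants):

	def summarize(rows):
	    """rows: list of dicts with keys 'probetype', 'correct', 'latency'."""
	    def stats(ptype):  # ptype: 1 = internal, 2 = external (values.probetype)
	        trials = [r for r in rows if r["probetype"] == ptype]
	        hits = [r for r in trials if r["correct"] == 1]
	        prop = len(hits) / len(trials) if trials else float("nan")
	        mean_rt = sum(r["latency"] for r in hits) / len(hits) if hits else float("nan")
	        return prop, mean_rt

	    prop_int, rt_int = stats(1)
	    prop_ext, rt_ext = stats(2)
	    return {"propCorrect_Int": prop_int, "propCorrect_Ext": prop_ext,
	            "meanRT_Int": rt_int, "meanRT_Ext": rt_ext,
	            "AB": rt_int - rt_ext}  # positive => faster to external probes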
										
									
___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________	

Practice: 8 practice trials (1 practice image with 4 external and 4 internal probe trials; order is fixed)

Test:
* four test rounds are run (32 trials per test round => 128 trials total)
* each test round runs four image category blocks (angry, happy, neutral faces, objects) - the image categories run in a blocked design:
	* each image category block runs 8 trials (4 external, 4 internal) => each image category runs 32 trials across the four test rounds (16 internal, 16 external) 
			* there are 4 different probe sequences: each image category block selects one of the sequences randomly, with the constraint
			that across the four test rounds, each image category runs each probe sequence once
			* there are 4 different probe SOA patterns: each image category block selects one of the four patterns
			(see list.x1, list.x2, list.y1, list.y2 for more information) randomly, with the constraint that each
			image category runs each of the four patterns once across the four test rounds (see the sketch below)
			
* the individual pictures of each image category block are selected randomly (e.g. the four angry pictures are randomly 
assigned to test rounds 1-4)
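
The constraints above amount to drawing, for each image category, the four probe sequences, the four SOA
patterns, and the four category pictures without replacement across the four test rounds. A minimal Python
sketch of this idea (hypothetical names, not the script's actual implementation):

	import random

	CATEGORIES = ["angry", "happy", "neutral", "object"]

	def draw_schedule():
	    schedule = {}
	    for cat in CATEGORIES:
	        sequences = random.sample(range(1, 5), 4)  # probe sequences 1-4, each used once
	        patterns = random.sample(range(1, 5), 4)   # SOA patterns 1-4 (x1, x2, y1, y2), each used once
	        pictures = random.sample(range(1, 5), 4)   # the category's 4 pictures, one per round
	        schedule[cat] = list(zip(sequences, patterns, pictures))  # one triple per test round
	    return schedule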
	
	
* the four test rounds differ in the order of the four image category blocks (order is determined by a Latin Square)
* four groupnumbers run 4 different sequences of the four testrounds:
Groupnumber 1: runs "angry, happy, neutral, object" (round1), "happy, neutral, object, angry" (round2), "neutral, object, angry, happy" (round3), "object, angry, happy, neutral" (round4)
Groupnumber 2: runs "happy, neutral, object, angry" (round1), "neutral, object, angry, happy" (round2), "object, angry, happy, neutral" (round3), "angry, happy, neutral, object" (round4) 
Groupnumber 3: runs "neutral, object, angry, happy" (round1), "object, angry, happy, neutral" (round2), "angry, happy, neutral, object" (round3), "happy, neutral, object, angry" (round4) 
Groupnumber 3: runs "object, angry, happy, neutral" (round1), "angry, happy, neutral, object" (round2), "happy, neutral, object, angry" (round3), "neutral, object, angry, happy" (round4)

	
Trial Sequence:
Stimulus (e.g. an object picture) displayed for the assigned SOA -> probe (for 100ms) -> response (measured from onset of probe; max response time: 3000ms)
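
Schematically, one probe trial unfolds as follows (an illustrative Python sketch that merely prints the
timeline; the SOA value passed in is hypothetical):

	def describe_probe_trial(soa_ms, probe_type):
	    print("t = 0 ms: image onset (the image stays on screen)")
	    print(f"t = {soa_ms} ms: {probe_type} probe onset, shown for 100 ms")
	    print(f"t = {soa_ms} ms: 3000 ms response window opens (latency timed from probe onset)")

	describe_probe_trial(soa_ms=1500, probe_type="external")  # hypothetical SOA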

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________
	NOTE: The stimuli used in this script are not the same as in Mansell et al. (2003).
	Original face and object images are not in the public domain and/or may not be
	redistributed.

	Face images courtesy of the Face-Place Face Database Project
	(http://www.face-place.org/).
	Copyright 2008, Michael J. Tarr, Center for the Neural Basis of 
	Cognition, Carnegie Mellon University
	(http://www.tarrlab.org/).
	Funding provided by NSF award 0339122.
	
	Face stimuli released under the Creative Commons Attribution Share Alike license
	(https://creativecommons.org/licenses/by-sa/3.0/).
	
	Object images courtesy of the Object Data Bank.
	Copyright 1996, Brown University, Providence, RI.
	All Rights Reserved.

	Permission to use, copy, modify, and distribute this software and its 
	documentation for any purpose other than its incorporation into a 
	commercial product is hereby granted without fee, provided that the above 
	copyright notice appear in all copies and that both that copyright notice 
	and this permission notice appear in supporting documentation, and that the 
	name of Brown University not be used in advertising or publicity pertaining 
	to distribution of the software without specific, written prior permission.  
	Images produced by this software are also copyright Brown University and 
	may not be used for any commercial purpose.

	BROWN UNIVERSITY DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, 
	INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY 
	PURPOSE. IN NO EVENT SHALL BROWN UNIVERSITY BE LIABLE FOR ANY SPECIAL, 
	INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM 
	LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE 
	OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR 
	PERFORMANCE OF THIS SOFTWARE.
	
___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________
see section Editable Instructions

___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template and can therefore always "mess" with the entire code 
to further customize your experiment.

The parameters you can change are:

/responseTimeOut:				the response timeout in ms (default: 3000ms);
								after 3000ms the response is coded as an error response