									CROSS MODAL PRIMING

SCRIPT INFO

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 09-13-2013
last updated:  01-05-2016 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Copyright ©  01-05-2016 Millisecond Software


BACKGROUND INFO

											*Purpose*
This script implements a basic cross modal priming paradigm (audio -> visual) as generously shared by Dr. Meghan Sumner. 
The basic question this script addresses is whether a typically accented ("within-accent") voice facilitates recognition 
of the visual target compared to an atypically accented voice.

Millisecond Software thanks Dr. Sumner for generously supporting the development of this script and providing the stimuli!

For an example of a similar cross modal priming paradigm see:
Sumner, M. (2013). A phonetic explanation of pronunciation variant effects.
JASA Express Letters [http://dx.doi.org/10.1121/1.4807432]. Published Online 5 June 2013.


											  *Task*
Participants listen to audio recordings of words ("primes"), each followed by the visual presentation of a 
word or nonword ("target") in the middle of the computer screen. Participants are asked
to perform a simple classification task on the visual word/nonword: press one key if the visual target is a word 
and another key if it is a nonword/pseudoword. Participants are encouraged to respond as quickly as possible.

DATA FILE INFORMATION: 
The default data stored in the data files are:

(1) Raw data file: 'CrossModalPriming_raw*.iqdat' (a separate file for each participant)

build:							Inquisit build
computer.platform:				the platform the script was run on
date, time, subject:			date and time script was run with the current subjectnumber 
blockcode, blocknum:			the name and number of the current block
trialcode, trialnum: 			the name and number of the currently recorded trial
								(Note: not all trials that are run record data) 
/currentISI:					the currently assigned interstimulus interval (ISI: time between offset of prime and onset of target)
/condition:						1 = "condition1" ("typical" AE); 2 = "condition2" ("atypical" AE)
/relatedness:					0 = N/A (filler); 1 = related target; 2 = unrelated target
/prime_itemnumber:				the itemnumber of the current prime
/prime:							stores the current prime wav file played
/target:						stores the current visual target associated with the selected prime
stimulusitem:					the presented stimuli in order of trial presentation
response:						the participant's response (scancode of response key)
correct:						the correctness of the response
latency: 						the response latency
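
For orientation, columns like these are typically declared with Inquisit's <data> element. The following is a minimal
sketch, assuming the slash-prefixed entries above are stored as values; the actual element in the script may differ:

<data>
/ columns = (build, computer.platform, date, time, subject, blockcode, blocknum, trialcode, trialnum,
			values.currentISI, values.condition, values.relatedness, values.prime_itemnumber,
			values.prime, values.target, stimulusitem, response, correct, latency)
/ separatefiles = true
</data>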

(2) Summary data file: 'CrossModalPriming_summary*.iqdat' (a separate file for each participant)

script.startdate:				date script was run
script.starttime:				time script was started
script.subjectid:				subject id number
script.groupid:					group id number
script.elapsedtime:				time it took to run script (in ms)
computer.platform:				the platform the script was run on
/completed:						0 = script was not completed; 1 = script was completed (all conditions run)
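
The summary columns would correspondingly be declared with a <summarydata> element; a minimal sketch, assuming
/completed (and the DVs listed further below) are stored as values/expressions:

<summarydata>
/ columns = (script.startdate, script.starttime, script.subjectid, script.groupid, script.elapsedtime,
			computer.platform, values.completed)
</summarydata>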


/usefixedISI:					true = a fixed ISI is to be used in the study (set under /fixedISI)
								false = different ISIs are to be used (see lists below under Editable Lists)
/fixedISI:						sets the fixed ISI (if a fixed ISI is to be used)
										(Note: to customize randomly determined ISIs for each type of pairing, go to Editable Lists for an example)
/maxtargetpresentation:			sets the max duration of the target/response trial (Note: trial is response terminated unless it takes longer
								than parameters.maxtargetpresentation)
/ITI:							ITI (intertrial interval): sets the pause between the end of one trial sequence (priming and task)
								and the start of the next
/fontheight:					size of the visual targets

summary DVs:
/errorrate_condition1R -
/errorrate_condition2filler: 	error rate (proportion of incorrectly categorized targets) in each experimental condition
/meanRT_condition1R -
/meanRT_condition2filler:		mean latency of correctly categorizing words/nonwords in each condition

Difference Scores (only correct responses considered):
/Diff1:							Difference Score between mean latency in Condition 1UR (unrelated) and mean latency in Condition 1R (related)
									=> meanRT_condition1UR is expected to be slower (larger latency) and therefore Diff1 is expected to be positive
/Diff2: 						Difference Score between mean latency in Condition 2UR (unrelated) and mean latency in Condition 2R (related)
									=> meanRT_condition2UR is expected to be slower (larger latency) and therefore Diff2 is expected to be positive
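
Expressed as Inquisit expressions, the two difference scores are plain subtractions of the condition means; a
hypothetical sketch using the DV names listed above (the actual definitions in the script may differ):

<expressions>
/ Diff1 = expressions.meanRT_condition1UR - expressions.meanRT_condition1R
/ Diff2 = expressions.meanRT_condition2UR - expressions.meanRT_condition2R
</expressions>

A positive score therefore means that related targets were responded to faster than unrelated targets within that
accent condition.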


EXPERIMENTAL SET-UP: 
2 accents ("typical" American English (AE) vs. an "atypical" (ethnic) variety of AE) X 2 levels of prime-target relatedness (Related vs. Unrelated)
R(elated) = identical to the prime in this script

- 12 prime-target pairs in each of the 4 experimental conditions; none of the targets in these conditions are nonwords
- 96 filler pairs that provide nonword targets
=> half of the trials present word targets and half present nonword targets

*Test Blocks:
- option to run test blocks in a blocked design by condition or in a mixed design (default)
	=> go to EXPERIMENT section and follow additional instructions 

*Trial Sequence:
audio prime -> ISI -> visual target (response terminated or until parameters.maxtargetpresentation) -> ITI

- under Editable Values it can be set whether a fixed interstimulus interval (ISI: time between offset of prime and onset of target)
is to be used or whether varying ISIs should be used (controlled via editable lists) (default: fixed at 100ms)
- an intertrial interval (ITI) can be set under Editable Values (default: 1000ms); see the sketch below for how these
settings enter the target/response trial
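
In Inquisit terms, the response part of this sequence comes down to a response-terminated target trial with a timeout
and an ITI pause. A simplified, hypothetical sketch (stimulus and trial names here are illustrative, and the prime
playback plus ISI are handled by a preceding trial in the actual script):

<trial target>
/ stimulusframes = [1 = targetText]
/ validresponse = (parameters.responsekey1, parameters.responsekey2)
/ timeout = parameters.maxtargetpresentation
/ posttrialpause = parameters.ITI
</trial>

The trial ends as soon as one of the two response keys is pressed, or after parameters.maxtargetpresentation has
elapsed, whichever comes first.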

!!!Note: the response keys for words/pseudowords can be counterbalanced by groupnumber.

STIMULI:
* audio stimuli: This script runs with audio stimuli provided by Dr. Sumner
* visual stimuli: This script runs with visual stimuli provided by Dr. Sumner
The visual stimuli are presented in Times New Roman; the font height is coded as a percentage of screen height and can be 
edited under Editable Values.

INSTRUCTIONS:
Instructions are made by Millisecond Software and can easily be customized under
EDITABLE CODE -> Editable Instructions

EDITABLE CODE:
check below for (relatively) easily editable parameters, stimuli, instructions etc. 
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code to further customize your experiment.

The parameters you can change are:

Responsekey Set-up:
/responsekey1-
/responsekey2_label:					sets the 2 response keys (by scancode) and labels them
										(default: 30 (A) and 38 (L))
										Note: scancodes are listed under Tools -> Keyboard Scancodes
										Note: the response keys for words/pseudowords can be counterbalanced by groupnumber in this script

Duration Set-ups:
/maxtargetpresentation:					sets the max duration of the target/response trial (Note: trial is response terminated unless it takes longer
										than parameters.maxtargetpresentation)
/ITI:									ITI (intertrial interval): sets the pause between the end of one trial sequence (priming and task)
										and the start of the next (default: 2000ms)
/usefixedISI:							true = a fixed ISI is to be used in the study (set under /fixedISI) (default option in this script)
										false = different ISIs are to be used (see lists below under Editable Lists)
/fixedISI:								sets the fixed ISI (if a fixed ISI is to be used)
										(Note: to customize randomly determined ISIs for each type of pairing, go to Editable Lists for an example)

/fontheight:							size of the visual target (non)words
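
Taken together, parameters like these typically live in a single <parameters> element near the top of the script.
A sketch limited to the defaults quoted in this document (illustrative only; /ITI, /maxtargetpresentation, and
/fontheight would be set the same way):

<parameters>
/ responsekey1 = 30
/ responsekey1_label = "A"
/ responsekey2 = 38
/ responsekey2_label = "L"
/ usefixedISI = true
/ fixedISI = 100
</parameters>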
