___________________________________________________________________________________________________________________
	
										PART-WHOLE FACE RECOGNITION TASK
___________________________________________________________________________________________________________________	

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 11-19-2014
last updated:  06-30-2020 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 06-30-2020 Millisecond Software
___________________________________________________________________________________________________________________
BACKGROUND INFO
___________________________________________________________________________________________________________________	

This script runs a Part-Whole Face Recognition Task.

The implemented procedure is based on:

Tanaka, J.W. & Farah, M.J. (1993). Parts and wholes in face recognition.
The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 46, 225-245.
(Experiment 1)

___________________________________________________________________________________________________________________
TASK OVERVIEW	
___________________________________________________________________________________________________________________	

Participants are asked to memorize intact and scrambled faces of 6 men and learn to associate them
with their corresponding names. In a learning phase, participants view the faces in random order, 
each accompanied by a verbal recording of the corresponding name, with each face presented for 5s 
(5 repetitions per face). In a test phase, participants work through 2 forced-choice 
recognition tests. In one test, they are presented with whole faces and have to choose
between the correct face and a foil (which differs in one feature). In the other test, they are presented
with isolated features (eyes, nose, mouth) and have to choose between the correct feature and a foil.

___________________________________________________________________________________________________________________	
TASK DURATION
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 8 minutes to complete.

___________________________________________________________________________________________________________________	
DATA FILE INFORMATION 
___________________________________________________________________________________________________________________	 
The default data stored in the data files are:

(1) Raw data file: 'partwholerecognition_raw*.iqdat' (a separate file for each participant)

build:								the specific Inquisit version ('build') that was run
computer.platform:					the platform the script was run on (win/mac/ios/android)
date, time: 						date and time the script was run 
subject, group: 					the current subject and group number
session:							the current session id

blockcode, blocknum:				the name and number of the current block (built-in Inquisit variable)
trialcode, trialnum: 				the name and number of the currently recorded trial (built-in Inquisit variable)
										Note: trialnum is a built-in Inquisit variable; it counts all trials run, including
										those that do not store data to the data file (such as feedback trials). Thus, trialnum 
										may not reflect the number of main trials run per block.
																				
										
values.conditionorder:				1 = intact -> scrambled; 
									2 = scrambled -> intact
									
values.stimgrouporder:				1 = group1 stimuli-> group2 stimuli; 
									2 = group2 stimuli-> group1 stimuli
									
values.part:						1 = part 1; 
									2 = part 2 (same task is run once with intact faces and once with scrambled faces)

values.condition:					stores the currently run condition (intact vs. scrambled)
values.currentgroup:				stores the currently used stimuli group

values.targetposition:				position of the correct face/feature:
										1 = left;  
										2 = right 
										
values.itemnumber:					stores the currently used itemnumber
stimulusitem.1-3:					the presented stimuli in order of trial presentation
response:							the participant's response
correct:							the correctness of the response (1 = correct; 0 = incorrect)
latency: 							the response latency (in ms); measured from: onset of face image


(2) Summary data file: 'partwholerecognition_summary*.iqdat' (a separate file for each participant)

computer.platform:					the platform the script was run on (win/mac/ios/android)
script.startdate:					date script was run
script.starttime:					time script was started
script.subjectid:					assigned subject id number
script.groupid:						assigned group id number
script.sessionid:					assigned session id number
script.elapsedtime:					time it took to run script (in ms); measured from onset to offset of script
script.completed:					0 = script was not completed (prematurely aborted); 
									1 = script was completed (all conditions run)

expressions.propcorr_Iwhole:		proportion correct I(ntact) whole (faces) recognition trials
expressions.propcorr_Ipart:			proportion correct I(ntact) part (feature) recognition trials
expressions.propcorr_Swhole:		proportion correct S(crambled) whole (faces) recognition trials
expressions.propcorr_Spart:			proportion correct S(crambled) part (feature) recognition trials
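The four summary proportions above can be recomputed from the raw data file. The following is an illustrative Python sketch, not part of the Inquisit script; the trial-row keys ('condition', 'testtype', 'correct') are assumptions standing in for the raw-data columns described above:

```python
# Sketch: proportion correct for one condition x test-type cell,
# computed from raw trial rows (illustration only; the actual values
# are computed inside the Inquisit script's summary expressions).

def proportion_correct(trials, condition, testtype):
    """trials: list of dicts with keys 'condition' ('intact'/'scrambled'),
    'testtype' ('whole'/'part'), and 'correct' (1/0)."""
    cell = [t for t in trials
            if t["condition"] == condition and t["testtype"] == testtype]
    if not cell:
        return None  # no trials recorded for this cell
    return sum(t["correct"] for t in cell) / len(cell)

trials = [
    {"condition": "intact", "testtype": "whole", "correct": 1},
    {"condition": "intact", "testtype": "whole", "correct": 0},
    {"condition": "intact", "testtype": "part",  "correct": 1},
]
print(proportion_correct(trials, "intact", "whole"))  # -> 0.5
```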

___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP
___________________________________________________________________________________________________________________	
4 experimental groups: assignment by groupnumber (see section EXPERIMENT for more info)
(1) 2 conditions (intact faces vs. scrambled faces): tested within-subjects in a blocked format; 
order of conditions counterbalanced
(2) 2 groups of stimuli (group1 vs. group2): used for intact and scrambled conditions 
(tested within-subjects in a blocked format); order of groups counterbalanced
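The 4-group counterbalancing above can be sketched as follows. This is a Python illustration only, not part of the Inquisit script, and the specific mapping of group numbers to orders is a hypothetical example (the actual mapping is defined in the script's EXPERIMENT section):

```python
# Hypothetical illustration of the 2x2 counterbalancing scheme:
# 4 groups fully cross condition order with stimulus-group order.

def assign_orders(groupnumber):
    """Map a group number (1-4) to (conditionorder, stimgrouporder).

    conditionorder: 1 = intact -> scrambled, 2 = scrambled -> intact
    stimgrouporder: 1 = group1 -> group2,    2 = group2 -> group1
    """
    index = (groupnumber - 1) % 4            # cycle through the 4 cells
    conditionorder = 1 if index < 2 else 2
    stimgrouporder = 1 if index % 2 == 0 else 2
    return conditionorder, stimgrouporder

for g in range(1, 5):
    print(g, assign_orders(g))
```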

2 Parts: intact vs. scrambled
each part runs a 
(1) Learning Phase: 5 blocks of 6 faces, random order of faces within each block, 
faces are presented for 5s (default) together with a recording of the assigned name (as well as the written name)
(30 trials total)
!NOTE: to make the learning trials self-paced, go to section TRIALS -> learningphase and follow the instructions there
(2) Test Phase (Forced-Choice Recognition Test): 2 blocks -> Whole (whole faces) vs. Parts (individual features); 
order of blocks is randomly determined for each participant and each part (12 trials total)
a) Whole: target faces (presented during learning) are contrasted with foil faces 
(foil faces differ in either the eyes (2), the nose (2), or the mouth (2))
position of targets on the left vs. right side of the screen is randomly determined (half appear on the right) - 6 trials
b) Part: target features (2 eyes, 2 noses, 2 mouths) are contrasted with foil features; 
foil features are the same as used for the corresponding foil faces
position of targets on the left vs. right side of the screen is randomly determined (half appear on the right) - 6 trials
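The balanced left/right target placement described above (half of the 6 trials per test show the target on the right) can be sketched as follows. This is an illustration only; the actual randomization is handled by the Inquisit script's selection settings:

```python
import random

def target_positions(n_trials=6):
    """Return a shuffled list of target positions for one test block:
    half 1 (left) and half 2 (right), in random order."""
    positions = [1] * (n_trials // 2) + [2] * (n_trials - n_trials // 2)
    random.shuffle(positions)
    return positions

print(target_positions())
```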

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________	
2 groups of stimuli each with 6 male faces (12 faces total).
Stimuli are not original to Tanaka & Farah (1993). Stimuli are provided by Millisecond Software and can 
be replaced under section Editable Stimuli.
Detailed information regarding stimulus generation is provided.

Note: spoken names are produced by Inquisit's voice-over feature. By default, a generic
female voice is used.

___________________________________________________________________________________________________________________	
INSTRUCTIONS
___________________________________________________________________________________________________________________	
Instructions are not original to Tanaka & Farah (1993). Instructions are provided by Millisecond Software
in the form of html pages. The files can be replaced under section Editable Instructions.
Word changes can be made directly within the files.

To edit html-files: open the respective documents in a plain-text editor such as TextEdit (Mac)
or Notepad (Windows).
	
___________________________________________________________________________________________________________________	
EDITABLE CODE
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template: you can always edit the entire code to 
further customize your experiment.

The parameters you can change are:

/picturepresentationduration:			sets how long the faces are presented, in ms (default: 5000ms)
										-> each trial lasts at least as long as the 
										corresponding name soundfile

Picture Dimensions:
/size_learningpics:						sets the size of the learning pictures (default: 80%)
/size_testfaces:						sets the size of the test faces (default: 80%)
/size_testfeatures:						sets the size of the test features (default: 80%)