User Manual: Inquisit Spatial Perspective Taking Test (Children) - for touchscreens/mouse


___________________________________________________________________________________________________________________	

						Perspective Taking Task (for children) - with touch/mouse input
___________________________________________________________________________________________________________________	


Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 12-19-2023
last updated:  01-03-2024 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 01-03-2024 Millisecond Software

Millisecond Software thanks the E. Bialystok lab for sharing material and scripts!
___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________
This script implements Millisecond Software's version of the Perspective Taking Test (Greenberg et al., 2013);
a test to study perspective-taking skill in children using a child-friendly setup.

The original test was programmed in E-Prime and was translated into Inquisit by
Millisecond Software.

Reference:											

Greenberg, A., Bellana, B., & Bialystok, E. (2013). Perspective-taking ability in bilingual children: 
Extending advantages in executive control to spatial reasoning. 
Cognitive Development, 28(1), 41–50. 
https://doi.org/10.1016/j.cogdev.2012.10.002

___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________

After three practice trials, participants work on 12 trials in which they have to decide how an observer (here: an owl) 
sees a four-block array from one of three different spatial positions (90°, 180°, or 270° counter-clockwise from
the child's position). They respond by selecting one of four options: the correct response, the egocentric error 
(the child's own view of the array), an incorrect choice in which the array is correct but in the wrong orientation 
for the viewer, and an incorrect choice in which the array includes an internal spatial error.

This script uses touch (or mouse) input to collect responses.
___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 7 minutes to complete.

___________________________________________________________________________________________________________________	
DATA OUTPUT DICTIONARY
___________________________________________________________________________________________________________________
The fields in the data files are:

(1) Raw data file: 'perspectivetakingtask_touch_raw*.iqdat' (a separate file for each participant)

build:						The specific Inquisit version used (the 'build') that was run
computer.platform:			the platform the script was run on (win/mac/ios/android)
date, time: 				date and time script was run 
subject:					the current subject id
group: 						the current group id
session:					the current session id

blockCode, blockNum:		the name and number of the current block (built-in Inquisit variable)
trialCode, trialNum: 		the name and number of the currently recorded trial (built-in Inquisit variable)
								Note: trialNum is a built-in Inquisit variable; it counts all trials run, even those
								that do not store data to the data file. 
																
phase: 						practice vs test
blockCounterPerPhase: 		the number of blocks run in the current phase
trialCounterPerBlock:		the number of trials run in the current block

trialIndex:					the trialIndex (1-12)						
target:						stores the selected block target image
owl:						stores the selected owl image
selectOwl:					stores the item number of the selected owl image

option1:					stores the option1 image 
option2:					stores the option2 image 
option3:					stores the option3 image
option4:					stores the option4 image

selectedOption1:			stores the response category of option1 (test: correct)
selectedOption2:			stores the response category of option2 (test: oriented -> correct front-back relationship of blocks but with a left-right reversal)
selectedOption3:			stores the response category of option3 (test: egocentric -> the child's view of the array)
selectedOption4:			stores the response category of option4 (test: structured -> correct internal structure of blocks but incorrect orientation relative to the owl's position)
							Note: the assignment of options to screen positions is randomized for each trial		

//position allocations:	'top-left', 'top-right', 'bottom-left', 'bottom-right'						
option1Position:			screen position for option1
option2Position:			screen position for option2
option3Position:			screen position for option3
option4Position:			screen position for option4

response:					the participant's response 					
selectedOption: 			option1, option2, option3, option4
resp: 						the response category (correct, oriented, egocentric, structured)
corrResp:					stores the correct response key
correct: 					1 = correct response; 0 = other
latency:					response latency (in ms); measured from: onset of response options

//screen positions of the 4 response options
picture.option1.x: 	horizontal screen position of option1 (in screen percentages)
picture.option1.y: 	vertical screen position of option1 (in screen percentages)
picture.option2.x: 	horizontal screen position of option2 (in screen percentages) 
picture.option2.y: 	vertical screen position of option2 (in screen percentages)
picture.option3.x: 	horizontal screen position of option3 (in screen percentages) 
picture.option3.y: 	vertical screen position of option3 (in screen percentages)
picture.option4.x: 	horizontal screen position of option4 (in screen percentages) 
picture.option4.y: 	vertical screen position of option4 (in screen percentages)								
								

(2) Summary data file: 'perspectivetakingtask_touch_summary*.iqdat' (a separate file for each participant)

inquisit.version:			Inquisit version run
computer.platform:			the platform the script was run on (win/mac/ios/android)
startDate:					date script was run
startTime:					time script was started
subjectId:					assigned subject id number
groupId:					assigned group id number
sessionId:					assigned session id number
elapsedTime:				time it took to run script (in ms); measured from onset to offset of script
completed:					0 = script was not completed (prematurely aborted); 
							1 = script was completed (all conditions run)	

countCorrect:				number of Correct responses 
countOriented:				number of Oriented responses 
countEgocentric:			number of Egocentric responses 
countStructured:			number of Structured responses 

propCorrect:				proportion Correct responses 
meanCorrRT:					mean response time (in ms) of correct responses
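The summary measures above can be recomputed directly from the raw data file if needed. The sketch below is an illustration, not code from the script itself; it assumes the raw .iqdat file is tab-delimited with the 'phase', 'resp', 'correct', and 'latency' columns described in the raw data dictionary above.

```python
import csv

def summarize(path):
    """Recompute the summary measures from a raw data file.

    Assumes a tab-delimited .iqdat file with 'phase', 'resp',
    'correct', and 'latency' columns as documented above.
    Practice trials (phase != 'test') are excluded.
    """
    counts = {"correct": 0, "oriented": 0, "egocentric": 0, "structured": 0}
    correct_latencies = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if row["phase"] != "test":
                continue
            counts[row["resp"]] += 1
            if row["correct"] == "1":
                correct_latencies.append(int(row["latency"]))
    n = sum(counts.values())
    prop_correct = counts["correct"] / n if n else 0.0
    mean_corr_rt = (sum(correct_latencies) / len(correct_latencies)
                    if correct_latencies else None)
    return counts, prop_correct, mean_corr_rt
```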
							
___________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________	

(1) Practice: 3 examples
- the examples are presented in fixed order
(assignment of option1-option4 to screen positions is fixed as well but can be randomized if needed)
- feedback provided
- practice block can be repeated as often as needed

(2) Test: 12 trials
- three perspectives (looking from the left, from the back, from the right) x 4 blocks (blocks_0, blocks_90, blocks_180, blocks_270)
- order of 12 trials randomized
- screen position of option1-option4 randomized
=> Note: assigned screen position is tied to the response key for that option
T (left, top)
U (right, top)
G (left, bottom)
J (right, bottom)
- no feedback during the test
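The per-trial randomization of option1-option4 over the four screen positions amounts to an independent shuffle on every test trial. A minimal stand-in for that logic (an illustration only, not the Inquisit script's actual randomization code):

```python
import random

POSITIONS = ["top-left", "top-right", "bottom-left", "bottom-right"]
OPTIONS = ["option1", "option2", "option3", "option4"]  # correct, oriented, egocentric, structured

def assign_positions(rng=random):
    """Randomly pair each response option with one of the four
    screen positions, as done anew on every test trial."""
    shuffled = POSITIONS[:]
    rng.shuffle(shuffled)
    return dict(zip(OPTIONS, shuffled))
```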

Trial Sequence:
-> target block alone (2000ms)
-> target block with Owl (2000ms)
-> target block, Owl and 4 response options until response is made
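With the default delayDurationMS of 2000 ms (see the 'Editable Code' section below), the event onsets in the trial sequence above work out as follows. This is an illustrative calculation, not code from the script:

```python
delay_duration_ms = 2000  # default value of the editable delayDurationMS parameter

# onsets relative to trial start:
target_onset = 0                        # target block alone
owl_onset = delay_duration_ms           # target block with owl
options_onset = 2 * delay_duration_ms   # target, owl, and the 4 response options
```

Response latency is then measured from options_onset, i.e. 4000 ms after the target block first appears.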

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________

provided by E. Bialystok lab - can be edited under section 'Editable Stimuli'
___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________

provided by E. Bialystok lab - can be edited under section 'Editable Instructions'
___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
check below for (relatively) easily editable parameters, stimuli, instructions etc. 
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code 
to further customize your experiment.

The parameters you can change are:

//timing parameters
/ delayDurationMS = 2000					//the delay duration (in ms) of owl/response options
											//note: the response options appear 2*2000ms = 4000ms after onset of target block
											
/ practiceFeedbackDurationMS = 1500		//the feedback duration (in ms) during practice