User Manual: Inquisit Unexpected Solutions Task


___________________________________________________________________________________________________________________	

								Unexpected Solutions Task
								(Hungarian Instructions)
___________________________________________________________________________________________________________________	


Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 06-14-2024
last updated:  07-16-2024 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 07-16-2024 Millisecond Software

Millisecond Software thanks Dr. Kondé for sharing the original script and Hungarian translations
with the Millisecond task library! 

___________________________________________________________________________________________________________________
BACKGROUND INFO 	
___________________________________________________________________________________________________________________
The script implements the Unexpected Solutions Task, which investigates the verification 
efficiency of obvious and unexpected solutions to simple mathematical equations as a 
function of cognitive load.  
Cognitive load theory served as the theoretical framework for designing the method.

Reference:
Kondé, Z., Kovács, Z., & Kónya, E. (2023). Modeling teachers’ reactions to unexpectedness. 
Learning and Instruction, 86. 
https://doi.org/10.1016/j.learninstruc.2023.101784

The Millisecond library script is based on the original Inquisit script by Kondé et al. 
and was edited to provide summary variables.

___________________________________________________________________________________________________________________
TASK DESCRIPTION	
___________________________________________________________________________________________________________________
The test features a virtual math competition in which students compete against each other in teams 
and individually. Participants play the role of a teacher who assesses students' answers to 
simple math puzzles and announces the winner of the competition. 
In the two stages of the competition, students solve eight simple math equations. 
The teacher evaluates the students' answers one by one to determine whether they are correct. 
Four teams of five compete in the team stage of the competition, and the aggregate score of the 
teams determines the winner. The winner of the individual competition is determined by 
the sum of the correct answers given by each student. 
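
The scoring rule can therefore be summarized as: the winning team is the team with the largest number 
of correct answers summed over its members, and the individual champion is the student with the most 
correct answers. The minimal Python sketch below illustrates this tallying logic; the record format 
and names are hypothetical and are not the variables used by the Inquisit script itself.

from collections import defaultdict

# hypothetical per-answer records collected while the "teacher" verifies answers:
# (team, student, answer_was_correct)
answers = [
    ("A", "A1", True), ("A", "A2", False), ("A", "A3", True),
    ("B", "B1", True), ("B", "B4", True),  ("B", "B5", False),
]

team_scores = defaultdict(int)      # aggregate correct answers per team (team stage)
student_scores = defaultdict(int)   # correct answers per individual student

for team, student, is_correct in answers:
    if is_correct:
        team_scores[team] += 1
        student_scores[student] += 1

# team(s) with the highest aggregate score (ties are possible)
best = max(team_scores.values())
winning_teams = [t for t, s in team_scores.items() if s == best]

# student(s) with the most correct answers overall
best = max(student_scores.values())
champions = [st for st, s in student_scores.items() if s == best]

print(winning_teams, champions)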

___________________________________________________________________________________________________________________	
DURATION 
___________________________________________________________________________________________________________________	
The default set-up of the script takes approximately 40 minutes to complete.

___________________________________________________________________________________________________________________	
DATA OUTPUT DICTIONARY
___________________________________________________________________________________________________________________
The fields in the data files are:

(1) Raw data file: 'unexpectedsolutionstask_raw*.iqdat' (a separate file for each participant)

build:						the Inquisit version ('build') that was run
computer.platform:			the platform the script was run on (win/mac/ios/android)
date, time: 				the date and time the script was run 
subject:					the current subject id
group: 						the current group id
session:					the current session id

blockCode, blockNum:		the name and number of the current block (built-in Inquisit variable)
trialCode, trialNum: 		the name and number of the currently recorded trial (built-in Inquisit variable)
								Note: trialNum is a built-in Inquisit variable; it counts all trials run, even those
								that do not store data to the data file.
													
practice: 					0 = test (trials)
							1 = practice (trials)
							
load: 						"LL" (low extrinsic load: task1) vs. "HL" (high extrinsic load: task2)
loadIndex:					1 = LL; 2 = HL

teamIndex: 					1 = teamA; 2 = teamB, 3 = teamC, 4 = teamD
teamLabel: 					"A", "B", "C", "D" (the current team working)

roundPerTeam:				tracks each round per team (1-8)

selectedEquation: 			the image file of the current equation
equationIndex:				the index of the current equation (refers to item.equations)

countSolutions:				tracks the number of solutions provided (1-5) => order of solution presentation 

selectedCompetitor: 		the image file of the currently selected competitor (team member)
suggestedSolution: 			the solution currently suggested by the team member

solutionCat: 				"A" = obviously correct (low intrinsic load)
							"B" = surprisingly correct (high intrinsic load)
							"C" = incorrect (ctrl)
							
correctResp:				stores the currently correct response key

response:					the participant's response (scancode of the response button)
responseText:				the label of the response key
correct:					correctness of response (1 = correct, 0 = error)
latency:					response latency (in ms); measured from: onset of solution

rt:							the filtered response latency: if below or above the cut-offs, this variable
							will contain "invalid", otherwise it contains the response latency

winningTeam: 				contains the team(s) with the highest score (LL task)
teamChampion: 				contains the team member(s) with highest score for the current team (HL task)
overallChampion:			contains the student(s) with the highest scores across teams and tasks (LL + HL task)

stimulusItem:				the presented stimuli, in hard-coded order


//also (debugging) variables that track performance within and across teams to 
check the winningTeam/teamChampion/overallChampion calculations (see the loading sketch below)
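
A short sketch of how the raw file might be inspected: Inquisit raw data files are plain, tab-delimited 
text, so they can be loaded with standard tools. The snippet below (Python/pandas) uses the field names 
listed above; the file name is a placeholder, and the rt re-derivation simply mirrors the default 
cut-offs documented in this manual (see EDITABLE CODE).

import pandas as pd

# placeholder file name; actual raw files follow the pattern 'unexpectedsolutionstask_raw*.iqdat'
df = pd.read_csv("unexpectedsolutionstask_raw_1.iqdat", sep="\t")

# re-derive the filtered latency: anything outside 250-30000 ms counts as invalid (script defaults)
valid = df["latency"].between(250, 30000)
df["rt_check"] = df["latency"].where(valid)          # NaN where the latency is invalid

# accuracy by solution category on test trials only (practice == 0)
print(df[df["practice"] == 0].groupby("solutionCat")["correct"].mean())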

(2) Summary data file: 'unexpectedsolutionstask_summary*.iqdat' (a separate file for each participant)

inquisit.version:			Inquisit version run
computer.platform:			the platform the script was run on (win/mac/ios/android)
startdate:					date script was run
startTime:					time script was started
subjectId:					assigned subject id number
groupId:					assigned group id number
sessionId:					assigned session id number
elapsedTime:				time it took to run script (in ms); measured from onset to offset of script
completed:					0 = script was not completed (prematurely aborted); 
							1 = script was completed (all conditions run)

//Note: all reported latency data are based on the filtered response times 
(250 ms <= response time <= 30000 ms); a computation sketch follows the list of summary variables below.							

//main effect intrinsic load							
accIntH:					proportion correct responses in high intrinsic load condition (responses to solution B)
corrRTIntH:					median correct response time (in ms) in high intrinsic load condition (responses to solution B)
accIntL:					proportion correct responses in low intrinsic load condition (responses to solution A)
corrRTIntL:					median correct response time (in ms) in low intrinsic load condition (responses to solution A)
accIntC:					proportion correct responses in control condition (responses to solution C and D)
corrRTIntC:					median correct response time (in ms) in control condition (responses to solution C and D)


//main effect extrinsic load
accExtH:					proportion correct responses in high extrinsic load condition (task 2)							
corrExtH:					median correct response time (in ms) in high extrinsic load condition	
accExtL:					proportion correct responses in low extrinsic load condition (task 1)							
corrExtL:					median correct response time (in ms) in low extrinsic load condition	


//interactions:
//low extrinsic load (LL = task1)
accIntHExtL:				proportion correct responses in high intrinsic load condition (responses to solution B) in task 1 (LL)
corrRTIntHExtL:				median correct response time (in ms) in high intrinsic load condition (responses to solution B) in task 1 (LL)
accIntLExtL:				proportion correct responses in low intrinsic load condition (responses to solution A) in task 1 (LL)
corrRTIntLExtL:				median correct response time (in ms) in low intrinsic load condition (responses to solution A) in task 1 (LL)
accIntCExtL:				proportion correct responses in control condition (responses to solution C and D) in task 1 (LL)
corrRTIntCExtL:				median correct response time (in ms) in control condition (responses to solution C and D) in task 1 (LL)

//high extrinsic load (HL = task2)
accIntHExtH:				proportion correct responses in high intrinsic load condition (responses to solution B) in task 2 (HL)
corrRTIntHExtH:				median correct response time (in ms) in high intrinsic load condition (responses to solution B) in task 2 (HL)
accIntLExtH:				proportion correct responses in low intrinsic load condition (responses to solution A) in task 2 (HL)
corrRTIntLExtH:				median correct response time (in ms) in low intrinsic load condition (responses to solution A) in task 2 (HL)
accIntCExtH:				proportion correct responses in control condition (responses to solution C and D) in task 2 (HL)
corrRTIntCExtH:				median correct response time (in ms) in control condition (responses to solution C and D) in task 2 (HL)							

//Counts:
countIntHExtL:				number of responses in Intrinsic High X  Extrinsic Low
countValidRTIntHExtL:		number of VALID response times in Intrinsic High X  Extrinsic Low (the difference is the number of removed latencies)

countIntLExtL: 				number of responses in Intrinsic Low X  Extrinsic Low
countValidRTIntLExtL:		number of VALID response times in Intrinsic Low X  Extrinsic Low

countIntCExtL:				number of responses in Intrinsic Control X  Extrinsic Low
countValidRTIntCExtL:		number of VALID response times in Intrinsic Control X  Extrinsic Low

counTextHExtL:				number of responses in Intrinsic High X  Extrinsic High
countValidRTextHExtL:		number of VALID response times in Intrinsic High X  Extrinsic High

counTextLExtL:				number of responses in Intrinsic Low X  Extrinsic High
countValidRTextLExtL:		number of VALID response times in Intrinsic Low X  Extrinsic High

counTextCExtL:				number of responses in Intrinsic Control X  Extrinsic High 
countValidRTextCExtL:		number of VALID response times in Intrinsic Control X  Extrinsic High 
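
As noted above, the latency-based summaries use only correct responses with valid RTs 
(250 ms <= RT <= 30000 ms), while the accuracy measures are computed over all responses in a cell 
(compare the count variables). The minimal Python sketch below illustrates these definitions; the 
per-trial tuples and the helper function are hypothetical and only mirror the logic described here.

from statistics import median

# hypothetical per-trial records: (solutionCat, load, correct, latency_ms)
trials = [
    ("B", "LL", 1, 1830), ("B", "LL", 0, 2410), ("A", "LL", 1, 950),
    ("C", "HL", 1, 1200), ("C", "HL", 1, 40000),   # 40000 ms exceeds the cut-off -> excluded from RTs
]

def summarize(trials, solution_cats, load=None):
    sel = [t for t in trials if t[0] in solution_cats and (load is None or t[1] == load)]
    acc = sum(t[2] for t in sel) / len(sel) if sel else None               # proportion of correct responses
    valid_rts = [t[3] for t in sel if t[2] == 1 and 250 <= t[3] <= 30000]  # correct responses with valid RT only
    rt_med = median(valid_rts) if valid_rts else None                      # median correct RT (in ms)
    return acc, rt_med

accIntH, corrRTIntH = summarize(trials, {"B"})                # main effect: high intrinsic load
accIntHExtL, corrRTIntHExtL = summarize(trials, {"B"}, "LL")  # interaction cell: task 1 (LL)
print(accIntH, corrRTIntH, accIntHExtL, corrRTIntHExtL)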

__________________________________________________________________________________________________________________	
EXPERIMENTAL SET-UP 
___________________________________________________________________________________________________________________	
In a dual-task arrangement, verification of alternative solutions to simple mathematical equations is required 
as the primary task, while memorizing task-irrelevant information is the secondary task.

Intrinsic load (IL) – content-related load in the primary task
The alternative solutions to the mathematical puzzles represent the high intrinsic load (type B solutions), 
low intrinsic load (type A solutions), and control (type C and D solutions) conditions.
•	IL is low if the correct answer provides an obvious solution to the equation (e.g. by performing a 
single arithmetic operation);
•	IL is high if the answer is correct but is the result of some additional operation compared to a 
simple one-step solution (e.g. simplification of fractions; changing the representation of the fraction; 
complicating the result by performing an unnecessary step; using alternative notation);
•	Two false answers represent the control condition.
Extrinsic load (EL) – context-related load in the secondary task
•	EL is high when, as a secondary task, additional, interfering information needs to be remembered while 
processing the mathematical information (individual competition); 
•	EL is low when the memory demand of the secondary task is low (team competition).

The script uses 64 equations and 256 solutions. Each equation has four alternative solutions, two correct 
and two incorrect. Accordingly, 128 solutions (types C and D) belong to the control condition, 64 to the 
low intrinsic load condition (type A), and 64 to the high intrinsic load condition (type B). 
Half of the equations, together with their related solutions, are presented in the low extrinsic load condition 
(team competition), and the other half are presented in the high extrinsic load condition (individual competition).
Note that participants verify five answers (given by the five competitors of a team) for each equation, 
resulting in 64 x 5 = 320 answers. While four solutions (two correct and two false) are assigned to each equation, 
the fifth answer is a random repetition of one of the solutions presented earlier. 
The purpose of using two false responses is to provide a chance of 0.5 that an answer is correct or incorrect. 
In total, the probability of each of the alternative solutions is approximately the same.
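
To make one verification round concrete, the Python sketch below builds the five-answer sequence for a 
single equation (the four fixed solutions A, B, C, D plus one random repetition) and checks that the 
overall probability of a correct answer stays near 0.5 across 64 equations. The shuffling of the first 
four answers is an illustrative assumption here, not a statement about the script's exact randomization.

import random

def answer_sequence(rng):
    # four fixed solutions per equation: obvious correct (A), unexpected correct (B), two false (C, D)
    base = ["A", "B", "C", "D"]
    rng.shuffle(base)                      # illustrative: order of the first four answers
    return base + [rng.choice(base)]       # fifth answer: random repetition of one of the four

rng = random.Random(1)
runs = [answer_sequence(rng) for _ in range(64)]                   # one run of five answers per equation
total = sum(len(r) for r in runs)                                  # 64 x 5 = 320 answers
p_correct = sum(a in ("A", "B") for r in runs for a in r) / total
print(total, round(p_correct, 2))                                  # 320; approximately 0.5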


EXPERIMENTAL SET-UP 
(1) INTRO
- general introduction
- introducing the class, the teams, the math problems, the response keys
- practice (1 equation, 5 answers assigned to team members) 
(2) Team competition (Low extrinsic load: LL) 
•	Team A-D (fixed assignments)
-	each team consists of five competitors
-	8 equations per team
-	5 answers per equation (correct, alternative correct, false1, false2, + random repetition of one of the four alternatives)  
-	students and answers are randomly paired
•	In the team stage of the competition, a total of 5 x 8 x 4 = 160 answers are given for verification.
•	Each run of five answers is followed by a follow-up question about team performance (trial.questionL):
-	"How many correct answers were received from the team members?"
•	The team competition ends with a final question about the winner of the team competition (block.teamEvaluation):
-	"Which team do you think gave the most correct answers and won the quiz? Team A, B, C or D?"
(3) Individual competition (High extrinsic load: HL)
•	Group A-D (same assignments as for LL condition)
-	each group consists of five competitors
-	8 equations per group
-	5 answers per equation (correct, alternative correct, false1, false2, + random repetition of one of the four alternatives)  
-	students and answers are randomly paired
•	In the individual stage of the competition, a total of 5 x 8 x 4 = 160 answers are given for verification.
•	Each run of five answers is followed by a follow-up question about the performance of a randomly selected team member (QuestionB).
•	The individual competition ends with a final question about a randomly selected competitor (QuestionCompetitor):
-	"Did this competitor below give a correct or false answer?", presented together with a portrait of a randomly selected student.
•	The competition ends with a final question for the individual winner of the competition (QuestionCompetition):
-	"Which student do you think won the competition?"

TIMING
The equations are presented for 3000 ms for observation.
The alternative solutions are presented one by one, each paired with a picture of a competitor. There is no time limit for verification. No immediate feedback is given regarding whether the verification was correct.

___________________________________________________________________________________________________________________	
STIMULI
___________________________________________________________________________________________________________________

provided by Kondé et al (2023)

•	Equations: 01-64 (see item.equations)
•	Obvious correct solutions: 01A-64A ('A' solutions)
•	Unexpected correct solutions: 01B-64B ('B' solutions)
•	False solutions: 01C-64C; 01D-64D ('C' and 'D' solutions)
•	Equations and solutions for practice: 24P; 24AP, 24BP, 24CP, 24DP; 13P, 13AP, 13BP, 13CP, 13DP 
•	The class 
•	Teams for team and individual stage: TeamA, TeamB, TeamC, TeamD
•	Competitors

Source of pictures: https://www.shutterstock.com/hu/image-vector/cute-smiling-faces-people-76786762.

___________________________________________________________________________________________________________________	
INSTRUCTIONS 
___________________________________________________________________________________________________________________

provided by Kondé et al (2023)
___________________________________________________________________________________________________________________	
EDITABLE CODE 
___________________________________________________________________________________________________________________	
Check below for (relatively) easily editable parameters, stimuli, instructions, etc. 
Keep in mind that you can use this script as a template and can therefore always "mess" with the entire code 
to further customize your experiment.

The parameters you can change are:

//timing parameters
/ equationPresentationDurationMS = 3000  	//the duration (in ms) that each equation is presented
/ highLoadQuestionDurationMS = 1500	  		//the duration (in ms) that the HL load question is presented

//responseKeys:
/ leftResponseKey = "S" 					//the response key on the left side of the keyboard
/ rightResponseKey = "L" 					//the response key on the right side of the keyboard

/ correctResponseKey = parameters.rightResponseKey //'L' is assigned to indicate 'correct'
/ incorrectResponseKey = parameters.leftResponseKey //'S' is assigned to indicate 'incorrect'

//data cleaning parameters:
/ minValidRtMS = 250			//the minimum valid RT that should be considered for RT analyses is 250ms
/ maxValidRTMS = 30000			//the maximum valid RT that should be considered for RT analyses is 30000ms (per suggestion by Dr. Kondé)