___________________________________________________________________________________________________________________

								KEEP TRACK TASK (Russian instructions)
___________________________________________________________________________________________________________________

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 12-07-2017
last updated: 11-08-2024 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 11-08-2024 Millisecond Software

Millisecond Software thanks Marine Ohanjanyan for sharing the Russian translations!
___________________________________________________________________________________________________________________
BACKGROUND INFO
___________________________________________________________________________________________________________________

This script implements a 'Keep Track Task', a test of executive functioning that focuses on continuously updating
working memory representations.

The implemented procedure is similar to the one outlined by:

Friedman, N. P., Miyake, A., Young, S. E., DeFries, J. C., Corley, R. P., & Hewitt, J. K. (2008). Individual
Differences in Executive Functions Are Almost Entirely Genetic in Origin. Journal of Experimental Psychology:
General, 137, 201-225.

The Friedman et al. (2008) task is in turn based on:

Yntema, D. B. (1963). Keeping track of several things at once. Human Factors, 5, 7–17.
___________________________________________________________________________________________________________________
TASK DESCRIPTION
___________________________________________________________________________________________________________________

Participants need to mentally update the state of key categories while watching a sequence of 15 words that belong
to 6 different categories. Before the presentation, participants are told the specific categories to keep track of,
and these target categories are displayed on screen throughout the presentation. The number of target categories to
keep track of (out of the 6 possible) varies from round to round (default in this script: 2-4). At the end of each
round, participants are asked to enter the last item presented for each of the target categories.
___________________________________________________________________________________________________________________
DURATION
___________________________________________________________________________________________________________________

the default set-up of the script takes approximately 15 minutes to complete
___________________________________________________________________________________________________________________
DATA OUTPUT DICTIONARY
___________________________________________________________________________________________________________________

The fields in the data files are:

(1) Raw data file: 'keeptracktask_raw*.iqdat' (a separate file for each participant)*

build: the specific Inquisit version ('build') that was run
computer.platform: the platform the script was run on (win/mac/ios/android)
date, time: date and time the script was run
subject, group: the current subject/group number
session: the current session id
blockCode, blockNum: the name and number of the current block (built-in Inquisit variables)
countPracticeSessions: running total of practice sessions requested
roundCount: running total of the trials/rounds run; resets after each practice session
difficulty: level of difficulty (= number of categories to keep track of)
trialCode, trialNum: the name and number of the currently recorded trial (built-in Inquisit variables)
	Note: trialNum is a built-in Inquisit variable; it counts all trials run, even those that do not store data
	to the data file (such as feedback trials). Thus, trialNum may not reflect the number of main trials run per block.
stimulusItem: the presented stimuli in order of trial presentation
currentTargetCategory: stores the currently presented target category in digits 1-6
targetCategory1: stores the label of the randomly selected target category 1
category1Last: stores the last item presented for target category 1
targetCategory2: stores the label of the randomly selected target category 2
category2Last: stores the last item presented for target category 2
targetCategory3: stores the label of the randomly selected target category 3
category3Last: stores the last item presented for target category 3
targetCategory4: stores the label of the randomly selected target category 4
category4Last: stores the last item presented for target category 4
targetCategory5: stores the label of the randomly selected target category 5
category5Last: stores the last item presented for target category 5
targetCategory6: stores the label of the randomly selected target category 6
category6Last: stores the last item presented for target category 6
response: the participant's response
latency: the response latency (in ms); recall trials: measured from onset of the recall trial until all textbox
	responses are submitted via the 'submit' button
countCorrect: counts the number of correctly recalled items per round (across all target categories)
propCorrect: stores the proportion of correctly recalled items per round (= countCorrect/difficulty)
	(see the illustrative sketch at the end of this section)
correctCategory1: 1 = the last item of target category 1 was correctly recalled; 0 = otherwise
correctCategory2: 1 = the last item of target category 2 was correctly recalled; 0 = otherwise
correctCategory3: 1 = the last item of target category 3 was correctly recalled; 0 = otherwise
correctCategory4: 1 = the last item of target category 4 was correctly recalled; 0 = otherwise
correctCategory5: 1 = the last item of target category 5 was correctly recalled; 0 = otherwise
correctCategory6: 1 = the last item of target category 6 was correctly recalled; 0 = otherwise

(2) Summary data file: 'keeptracktask_summary*.iqdat' (a separate file for each participant)*

inquisit.version: Inquisit version run
computer.platform: the platform the script was run on (win/mac/ios/android)
startDate: date the script was run
startTime: time the script was started
subjectId: assigned subject id number
groupId: assigned group id number
sessionId: assigned session id number
elapsedTime: time it took to run the script (in ms); measured from onset to offset of the script
completed: 0 = script was not completed (prematurely aborted); 1 = script was completed (all conditions run)
countPracticeSessions: running total of practice sessions requested
roundCount: final count of test rounds run
totalCorrect: stores the number of correctly recalled items across all test rounds
totalWordsRecalled: stores the total number of words that needed to be recalled across all test rounds
propCorrect: the proportion correct of all possible test round responses
	(= number of correct responses across all test rounds / total number of responses = X/36 in this script)
meanPropCorrect: mean proportion correct per round; based on propCorrect for each round
	(Example: 0.25 => on average, the participant got 25% of all responses correct per test round, regardless of level of difficulty)
meanPropCorrect1: mean proportion correct for level 1 trials
meanPropCorrect2: mean proportion correct for level 2 trials
meanPropCorrect3: mean proportion correct for level 3 trials
meanPropCorrect4: mean proportion correct for level 4 trials
meanPropCorrect5: mean proportion correct for level 5 trials
meanPropCorrect6: mean proportion correct for level 6 trials

* separate data files: to change to one data file for all participants (on Inquisit Lab only), go to section "DATA"
and follow further instructions
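To make the arithmetic behind the accuracy fields explicit, here is a minimal sketch in Python of how the per-round
and summary measures relate to one another. This is an illustration only (the script itself computes these fields
with Inquisit expressions); the flag lists are hypothetical inputs standing in for correctCategory1-6.

# illustrative sketch only, not part of the Inquisit script
def score_round(correct_flags):
    """Per-round fields; correct_flags is e.g. [1, 0, 1] for a difficulty-3 round."""
    difficulty = len(correct_flags)               # number of target categories this round
    count_correct = sum(correct_flags)            # countCorrect
    prop_correct = count_correct / difficulty     # propCorrect (= countCorrect/difficulty)
    return count_correct, prop_correct

def summarize(rounds):
    """Summary-file measures; rounds is a list of per-round flag lists across all test rounds."""
    total_correct = sum(sum(flags) for flags in rounds)                        # totalCorrect
    total_words = sum(len(flags) for flags in rounds)                          # totalWordsRecalled (36 by default)
    prop_correct = total_correct / total_words                                 # summary propCorrect (X/36)
    mean_prop_correct = sum(score_round(f)[1] for f in rounds) / len(rounds)   # meanPropCorrect
    return total_correct, total_words, prop_correct, mean_prop_correct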
___________________________________________________________________________________________________________________
EXPERIMENTAL SET-UP
___________________________________________________________________________________________________________________

1. Practice Session

* by default, the practice session runs 3 rounds with the difficulty level increasing from 2 to 4
	=> the number of rounds as well as their difficulty levels can be adjusted by editing list.difficulty_practice
	under section Editable Lists
* per round:
	* target categories are sampled randomly for each round (no balancing across rounds)
	* each category is presented at least twice and at most three times within the 15 word presentations
	(it is randomly determined for each round which categories are presented three times - no balancing across rounds);
	the order of category presentation is randomized (see the illustrative sketch after this set-up description)
	* the particular exemplars presented for each category are sampled at random from the 6 provided options
	(constraint: no repeats within the same round)
	* after recall, participants receive detailed feedback on their responses
* by default, the practice session can be repeated as long as no more than parameters.maxNumberOfPracticeSessions
	(default: 2) practice sessions have been run yet (change settings under section Editable Parameters)

2. Test Session

* by default, the test session runs 12 rounds with difficulty levels 2, 3, 4 (each difficulty level is repeated
	4 times; levels are randomly selected)
	=> total words that need to be recalled: 2x4 + 3x4 + 4x4 = 36
	=> the number of rounds as well as their difficulty levels can be adjusted by editing list.difficulty_test
	under section Editable Lists
* per round:
	* target categories are sampled randomly for each round (no balancing across rounds)
	* each category is presented at least twice and at most three times within the 15 word presentations
	(it is randomly determined for each round which categories are presented three times - no balancing across rounds);
	the order of category presentation is randomized
	* the particular exemplars presented for each category are sampled at random from the 6 provided options
	(see Editable Stimuli) (constraint: no repeats within the same round)
	* after recall, participants receive detailed feedback on their responses by default. However, feedback can
	easily be turned off by setting parameters.skipTestFeedback to 'true' (default setting is 'false', see section
	Editable Parameters)
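The per-round construction described above can be illustrated with a short sketch. This is a minimal illustration
in Python of the stated constraints (6 categories, each shown two or three times across the 15 presentations,
exemplars sampled without repeats within a round); it is not the Inquisit implementation, and the category and
exemplar names below are placeholders.

import random

def build_round(categories, exemplars, n_presentations=15):
    # each of the 6 categories is shown at least twice ...
    counts = {c: 2 for c in categories}
    # ... and the remaining slots raise randomly chosen categories to three presentations
    for c in random.sample(categories, n_presentations - 2 * len(categories)):
        counts[c] += 1
    # randomized order of category presentation
    sequence = [c for c, n in counts.items() for _ in range(n)]
    random.shuffle(sequence)
    # exemplars sampled at random from the 6 options per category, no repeats within the round
    picks = {c: random.sample(exemplars[c], counts[c]) for c in categories}
    return [(c, picks[c].pop()) for c in sequence]

# usage with placeholder stimuli (6 exemplars per category, as in the default script)
cats = ["category1", "category2", "category3", "category4", "category5", "category6"]
stims = {c: [f"{c}_item{i}" for i in range(1, 7)] for c in cats}
print(build_round(cats, stims))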
Note on Accuracy Checks of entered Responses:
1) all entered responses as well as the target items (e.g. India) are converted to lower-case letters for comparisons
	Example: presented item: India; entered item: india (evaluated as correct)
	Example: presented item: bear; entered item: BEAR (evaluated as correct)
2) empty characters (spaces) are removed from all entered responses before comparisons
	Example: presented item: 'brother'; entered item: 'brother ' (evaluated as correct)
	Example: presented item: 'gold'; entered item: ' g o ld ' (evaluated as correct)
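For illustration, the lenient comparison just described (lower-casing both strings and dropping blank characters
from the entered response) can be sketched as follows; this is a sketch of the logic only, not the actual Inquisit
expressions used by the script.

def is_correct(entered, target):
    # lower-case both strings and drop spaces from the entered response before comparing
    return entered.lower().replace(" ", "") == target.lower()

# the examples from the note above
assert is_correct("india", "India")
assert is_correct("BEAR", "bear")
assert is_correct("brother ", "brother")
assert is_correct(" g o ld ", "gold")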
Trial/Round Sequence (default settings):
presentation of target categories until the spacebar is hit -> 500ms delay -> word presentation 1 (1500ms) -> isi (0ms)
-> word presentation 2 (1500ms) -> ... -> word presentation 15 (1500ms) -> isi (0ms) -> recall delay (0ms)
-> recall until the 'submit' button is pressed -> iti (default: 1000ms)

Note: this script provides the code to run any difficulty level between 1 and 6. To change the number of rounds run
and/or the difficulty levels run, simply change list.difficulty_practice and/or list.difficulty_test under section
Editable Lists.
___________________________________________________________________________________________________________________
STIMULI
___________________________________________________________________________________________________________________

categories: Friedman et al (2008)
exemplars: provided by Millisecond Software

By default, this script runs with 6 exemplars per category. That reduces the chance of guessing the correct
exemplar (per category) at the end of each trial to p ~ 0.17 (= 1/6).

specific categories as well as exemplars can be edited under section "Editable Stimuli"
___________________________________________________________________________________________________________________
INSTRUCTIONS
___________________________________________________________________________________________________________________

provided by Millisecond Software - can be edited under section Editable Instructions
___________________________________________________________________________________________________________________
EDITABLE CODE
___________________________________________________________________________________________________________________

check below for (relatively) easily editable parameters, stimuli, instructions etc.
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code to further
customize your experiment.

The parameters you can change are listed below (a brief timing illustration follows the list):

/exemplarSize: proportional (to canvas height) size of the exemplars (default: 8%)
/stimDelay: the delay (in ms) of the first exemplar presented after hitting the spacebar (default: 500ms)
/stimDuration: the duration (in ms) of exemplars on screen (default: 1500ms)
/stimISI: the duration (in ms) of the blank screen presented after each stimulus and before the next (default: 0ms)
/recallDelay: additional delay (on top of stimISI) (in ms) of the recall trial after the last exemplar is presented (default: 0ms)
/iti: the intertrial interval (in ms) in between rounds (default: 1000ms)
/maxNumberOfPracticeSessions: the maximum number of times participants can repeat the practice session if they choose
	to do so (default: 2)
	Note: the script will run at least 1 practice session regardless of the parameter setting
/skipTestFeedback: true (1): participants only receive performance feedback after each round during practice (but not during the test)
	false (0): participants receive performance feedback after each round during practice AND test (default)
/debugmode: true (1): the script is run in debugmode; a stimulus with all correct responses is presented with the textboxes during each recall trial
	false (0): the script is NOT run in debugmode (default)
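As a quick orientation to how the timing parameters combine, here is a small illustrative computation (Python, using
the default values listed above); it is not part of the script and only approximates the fixed, non-self-paced
portion of one round.

# illustrative sketch only, using the default timing parameters listed above
stim_delay = 500        # /stimDelay: delay after the spacebar press (ms)
stim_duration = 1500    # /stimDuration: each of the 15 word presentations (ms)
stim_isi = 0            # /stimISI: blank screen after each word (ms)
recall_delay = 0        # /recallDelay: extra delay before the recall screen (ms)
iti = 1000              # /iti: intertrial interval after recall (ms)

n_words = 15
fixed_ms = stim_delay + n_words * (stim_duration + stim_isi) + recall_delay + iti
print(fixed_ms)  # 24000 ms (~24 s) per round, plus the self-paced category and recall screens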