___________________________________________________________________________________________________________________

						AUDITORY SELECTIVE ATTENTION TASK (Turkish version)
___________________________________________________________________________________________________________________

Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC
Date: 03-03-2014
last updated: 01-18-2023 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC

Script Copyright © 01-18-2023 Millisecond Software

Millisecond Software thanks Dr. Ümmügülsüm Gündoğdu for sharing the translations!
___________________________________________________________________________________________________________________
												BACKGROUND INFO
___________________________________________________________________________________________________________________
This script implements an Auditory Selective Attention Paradigm based on:

Humes, L.E., Lee, J.H., & Coughlin, M.P. (2006). Auditory measures of selective and divided attention in young
and older adults using single-talker competition. J. Acoust. Soc. Am., 120, 2926-2937.

Differences between this script and Humes et al.:
- the script only tests selective attention
- the script uses one male and one female speaker only (target and comp phrase are always spoken by the opposite gender)
- the script only uses call signals as cues
- the script measures latency

Millisecond Software thanks Dr. Desjardins for her collaboration on this script!
___________________________________________________________________________________________________________________
												TASK DESCRIPTION
___________________________________________________________________________________________________________________
Participants are presented two phrases (a target and a competing phrase), each containing a specific call signal
(e.g. "Charlie" vs. "Ringo"), a color (blue, red, white, or green), and a digit (1-8)
(e.g. "Ready Charlie go to BLUE 8 now").
Participants are asked to select (1) a color box (on the right) and (2) a digit box (on the left) on the computer
screen that correspond to the information provided by the target phrase. Before the two phrases are presented,
participants are told which call signal belongs to the target phrase.
___________________________________________________________________________________________________________________
												DURATION
___________________________________________________________________________________________________________________
The default set-up of the script takes approx. 40 minutes to complete.
___________________________________________________________________________________________________________________
											DATA OUTPUT DICTIONARY
___________________________________________________________________________________________________________________
The fields in the data files are:

(1) Raw data file: 'auditoryselectiveattentiontask_raw*.iqdat' (a separate file for each participant)

build:					the specific Inquisit version (the 'build') that was run
computer.platform:		the platform the script was run on (win/mac/ios/android)
date, time:				date and time the script was run
subject, group:			the current subject/group number
session:				the current session id
blockCode, blockNum:	the name and number of the current block (built-in Inquisit variables)
trialCode, trialNum:	the name and number of the currently recorded trial (built-in Inquisit variables)
						Note: trialNum is a built-in Inquisit variable; it counts all trials run, even those that
						do not store data to the data file (such as feedback trials). Thus, trialNum may not
						reflect the number of main trials run per block.

earCondition:			1 = dichotic presentation of target and comp phrases
						2 = monaural presentation of target and comp phrases
trialType:				stores the color x digit combination run
						(11 = blue1 -> 18 = blue8; 21 = red1 -> 28 = red8;
						31 = white1 -> 38 = white8; 41 = green1 -> 48 = green8)
						(see the coding sketch after this field list)
countTrialsCondition:	counts all trial sequences per condition (resets after each condition is run)
countTrialsBlock:		counts all trial sequences per block (resets after each block)

targetear:				1 = target in right ear (depending on values.earcondition -> comp phrase in left ear)
						2 = target in left ear (depending on values.earcondition -> comp phrase in right ear)
						Note: in the monaural condition, only the right ear is used

callSignal:				stores the actual name of the current call signal (not just its numeric code)
nTargetcallSignal:		charlie (1), ringo (2), laker (3), hopper (4), arrow (5), tiger (6), eagle (7), baron (8)
nTargetColor:			Blue (1), Red (2), White (3), Green (4)
nTargetDigit:			1-8
nTargetSpeaker:			1 = male (here: speaker 3 from Bolia et al., 2000); 2 = female (here: speaker 6 from Bolia et al., 2000)
keyTarget:				calculates the index used to select the target phrase (= item number of the current target phrase)
targetphrase:			stores the current target phrase sound file

nCompcallSignal:		charlie (1), ringo (2), laker (3), hopper (4), arrow (5), tiger (6), eagle (7), baron (8)
nCompColor:				Blue (1), Red (2), White (3), Green (4)
nCompDigit:				1-8
nCompSpeaker:			1 = male (here: speaker 3 from Bolia et al., 2000); 2 = female (here: speaker 6 from Bolia et al., 2000)
keyComp:				calculates the key used to select the comp phrase (= item number of the current comp phrase)
compphrase:				stores the selected comp phrase sound file
						Note: the comp phrase has to differ in call signal, speaker, color AND digit

responseColor:			stores the color response (the first box selected)
correctColor:			stores whether the correct color was selected (1 = correct; 0 = incorrect)
rtColor:				the latency (in ms) of the color response (measured from the start of the trial to the click on the color box)
						Note: the mouse cursor is not necessarily in the same position for all participants at
						trial start, as the mouse may have been moved.
responseDigit:			stores the digit response (the second box selected)
correctDigit:			stores whether the correct digit was selected (1 = correct; 0 = incorrect)
rtDigit:				the latency (in ms) of the digit response (measured from the click on the color box to the click on the digit box)
correctCombined:		0 = neither response was correct; 1 = one response was correct; 2 = both responses were correct
rtCombined:				the combined latency (in ms) of selecting both response boxes
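Note on the coding above: given the definitions of rtColor (trial start -> color click) and rtDigit (color click ->
digit click), rtCombined corresponds to the time from trial start to the digit click. The trialType code amounts to
10 x the color index plus the digit. A minimal Python sketch of that coding scheme (illustrative only; the script
itself computes this in Inquisit, and the function name below is hypothetical):

    # Illustrative reconstruction of the documented trialType coding
    # (11 = blue1 ... 48 = green8); not the script's actual Inquisit code.
    COLOR_INDEX = {"blue": 1, "red": 2, "white": 3, "green": 4}

    def trial_type(color, digit):
        # trialType = 10 * color index + digit (digits 1-8)
        return COLOR_INDEX[color] * 10 + digit

    assert trial_type("blue", 1) == 11
    assert trial_type("white", 8) == 38
    assert trial_type("green", 8) == 48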
(2) Summary data file: 'auditoryselectiveattentiontask_summary*.iqdat' (a separate file for each participant)

inquisit.version:				Inquisit version run
computer.platform:				the platform the script was run on (win/mac/ios/android)
startDate:						date the script was run
startTime:						time the script was started
subjectId:						assigned subject id number
groupId:						assigned group id number
sessionId:						assigned session id number
elapsedTime:					time it took to run the script (in ms); measured from onset to offset of the script
completed:						0 = script was not completed (prematurely aborted); 1 = script was completed (all conditions run)

percentCorrectDichotic:			the percent correct of Combined responses in the Dichotic condition
meanrtDichotic:					the mean combined latency (in ms) of correct Combined responses in the Dichotic condition
percentCorrectDichoticColor:	the percent correct of Dichotic Color responses
meanRTDichoticColor:			the mean correct Color response latency (in ms) in the Dichotic condition
percentCorrectDichoticDigit:	the percent correct of Dichotic Digit responses
meanRTDichoticDigit:			the mean correct Digit response latency (in ms) in the Dichotic condition
								Note: as selecting the digit box is the second response, this latency might be more
								comparable across participants, because the start position of the mouse cursor at the
								beginning of the digit selection is similar on every trial (mouse cursor on the left
								side of the screen on one of the color boxes).

percentCorrectMonaural:			the percent correct of Combined responses in the Monaural condition
meanrtMonaural:					the mean combined latency (in ms) of correct Combined responses in the Monaural condition
percentCorrectMonauralColor:	the percent correct of Monaural Color responses
meanRTMonauralColor:			the mean correct Color response latency (in ms) in the Monaural condition
percentCorrectMonauralDigit:	the percent correct of Monaural Digit responses
meanRTMonauralDigit:			the mean correct Digit response latency (in ms) in the Monaural condition
								Note: as selecting the digit box is the second response, this latency might be more
								comparable across participants, because the start position of the mouse cursor at the
								beginning of the digit selection is similar on every trial (mouse cursor on the left
								side of the screen on one of the color boxes).
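The summary measures above can also be re-derived offline from the raw data file. Below is a hedged Python/pandas
sketch, assuming the raw .iqdat file is tab-delimited, that the column names match the raw data dictionary above
(check your own output; Inquisit may use different casing or a 'values.' prefix), and that a correct Combined
response means correctCombined == 2. The filename used is hypothetical.

    import pandas as pd

    # Sketch only: recompute the dichotic summary measures from one raw data file.
    raw = pd.read_csv("auditoryselectiveattentiontask_raw_1.iqdat", sep="\t")  # hypothetical filename

    dichotic = raw[raw["earCondition"] == 1]            # 1 = dichotic presentation
    both_correct = dichotic["correctCombined"] == 2     # both color and digit correct

    percentCorrectDichotic = 100 * both_correct.mean()
    meanrtDichotic = dichotic.loc[both_correct, "rtCombined"].mean()
    print(percentCorrectDichotic, meanrtDichotic)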
___________________________________________________________________________________________________________________
											EXPERIMENTAL SET-UP
___________________________________________________________________________________________________________________
2 conditions, tested in a blocked design (by default the order is determined randomly):
* Dichotic Presentation: target and comp phrases are played through different ears (requires earphones)
* Monaural Presentation: target and comp phrases are always played through the RIGHT ear (see Humes et al., 2006)
+ 1 practice block of 32 trials (monaural only, with error feedback)

512 phrases provided => 2 speakers (male vs. female) x 8 call signals x 4 colors x 8 digits

Each condition runs 4 blocks (default, controlled by parameters.blockspercondition, an editable parameter).
Each block runs 32 trials (default, controlled by parameters.numberoftrialsperblock, an editable parameter)
=> 4 colors x 8 digits = 32 combinations

Trial sequence:
call signal cue (3000ms) -> phrase presentation (lasts until both sentences are done) -> color/digit selections -> iti

"Target Phrase" Selection:
Each color/digit combination (32) is run once per block; the order is randomly determined (-> list.trialtype).
Each of the 8 call signals is used 4 times as the target signal; the order is randomly determined (-> list.NTargetcallsignal).
Each of the 2 speakers is used 16 times as the target speaker; the order is randomly determined (-> list.NTargetspeaker).

"Comp(eting) Phrase" Selection:
Hard constraint: comp phrases contain a different call signal, different color and digit info, and are spoken by the
opposite speaker (-> list.NCompcallsignal, list.NCompcolor, list.NCompdigit, list.NCompspeaker).
Soft constraint: as much as possible, each call signal, each color, each digit, and each speaker are used equally
often in the comp sentences. However, in rare cases the soft constraint may have to be violated in order to fulfill
the hard constraint (go to LISTS for more info).
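To make the hard constraint concrete, here is a minimal Python sketch that draws a comp phrase differing from the
target on every dimension. It deliberately ignores the soft (equal-usage) constraint, which the script handles
through its Inquisit lists; the function and variable names are illustrative only.

    import random

    CALL_SIGNS = ["charlie", "ringo", "laker", "hopper", "arrow", "tiger", "eagle", "baron"]
    COLORS = ["blue", "red", "white", "green"]
    DIGITS = list(range(1, 9))

    def pick_comp_phrase(target_call, target_color, target_digit, target_speaker):
        # Hard constraint only: differ from the target in call sign, color, digit, AND speaker.
        return (
            random.choice([c for c in CALL_SIGNS if c != target_call]),
            random.choice([c for c in COLORS if c != target_color]),
            random.choice([d for d in DIGITS if d != target_digit]),
            "female" if target_speaker == "male" else "male",
        )

    print(pick_comp_phrase("charlie", "blue", 8, "male"))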
___________________________________________________________________________________________________________________
												STIMULI
___________________________________________________________________________________________________________________
Stimuli are taken from Bolia et al. (2000): only speaker 3 (male) and speaker 6 (female) are included in this script
=> 256 x 2 = 512 phrases.
(Bolia et al. recorded all 256 combinations of 8 call signals x 4 colors x 8 digits from 8 speakers
- 4 male / 4 female - = 2048 phrases.)

Bolia, R.S., Nelson, W.T., Ericson, M.A., & Simpson, B.D. (2000). A speech corpus for multitalker communications
research. J. Acoust. Soc. Am., 107, 1065-1066.
___________________________________________________________________________________________________________________
											INSTRUCTIONS
___________________________________________________________________________________________________________________
Instructions are not original. They are presented in the form of html pages. To edit the provided instructions,
edit the html pages directly.
___________________________________________________________________________________________________________________
											EDITABLE CODE
___________________________________________________________________________________________________________________
Check below for (relatively) easily editable parameters, stimuli, instructions etc.
Keep in mind that you can use this script as a template and therefore always "mess" with the entire code to further
customize your experiment.

The parameters you can change are:

/fontSizeCallSignal:		the size/height of the call sign cue in % of canvas height (default: 10%)
/fontSizeOther:				the size/height of other instructions in % of canvas height (default: 10%)
							Note: does not affect the coordinate response screen
/callSignalDuration:		the duration of the call sign cue in ms (default: 3000ms)
/blocksperCondition:		the number of blocks run per condition (default: 4)
/numberofTrialsperBlock:	the number of trials run per block (default: 32)
							Note: 8 digits x 4 colors = 32 => each digit/color combination is presented once per block
							!!! If you change numberoftrialsperblock, check LISTS -> list.trialtype and follow the
							further instructions there (if certain color x digit combos should be removed).
							!!! Depending on the chosen number of trials, messages might be posted under the message
							list in the Inquisit editor after the script is run, informing the user that the poolSize
							attribute of several lists was adjusted. This should not impact the functioning of the script.
							!!! If the number of trials is set to 1-3, the lists under LISTS need to be edited by
							removing the /poolSize attribute completely from all lists that use it (otherwise
							adjustments are made automatically that might prevent the script from running).
/conditionOrder:			1 = dichotic condition only
							2 = monaural condition only
							3 = random order of dichotic and monaural conditions (default)
							Note: the practice block will run for each of these options unless it is removed from the
							expt element under section EXPERIMENT
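As a quick sanity check on the default parameters listed above, the total number of trials works out as follows
(a Python sketch; the variable names mirror the documentation, not the Inquisit identifiers):

    # Defaults as documented above (conditionOrder = 3 runs both conditions).
    conditions = 2                 # dichotic + monaural
    blocks_per_condition = 4       # parameters.blockspercondition
    trials_per_block = 32          # parameters.numberoftrialsperblock
    practice_trials = 32           # one monaural practice block

    main_trials = conditions * blocks_per_condition * trials_per_block
    print(main_trials, main_trials + practice_trials)   # 256 main trials, 288 including practice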