___________________________________________________________________________________________________________________ *Dragging Task* ___________________________________________________________________________________________________________________ Script Author: Katja Borchert, Ph.D. (katjab@millisecond.com) for Millisecond Software, LLC Date: 01-07-2022 last updated: 11-05-2024 by K. Borchert (katjab@millisecond.com) for Millisecond Software, LLC Script Copyright © 11-05-2024 Millisecond Software ___________________________________________________________________________________________________________________ BACKGROUND INFO ___________________________________________________________________________________________________________________ This script implements Millisecond Software's version of the NASA developed 'Dragging Task', which measures manual dexterity, or the speed of arm movements. Note the script runs with proportional sizing by default. Researchers can select to run the task with an absolute screen size to ensure that distances stay the same across devices. See section Defaults for more info. Bettina L. Beard (2020). The Cognition and Fine Motor Skills Test Batteries: Normative Data and Interdependencies Technical Memorandum (TM) 20205008023. https://ntrs.nasa.gov/citations/20205008023 Public Access ipad App ('Fine Motor Skills'); free App from Apple App Store Note: the ipad App data/implementation may differ from the one collected by the Inquisit script ___________________________________________________________________________________________________________________ TASK DESCRIPTION ___________________________________________________________________________________________________________________ description from Beard (2020): "The Drag test measures manual dexterity, or the speed of arm movements. The task is to push (i.e., place finger or stylus on a square and drag) a white square back and forth or up and down from one designated area on the screen to another. Each block contained 16 trials." ___________________________________________________________________________________________________________________ DURATION ___________________________________________________________________________________________________________________ the default set-up of the script takes appr. 10 minutes to complete (~5min per horizontal/vertical orientation) ___________________________________________________________________________________________________________________ DATA OUTPUT DICTIONARY ___________________________________________________________________________________________________________________ The fields in the data files are: (1) Raw data file: 'draggingtask_raw*.iqdat' (a separate file for each participant) build: The specific Inquisit version used (the 'build') that was run computer.platform: the platform the script was run on (winmaciosandroid) date, time: date and time script was run subject, group: with the current subject/groupnumber session: with the current session id parameters- draggingMechanism: "finger" vs. "stylus" parameters- draggingHand: "dominant" vs. "nondominant" vs. "left" vs. "right" playAreaHeightMM: the width of the play area in mm playAreaWidthMM: the height of the play area in mm squareHeightMM: the size of the square in mm distanceWallToWallMM: the distance between left (top) and right (bottom) wall (from center of wall to center of wall) in mm wallWidthMM: the length of the short side of the wall in mm wallHeightMM: the length of the long side of the wall in mm display.canvasHeight: the height of the active canvas ('playarea') in pixels display.canvasWidth: the width of the active canvas ('playarea') in pixels pxPerMM: the conversion factor to convert pixel data into mm-results for the current monitor (Note: the higher resolution of the current monitor the more pixels cover the same absolute screen distance) This factor is needed if you want to convert pixel data into absolute mm data blockCode, blockNum: the name and number of the current block (built-in Inquisit variable) trialCode, trialNum: the name and number of the currently recorded trial (built-in Inquisit variable) Note: trialNum is a built-in Inquisit variable; it counts all trials run; even those that do not store data to the data file. conditionCounter: counts the number of conditions run blockCounter: counts the number of blocks run in the current condition draggingDirection: "horizontal" vs. "vertical" conditionRepetition: counts the number of times the current condition has run trialCounter: counts the number of trials in a block one trial: one right to left AND left to right swipe (same for vertical movement) stimulusItem: presented stimuli response: the response of participant (the valid drop-off response area) latency: response latency (in ms); measured from: onset of trial until the lift-off response on the currently valid response area is registered. Note: if the drop-off happens not in the valid response area, the wall does NOT change colors //break down of latency: rtMovementInitiation: the time in ms from beginning of trial until the square was grabbed rtDragTime: the time in ms that the square was dragged until a valid drop was achieved mouse.x: the horizontal mousefinger coordinate at square drop-off (in valid drop-off) mouse.y: the vertical mousefinger coordinate at square drop-off (in valid drop-off) //HO = horizontal orientation hoOptimallineYPX: the optimal y-coordinate in the Horizontal Orientation Condition (= y-coordinate of the white square at start of each trial) (= if participant does not diverge from this coordinate, they use the fastest way from A to B) Note: this optimal line depends on where the square is dropped off. In this script it is NOT automatically the center line but is reset to the drop-off coordinates hoOptimalLineDivergenceDropoffonlyPX: calculates the absolute pixel difference btw. (HOOptimallineYPX - mouse.y) at drop-off only (also conversion to mm) //VO = vertical orientation voOptimallineXPX: the optimal x-coordinate in the Vertical Orientation Condition (= x-coordinate of the white square at start of each trial) (= if participant does not diverge from this coordinate, they use the fastest way from A to B) voOptimalLineDivergenceDropoffonlyPX: calculates the absolute pixel difference btw. (VOOptimalline_xPX - mouse.x) at drop-off only //based on streaming data (these measures might give a more detailed picture of the dragging performance): //HO = horizontal orientation medianHOOptimalLineDivergencePerTrialPX: the median vertical distance (in canvas pixels) of the finger from the 'optimal' y-coordinate from pick-up to drop-off meanHOOptimalLineDivergencePerTrialPX: the mean vertical distance (in canvas pixels) of the finger from the 'optimal' y-coordinate from pick-up to drop-off sdHOOptimalLineDivergencePerTrialPX: the mean standard deviation of the vertical distance (in canvas pixels) btw. the finger and 'optimal' y-coordinate from pick-up to drop-off Note: 'optimal' in this sense means the y-coordinate of the square at the start of the trial. Moving the square along this y-coordinate is the shortest (thus fastest) way btw. the walls. //VO = vertical orientation same for vertical orientation (2) Summary data file: 'draggingtask_summary*.iqdat' (a separate file for each participant) inquisit.version: Inquisit version run computer.platform: the platform the script was run on (win/mac/ios/android) startDate: date script was run startTime: time script was started subjectId: assigned subject id number groupId: assigned group id number sessionId: assigned session id number elapsedTime: time it took to run script (in ms); measured from onset to offset of script completed: 0 = script was not completed (prematurely aborted); 1 = script was completed (all conditions run) parameters- draggingMechanism: "finger" vs. "stylus" parameters- draggingHand: "dominant" vs. "nondominant" vs. "left" vs. "right" playAreaHeightMM: the width of the play area in mm playAreaWidthMM: the height of the play area in mm squareHeightMM: the size of the square in mm distanceWallToWallMM: the distance between left (top) and right (bottom) wall (from center of wall to center of wall) in mm wallWidthMM: the length of the short side of the wall in mm wallHeightMM: the length of the long side of the wall in mm //Latency Performance (based on draggingRT only): medianRTLR: median time (in ms) it took participant to move from left to right (and drop off square in appropriate wall space) meanRTLR: mean time (in ms) it took participant to move from left to right (and drop off square in appropriate wall space) sdRTLR: standard deviation of mean time (in ms) it took participant to move from left to right (and drop off square in appropriate wall space) medianRTRL: median time (in ms) it took participant to move from right to left (and drop off square in appropriate wall space) meanRTRL: mean time (in ms) it took participant to move from right to left (and drop off square in appropriate wall space) sdRTRL: standard deviation of mean time (in ms) it took participant to move from right to left (and drop off square in appropriate wall space) medianRTH: median time (in ms) it took participant to move one side to the other in horizontal condition (and drop off square in appropriate wall space) meanRTH: mean time (in ms) it took participant to move one side to the other in horizontal condition (and drop off square in appropriate wall space) sdRTH: standard deviation of mean time (in ms) it took participant to move one side to the other (and drop off square in appropriate wall space) medianRTTB: median time (in ms) it took participant to move from top to bottom (and drop off square in appropriate wall space) meanRTTB: mean time (in ms) it took participant to move from top to bottom (and drop off square in appropriate wall space) sdRTTB: standard deviation of mean time (in ms) it took participant to move from top to bottom (and drop off square in appropriate wall space) medianRTBT: median time (in ms) it took participant to move from bottom to top (and drop off square in appropriate wall space) meanRTBT: mean time (in ms) it took participant to move from bottom to top (and drop off square in appropriate wall space) sdRTBT: standard deviation of mean time (in ms) it took participant to move from bottom to top (and drop off square in appropriate wall space) medianRTV: median time (in ms) it took participant to move one side to the other in vertical condition (and drop off square in appropriate wall space) meanRTV: mean time (in ms) it took participant to move one side to the other in vertical condition (and drop off square in appropriate wall space) sdRTV: standard deviation of mean time (in ms) it took participant to move one side to the other (and drop off square in appropriate wall space) //Movement Performance (Divergence from optimal path): //!!!IMPORTANT: different screens differ in their resolution (pixel density) //To convert them into absolute 'mm' units, use 'pxPerMM' //Example:'medianHOOptimalLineDivergenceDropoffPX * pxPerMM = medianHOOptimalLineDivergenceDropoffMM' //based on drop-off coordinates only (Note: each trial compares the drop-off y-coordinate to the starting y-coordinate; these divergence measures ignore the intermediate movements) medianHOOptimalLineDivergenceDropoffPX: median pixel distance from drop-off coordinate to optimal line meanHOOptimalLineDivergenceDropoffPX: mean pixel distance from drop-off coordinate to optimal line sdHOOptimalLineDivergenceDropoffPX: standarddeviation of the pixel distance from drop-off coordinate to the optimal line //Movement Performance: (based on streaming data, the divergence from the optimal y-coordinate is continuously monitored; not just at the end) //Horizontal Orientation (HO): medianHOOptimalLineDivergenceMeanPX: overall median distance (in canvas pixels) of the finger from the 'optimal' y-line (based on all trial means) meanHOOptimalLineDivergenceMeanPX: overall mean distance (in canvas pixels) of the finger from the 'optimal' y-line (averaged across all trial means) meanHOOptimalLineDivergenceSDPX: the overall mean (in canvas pixels) of all the trial standarddeviations of the vertical distances btw. the finger and the 'optimal' y-line Note: 'optimal' in this sense means the y-coordinate of the square at the start of the trial. Moving the square along this y-coordinate is the shortest (thus fastest) way btw. the walls. (same for VO - Vertical Orientation conditions) (3) Mouse Coordinates file: 'draggingtask_stream*.iqdat' (a separate file for each participant) Note: this data file stores data every ~17ms (=> records a line of data every ~17ms) build: The specific Inquisit version used (the 'build') that was run computer.platform: the platform the script was run on (win/mac/ios/android) date, time: date and time script was run subject, group: with the current subject/groupnumber session: with the current session id parameters- draggingMechanism: "finger" vs. "stylus" parameters- draggingHand: "dominant" vs. "nondominant" vs. "left" vs. "right" playAreaHeightMM: the width of the play area in mm playAreaWidthMM: the height of the play area in mm squareHeightMM: the size of the square in mm distanceWallToWallMM: the distance between left (top) and right (bottom) wall (from center of wall to center of wall) in mm wallWidthMM: the length of the short side of the wall in mm wallHeightMM: the length of the long side of the wall in mm display.canvasHeight: the height of the active canvas ('playarea') in pixels display.canvasWidth: the width of the active canvas ('playarea') in pixels pxPerMM: the conversion factor to convert pixel data into mm-results for the current monitor (Note: the higher resolution of the current monitor the more pixels cover the same absolute screen distance) blockCode, blockNum: the name and number of the current block (built-in Inquisit variable) trialCode, trialNum: the name and number of the currently recorded trial (built-in Inquisit variable) Note: trialNum is a built-in Inquisit variable; it counts all trials run; even those that do not store data to the data file. trialCounter: counts the number of trials in a block one trial: one right to left AND left to right swipe (same for vertical movement) startComputing: 0 = the square has not been grabbed yet on the gray wall 1 = the square has been grabbed (though might still be on gray wall) and performance data should be collected from this point on //response (time) data// rsp: 1 = a down or up response was made 0 = no down or up response was made respStatus: the data file will note 'down' and 'up' responses down = participants moved the finger down to grab the square up-miss = the square was dropped somewhere on the canvas (but not on the target wall) up-partial-miss = the square was dropped partially on the target wall up-corrected = the square was correctly dropped on the target wall after previously making an a (partially) missed drop up = the square was correctly dropped on the target wall without any previous errors respRTMS: the response time for the current response Example: first 'down' response: by definition this response time is 0 as measurements start with grabbing the square for the first time on the gray wall all 'up' responses: RespRTMS will store the time it took from previously grabbing the square until dropping it at the current location 'down' (after error response): stores the time it took participant to grab the stimulus again after erroneously dropping it somewhere on the canvas Ideally, participants' records show one 'down' response, followed by an 'up' response. sumRespRTMS: sum of all RespRTMS in a block //pathway data// mouse.x: the current horizontal mousefinger coordinate mouse.y: the current vertical mousefinger coordinate lastDown: the critical down coordinate from the last down response horizontal dragging: the critical down coordinate is the x-coordinate vertical dragging: the critical down coordinate is the y-coordinate dstPX: calculates distance travelled from last 'down' response to current 'up' response horizontal dragging: the distance only takes into account horizontal coordinate changes vertical dragging: the distance only takes into account vertical coordinate changes hoOptimallineYPX: the optimal y-coordinate (= y-coordinate of the white square at start of each trial) (= if participant does not diverge from this coordinate, they use the fastest way from A to B) Note: this optimal line depends on where the square is dropped off. In this script it is NOT automatically the center line but is reset to the drop-off coordinates hoOptimalLineDivergencePX: the current divergence (in pixels) from the optimal y-coordinate (measured roughly every 16ms or so) (same for VO condition) ___________________________________________________________________________________________________________________ EXPERIMENTAL SET-UP ___________________________________________________________________________________________________________________ By default, this script runs 2 conditions (horizontal vs. vertical) with 1 repetition per condition: The order of 2 conditions is selected at random. Horizontal Orientation: * 6 blocks of 16 trials per block (16 left-to-right and 16 right-to-left trials; one trial is a successful left and right movement) * start direction: right-to-left There is a short break inbetween each block Vertical Orientation: * 6 blocks of 16 trials per block (16 top-to-bottom and 16 bottom-to-top trials; one trial is a successful up and down movement) => start direction: bottom-to-top There is a short break inbetween each block By default, participants receive optional performance feedback at script conclusion. Check section 'Editable Parameters' for parameters that control - circle diameter conditions to run - rotation (clockwise, counterclockwise) conditions to run - block/trial/repetition numbers - performance feedback settings - automated task demo You can easily customize the experimental design by changing them. ***Task*** Square needs to grabbed from the gray wall and dropped off within the blue wall otherwise the wall doesn't change from blue to gray and the script still waits for a valid drop-off -The script records the time it takes participants to grap the square and how long the dragging movement takes -script continuously logs the divergence from the optimal dragging line in px and mm as well as all 'down' ('grab') and 'up' ('drop') responses made - in the streaming data file, the responses (down-up are recorded) ___________________________________________________________________________________________________________________ STIMULI ___________________________________________________________________________________________________________________ provided by Millisecond Software - can be edited under section 'Editable Stimuli' ___________________________________________________________________________________________________________________ INSTRUCTIONS ___________________________________________________________________________________________________________________ provided by Millisecond Software - can be edited under section 'Editable Instructions' The instructions are based on the ones published in Beard (2020, p.58 - Appendix D) ___________________________________________________________________________________________________________________ EDITABLE CODE ___________________________________________________________________________________________________________________ check below for (relatively) easily editable parameters, stimuli, instructions etc. Keep in mind that you can use this script as a template and therefore always "mess" with the entire code to further customize your experiment. The parameters you can change are: / draggingMechanism = "finger" //choose from: "finger" vs. "stylus" //Note: if "finger", the index finger is assumed / draggingHand = "dominant" //choose from: "dominant", "non-dominant", "left", "right" / skipPerformanceFeedback = false //true: participants won't receive any performance feedback at the end //false: participants will receive performance feedback at the end / runDemo = true //true: the script runs an demonstration of the task //false: the script skips the demonstration (e.g. if task administrator wants to demonstrate themselves) //design parameters: / runHorizontalOrientation = true //true = run the task in the Horizontal Orientation //false = do not run the task in the Horizontal Orientation / runVerticalOrientation = true //true = run the task in the Vertical Orientation //false = do not run the task in the Vertical Orientation / numberOfBlocksPerCondition = 6 //number of blocks per orientation condition (default: 6); Beard (2020) report 12 blocks per condition / trialsPerBlock = 16 //one trial = one left and right movement (default: 16) / repetitionsPerCondition = 1 //the number of repetitions for each condition //in this script: conditions repeat once they have all run //Example for a condition: 'horizontal orientation' //color parameter / canvasColor = lightGray //Display color of the actively used portion of the screen (the 'canvas') //Note: if set to a color other than the screenColor, you can distinguish the active canvas //from the inactive portion of the screen / screenColor = lightGray //Color of the screen not used by the canvas ('inactive screen') / defaultTextColor = black //Default color of text items //CANVAS SIZING PARAMETERS //sizing Parameters in RELATIVE measurements relative to CANVAS HEIGHT //NOTE: to run the script with ABSOLUTE screen measurements, go to 'defaults' and set //canvasSize to absolute measurements / squareHeightPct = 10.5% //the proportional size of the square to canvas HEIGHT / distanceWallTowallPct = 86% //the proportional distance from one wall to the other wall relative to canvas HEIGHT / wallShortsidePct = 17% //the length of the short side of the wall relative to canvas HEIGHT / wallLongsidePct = 93% //the length of the long side of the wall relative to canvas HEIGHT