eyetracker element



sdeanda
Expert (1.2K reputation)

Group: Forum Members
Posts: 9, Visits: 31
Hi,

Where can I find the documentation for the "aoidurationthreshold" attribute of the eyetracker element? I see it used in the preferential looking example script, but I can't seem to find the documentation/syntax for the attribute. I'm trying to use the preferential looking script, but I want to record the total looking time to each image on each trial rather than having the trial time out after a threshold is reached. Any help is appreciated!

Thanks,

SD

Dave
Supreme Being (705K reputation)

Group: Administrators
Posts: 9.8K, Visits: 47K
/aoidurationthreshold determines how long a participant must gaze within a specific area of interest before a hit is scored, i.e., before that gaze counts as a response.

> I'm trying to use the preferential looking script but I want to record the total looking time to each image on each trial rather than
> having it time out after a threshold is reached.

The problem with this idea is: How is the given trial supposed to terminate, i.e., accept a gaze at an object as a response, if there is no threshold of any kind specified?

<trial puppypuppy>
/ ontrialbegin = [values.marker = (picture.puppyleft.currentindex * 100) + picture.puppyright.currentindex;]
/ ontrialbegin = [port.marker.setitem(values.marker, 1);]
/ stimulustimes = [1=puppyleft, puppyright, marker]
/ inputdevice = eyetracker
/ validresponse = (puppyleft, puppyright)

/ screencapture = true
/ draw = pen
/ showmousecursor = true
</trial>

sdeanda
Expert (1.2K reputation)

Group: Forum Members
Posts: 9, Visits: 31
Hi,

The trial duration is set to 2500 milliseconds. A correct look is marked when the participant looks longer at the target image than at the distractor image on a given trial.

-SD
Dave
Supreme Being (705K reputation)

Group: Administrators
Posts: 9.8K, Visits: 47K
Then -- unless I'm missing something -- you should simply be able to set /aoidurationthreshold to a value equal to or greater than /trialduration.

sdeanda
Expert (1.2K reputation)

Group: Forum Members
Posts: 9, Visits: 31
Sorry for being unclear!

For the preferential looking task, every trial lasts 2500 ms. I want to record the looking time to each of the two images (a target image and a distractor image) presented during this time window. A correct response is awarded when the looking time to the target > distractor. I thought that I might be able to use Inquisit for this task since it does seem to capture looking time to areas of interest.


Thanks!!
Dave
Supreme Being (705K reputation)

Group: Administrators
Posts: 9.8K, Visits: 47K
I don't think that's possible. You would have to derive the cumulative looking time for both target and distractor from the collected gaze data *after data collection is complete* and then score correctness based on that.
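Deriving cumulative looking time from raw gaze data could look roughly like the following post-processing step outside Inquisit. This is a minimal sketch in Python; the sample format (per-trial lists of (timestamp in ms, AOI label) pairs), the AOI labels "target"/"distractor", and the 500 ms minimum are illustrative assumptions, not Inquisit's actual export layout.

```python
# Sketch: post-hoc scoring of cumulative looking time per AOI.
# Assumes gaze samples for one trial are available as (timestamp_ms, aoi)
# tuples, where aoi is "target", "distractor", or None (off-screen).

def cumulative_looking_times(samples):
    """Sum the time spent in each AOI across consecutive samples."""
    totals = {}
    for (t0, aoi), (t1, _) in zip(samples, samples[1:]):
        if aoi is not None:
            totals[aoi] = totals.get(aoi, 0) + (t1 - t0)
    return totals

def score_trial(samples, min_total_ms=500):
    """Correct if the target was viewed longer than the distractor,
    provided total on-screen looking time reaches the minimum."""
    totals = cumulative_looking_times(samples)
    target = totals.get("target", 0)
    distractor = totals.get("distractor", 0)
    if target + distractor < min_total_ms:
        return None  # too little looking time to score this trial
    return target > distractor
```

For example, a trial with 1800 ms on the target, 300 ms on the distractor, and 400 ms off-screen would be scored correct; a trial with under 500 ms of total on-screen looking would be left unscored.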

Dave
Supreme Being (705K reputation)

Group: Administrators
Posts: 9.8K, Visits: 47K
I am wondering though, if something along the lines of

<eyetracker>
/ aoidurationthreshold = 1251
...
</eyetracker>

<trial mytrial>
/ stimulusframes = [1=target, distractor]
/ validresponse = (target, distractor)
/ correctresponse = (target)
/ inputdevice = eyetracker
/ trialduration = 2500
...
</trial>

would work...

sdeanda
Expert (1.2K reputation)

Group: Forum Members
Posts: 9, Visits: 31
I thought of the same thing, except that this assumes the participant will look at the screen for the entire trial length. Because we are working with children, they often look off screen. So, for example, someone might look at the screen for 2000 ms out of the total trial duration of 2500 ms. In this case, they could have looked at the target for 1100 ms and the distractor for 900 ms, and we would want to credit them for looking at the target longer than the distractor. There is a minimum looking time, of course, since we would not want gaze durations of less than 500 ms to be counted.

The /aoidurationthreshold attribute must be tracking gaze to each picture to evaluate whether a given threshold is reached; the question is whether I can record this.

Dave
Supreme Being (705K reputation)

Group: Administrators
Posts: 9.8K, Visits: 47K
> The /aoidurationthreshold attribute must be tracking gaze to each picture to evaluate whether a given threshold is reached, the
> question is whether I can record this.

Nope, not as far as I am aware at least. Calculating cumulative viewing time from the raw gaze data would be the way to go then.

seandr
Supreme Being (100K reputation)

Group: Administrators
Posts: 1.2K, Visits: 964
This might be possible. On each trial, present a transparent <shape> stimulus sized to 100% of the screen width and height along with the target and distractor, and include the shape stimulus in the /validresponse list:


<shape screen>
/ size = (100%, 100%)
/ color = transparent
</shape>

<trial test>
/ inputdevice = eyetracker
/ stimulusframes = [1=target, distractor, screen]
/ validresponse = (screen, target, distractor)
/ isvalidresponse = []
/ trialduration = 2500
</trial>


With each change in gaze point, /isvalidresponse is called. Within /isvalidresponse, you can check the trial.test.response property to determine whether the infant is looking at either stimulus or the screen. The code there could then note when this changes and tally up the total time for each stimulus. For example, if gaze transitions from the screen to the distractor, you'd store the onset time of the transition in a value, e.g., values.distractoronset = script.elapsedtime. If gaze transitions back to the screen or to the target, you'd tally the total time, e.g., values.distractorgazetime += (script.elapsedtime - values.distractoronset). Your code would need to handle the various possible transitions and make sure values are initialized properly, but it shouldn't be too complicated. Also, /isvalidresponse should always return false, or the trial will end.

One potential issue: if the gaze point transitions from the target directly off screen, that transition would be missed. Not sure how likely that is.
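The transition-tallying approach described above might be sketched along the following lines. This is untested; the value names are made up, and the exact expression syntax and whether trial.test.response can be compared against stimulus names this way should be verified against the Inquisit documentation.

<values>
/ lastgaze = ""
/ targetonset = 0
/ distractoronset = 0
/ targetgazetime = 0
/ distractorgazetime = 0
</values>

<trial test>
/ inputdevice = eyetracker
/ stimulusframes = [1=target, distractor, screen]
/ validresponse = (screen, target, distractor)
/ isvalidresponse = [
    if (trial.test.response == "target" && values.lastgaze != "target") {
        values.targetonset = script.elapsedtime;
    };
    if (trial.test.response != "target" && values.lastgaze == "target") {
        values.targetgazetime += script.elapsedtime - values.targetonset;
    };
    if (trial.test.response == "distractor" && values.lastgaze != "distractor") {
        values.distractoronset = script.elapsedtime;
    };
    if (trial.test.response != "distractor" && values.lastgaze == "distractor") {
        values.distractorgazetime += script.elapsedtime - values.distractoronset;
    };
    values.lastgaze = trial.test.response;
    false;
]
/ trialduration = 2500
</trial>

At the end of each trial, values.targetgazetime and values.distractorgazetime would then hold the cumulative looking times to be logged and compared.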

-Sean



Edited
5 Years Ago by seandr
Millisecond Forums

