Millisecond Forums

Gaze-contingently updating stimulus using SR Research Plugin

https://forums.millisecond.com/Topic21149.aspx

By adkinsty - 3/21/2017

I would like to create a task in which participants are presented with six words, such that only the word they are currently looking at is visible while the other five words are masked.

For example:

Not looking at any word:     ####    ####    ####    ####    ####    ####
Looking at word 1:              dart      ####    ####    ####    ####    ####
Looking at word 2:              ####    home   ####    ####    ####    ####
etc.

Is this possible to do using the eye tracker element and the SR Research plugin? If so, how might I implement this? I will be using an EyeLink eye tracker.

Thank you very much!!!

-Tyler

By Dave - 3/21/2017

Yes, that should be possible: you can use gaze data as response input much as you would use mouse input, cf. https://www.millisecond.com/support/docs/v5/html/howto/srresearch.htm

I.e., you'd have a <trial> that presents six objects (one for each masked word) and then /branch to a separate trial with the respective word unmasked, depending on which object was looked at.
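
To make the structure a bit more concrete, here is a rough, untested sketch of what that could look like in script. Treat the eye-tracking specifics as assumptions to verify against the how-to linked above (in particular / plugin = "srresearch", / inputdevice = eyetracker, and how the looked-at object is reported in the trial's response property); only two of the six word positions are shown.

// Eye tracker via the SR Research plugin (check the how-to above for the exact setup)
<eyetracker>
/ plugin = "srresearch"
</eyetracker>

// Masked and unmasked versions of each word; words 3-6 would be defined the same way
<text mask1>
/ items = ("####")
/ position = (20%, 50%)
</text>

<text word1>
/ items = ("dart")
/ position = (20%, 50%)
</text>

<text mask2>
/ items = ("####")
/ position = (35%, 50%)
</text>

<text word2>
/ items = ("home")
/ position = (35%, 50%)
</text>

// Everything masked; gaze serves as the response input (assumed attribute value),
// and the trial branches depending on which object was looked at
<trial allmasked>
/ inputdevice = eyetracker
/ stimulusframes = [1 = mask1, mask2]
/ validresponse = (mask1, mask2)
/ branch = [if (trial.allmasked.response == "mask1") trial.word1visible]
/ branch = [if (trial.allmasked.response == "mask2") trial.word2visible]
</trial>

// Word 1 visible, word 2 still masked; looking at the other location switches trials
<trial word1visible>
/ inputdevice = eyetracker
/ stimulusframes = [1 = word1, mask2]
/ validresponse = (mask2)
/ branch = [if (trial.word1visible.response == "mask2") trial.word2visible]
</trial>

// Word 2 visible, word 1 masked again
<trial word2visible>
/ inputdevice = eyetracker
/ stimulusframes = [1 = mask1, word2]
/ validresponse = (mask1)
/ branch = [if (trial.word2visible.response == "mask1") trial.word1visible]
</trial>

<block gazetask>
/ trials = [1 = allmasked]
</block>

You would extend the same pattern to the remaining four positions and add whatever timeout and data-recording logic you need for the task.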