Millisecond Forums

TOBII areas of interest in Inquisit

https://forums.millisecond.com/Topic18482.aspx

By jrichmond - 3/3/2016

Hi everyone, 
I am switching from running my Tobii studies in E-Prime to running them in Inquisit, and I am wondering whether it is possible to have the experiment script define areas of interest so that the output includes not just x and y coordinates but also which AOI they correspond to. In E-Prime you can mark objects as AOIs, and there are UserDefined and AOI columns in the output that tell you what part of the trial is playing and which AOI the participant was looking at.

How would I make Inquisit do that?

thanks
Jenny
By Dave - 3/3/2016

The preferential looking example at https://www.millisecond.com/download/library/Tobii/ may be helpful here. An AOI is simply an on-screen object such as a <shape>, <text>, or <picture> element that is defined as a /validresponse in a <trial>:

<trial puppypuppy>
/ ontrialbegin = [
    values.marker = (picture.puppyleft.currentindex * 100) + picture.puppyright.currentindex;
    port.marker.setitem(values.marker, 1);
]
/ stimulustimes = [1=puppyleft, puppyright, marker]
/ inputdevice = eyetracker
/ validresponse = (puppyleft, puppyright)
/ screencapture = true
/ draw = pen
/ showmousecursor = true
</trial>
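
For context, the snippet above assumes supporting elements roughly like the following (the item file names and positions are placeholders; the library script defines the actual ones, including the <port marker> element that writes values.marker into the gaze data stream):

<picture puppyleft>
/ items = ("puppyleft_01.jpg", "puppyleft_02.jpg")
/ position = (25%, 50%)
</picture>

<picture puppyright>
/ items = ("puppyright_01.jpg", "puppyright_02.jpg")
/ position = (75%, 50%)
</picture>

<values>
/ marker = 0
</values>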

The threshold to score a "hit" can be defined via the <eyetracker> element's /aoidurationthreshold attribute:

<eyetracker>
/ plugin = "tobii"
/ aoidurationthreshold = 2000
</eyetracker>

Here, the participant must gaze within an area of interest for 2000ms before a hit is scored.

The AOI that was "hit" would be recorded in the standard data file's response column, just like a mouse-click on some on-screen object would be.
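
If you want to control which columns end up in that data file, you can list them explicitly via a <data> element; the selection below is just an illustration:

<data>
/ columns = (date time subject blocknum trialnum trialcode response latency values.marker)
</data>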

In addition, you could insert a marker identifying the hit AOI into the gaze point log by sending <port> stimuli via the <trial>'s /responsemessage attributes.
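
For example, something along these lines (a sketch only; leftaoimarker and rightaoimarker are hypothetical <port> elements set up the same way as the sample's marker element, but carrying fixed AOI codes such as 1 and 2):

<trial puppypuppy>
/ ontrialbegin = [
    values.marker = (picture.puppyleft.currentindex * 100) + picture.puppyright.currentindex;
    port.marker.setitem(values.marker, 1);
]
/ stimulustimes = [1=puppyleft, puppyright, marker]
/ inputdevice = eyetracker
/ validresponse = (puppyleft, puppyright)
/ responsemessage = (puppyleft, leftaoimarker, 0)
/ responsemessage = (puppyright, rightaoimarker, 0)
</trial>

Each /responsemessage takes the form (response, message stimulus, delay in ms), so the corresponding marker is written as soon as the respective AOI registers a hit.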

Hope this helps.
By coglab - 7/12/2017

Hello, all! I'm currently in the process of building a pilot experiment using Inquisit 5 and a Tobii eye tracker, and I'm having issues with creating areas of interest within the program. Specifically, the experiment I'm creating consists of a number of trials, each containing 3 areas of interest. I'm having difficulty figuring out a way to create a column in the eye tracking data file that indicates which area of interest each individual gaze point is located in. 

I've been referencing the existing Tobii eye tracking scripts available online as well as the information provided in this thread, but I've been unable to successfully create areas of interest. Using the template provided in this thread has not resulted in successful data collection. Does anyone have other references or guides that would be helpful in figuring this out? Is there a more straightforward way to create areas of interest within Inquisit using the Tobii plugin? Thank you!
By Dave - 7/12/2017


What model of Tobii eyetracker are you using?
By coglab - 7/12/2017


I am using a Tobii X2-60!
By Dave - 7/12/2017


Thanks. There is a problem with the current release of the plugin (an update is coming soon) with respect to fixation detection when tracking at high frequencies, specifically at 300Hz. This applies only to very recent devices like the Tobii Pro TX300.

There should be no problem with an X2-60, which tracks at 60Hz, so I'm not sure what specific issue you're experiencing.
- Does the device calibrate properly when used with Inquisit?
- Does the Preferential Looking Task script (available at https://www.millisecond.com/download/library/tobii/ ) work, i.e. does the script move on when you fixate on one of the on-screen images, and is the fixated object properly logged as the response in the data file?
- Generally speaking, there is no special magic to creating areas of interest. You set up your stimulus elements (<picture>, <text>, and/or <shape>), position them according to your needs, have a <trial> display them via its /stimulusframes or /stimulustimes attribute, and define those "area of interest" objects as the trial's /validresponse:

<trial puppypuppy>
/ ontrialbegin = [
    values.marker = (picture.puppyleft.currentindex * 100) + picture.puppyright.currentindex;
    port.marker.setitem(values.marker, 1);
]
/ stimulustimes= [1=puppyleft, puppyright, marker]
/ inputdevice = eyetracker
/ validresponse = (puppyleft, puppyright)
/ screencapture = true
/ draw = pen
/ showmousecursor = true
</trial>

In essence, this works the same way as setting up clickable objects when working with a mouse as the input device. I.e., if you were to change the above to

<trial puppypuppy>
/ ontrialbegin = [
    values.marker = (picture.puppyleft.currentindex * 100) + picture.puppyright.currentindex;
    port.marker.setitem(values.marker, 1);
]
/ stimulustimes= [1=puppyleft, puppyright, marker]
/ inputdevice = mouse
/ validresponse = (puppyleft, puppyright)
/ screencapture = true
/ draw = pen
/ showmousecursor = true
</trial>

the trial would accept a click on either the left or right image as its response (instead of a fixation).
By coglab - 7/12/2017


Thank you for your response! I used that script for reference and was able to do what you described above, but unfortunately, the experiment I'm building requires that a response be recorded for each individual gaze point, not each trial.

In the data file, there is indeed a column titled "response" in which the registered response for each trial is noted. However, I'm trying to add a response column to the eye tracking data file so that, for each gaze point recorded, there is an entry noting which of the shapes (which I've specified as valid responses) the participant was looking at. Each trial is 30 seconds in duration, and ultimately we are trying to calculate what proportion of that 30 seconds was spent looking at each shape.

Is there a way to do this using <port> stimuli?
By Dave - 7/13/2017


Ah, I see. I wasn't clear on what exactly you were planning to do. There are two options:
(1) You should be able to calculate the cumulative viewing time for each object with the approach sketched out at https://www.millisecond.com/forums/FindPost14474.aspx (a rough sketch follows below). The respective variables can then be logged to the regular data file (not the gaze file).
(2) You can also map gaze points to on-screen objects after the fact in your preferred analysis application. To that end, you would probably want to have Inquisit insert a marker into the gaze data file signifying trial onset, as well as have the trials produce screen captures (set / screencapture = true), so you can produce heat-map-like overlays of gaze points / durations over the objects as presented on screen for each trial.
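
Regarding (1), the general idea is to let a short AOI trial run in a loop for the full 30 seconds and accumulate the looking time per object, roughly like this (shapea, shapeb, shapec, the values names, and the 30000 ms duration are placeholders you would adapt to your script):

<values>
/ looktime_a = 0
/ looktime_b = 0
/ looktime_c = 0
/ trialonset = 0
</values>

<trial aoiloop>
/ ontrialbegin = [
    if (values.trialonset == 0) values.trialonset = script.elapsedtime;
]
/ stimulusframes = [1 = shapea, shapeb, shapec]
/ inputdevice = eyetracker
/ validresponse = (shapea, shapeb, shapec)
/ ontrialend = [
    if (trial.aoiloop.response == "shapea") values.looktime_a = values.looktime_a + trial.aoiloop.latency;
    if (trial.aoiloop.response == "shapeb") values.looktime_b = values.looktime_b + trial.aoiloop.latency;
    if (trial.aoiloop.response == "shapec") values.looktime_c = values.looktime_c + trial.aoiloop.latency;
]
/ branch = [
    if (script.elapsedtime - values.trialonset < 30000) trial.aoiloop;
]
</trial>

This is only a rough outline: in practice you would add a /timeout so the loop cannot stall when nothing is fixated, reset values.trialonset between trials, and treat the summed latencies as an approximation of looking time. The linked post spells out the precise bookkeeping, and the accumulated values can be logged via the <data> element's /columns.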