Millisecond Forums

Web Participant Log Questions

By jbfleming - 5/13/2014

We have been running our study for a few weeks now, and I wanted to get a better understanding of what is going on in the web participant log so we can understand why and how we are losing the participants we are losing.

The first thing that sticks out to me is that the summary says only one participant has completed the study, but we have data from over 50 who did. What increments this number? It would be strange even if it were just far too low, but that it is stuck at one seems stranger still.

Second, we've lost 10 participants due to errors, most of which look like failed Direct2D calls. Here are some samples:
Direct2D Error: 0x88982f80 The pixel format is not supported. Line 259, File win\Direct2DGraphics.cpp
Direct2D Error: 0x8899000c A presentation error has occurred that may be recoverable. The caller needs to re-create the render target then attempt to render the frame again. Line 765, File win\Direct2DGraphics.cpp
DXGI Error: 0x887a0002 Unknown error 0x-2005270526lX Line 454, File win\Direct2DGraphics.cpp

When I traced one of these errors back to the script that crashed, it was a portion of the study that just has an open-ended text box in it, nothing fancy. (It does use /isvalidresponse and latency to enforce staying on the page for at least 30 seconds, but that isn't a graphics call.)

We also have 16 who "failed to launch." Can someone tell me what condition is counted as failed to launch? Is it user error?

85 of 95 participants who consented look like they got as far as the launch page, but due to all of these various failures/dropouts, we've lost about 30 of them. We just want to understand what the conditions are that would lead to this beyond simple drop-out. We need a good understanding so we can do further analysis and see if those who fail to launch or drop out share any characteristics.


By Dave - 5/13/2014

> The first thing that sticks out to me is the summary says only one participant has completed the study [...]

This could be for a number of reasons. Most likely, things changed around (scripts were added, removed, or renamed), or different studies were registered at different times under the same name, thus giving the impression that people haven't completed what was *initially* specified.

As a general recommendation, it's probably best to start from a clean slate once you've got your procedure fully worked out and tested. I.e., take down any previous version(s) used for testing etc. and upload the final product under a new, distinct name.

Re. the DirectX errors, it's hard to say with absolute certainty without knowing more about the systems in question, the exact circumstances, and the scripts involved. In general, the errors you provided as examples indicate a loss of the "drawing surface", which can be caused by a number of factors, such as interference from another application, a wonky dual-graphics-card setup, switching or disconnecting a monitor mid-task, or a more fundamental driver issue.

Re. "failed to launch", this, too, can mean a number of things, including some sort of technical failure (e.g. plugin was blocked from running by the browser) or indeed user intervention (participant started the launch process but then decided to quit).

Hope this helps.

By jbfleming - 5/16/2014

We are just concerned because, of the people who screen in and get to the initial study page, only 59% are finishing the study. Most fail to launch, some hit errors, and a handful bail out in the middle for some other reason.

While we are on the topic, a participant sent me the attached error text today. I haven't seen this one yet; is it platform-related as well? Can he try again from the same computer, or should he try another one?

By Dave - 5/16/2014

The error points to an issue with the script. You should check the /stimulusframes or -times attributes of the <trial> elements it mentions. One of the frames there is misspecified or being redefined.

By jbfleming - 5/16/2014

The file has remained unchanged and hasn't caused this error for anyone else. I will check it, but why would this error only happen for one participant?

By jbfleming - 5/16/2014

Here's the block in question:

<trial ec_positive_gay_trials_consonants>
/ pretrialpause = 300
/ validresponse = ("f", "j")
/ correctresponse = ("j")
/ stimulustimes = [0=ec_positive_gay_mask; 500=noreplace(ec_positive_gay_stimulus_gay);
                  517=noreplace(ec_positive_gay_stimulus_positive); 534=noreplace(ec_positive_gay_test_consonant_strings)]
</trial>

By Dave - 5/16/2014

He might be running a system with an odd, extremely slow refresh rate. Consider this:

/ stimulustimes = [0=somestim; 10=someotherstim]

The above will work on a display running at a 100Hz refresh rate. Try it on a system running at 50Hz and you'll get an error: a 10ms offset isn't achievable (the frame duration is 20ms), so the same frame effectively gets redefined. This is what the error message points to (here, event 22 in the frame sequence).

By Admin - 5/16/2014

The reason this might show up sporadically is that it depends on the vertical refresh rate of the participant's monitor. On displays running at frequencies other than 60 Hz, two of the stimulus times might be rounded to the same vertical retrace frame, which would produce this message. It should be relatively uncommon, given that most monitors run at 60 Hz or fairly close to it.

By Dave - 5/16/2014

This is exactly what I laid out in my previous response:

/ stimulustimes = [0=ec_positive_gay_mask; 500=noreplace(ec_positive_gay_stimulus_gay);
                  517=noreplace(ec_positive_gay_stimulus_positive); 534=noreplace(ec_positive_gay_test_consonant_strings)]

You need a minimum of 60Hz to achieve those timings (frame duration ~16.67ms). It won't work at 50Hz or below.
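
For illustration, the rounding can be sketched in a few lines of Python. (This assumes onsets snap down to the start of the nearest retrace frame; Inquisit's exact scheduling may differ, so treat this as a model of the effect, not the implementation.)

```python
import math

def frames(stim_times_ms, refresh_hz):
    """Map each stimulus onset time (ms) to the vertical-retrace frame
    it would land on, assuming onsets snap down to the frame start."""
    return [math.floor(t * refresh_hz / 1000) for t in stim_times_ms]

times = [0, 500, 517, 534]  # onsets from the trial above

print(frames(times, 60))  # [0, 30, 31, 32] -> all distinct, fine
print(frames(times, 50))  # [0, 25, 25, 26] -> 500 and 517 collide
```

At 50Hz the 17ms spacing is shorter than one 20ms frame, so two events land on the same frame and the script errors out, exactly as described above.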

By seandr - 5/16/2014

Yes, it looks like we cross-posted.

By Dave - 5/16/2014

Yes, but I wasn't referring to your response (I hadn't even seen it yet) :-)

By jbfleming - 5/16/2014

Okay, yeah, my understanding was that if I used milliseconds, they would get rounded to something that worked on that system. It sounds like that is indeed the case, but two stimulus times can get rounded to the same frame, causing the problem.

I think considering that this has only popped up once, I am not going to worry about it.


By seandr - 5/19/2014

FYI - I uploaded a fix to the activity reports on Friday that corrects the count of participants who have completed the test. The numbers should be accurate now.

By jbfleming - 5/20/2014

The participant log does appear to have much more accurate counts now. Thank you!