Millisecond Forums

Missing Data

https://forums.millisecond.com/Topic14854.aspx

By nashby - 11/3/2014

So I have been running a study on MTurk and, while the normal problems occur (missing data, participants can't get it to run, or it breaks halfway through), I am having a huge problem with data not being uploaded/saved to Inquisit's server, so when I go to retrieve the data it's just not there. Has anyone had this problem and know of any fixes? I am wondering where the file is saved on the person's computer before the upload; maybe it's just sitting there, and if they searched for it they could email it to me? This is a huge problem, of course, as I am either having to pay for work that is missing or getting a ton of complaints and a very bad reputation for not paying for the work.
By Dave - 11/3/2014

Once a web experiment finishes, the Inquisit Web runtime will attempt to contact the server and upload the data. If it cannot contact the server for some reason (e.g., because a local firewall on the participant's machine blocked communication or the internet connection went down entirely), it will prompt the participant to save the file locally and contact the researcher. If a participant declines to save the file, the data is lost.
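The finish-time behavior described above (try the upload, then fall back to a local save prompt) can be sketched as a toy model. This is NOT Inquisit's actual code; every name here is hypothetical and purely illustrative:

```python
# Illustrative sketch of the upload/fallback behavior described above.
# Not Inquisit's actual implementation; all names are hypothetical.

def finish_experiment(data, upload, ask_participant_to_save, save_locally):
    """Try to send data to the server; otherwise offer a local save."""
    try:
        upload(data)                   # normal case: data reaches the server
        return "uploaded"
    except ConnectionError:            # firewall block, dropped connection, etc.
        if ask_participant_to_save():  # participant accepts the save prompt
            save_locally(data)
            return "saved locally"
        return "lost"                  # participant declines: data is gone
```

The third outcome is the important one for this thread: if the participant declines the prompt (or never sees it), there is nothing left to recover.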

I unfortunately cannot tell you why some (many?) of your participants may be experiencing problems without you providing a link to the respective launch page. In general, you should make sure the launch page is using the latest Inquisit Web version and that the script is properly activated.
By nashby - 11/3/2014

Yes, I am doing all of that, and they are not being prompted to save the file locally; it just ends without showing the end page, and that's it. It works for maybe 70% of people and fails for the other 30%.

Here is a link to the study.

http://research.millisecond.com/nashby/Bundled.web
By Dave - 11/4/2014

Thanks for the link. I did a couple of quick test runs and it's working as expected for me. It appears to properly save data and redirect to the finish page *if* a participant fully completes the procedure.

It will not do so if a given participant triggers one of your various script.abort() conditions, which is also the expected behavior.

It would seem plausible that the above is the case for at least *some* of your participants reporting an apparent breakdown / not being redirected. As for the others experiencing issues, if you have any additional details (error messages and the like), please don't hesitate to share them.
By nashby - 11/4/2014

Actually, it does upload the data on script.abort() as well; it just doesn't show the finish page.

So it's some other problem, and it's costing me dearly. I don't have any other details to go on; people report using a variety of browsers, and many report that Inquisit has worked for them in the past. This is really frustrating.
By Dave - 11/4/2014

Hmm, offhand I have no idea what other problem that may be. In case you have participant tracking enabled, the logs accessible via your account at https://www.millisecond.com/myaccount/logs.aspx may provide some hints.
By nashby - 11/4/2014

Thanks, I had a look; only one error is reported, so I don't think that's going to shed much light. At this point I would simply say: if you are going to run a study online, don't use Inquisit, because it doesn't work so well and will just get you in trouble :( Back to using Java, I guess.
By seandr - 11/4/2014

I just tried your script, and I was unable to proceed past the following trial:


<trial genintro>
/ stimulusframes = [1 = intro2, proceed2, box]
/ validresponse = (proceed, box)
/ recorddata = false
/ ontrialend = [if(trial.genintro.response == "proceed"){values.kickem = 1}]
</trial>


The problem is that "proceed" is listed as a valid response, but it isn't actually presented anywhere on the screen, so there's no way to actually proceed. Instead, the trial presents a stimulus called "proceed2". I'm guessing your participants click about 10 times in vain on the "proceed2" stimulus, and then bail out by hitting Ctrl+Alt+Delete.

You'll likely see higher completion rates if you fix this problem. 

-Sean

By nashby - 11/4/2014

Thanks for catching this, but that is actually an attention check. If they click "proceed" they fail and it kicks them out anyway, so if they Ctrl-quit out, that would be for the best.

The problem of the data not uploading still remains, since these are people who did the whole experiment and then it didn't upload.
By seandr - 11/4/2014

Ok, doing some more digging here.

I'm counting data files, and what I'm seeing is 96 data files in your account right now. Another 18 data files were deleted, one on 10/28 and the others on 10/30, so the cumulative total is 114. The log for the experiment reports there should be 112 - not sure why the discrepancy, but possibly the log was cleared after 2 test runs, or there were 2 duplicate uploads (which can happen if the server returns an error after the first attempt, so Inquisit tries again, and then both attempts succeed). In any case, all of the data files that made it to the server seem to be accounted for.
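The duplicate-upload scenario mentioned above (the server stores the file, its success response is lost, and the client retries) can be modeled in a few lines. This is a toy sketch, not Inquisit's or the server's actual code:

```python
# Toy model of the duplicate-upload scenario described above: the server
# stores the file, but its acknowledgment is lost, so the client retries.
# Not Inquisit's actual code; purely illustrative.

class FlakyServer:
    def __init__(self, fail_responses=1):
        self.stored = []               # files the server actually keeps
        self.fail_responses = fail_responses

    def receive(self, datafile):
        self.stored.append(datafile)   # the upload itself always succeeds...
        if self.fail_responses > 0:    # ...but the acknowledgment gets lost
            self.fail_responses -= 1
            raise ConnectionError("response lost")

def upload_with_retry(server, datafile, attempts=2):
    for _ in range(attempts):
        try:
            server.receive(datafile)
            return True                # client saw a success acknowledgment
        except ConnectionError:
            continue                   # client assumes failure and retries
    return False
```

Both attempts "succeed" on the server side, so the same file is stored twice even though the client only ever saw one success, which is how a file count can exceed the log's count.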

What makes you think there are additional data files that didn't make it to the server?

Regarding the attention check, it's clever, but there's no way to differentiate between:
1) someone who misses or is confused by the instructions, repeatedly clicks the "Let's get started" link, assumes Inquisit has hung when nothing happens, and angrily shuts down Inquisit using Ctrl+Alt+Delete; and
2) someone who encountered a legitimate bug.

Also note that every participant who misses the instructions is going to blame you and/or Inquisit for their inability to proceed, which hurts both your reputation and Inquisit's. If you're going to use this kind of check, you should allow them to respond with either the link or the invisible box, and show a message that says "Sorry, you weren't reading the instructions; the experiment will now end" if they click the wrong one. That would also give you a record in the data file that they did, in fact, click the wrong link. It looks like 18 people dropped out after successfully starting the experiment - I'm guessing most or all of them got caught in your trap without knowing it.

If you have any additional info regarding data files that didn't get sent to the server, let me know. I'm looking into the missing finish page entries in the log now...

-Sean
By nashby - 11/4/2014

Well, there are quite a few people who are emailing me saying they completed the task; they know all the details of the task, yet there is no data file for them on the server. If it were just one or two I would say it's a scam, but it's actually a lot of them.

The attention check goes on to a next page that already does as you suggest (e.g., "You have failed to pay attention, so you cannot take part in this task"). No one has complained about this, and their files still show up if they Ctrl-quit or are kicked out.


By seandr - 11/4/2014

Just to be clear, unless you've updated the script, the only choice is to click the invisible rectangle. You'll need to list "proceed2" as a valid response for clicks on it to be recognized.

I'll look further at the logs to see if I can find more data to make sense of the feedback you're getting from respondents.

-Sean
By nashby - 11/4/2014

Yes, I fixed it, but again, that has nothing to do with the issue I am having here with the data not being uploaded.
By seandr - 11/4/2014

I'm slogging through the server's raw hit logs, and it appears some of the participants also ran your "T2" study, which captures the MTurk ID in the link. For those participants, I can use the IP address to examine their hits on the "Bundled" study and see what connection they made with the server. Would it be possible to share some or all of the MTurk IDs of respondents who say they completed the whole study but for whom no data can be found? It would also be helpful to know on which dates they ran the test.

If you'd rather not post them here, you can email them to me at seandr at millisecond dot com.

Thanks,
Sean
By seandr - 11/4/2014

P.S. It appears that T2 is registered with Inquisit 4.0.6 and Bundled with 4.0.7. You could save workers the trouble of downloading the Inquisit control twice by setting both to the same version.

-Sean
By nashby - 11/4/2014

The T2 study is done now, so no more people will be taking it. I didn't have any issues there; at least, none were reported to me, though that one wasn't run through my MTurk account, and I think they just pay everyone regardless, so that might be why there were no complaints.

Here is a subset of the workers who I am most confident actually did the study but whose data I can't seem to find. In terms of dates, it's from Nov 1st to today.

A1FTR8DJIKGD8H
A21A93NS7IJNZM
AQLO2RDFLX3DN
A34L4JED8UAIXC
A38Z4HGE16SX10
A19ED8FYO6CA36
A2JZQHB8N6LS7T
By seandr - 11/4/2014

Ok, of those, only A34L4JED8UAIXC also ran T2. For that participant, Inquisit was for some reason unable to reach the server to verify the license. In that case, a message appears indicating the failure and giving the participant the option of running anyway even though data won't be saved, which they seem to have done, since they did eventually arrive at the finish page. That would explain how they could run the whole test without producing a data file.

I've identified 2 other cases where the license check similarly failed, although I can't map them back to the mturk IDs, unless you happen to know their subject id or ip address (not sure how you would). There may be other cases that I can't identify because they didn't reach the finish page.

I'm still investigating why the license check failed to reach the server. In the meantime, you could try setting the version back to 4.0.6 for this script, although that's just a shot in the dark, as I'm not aware of any recent changes to the code that could explain this.

P.S. As for the finish page timestamps missing from the logs, that appears to be a bug in the report. The hits are there in the underlying data tables. 


-Sean

By nashby - 11/4/2014

Thanks for looking into this.

So basically something pops up and they just ignore it and click through blindly. I think that part is going to be hard to avoid, but I can at least put up a warning to try to prevent it. Do you think it would work on the next pass if they tried installing and running again? Alternatively, is there anything I could add to the script to check whether this is the case and kill the session if it is?
By nashby - 11/4/2014

Ah, and yes, a few reported that they got to the end but the finish page never popped up.
By seandr - 11/4/2014

I don't know if trying again would work, but you could instruct participants to try, especially if they have access to another computer. 

I'm planning to update 4.0.7 to remove the option of running the study anyway given its potential to cause this type of problem. That way, if the script registration fails, the participant doesn't waste their time and your money by completing the study. I'm kind of surprised this hasn't come up before.

The earliest I'd publish that update would be tomorrow (it's 11pm Seattle time right now). 

I'm also trying to find a computer that reproduces this issue. Once we can reproduce it, we should be able to fix it.

Regards,
Sean
By nashby - 11/4/2014

Okay that sounds great.
By seandr - 11/5/2014

Just updated 4.0.7. Now, if the license manager is unreachable, the script won't run. In addition, the error is logged, so we'll have a clue about the cause.

Thanks for your help in troubleshooting this. Hopefully the remainder of your study will go more smoothly, but let us know if you continue to encounter this issue. 

-Sean
By seandr - 11/6/2014

I think I've solved the mystery. Not too long ago we tightened up server security by disabling support for SSL 2 and SSL 3 for encrypted exchanges. The minimum protocol the server now supports is TLS 1.0. Some machines (mostly Windows XP) may not have TLS 1.0 enabled by default. If it isn't, the license check is rejected by the server and never logged. Data upload is also rejected, since it relies on HTTPS as well.
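The failure mode here is plain protocol-version negotiation: client and server must share at least one enabled protocol version, or the handshake fails before anything reaches the server logs. A toy model of that negotiation (protocol names only, no real crypto):

```python
# Toy model of SSL/TLS version negotiation: the handshake settles on the
# highest protocol both sides have enabled, and fails outright if there is
# none. Illustrative only; real negotiation happens inside the handshake.

PREFERENCE = ["SSLv2", "SSLv3", "TLSv1.0", "TLSv1.1", "TLSv1.2"]

def negotiate(client_protocols, server_protocols):
    common = set(client_protocols) & set(server_protocols)
    if not common:
        return None                        # handshake fails; nothing is logged
    return max(common, key=PREFERENCE.index)

# Server after hardening: SSL 2/3 disabled, TLS 1.0 is the minimum.
server = {"TLSv1.0", "TLSv1.1", "TLSv1.2"}

# A default XP machine with TLS 1.0 left disabled cannot connect at all,
# while the same machine with TLS 1.0 switched on negotiates fine.
xp_without_tls = {"SSLv2", "SSLv3"}
xp_with_tls = {"SSLv2", "SSLv3", "TLSv1.0"}
```

This is why the affected machines leave no trace server-side: the connection never completes, so from the server's perspective the participant was never there.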

I'm not sure what's to be done about this beyond disabling the option to run when the registration check fails. Re-enabling SSL 3 makes me a little nervous.

-Sean