This is the only participant with negative latencies out of a sample of approximately 300. The study was completed anonymously, so we don't have a way of contacting the participant now. I've not seen the problem reproduced.
Ultimately, disregarding the latency data from this participant isn't terribly problematic (the task is not RT-based), but I would like to ensure that the problem doesn't lie with the script itself, i.e., can the latencies for the other participants be trusted, presuming they are non-negative?
#1: I don't see any particular problem with the script, and it certainly does not give me any negative latencies when run. Apparently that's also true for the vast majority of your participants, which should be comforting. Note: Since you have logged elapsedtime, you can always check it for inconsistencies; it should increase monotonically, as already noted. If it does not, something's fishy. Also, the difference between elapsedtime on two consecutive trials ought to be at least equal to (and usually slightly greater than) the logged latency. I'd be very surprised if you found any such inconsistencies in your other participants' data sets; a quick way to check is sketched below.
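To make that check concrete, here is a minimal sketch in Python. It assumes a tab-delimited data file with columns named subject, elapsedtime, and latency; those column names and the file name (mystudy_raw.iqdat) are placeholders and may need adjusting to match your actual data file:

```python
import csv

def check_latencies(path):
    """Flag negative latencies and elapsedtime inconsistencies in a data file."""
    with open(path, newline="") as f:
        rows = csv.DictReader(f, delimiter="\t")
        prev = {}  # last elapsedtime seen, per subject
        for i, row in enumerate(rows, start=2):  # line 1 is the header
            try:
                elapsed = int(row["elapsedtime"])
                latency = int(row["latency"])
            except (KeyError, ValueError):
                continue  # skip rows with missing or non-numeric values
            if latency < 0:
                print(f"line {i}: negative latency ({latency} ms)")
            subject = row.get("subject", "")
            if subject in prev:
                delta = elapsed - prev[subject]
                if delta < 0:
                    # elapsedtime should increase monotonically within a subject
                    print(f"line {i}: elapsedtime decreased by {-delta} ms")
                elif delta < latency:
                    # the gap between consecutive trials should be at least
                    # the latency logged on the later trial
                    print(f"line {i}: gap ({delta} ms) smaller than latency ({latency} ms)")
            prev[subject] = elapsed

check_latencies("mystudy_raw.iqdat")  # placeholder file name
```

If this prints nothing for a given data file, the elapsedtime/latency relationships described above all hold for that file.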
#2: As for the validity of the remaining latency data, one would first have to debate the (somewhat philosophical) point of what "valid" actually means, or can mean, in the given context. When you conduct research using machines you have no control over, you can never fully know their "true" measurement properties. That is, different machines may, in theory, have certain hardware and/or software configurations that result in different forms of measurement bias or error. Inquisit tries to do the best it can given the environment it's running in, but unknowns remain (to quantify or eliminate those unknowns, one would have to hook up external measurement hardware to every individual machine and profile its properties in detail). That said, barring any other obvious issues (such as negative latencies), the data are most likely fine, or, if you prefer, "valid".
Hope this helps.