Batch File Crashing


jmwotw - Saturday, June 9, 2018

We've been doing some testing and figured out a few things.
  • The batch file seems to work if you remove the EMATHFR test from the batch code, which is the only test that uses a second monitor. Currently, we're using this monitor.
  • If you run the EMATHFR code by itself, it also seems to work, but embedding it in the middle of the batch like it is now does not.
  • If you start the batch with EMATHFR (IDInfo is a one-pager that we don't remove because it's where we input the batch-level parameter, age), then it will run and so will the remaining tests.
Not sure if this gives you any clues, but I wanted to let you know what we've learned in case it helps. 

Thanks,
Jeff


Dave - Monday, June 11, 2018

Thanks, this is very helpful. Based on what you describe, the switch from single-monitor to dual-monitor in the middle of the batch is what throws things off (I'm not sure yet whether the problem originates in Inquisit itself or in the graphics card driver refusing to switch / returning an error).

A workaround might be to put a small script that uses both monitors at the very start of the batch -- that script doesn't need to do anything, take any input, or run for long (it can just time out after a second or two). I'll look into your files later on (the download is taking a while) and let you know what I find.

desiree80 - Monday, June 18, 2018

Hi Dave,
Would you happen to know anything about batch file discrepancies between different types of computers (if such a thing even exists)? At the moment our batch file runs through most of the files when we run it on our desktops. However, on our Surface Pros it crashes somewhere in the middle. Let me know if you have any ideas, thank you!

Dave - Monday, June 18, 2018

Assuming the scripts are those available under the Box link, the only thing I can think of would be resource exhaustion on the Surface Pros. The scripts involve lots and lots of files, so I can imagine that the Surface devices *might*, for example, run out of memory somewhere along the way, whereas more powerful desktop machines can cope with the overall load. Can you give me a better sense of when / where the crashes occur? Is it always during a particular script, or even a particular trial or block in a script, or does it seem somewhat random, i.e. sometimes it happens sooner in script X, other times later in a different script Y? The latter would support the idea that the Surface devices can't quite cope with the combined load imposed by the scripts.

jmwotw - Monday, June 18, 2018

I think you're right, Dave. We were just about to post with some tests we've run. When we first posted, the script would run if only MATHFR were removed. We added a very small script at the end to collect the timing of the last test, and that was too much. So we then tried breaking up the batch file even further: roughly half of the tests in one batch, half in another, and MATHFR by itself. So far, this is the only way that it works. Am I understanding correctly that Inquisit needs memory for the entire series of tests, not just the one it is currently running? This is unfortunate, because one of the key reasons we purchased Inquisit 5 was the ability to pass a common, randomly generated code across all tests administered during a batch. Every time we break up the batch file, it becomes harder to merge a participant's individual tests back together when there's a problem with the subject ID.


Dave - Monday, June 18, 2018

While memory is *mostly* consumed by the currently running script, running several scripts via <batch> causes some additional overhead -- the <batch> script runs in its own thread and has to keep track of things (which script is currently running, is it over, which script is next, etc.), and the <batch> element also needs to preserve some state beyond that for the <batch> <parameters> and <values> used to pass information from script to script.

If you can forgo the <parameters> and <values>, you could get rid of the <batch> and its overhead and execute your scripts via a command line script instead -- that command line script could generate a random subject ID which would be identical across all scripts it executes, and thus allow for relatively easy data matching and merging of all tests as before. Since Inquisit would technically shut down briefly between scripts and would not need to keep <batch> state or do its housekeeping, the overall memory consumption should be lower.

https://www.millisecond.com/support/docs/v5/html/howto/howtocommandline.htm

https://www.millisecond.com/support/docs/v5/html/articles/batchscripts.htm

Might that be an option?

jmwotw - Wednesday, June 20, 2018

Thanks, Dave. We've looked into the strategies you've proposed, and although they solve the multiple-experiments-in-a-row problem, we just don't think they're feasible for us. This is primarily because our participants are young children and the people who administer the tests to these children vary widely in their comfort with technology. Therefore, using command line and batch capabilities just isn't an option.

Furthermore, we were really hoping that Inquisit 5's new batch file capabilities would solve a couple of issues for us. Most importantly, due to human error by our examiners, we end up with a sizable number of mis-entered subject IDs. This is bound to happen in research with many subjects, but when you have several duplicate subject IDs, it would be nice to have an additional ID that joins the various tests from a single assessment. We were hoping to add a randomly generated assessment ID, complementing the subject ID, that would persist across all of the scripts of a batch file. That way, even if someone mis-entered child #3's subject ID as 4, so that we had two assessments with subjectid=4, we could at least easily keep the multiple tests from each one separate. This would facilitate all of the data merging on the back end.

Also, I'm a little surprised at the overhead of the batch file. We're only running about 7-8 tests, and the parameters we're trying to pass across scripts are minimal, so it's hard for me to understand how it might take up a lot of memory. Given our testing results, it seems like the memory load increases roughly linearly with the number of scripts we add to the batch file and the number of files associated with each script, as if it's trying to load all of the files at one time instead of one at a time. At the moment, given the number of items we have, there doesn't seem to be a way to administer all of the scripts back-to-back. Instead we have to break up the batch files into 2-3 parts, which eliminates the benefit of an assessment ID joining scripts from the same assessment session.

I hope this makes sense.

Jeff



Dave - Wednesday, June 20, 2018

The <batch> overhead isn't huge, but when you have a system that's already operating near its resource limit (because, among other things, the scripts themselves are relatively demanding / resource-heavy), even small things can make a difference, such as adding one more script (cf. https://www.millisecond.com/forums/FindPost25111.aspx ). One possible strategy might be to make the scripts themselves a little less resource-heavy; for example, you could try using MP3s in conjunction with <video> elements instead of relatively large WAVs in conjunction with <sound> elements. Avoiding massive image rescaling can also save considerable amounts of memory, i.e. you could convert any images to a resolution near the size at which they will actually be displayed instead of relying on Inquisit to scale, say, a high-resolution image down to fit a relatively small space on the screen. I haven't reviewed all the images involved in your scripts closely enough to say whether there is much to be gained in this particular case, but at least the MP3 vs. WAV modification seems worthwhile and may well make a noticeable difference.
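
To give a concrete (if generic) picture of that swap -- the element and file names below are just placeholders, not taken from your scripts -- a sound stimulus defined like this:

<sound instructionaudio>
/ items = ("instructions.wav")
</sound>

would instead use an MP3 via a <video> element:

<video instructionaudio>
/ items = ("instructions.mp3")
</video>

Any <trial> that presents the stimulus (e.g. via /stimulusframes) should be able to stay as it is, since only the element definition changes.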

Dave - Wednesday, June 20, 2018

To add, I do think the issues and demands (a random assessment code independent of the subject ID as a data-matching fall-back) are solvable via a command line script, too. That command line script can be made relatively user-friendly and pretty, requiring essentially no more than a double-click to execute. It would then prompt the experimenter to enter a subject and group ID as usual. To generate the fall-back assessment code, the command line script could generate a random number (just like your <batch> does currently), write that to a small file, and that file could then be <include>d in all the actual scripts, i.e. the assessment code would be present in and consistent across all of the participant's data files.

I'll see if I can code up a small demo of the above a little later today and post it here to better illustrate what I mean.

Dave - Wednesday, June 20, 2018

So here's a small example. The CMD file could look along the following lines:

@echo off
rem Paths to the Inquisit executable and to the scripts that should be run
set INQUISITPATH="C:\Program Files\Millisecond Software\Inquisit 5\Inquisit.exe"
set SCRIPT_1_PATH="C:\Users\David\Desktop\example\iat\iat.iqx"
set SCRIPT_2_PATH="C:\Users\David\Desktop\example\spr\spr.iqx"

rem Prompt the experimenter for the subject and group IDs
set /p SID=Enter subject ID:
set /p GID=Enter group ID:

rem Generate a random assessment code
set /a CODE=%RANDOM%

rem Write the code to code.txt as an Inquisit <values> block
@echo ^<values^> > code.txt
@echo /code = %CODE% >> code.txt
@echo ^</values^> >> code.txt

rem Run the scripts one after the other, passing in the subject and group IDs
start "" /wait %INQUISITPATH% %SCRIPT_1_PATH% -s %SID% -g %GID%
start "" /wait %INQUISITPATH% %SCRIPT_2_PATH% -s %SID% -g %GID%


At the top, the path to the Inquisit executable is defined, followed by the paths to the respective scripts that are supposed to be executed (here just two, for the sake of simplicity).

set INQUISITPATH="C:\Program Files\Millisecond Software\Inquisit 5\Inquisit.exe"
set SCRIPT_1_PATH="C:\Users\David\Desktop\example\iat\iat.iqx"
set SCRIPT_2_PATH="C:\Users\David\Desktop\example\spr\spr.iqx"

Then the file prompts the experimenter for both the subject and group IDs.

set /p SID=Enter subject ID:
set /p GID=Enter group ID:

Then a random number is generated and the result is written to a file:

set /a CODE=%RANDOM%

@echo ^<values^> > code.txt
@echo /code = %CODE% >> code.txt
@echo ^</values^> >> code.txt

The contents of the resulting file "code.txt" would look like this, for example:

<values>
/code = 1120
</values>

Then the script fires up the actual scripts in order, passing in the subject and group IDs:

start "" /wait %INQUISITPATH% %SCRIPT_1_PATH% -s %SID% -g %GID%
start "" /wait %INQUISITPATH% %SCRIPT_2_PATH% -s %SID% -g %GID%

The randomly generated code.txt is <include>d by the scripts per

<include>
/ file = "C:\Users\David\Desktop\example\code.txt"
</include>

and values.code is logged to all data files.
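
For completeness: if your scripts define their data output columns explicitly, the included value can be written out by adding it to the column list -- the columns below are just an illustrative set, not your scripts' actual ones:

<data>
/ columns = (date time subject group blockcode trialcode response latency values.code)
</data>

That way every data file from a given session carries the same assessment code, regardless of which subject ID was typed in.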

I'm attaching the full example files below; if you would like to run them on your system, you would have to adjust the file paths accordingly first. (It's of course possible to have the command line script do even more, but hopefully this relatively small / simple example is sufficient to determine whether something like this may be viable for your purposes.)
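
One possible extension, sketched purely for illustration (the folder and script names here are made up), would be to loop over the scripts instead of writing a separate start line for each -- that keeps the CMD file short even with 7-8 tests:

set SCRIPTDIR=C:\Users\David\Desktop\example
for %%S in (iat\iat.iqx spr\spr.iqx emathfr\emathfr.iqx) do (
    start "" /wait %INQUISITPATH% "%SCRIPTDIR%\%%S" -s %SID% -g %GID%
)

(The doubled %%S is required inside a CMD file; at an interactive prompt it would be a single %S. Otherwise this works exactly like the two start lines above.)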

Attachments
example.zip (373 views, 17.00 KB)
Jeff

Dave, you are the absolute best! Not only did that solve our concerns, but it solved a few others as well! I can't tell you how grateful we are for your assistance with this. 
