Job waiting on after script when it shouldn't
Posted: Tue Mar 01, 2022 2:43 pm
I recently upgraded from version 7 to 9.39 and migrated all the existing jobs. Some of the jobs include an "after" script to be run. For these jobs, the command looks like this:
#HTTP GET www.somedomain.com/somepath
This script is built to parse the Syncovery log file for the job that calls it and handle some extra file work that's needed. The script waits for the log file name to be updated to include the word "copied" as an indicator that the job is complete. If it doesn't find the file, it waits a few seconds and tries again. If the file is still not found, it exits with an error.
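For reference, the retry logic is roughly equivalent to this Python sketch (the log folder, retry count, and delay below are placeholders I'm using for illustration, not the real values from my script):

import glob
import os
import sys
import time

LOG_DIR = r"C:\Syncovery\Logs"  # placeholder log folder
RETRIES = 5                     # placeholder retry count
DELAY = 3                       # placeholder delay in seconds

for attempt in range(RETRIES):
    # Look for the renamed log file whose name contains "copied"
    matches = glob.glob(os.path.join(LOG_DIR, "*copied*"))
    if matches:
        log_file = matches[0]
        break
    time.sleep(DELAY)
else:
    # Still no file after all retries: exit with an error
    sys.exit("No log file containing 'copied' was found")

# ... parse log_file and do the extra file work here ...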
Under version 7, this worked as expected: the job would copy the files, call the external script (without waiting), and finish updating/renaming the log file. The script would then read the log file and do its thing. But since the upgrade, the script has been failing when called directly from Syncovery. I confirmed the script is successfully being called; it just can't find the log file with "copied" in its name and therefore exits with an error. If I copy/paste the intended command and run it manually, it works exactly as expected.
It seems that even though the script setting starts with #, the job waits for the external script to finish before it completes the updates to the log file. So when the script tries to find the target file, it doesn't exist yet.
Is this the correct behavior for scripts, even with the #? Is there some other setting I should be looking at? I compared all the settings between the two versions, but maybe I missed something?