In the previous section, we created a task.
A task has a corresponding ecf script, which defines the work to be carried out. Scripts are similar to UNIX shell scripts.
However, an ecf script adds C-like pre-processing directives and variables.
Variables defined in the suite definition can be used in the ecf script and provide a configuration mechanism.
The pre-processing directives are, by default, introduced with the % character; the same character delimits variables in the script.
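As a brief sketch (the SLEEP variable and its value are purely illustrative), a variable added to the suite definition with an edit statement can be referenced in the script between % characters and is substituted when the server generates the job file:

suite test
   edit SLEEP 20   # variable defined in the suite definition
   task t1
endsuite

and in the corresponding ecf script:

sleep %SLEEP%      # becomes 'sleep 20' in the generated job file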
One of the pre-processing directives is 'include'.
The include directive is used to inject code into a script and provides a mechanism for code reuse. If the same code appears in several different ecf scripts, it should be placed in an include file instead. This provides a single point of maintenance. For example, every task needs to set up the communication with the ecflow_server and then tell the server that the task has started. This 'boilerplate' code is placed in an include file, as in the sketch below.
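As an illustrative sketch (the echo line is just an example task body), an ecf script pulls in the boilerplate with the include directive; the angle-bracket form tells the server to look for the file under %ECF_INCLUDE% (or %ECF_HOME% if that is not set):

%include <head.h>
echo "I am part of a suite that lives in %ECF_HOME%"
%include <tail.h>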
head.h
The head.h include file is placed at the start of the ecf script. It:
- Provides the environment for communication with the ecflow_server
- Defines the script's error handling: when the script fails, a trap is raised and we inform the server that the task has aborted.
- Issues a child command to inform the server that the job has started.
#!%SHELL:/bin/ksh%
set -e          # stop the shell on first error
set -u          # fail when using an undefined variable
set -x          # echo script lines as they are executed
set -o pipefail # fail if last (rightmost) command exits with a non-zero status

# Define the variables that are needed for any communication with ECF
export ECF_PORT=%ECF_PORT%    # The server port number
export ECF_HOST=%ECF_HOST%    # The host name where the server is running
export ECF_NAME=%ECF_NAME%    # The name of this current task
export ECF_PASS=%ECF_PASS%    # A unique password, used for job validation & zombie detection
export ECF_TRYNO=%ECF_TRYNO%  # Current try number of the task
export ECF_RID=$$             # Record the process id. Also used for zombie detection
# export NO_ECF=1             # Uncomment to run as a standalone task on the command line

# Define the path where to find ecflow_client.
# Make sure client and server use the *same* version.
# Important when there are multiple versions of ecFlow.
export PATH=/usr/local/apps/ecflow/%ECF_VERSION%/bin:$PATH

# Tell ecFlow we have started
ecflow_client --init=$$

# Define an error handler
ERROR() {
   set +e                      # Clear -e flag, so we don't fail
   wait                        # Wait for background processes to stop
   ecflow_client --abort=trap  # Notify ecFlow that something went wrong, using 'trap' as the reason
   trap 0                      # Remove the trap
   exit 0                      # End the script cleanly; the server monitors the child, and an exit 1 would cause another abort and a zombie
}

# Trap any calls to exit and errors caught by the -e flag
trap ERROR 0

# Trap any signal that may cause the script to fail
trap '{ echo "Killed by a signal"; ERROR ; }' 1 2 3 4 5 6 7 8 10 12 13 15
tail.h
The tail.h include file is placed at the end of the ecf script and is used to inform the server that the job has completed. It issues the complete child command.
wait                      # Wait for background processes to stop
ecflow_client --complete  # Notify ecFlow of a normal end
trap 0                    # Remove all traps
exit 0                    # End the shell
Comments
Dominique Lucas
Is the 'exit 0' in the ERROR function correct? Shouldn't it be a value different to 0?
Axel Bonet
ERROR just reports the aborted job to the ecFlow server; the script can then exit with rc=0, yes.
In operation, we use "exit 1".
The page "ksh task wrappers and functions trapping" explains the difference between defining "function xx {}" and "xx() {}", which behave differently with respect to traps.
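As a rough sketch of that difference (assuming ksh93 semantics, where traps caught by the caller are reset to their defaults inside a function defined with the 'function' keyword, while a POSIX-style 'name()' function shares the caller's trap context):

#!/bin/ksh
trap 'echo "EXIT trap fired"' 0   # set an EXIT trap in the main script

function ksh_style {
   trap                           # prints nothing here: the caller's traps were reset
}

posix_style() {
   trap                           # prints the EXIT trap: traps are shared with the caller
}

ksh_style
posix_style
trap 0                            # remove the EXIT trap before the script ends

This is presumably why head.h defines the handler with the ERROR() {} form, so that the trap 0 inside it acts on the script's own traps.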