The description on this page follows on from the previous part of the user guide for the OpenIFS 48r1 global 3D forecasting model. It is assumed that the model has been extracted, built and tested on your local computing system. If you have not done this, please refer to the previous part, Getting started.
1. Set up a forecast experiment
An example forecast experiment has been prepared for OpenIFS 48r1. The experiment ID is ab2a.
You should first download the tarball for this example experiment from here: https://sites.ecmwf.int/openifs/openifs-data/case_studies/48r1/karl/ab2a.tar.gz
Extract the example forecast experiment ab2a.tar.gz into a folder in a location suitable for model experiments. The global OpenIFS configuration file (oifs-config.edit_me.sh) sets the variable OIFS_EXPT, which should point to the root directory for your OpenIFS experiments. You should extract ab2a.tar.gz to this location, and this folder will become your experiment directory.
The experiment directory should ideally be in a different location from the earlier model installation path $OIFS_HOME. In general, you will need more disk space for experiments, depending on the model grid resolution, the duration of the forecast experiment and the output frequency of model results. In oifs-config.edit_me.sh you should change $OIFS_EXPT from its default location to a suitable directory in your local filesystem (e.g. a data area or the $SCRATCH space on the ECMWF hpc2020 system). Make sure that you source the configuration file again after applying the change.
Example:
On the ECMWF hpc2020 the model has previously been installed to $OIFS_HOME, which is in $HOME/openifs-48r1. For the experiment we extract the ab2a package to $OIFS_EXPT, which is in a different location on the file system. The experiment directory is therefore $OIFS_EXPT/ab2a/2016092500.
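A minimal sketch of this step on hpc2020 (the variable layout inside oifs-config.edit_me.sh and the paths below are illustrative; adapt them to your system):
# in oifs-config.edit_me.sh, point OIFS_EXPT at a suitable data area (placeholder path):
OIFS_EXPT=$SCRATCH/openifs-expts
# then source the configuration again so the change takes effect in your current shell:
source $HOME/openifs-48r1/oifs-config.edit_me.sh
echo $OIFS_EXPT     # verify the new experiment root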
You can download the ab2a.tar.gz package either using a web browser via the URL given above or using the wget utility as shown below:
# download the experiment tarball and extract it in the experiment folder:
wget https://sites.ecmwf.int/openifs/openifs-data/case_studies/48r1/karl/ab2a.tar.gz
cp ab2a.tar.gz $OIFS_EXPT
cd $OIFS_EXPT
tar -xvzf ab2a.tar.gz
Ensure the namelist files for the atmospheric model (fort.4) and for the wave model (wam_namelist) are found in the experiment directory. If they are not already there, you can find them in a subfolder (called ecmwf) inside the experiment directory.
cd $OIFS_EXPT/ab2a/2016092500
cp ./ecmwf/fort.4 .
cp ./ecmwf/wam_namelist .
You will need to copy three further scripts from the OpenIFS package into your experiment directory:
- oifs-run: this is a generic run script which executes the binary model program file.
- exp-config.h: this is the experiment configuration file that determines settings for your experiment. It will be read by oifs-run.
- run.ecmwf-hpc2020.job: this is the wrapper script to submit non-interactive jobs on hpc2020.
Copy these files from $OIFS_HOME/scripts into your experiment directory.
cd $OIFS_EXPT/ab2a/2016092500
cp $OIFS_HOME/scripts/exp_3d/oifs-run .
cp $OIFS_HOME/scripts/exp_3d/exp-config.h .
cp $OIFS_HOME/scripts/exp_3d/run.ecmwf-hpc2020.job .
1.1. Determine experiment parameters
Namelist:
- You can edit the atmospheric model namelist file fort.4. It contains Fortran namelists which control model settings and switches.
- An important switch to edit is the variable CSTOP in namelist NAMRIP. Set this to the desired length of the forecast experiment.
- Experiment ab2a can be run for up to 144 hours (6 days) by setting CSTOP='h144' (see the snippet below).
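For illustration, the relevant entry in the fort.4 namelist would then read roughly as follows (only CSTOP is shown; the NAMRIP block in your file contains additional variables which can be left as they are):
&NAMRIP
    CSTOP='h144',   ! run the forecast for 144 hours (6 days)
/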
Experiment configuration file:
- You can edit the exp-config.h file which determines settings for this experiment.
- The oifs-run script will read the settings from this file.
- Alternatively, the settings can be passed to the oifs-run script via command line parameters, which take precedence over the exp-config.h settings (see the example below).
You should always set up an exp-config.h for each experiment. If no exp-config.h file is found in the experiment directory, and if also no command line parameters are provided when calling oifs-run, then oifs-run will revert to its own default values which are not appropriate. In any case you should either edit the exp-config.h file appropriately or provide the correct command line parameters.
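For example, the two command line parameters mentioned elsewhere in this guide could be passed directly when calling the run script (any further options depend on your version of oifs-run; consult the script itself for the full list):
# hypothetical call that overrides the launch command and enables postprocessing:
./oifs-run --runcmd "srun" --pproc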
The exp-config.h file contains the following settings:
#--- required variables for this experiment:
OIFS_EXPID="ab2a"     # your experiment ID
OIFS_RES="255"        # the spectral grid resolution (here: T255)
OIFS_GRIDTYPE="l"     # the grid type, either 'l' for linear reduced grid, or 'o' for the cubic octahedral grid
OIFS_NPROC=8          # the number of MPI tasks
OIFS_NTHREAD=4        # the number of OpenMP threads
OIFS_PPROC=true       # enable postprocessing of model output after the model run
OUTPUT_ROOT=$(pwd)    # folder where pproc output is created (only used if OIFS_PPROC=true). In this case an output folder is created in the experiment directory.
LFORCE=true           # overwrite existing symbolic links in the experiment directory
LAUNCH=""             # the platform specific run command for the MPI environment (e.g. "mpirun", "srun", etc.)

#--- optional variables that can be set for this experiment:
#OIFS_NAMELIST='my-fort.4'                # custom atmospheric model namelist file
#OIFS_EXEC="<custom-path>/ifsMASTER.DP"   # model exec to be used for this experiment
Order of precedence for how OpenIFS evaluates variables:
- exp-config.h: These variables have the highest precedence and are used for the experiment (best practice is to use this).
  Example: Here you set the experiment ID and the parameters for the model grid and for parallel execution of this specific experiment. Each experiment directory should contain its own exp-config.h file.
- oifs-run: If no exp-config.h is found, and if no command-line parameters are provided when calling oifs-run, then the default settings found inside oifs-run are used instead (this is not recommended; use an exp-config.h file instead).
  Example: For some variables the defaults are usually fine. For instance, you do not need to specify the namelist file 'fort.4' in exp-config.h, because oifs-run will use this file name as a default value.
- oifs-config.edit_me.sh: This file contains global configuration settings required for the correct functioning of OpenIFS, and therefore it always needs to be sourced first. However, it does not contain variables that are specific to a forecast experiment, and any variables that are declared in either exp-config.h or in oifs-run will overwrite their previous settings from this global configuration file.
  Example: In your global configuration file you may have set the double-precision executable as your standard model executable. If you wish to use single precision for a specific experiment, then you can set OIFS_EXEC in exp-config.h to the SP binary, which will overwrite the global setting for this experiment (see the sketch after this list).
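A minimal sketch of that override in exp-config.h (the single-precision executable name and its path are assumptions based on the double-precision example above; adjust them to your actual build):
#OIFS_EXEC="<custom-path>/ifsMASTER.SP"   # uncomment and point to the SP binary to use single precision for this experiment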
2. Running the experiment
After all optional edits to the namelists (fort.4) and to the experiment configuration file (exp-config.h) have been completed, the model run can be started.
Depending on the available hardware, experiments can be run either interactively or as a batch job.
2.1. Running a batch job
This method is the preferred way to run OpenIFS, as it is more efficient and it allows more flexibility in using the available hardware resources.
- A job wrapper script that is suitable for the locally available batch scheduler needs to be used to call oifs-run.
- We include an example job wrapper script, run.ecmwf-hpc2020.job, in $OIFS_HOME/scripts, which is suitable for the ECMWF hpc2020 Atos HPC. It uses the SLURM batch job scheduler.
- As described above, this script is copied to the experiment directory because it needs to be located there to run an experiment.
run.ecmwf-hpc2020.job needs to be edited with the following essential and optional changes:
- Initially, run.ecmwf-hpc2020.job sets the PLATFORM_CFG variable as follows:
# set OpenIFS platform environment:
PLATFORM_CFG="/path/to/your/config/oifs-config.edit_me.sh"
- It is important to change "/path/to/your/config/oifs-config.edit_me.sh" to the actual path of your oifs-config.edit_me.sh, e.g. "$HOME/openifs-48r1/oifs-config.edit_me.sh".
- The default resources requested in run.ecmwf-hpc2020.job are 8 nodes on the ECMWF hpc2020 machine, with a total of 256 MPI tasks and 4 OpenMP threads. This can be changed as required (see the illustrative headers after this list).
- For information, the LAUNCH command for batch job submission is set to "srun" without any further options, because all required parallel environment settings are provided through the SLURM script headers.
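For orientation, the corresponding resource request in the job headers might look roughly like this (illustrative SLURM directives; the actual headers and option names in run.ecmwf-hpc2020.job may differ):
#SBATCH --nodes=8            # default: 8 hpc2020 nodes
#SBATCH --ntasks=256         # total number of MPI tasks
#SBATCH --cpus-per-task=4    # OpenMP threads per MPI task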
Once you have made the appropriate changes to run.ecmwf-hpc2020.job, you can submit it, and hence run the experiment, with the following commands:
# run as batch job:
cd $OIFS_EXPT/ab2a/2016092500
sbatch ./run.ecmwf-hpc2020.job
The job wrapper script will read the exp-config.h file and adopt the selected values. The exceptions are LAUNCH, which is set to "srun" for batch jobs, and OIFS_NPROC & OIFS_NTHREAD for which values from the batch job headers are used. The job wrapper script modifies the exp-config.h file accordingly prior to calling the oifs-run script.
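After submission you can monitor the job with standard SLURM tools and follow the model's progress in its log file (the commands below assume that ifs.stat is written in the experiment directory during the run, as it is only moved to the output folder during postprocessing):
# check the queue status of your jobs:
squeue -u $USER
# follow the model time-step log while the run progresses:
tail -f $OIFS_EXPT/ab2a/2016092500/ifs.stat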
2.2. Running interactively
On the ECMWF hpc2020, running the model script interactively should be fine for lower grid resolutions up to T255L91.
- In order to run the experiment interactively, execute the oifs-run script from the command line in your terminal.
- If no command line parameters are provided with the oifs-run command, then the values from the exp-config.h will be used.
- In exp-config.h set OIFS_NPROC=8 and OIFS_NTHREAD=4.
- In exp-config.h the LAUNCH variable should remain empty, i.e. LAUNCH="" and no --runcmd parameter should be provided in the command line.
The oifs-run script will in this case use its default launch parameters: srun -c${OIFS_NPROC} --mem=64GB --time=60, which will work fine with OIFS_NPROC=8 for experiment ab2a.
# run interactively:
cd $OIFS_EXPT/ab2a/2016092500
./oifs-run
3. Postprocessing with oifs-run
If the OIFS_PPROC variable has been set to true in the exp-config.h file (or if the --pproc command line parameter was used), then the model output in the experiment directory is further processed after completing the model run.
- In this case the script will generate a folder called output_YYYYMMDD_HHMMSS, with YYYYMMDD being the current date and HHMMSS the current time.
- This avoids accidental modification or overwriting of any previous results when the model experiment is repeated.
- For convenience a symbolic link output is set to the most recently generated model output. If the model run is repeated and a new output_YYYYMMDD_HHMMSS folder is generated, the symbolic link is updated to point to the most recent output folder (see the example after this list).
- The variable OUTPUT_ROOT in exp-config.h determines where this output folder will be created. The default location is inside the experiment directory, but by assigning another path to OUTPUT_ROOT it can be created elsewhere.
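For example, with the default OUTPUT_ROOT the results of the most recent run can always be reached through the symbolic link (paths shown for the ab2a experiment used in this guide):
cd $OIFS_EXPT/ab2a/2016092500
ls -l output/     # contents of the most recent output folder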
The postprocessing groups all model output fields and diagnostics into individual GRIB files with ascending forecast time step. Also, a copy of the atmospheric model namelist file fort.4, as well as the ifs.stat and NODE.01_001 log files are moved into the output folder.
This postprocessing is required if the Metview Python script is to be used later to visualise the model output. This is described in the following section.
4. Plotting the model output
Here we briefly summarise how plots can be generated from the model results. This permits a first-order sanity check of whether the model results look sensible.
Please note that this section does not aim to present an in-depth description of how to visualise the model results. There are many possible ways to read the model output (in GRIB format) and present its content in graphical form. Here we provide only an abridged route to generate a limited number of basic longitude-latitude plots.
For this example we use the Metview graphics package, developed at ECMWF, which is used within an example Jupyter Notebook.
This requires the use of Jupyter Notebooks with a conda environment that contains the Metview and Metview-Python libraries.
At ECMWF you can access a Jupyter server on the hpc2020 either via the Linux Virtual Desktop (VDI) environment or via the JupyterHub service.
Step 1: Copy the Metview processing code to your $OIFS_EXPT location:
As before, download the Metview data package from this site: https://sites.ecmwf.int/openifs/openifs-data/case_studies/48r1/karl/mv.tar.gz
# download the Metview data package and extract it in the experiment directory:
cp mv.tar.gz $OIFS_EXPT
cd $OIFS_EXPT
tar -xvzf mv.tar.gz
cd mv
In the following steps we will process the OpenIFS model output into a dataset format that can be easily interpreted by Metview using a simplified plotting procedure.
Step 2: Edit the file process.sh and change the path variable if required:
- in_dir: This needs to point to the output folder where the postprocessed OpenIFS model experiment (from the previous section) is found. Note that an absolute file path is required for this variable! By default this path is linked to the OIFS_EXPT variable (see the sketch below).
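A minimal sketch of that setting inside process.sh, assuming the default output location from the previous section (the exact variable layout in process.sh may differ):
# absolute path to the postprocessed output of experiment ab2a:
in_dir="${OIFS_EXPT}/ab2a/2016092500/output"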
Step 3: Execute the script by running the command:
cd $OIFS_EXPT/mv
./process.sh
- This data processing may take a couple of minutes to complete.
- Occasionally the message "ERROR: input file does not exist!" may appear; it can be safely ignored. This happens when the script attempts to convert model output which was not generated by OpenIFS. The script will not fail but simply carry on looking for the next file.
- After successfully completing the conversion process, "Done." should appear in the terminal.
- As a result of this processing, regular gridded and compressed GRIB files are generated in $OIFS_EXPT/mv/data/ab2a, which can be visualised by running the enclosed Jupyter Notebook single.ipynb.
Step 4: Now proceed with the following steps to visualise the processed data:
- We describe the process using the example of the ECMWF Linux Virtual Desktop (VDI).
- In the VDI, open a terminal and log into the hpc2020 with the command:
ssh hpc-login
- In the terminal start the Jupyter session on an interactive node, using the command:
ecinteractive -j
- After the interactive node has started you will be given a weblink to connect to the Jupyterlab session ("To manually re-attach go to <weblink>").
- Open a web browser (e.g. Chrome) inside the VDI and paste the weblink into the browser's URL address field; this will connect to the Jupyter session.
- In the file explorer, on the left side of the Jupyter window, navigate to the folder $PERM/mv/ipynb/ and select the Notebook single.ipynb.
- Open this Notebook by double-clicking in the explorer window.
- Once it has opened, run all its cells in sequence (e.g. use the command "Run All Cells" in menu "Run").
- This will generate a series of plots from the model output which are displayed inside the Notebook.
- Optional: After completing the Jupyter session it is good practice to release the reserved interactive node. Use this command in the terminal window and confirm cancellation of the job:
ecinteractive -p hpc -k
If this is not done, the interactive job will time out after 12 hours.
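For readers who prefer to explore the processed GRIB files outside the provided notebook, a minimal Metview-Python sketch could look like the following (the file name and the selected field are placeholders and are not taken from the example notebook; the Metview conda packages mentioned above must be installed):
import metview as mv

# read one of the regular gridded GRIB files produced by process.sh (placeholder file name):
fs = mv.read("/path/to/mv/data/ab2a/your_file.grib")

# pick a single field, e.g. temperature at 850 hPa (assumed to be present in the file):
t850 = fs.select(shortName="t", level=850)

# draw a default longitude-latitude map; inside a Jupyter Notebook the plot is shown inline
mv.plot(t850)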