The Starting the build process section on the OpenIFS 48r1 Getting Started page shows how openifs-test can be used to check whether a suite of t21 test cases runs successfully on a system. This is very useful for testing a successful build, basic simulations and the parallel set-up of the system. It does not, however, automatically test for bit-identical results. This page presents some notes on how to perform such tests.

OpenIFS does not include a reference set of known-good output (KGO) because the KGO is system and compiler dependent. Hence, in order to perform a bit-identical test you must create the KGO yourself, ideally from a "known-good" source before making any changes to the source code. The KGO can be created by running the tests with the environment variable IFS_TEST_BITIDENTICAL=init:

IFS_TEST_BITIDENTICAL=init $OIFS_TEST/openifs-test.sh -t 

Note that only the -t option is used, so the above assumes that OpenIFS has already been built.
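
Since the KGO must come from a known-good state, it is worth confirming that the working copy is unmodified before running in init mode. A minimal sketch, assuming the source is managed with git and that an OIFS_HOME variable points at the source tree (both assumptions; adapt to your own setup):

cd $OIFS_HOME                  # assumed variable pointing at the OpenIFS source tree
git status --short             # should report no local modifications before creating the KGO
IFS_TEST_BITIDENTICAL=init $OIFS_TEST/openifs-test.sh -t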

When each test that supports bit-identical testing runs, the above will extract the norms from the NODE file and write them to a SAVED_NORMS file. These SAVED_NORMS files represent the KGO against which later runs can be compared.
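
To see what was recorded, you can inspect the reference directly. A minimal sketch, assuming the test run directories sit under the current directory; the file layout shown here is an assumption, not something guaranteed by openifs-test:

find . -name SAVED_NORMS                        # locate the reference files written in "init" mode
less $(find . -name SAVED_NORMS | head -1)      # inspect the recorded norms for one test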

Once you have made your changes to source, you can recompile and check that the new norms are identical to the previous ones in SAVED_NORMS , by setting IFS_TEST_BITIDENTICAL=check:

IFS_TEST_BITIDENTICAL=check $OIFS_TEST/openifs-test.sh -t  

This time, each supported test will compare the norms in the NODE file with the previously created reference in SAVED_NORMS. If they differ, the test will fail. The tests will also fail if there is no reference file, i.e. if the tests have not previously been run in "init" mode.
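
If a check fails and you want to see where the results diverge, the reference and the norms from the failing run can be compared by hand. A rough sketch only: the run directory name, the NODE file name and the grep pattern are placeholders, and the exact line format of SAVED_NORMS may differ from the raw NODE output on your system:

cd t21_test                              # placeholder: run directory of the failing test
grep "NORMS" NODE.001_01 > new_norms     # extract norms from the failing run (file name and pattern are guesses)
diff SAVED_NORMS new_norms               # the first differing line shows roughly where divergence starts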

It can be useful to do this incrementally as you make a series of changes for a single feature that you expect to be bit-identical, so you can catch the point at which bit reproducibility is accidentally lost.
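
One possible workflow is sketched below; the git usage and the commit message are only illustrative, and the rebuild step is whatever you normally use to compile OpenIFS:

# repeat for each incremental change on the feature branch
git commit -am "increment N of the feature"               # placeholder commit message
# ... rebuild OpenIFS with your usual build command ...
IFS_TEST_BITIDENTICAL=check $OIFS_TEST/openifs-test.sh -t
# the first increment whose check fails is where bit reproducibility was lost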

It is important to note that running with --clean will wipe any existing SAVED_NORMS files; note also that --build-type=BIT and --build-type=DEBUG produce different norms, so a separate KGO is needed for each build type you want to check.
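
If you need to run with --clean but want to keep the reference, copy the norms aside first. A minimal sketch; the use of find and the archive name are assumptions, since openifs-test does not manage KGO backups itself:

tar czf saved_norms_backup.tgz $(find . -name SAVED_NORMS)   # back up all references before --clean
# ... run with --clean ...
tar xzf saved_norms_backup.tgz                               # restore to keep checking against the old KGO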

Finally, there is currently no built-in mechanism for automatically testing that two different tests produce identical results (e.g. with different namelist settings, or different numbers of MPI tasks or OpenMP threads), although this could be a useful extension in the future.
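
If such a comparison is needed today, it has to be done by hand, for example by running each configuration in "init" mode in a separate run directory and diffing the resulting references. A sketch only; the directory names are placeholders:

diff runA/SAVED_NORMS runB/SAVED_NORMS   # runA and runB are placeholder run directories for the two configurations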