CL 2: Qualitative Verification

For credibility assessment level 2, the model is built and applied in a co-simulation. OpenMCx is used as the co-simulation platform. There are two options for generating the input for the model.

  1. The model input is provided by an OSI trace file. This trace file is read by an OSI trace file player FMU, which is connected to the model FMU. The connection interface is OSI, with the message type depending on the model type: osi3::SensorView in case of a sensor model, osi3::SensorData, osi3::TrafficCommand etc. for other model types. The co-simulation setup of this approach is depicted in the first image below; a conceptual sketch of how such a trace file is read follows after the figures.

  2. The model input is provided by a scenario engine. The OpenSCENARIO player esmini is used to play an OpenSCENARIO file within an OSMP FMU and to output osi3::SensorView data for the model. The co-simulation setup of this approach is depicted in the second image below. In a future adaptation, the esmini FMU will be enabled to receive osi3::TrafficCommand data in order to test further model types.

Figure 1. Tracefile test setup for CL 2

Figure 2. Esmini test setup for CL 2
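
To illustrate what the trace file player provides to the model, the following is a minimal Python sketch, not the actual trace file player FMU, of reading the osi3::SensorView messages from a binary OSI trace file, in which each serialized message is preceded by its length as a four-byte little-endian integer. The trace file name is a placeholder.

```python
# Conceptual sketch only: iterate over the SensorView messages of a binary
# OSI trace file (*.osi), in which each serialized protobuf message is
# preceded by a 4-byte little-endian length field.
import struct

from osi3.osi_sensorview_pb2 import SensorView  # open-simulation-interface Python package


def read_sensor_view_trace(path):
    """Yield one osi3::SensorView message per time step of the trace."""
    with open(path, "rb") as trace:
        while True:
            header = trace.read(4)
            if len(header) < 4:
                break  # end of trace reached
            (length,) = struct.unpack("<I", header)
            message = SensorView()
            message.ParseFromString(trace.read(length))
            yield message


if __name__ == "__main__":
    for step, sensor_view in enumerate(read_sensor_view_trace("example_trace.osi")):  # placeholder file name
        print(step, sensor_view.timestamp.seconds, sensor_view.timestamp.nanos)
```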

The output of the model can be used in three different ways:

  1. It can be disregarded, just to test if the model runs at all. This is referred to as a smoke test. An example can be found here.

  2. It can be connected to a processing FMU that evaluates the model output. This way, the model output is tested "on-the-fly", step by step, during the co-simulation. One example is an interface test, e.g. here.

  3. A trace file writer can be used as a processing FMU to generate a trace file from the model output during the co-simulation (a conceptual sketch follows below this list). This trace file is supplied as a GitHub Action artifact, so it can be downloaded for offline analysis. The trace file can also be analyzed directly in the pipeline by a Python script; this functionality is part of CL 3.

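As a rough illustration of the third option, the following Python sketch shows conceptually, not as the actual OSI Trace File Writer FMU, how model output messages end up in a trace file: each serialized message is appended with a four-byte little-endian length prefix. The osi3::SensorData type and the file name are example assumptions.

```python
# Conceptual sketch only: append model output messages to a length-prefixed
# binary OSI trace file (*.osi), as a trace file writer would do step by step.
import struct

from osi3.osi_sensordata_pb2 import SensorData


def append_to_trace(trace_path, sensor_data):
    """Append one osi3::SensorData message to the trace file."""
    payload = sensor_data.SerializeToString()
    with open(trace_path, "ab") as trace:
        trace.write(struct.pack("<I", len(payload)))  # 4-byte little-endian length prefix
        trace.write(payload)


if __name__ == "__main__":
    append_to_trace("model_output_trace.osi", SensorData())  # placeholder file name and empty message
```
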
All tests in CL 2 are considered integration tests, since they all require a co-simulation with at least one other FMU. Therefore, every test is located in an individual folder in test/integration. The individual test folder SHALL follow the naming scheme "xxx_short_description", where xxx is a three-digit consecutive number. The folder SHALL contain a system structure definition file (.ssd). This file specifies the utilized model input (trace file player or esmini) as well as the processing FMU (evaluation or trace file writer). The test folder SHALL additionally contain a README.md file, which describes the test system, the scenario and the pass/fail criterion. A template with the corresponding sections is provided for this README file. Other simulation artefacts, such as the trace file or scenario to be played, as well as auxiliary files for the evaluation FMU and a Python analysis script, are also placed in that test folder. Example implementations in the test/integration folder can be found in the sensor model template repository.

An implementation of the GitHub action for this credibility assessment level can also be found in the sensor model template repository.

The three different test methods are described in more detail in the following sections.

CL 2.1 Smoke Tests

Typically in software testing, smoke tests are performed by just running the system under test stand-alone. But since OSMP models always require a model input, the smoke tests are also located in the integration test folder. For the smoke tests, the model is employed in a co-simulation and fed with inputs: it is connected either to a trace file player or to a scenario engine that generates the model input. The output of the model is not used. The model SHALL go through the tests without any runtime errors.
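
A smoke test of this kind could, for instance, be scripted as follows. This is only a sketch: the OpenMCx executable name, its call arguments and the test folder path are assumptions and have to be adapted to the actual setup and the GitHub action.

```python
# Hedged sketch of a scripted smoke test: run the co-simulation and only check
# that it terminates without a runtime error.
import subprocess
import sys


def run_smoke_test(ssd_file):
    """Return True if the co-simulation described by the SSD file runs through cleanly."""
    result = subprocess.run(
        ["OpenMCx", ssd_file],  # assumed invocation of the co-simulation platform
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stdout)
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0


if __name__ == "__main__":
    # hypothetical test folder following the "xxx_short_description" naming scheme
    ok = run_smoke_test("test/integration/001_smoke_test/SystemStructure.ssd")
    sys.exit(0 if ok else 1)
```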

An example implementation of this test can be found in the sensor model template repository.

CL 2.2 Interface Tests

In this test category, the model input is again provided by a trace file player or scenario engine, and the model output is connected to a trace file writer FMU. With this setup, the model output is tested against various criteria.

OSI Validation

In this test, each model output is connected to the OSI Trace File Writer FMU provided in sub-library 5. If a model has multiple outputs, e.g. a TrafficUpdate and a TrafficCommandUpdate in the case of a traffic participant model, multiple trace files are generated. The output trace files are passed as data input to the osi-validation tool, which checks OSI trace files against a defined set of rules.

In a first test, the trace files are checked against the OSI default rules. These rules are generated automatically in the pipeline based on the rules defined in the .proto files of the OSI standard. This way, general standard compliance is validated.

In a second test, the trace files are checked against custom rules. These custom rules can be defined for each model, separated by input and output. For the model output, rules according to the osi-validation documentation are to be placed in a rules/output_rules folder. This enables, for example, using the is_set rule to test whether output messages defined in the README are actually set by the model.

Not only the model output is tested: if a test contains an input trace file, it is also checked against the standard rules and against a custom rule set defined in rules/input_rules.
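
The two validation runs could, for example, be scripted as sketched below. The osivalidator command line follows the osi-validation documentation, but the exact options, the location of the generated default rules and the trace file name are assumptions that have to be checked against the installed version and the pipeline setup.

```python
# Hedged sketch: run osi-validation on an output trace, once with the
# auto-generated default rules and once with the model-specific custom rules.
import subprocess


def validate_trace(trace_file, rules_dir):
    """Run osi-validation on one trace file with the given rule set; return True if it passes."""
    result = subprocess.run(["osivalidator", "--data", trace_file, "--rules", rules_dir])
    return result.returncode == 0


if __name__ == "__main__":
    trace = "model_output_trace.osi"                     # placeholder trace file name
    ok_default = validate_trace(trace, "default_rules")  # assumed folder with generated default rules
    ok_custom = validate_trace(trace, "rules/output_rules")
    print("default rules:", "passed" if ok_default else "failed")
    print("custom rules:", "passed" if ok_custom else "failed")
```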

An example implementation of this test can be found in the traffic participant model template repository.

Value Range Check

(not yet implemented)
Furthermore, the value ranges of the output parameters are tested. For example, if a lidar sensor has intensity outputs in the interval [0, 100], the sensor model SHALL NOT output any values outside of this range. These tests on value ranges might also include the timing of the model. Additional tests can be performed on the SensorViewConfigRequest during the initialization of the model FMU, if implemented.
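
As a rough outline of what such a check could look like once it is implemented, the following Python sketch reads a SensorData output trace and verifies that all lidar detection intensities stay within [0, 100]. The field path, interval and trace file name are assumptions taken from the example above.

```python
# Hedged sketch of a value range check on a SensorData output trace.
import struct

from osi3.osi_sensordata_pb2 import SensorData


def iter_trace(path):
    """Yield the osi3::SensorData messages of a length-prefixed *.osi trace file."""
    with open(path, "rb") as trace:
        while header := trace.read(4):
            (length,) = struct.unpack("<I", header)
            message = SensorData()
            message.ParseFromString(trace.read(length))
            yield message


def check_intensity_range(path, low=0.0, high=100.0):
    """Raise an error if any lidar detection intensity lies outside [low, high]."""
    for step, sensor_data in enumerate(iter_trace(path)):
        for lidar in sensor_data.feature_data.lidar_sensor:
            for detection in lidar.detection:
                if not low <= detection.intensity <= high:
                    raise ValueError(
                        f"step {step}: intensity {detection.intensity} outside [{low}, {high}]"
                    )


if __name__ == "__main__":
    check_intensity_range("model_output_trace.osi")  # placeholder trace file name
```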