Calibrate a model
Introduction🔗
This tutorial will walk through the steps of calibrating the example model Calibration.Examples.Condenser
in the Calibration library. It will follow the general workflow outlined in the library documentation.
This specific tutorial uses a heat exchanger model; however, the steps are generic and the same approach can be followed for most types of models.
Preparations🔗
Before doing this tutorial, it is recommended to read through the following content:
- Calibration library - Concepts and terminology
- Calibration library - General workflow
This will familiarize you with the concepts, terminology, and general workflow before doing the tutorial, which will be helpful in understanding the steps involved. The primary goal of this guide is to walk through the steps involved in calibrating a model, and it might therefore leave out some details found in the above articles.
Libraries/Extensions🔗
Prepare for the tutorial by creating an empty workspace with the following libraries loaded:
- Calibration library 1.0-beta
- Heat Exchanger 2.13 (dependency - will be loaded automatically by adding the Calibration library)
- Modelon 4.5 (dependency - will be loaded automatically by adding the Calibration library)
Instructions🔗
In this tutorial we will use the example model Calibration.Examples.Condenser
packaged in the Calibration library - no changes to the model need to be applied to follow along. However, you will be asked to look at specific parts of the model to understand what is going on, so that the same principles can be applied to your own modelling.
The model primarily consists of a heat exchanger (condenser) along with boundary conditions. In this tutorial we try to find the calibration parameter values (friction and heat transfer) that best reproduce the given measurement data (heat transfer and pressure drop).
1. Set up a baseline experiment🔗
Start by duplicating the example model Calibration.Examples.Condenser
into an editable package. Open the model and go to the experiments tab. It has a number of prepared experiments:
Run the baseline
experiment. This is the analysis used to produce the result that is to be calibrated. The Steady state with derivatives analysis is used to speed up execution of the model compared to running a normal Dynamic simulation.
2. Set up a reference result🔗
Download the data file:
The first row of the file should contain the data labels; these names need to coincide with the variable and parameter names in the model. Each subsequent row corresponds to a case in the reference result.
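As a minimal sketch of the expected layout (assuming a comma-separated text file; the column names shown here are hypothetical and must be replaced by the actual variable and parameter names from the model):

```text
label,    someBoundaryParameter, someReferenceVariable1, someReferenceVariable2
Case 1,   <value>,               <value>,                <value>
Case 2,   <value>,               <value>,                <value>
```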
Open the reference file loader app.
Follow the workflow of the app:
- Select the result just created.
- Upload the data file downloaded earlier.
- Generate the reference result. Optionally, a label column can be specified to give the cases specific labels, making it easier to identify them in the result browser.
Close the tab and go back to Modelon Impact. A refresh is needed for the generated result to show up in the result browser.
3. Select calibration parameters and set bounds🔗
Open the "calibrate" experiment now. We can see that the calibration parameters has already been set up:
Locate them in the model and inspect the attributes (they are located on the top level -> properties). Click the "three dots" icon next to the parameter name to see its attributes.
We can see that both parameters have a value of 1.0 with min=0.5 and max=1.5. This means that the calibration algorithm will start from 1.0 and not consider values larger than 1.5 or smaller than 0.5.
Note
It is only possible to edit variable/parameter attributes from the model editing mode.
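For illustration, a calibration parameter declared with these attributes would look roughly like this in Modelica (the parameter name below is hypothetical; the actual names can be seen in the model):

```modelica
// Sketch with a hypothetical parameter name: start value 1.0, and
// bounds that the calibration algorithm will not step outside of.
parameter Real frictionFactor(min = 0.5, max = 1.5) = 1.0
  "Friction scaling factor used as a calibration parameter";
```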
4. Select reference variables and set nominals🔗
Open the "calibrate" experiment again. We can see that also reference variables has been set up:
Locate them in the model (click HX -> properties -> Sub-components -> Summary) and inspect the attributes:
We can see that both variables have their nominal attribute set. The values should be relatively close to the values recorded in the reference result. These nominals will be used to scale the objective function contribution from each reference value, which prevents the algorithm from prioritizing any specific reference variable.
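For illustration, setting a nominal attribute on a variable looks roughly like this in Modelica (the names and magnitudes below are hypothetical; use values close to those in your reference data):

```modelica
// Sketch with hypothetical names and magnitudes: the nominal attribute
// scales each variable's contribution to the objective function.
Real Q_flow(nominal = 1e5) "Transferred heat flow, used as a reference variable";
Real dp(nominal = 1e4) "Pressure drop, used as a reference variable";
```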
5. Run the calibrate function🔗
After finalizing the previous steps, it is now possible to run the calibration.
Use the prepared "calibrate" experiment. It utilizes the:
- Baseline experiment
- Reference result
that were created in the earlier steps:
As mentioned earlier, the calibration parameters and reference variables have already been specified here, but they can be edited if necessary.
The other options relate to the algorithm settings - see a detailed explanation here. For this tutorial, the default values will work fine.
When you are ready, execute the experiment.
The experiment will take a couple of minutes to finish. The progress can be followed in the status indicator next to the simulation button; it prints out the current objective function and calibration parameter values.
Every iteration of the Nelder-Mead method used contains (number of calibration parameters + 1) function evaluations, and every function evaluation runs one simulation per reference result case. With two calibration parameters and 16 reference cases, that gives \(3 \cdot 16 = 48\) simulations per iteration.
Simulations performed as part of the function evaluations can be parallelized, which is why a larger profile with more available cores will speed up execution.
6. Analyzing the results🔗
When the calibration is finished, you will get access to a result entry containing a summary case, and one case with the result of every operating condition from the reference result simulated with the calibrated parameters.
If the summary case is successful, it means the calibration has converged to a solution. Inspect the simulation log for more information about the calibration.
The results are best analyzed through scatter plots from the analysis view:
To simplify analysis of the calibration result, a report is automatically generated accessible from the Artifacts tab:
Clicking "preview" will open the report (which is in HTML format and can be viewed directly in the browser).
The report shows output from the calibration algorithm as well as the final result values, along with plots that compare the reference values from the reference result, the baseline simulation, and the simulation using the calibrated parameters.
The calibrated parameters can now be applied to the model by, for example, extending the model and setting the calibrated parameter values in modelling mode.
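A minimal sketch of how this could look (the parameter names and calibrated values below are hypothetical; use the names and values from your own calibration result):

```modelica
// Sketch with hypothetical parameter names and values: extend the original
// model and override the calibration parameters with the calibrated values.
model CondenserCalibrated
  extends Calibration.Examples.Condenser(
    frictionFactor = 0.93,
    heatTransferFactor = 1.07);
end CondenserCalibrated;
```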
Conclusion🔗
By completing this tutorial, you’ve calibrated the Calibration.Examples.Condenser
model using the Calibration library. You've learned how to set up the baseline experiment and the reference result, how to specify calibration parameters and reference variables, and how to interpret the results of the calibration. The steps are general and can be applied to most types of models.