Config:Plan
Generated for SUMO toolbox version 7.0. We are well aware that the documentation is not always complete, and possibly even out of date in some cases. We try to document everything as best we can, but much is limited by available time and manpower; we are a university research group, after all. The most up-to-date documentation can always be found (if not here) in the default.xml configuration file and, of course, in the source files. If something is unclear, please don't hesitate to ask.
Plan
ContextConfig
Default components; these should normally not be changed unless you know what you are doing.
<[[Config:ContextConfig|ContextConfig]]>default</[[Config:ContextConfig|ContextConfig]]>
SUMO
Default components; these should normally not be changed unless you know what you are doing.
<[[Config:SUMO|SUMO]]>default</[[Config:SUMO|SUMO]]>
LevelPlot
Default components; these should normally not be changed unless you know what you are doing.
<[[Config:LevelPlot|LevelPlot]]>default</[[Config:LevelPlot|LevelPlot]]>
Simulator
This is the problem we are going to model; it refers to the name of a project directory in the examples/ folder. It is also possible to specify an absolute path, or a particular xml file within a project directory.
<[[Config:Simulator|Simulator]]>Math/Academic2DTwice</[[Config:Simulator|Simulator]]>
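For example, either of the following would also be accepted (the paths shown here are purely hypothetical):
<[[Config:Simulator|Simulator]]>/home/jdoe/projects/MyProblem</[[Config:Simulator|Simulator]]>
<[[Config:Simulator|Simulator]]>Math/Academic2DTwice/Academic2DTwice.xml</[[Config:Simulator|Simulator]]>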
Run
Runs can be given a custom name using the name attribute; a repeat attribute is also available to repeat a run multiple times. Placeholders available for run names include: #adaptivemodelbuilder# #simulator# #sampleselector# #output# #measure#
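As a sketch of how these two attributes combine (the name pattern is just an illustration):
<[[Config:Run|Run]] name="#simulator#_#adaptivemodelbuilder#" repeat="5">
  <!-- run contents as below -->
</[[Config:Run|Run]]>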
<[[Config:Run|Run]] name="" repeat="1">
<!-- Entries listed here override those defined at the plan level -->
<!-- What experimental design to use for the very first set of samples -->
<[[Config:InitialDesign|InitialDesign]]>lhdWithCornerPoints</[[Config:InitialDesign|InitialDesign]]>
<!--
The method to use for selecting new samples. Again 'default' is an id that refers to a
SampleSelector tag defined below. To switch off sampling simply remove this tag. -->
<[[Config:SampleSelector|SampleSelector]]>default</[[Config:SampleSelector|SampleSelector]]>
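To use another selection strategy, point this tag at a different id; for instance, the lola selector mentioned in the per-output examples further below (assuming a SampleSelector tag with that id is defined):
<[[Config:SampleSelector|SampleSelector]]>lola</[[Config:SampleSelector|SampleSelector]]>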
<!--
How the simulator is implemented (i.e., where the data comes from):
- Matlab script (matlab)
- scattered dataset (scatteredDataset)
- local executable or script (local)
- etc.
Make sure this entry matches what is declared in the simulator xml file
in the project directory. For example, it makes no sense to put matlab here if you only
have a scattered dataset to work with.
-->
<[[Config:SampleEvaluator|SampleEvaluator]]>matlab</[[Config:SampleEvaluator|SampleEvaluator]]>
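For instance, if the project only provides a dataset rather than a Matlab script, this entry would change accordingly:
<[[Config:SampleEvaluator|SampleEvaluator]]>scatteredDataset</[[Config:SampleEvaluator|SampleEvaluator]]>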
<!--
The AdaptiveModelBuilder specifies the model type and the hyperparameter optimization
algorithm (= the algorithm to choose the model parameters, also referred to as the
modeling algorithm or model builder) to use. The default value 'kriging' refers to Kriging models.
'kriging' is an id that refers to an AdaptiveModelBuilder tag that is defined below.
-->
<[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>kriging</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
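For example, to switch to the rational models used in the per-output examples below (assuming an AdaptiveModelBuilder tag with that id is defined):
<[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>rational</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>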
<!-- How the quality of a model is assessed is determined by one or more Measures. You can try different combinations
of measures by specifying different measure tags. It is the measure score(s) that drive the model parameter optimization.
We recommend you do not use more than one measure unless you know what you are doing.
If the use attribute is set to 'off' then the measure score is printed and logged, but is not used in the modeling itself.
More examples of measures are shown below.
-->
<[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target="0.01" errorFcn="rootRelativeSquareError" use="on"/>
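As an illustration, a driving measure can be combined with a second one that is only printed and logged (the target values here are arbitrary):
<[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target="0.01" errorFcn="rootRelativeSquareError" use="on"/>
<[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target="0.05" use="off"/>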
<!-- By default all inputs are modeled. If you want to only model a couple of inputs you can specify an Inputs tag as follows:
<[[Config:Inputs|Inputs]]>
<[[Config:Input|Input]] name="x" />
<[[Config:Input|Input]] name="y" />
// Setting a simulator input to a constant (default is 0):
<[[Config:Input|Input]] name="z" value="14.6"/>
</[[Config:Inputs|Inputs]]>
-->
<!--
By default the toolbox will model every single output using a separate model. If you want to change this
(e.g., you only want to model a specific output, or you want to use different settings for each output), you
can specify an Outputs tag.
The following is an example for the Academic2DTwice problem used in this file. Remember that if you change
the problem you are modeling, you will have to change this section too.
-->
<[[Config:Outputs|Outputs]]>
<[[Config:Output|Output]] name="out">
<!--
You can specify output specific configuration here
<[[Config:SampleSelector|SampleSelector]]>lola</[[Config:SampleSelector|SampleSelector]]>
<[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>rational</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
<[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".01" errorFcn="meanSquareError" use="on" />
-->
</[[Config:Output|Output]]>
<[[Config:Output|Output]] name="outinverse">
<!--
<[[Config:SampleSelector|SampleSelector]]>delaunay</[[Config:SampleSelector|SampleSelector]]>
<[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>rbf</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
<[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".05" use="on" />
-->
</[[Config:Output|Output]]>
</[[Config:Outputs|Outputs]]>
<!--
This is a more complex example of how you can have different configurations per output.
-->
<!--
<[[Config:Outputs|Outputs]]>
* Model the modulus of complex output S22 using cross-validation and the default model
builder and sample selector.
<[[Config:Output|Output]] name="S22" complexHandling="modulus">
<[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".05" />
</[[Config:Output|Output]]>
* Model the real part of complex output S22, but introduce some normally-distributed noise
(variance .01 by default).
<[[Config:Output|Output]] name="S22" complexHandling="real">
<[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".05" />
* for other types of modifiers see the datamodifiers subdirectory
<[[Config:Modifier|Modifier]] type="[[Modifier#Noise|Noise]]" />
</[[Config:Output|Output]]>
-->
<!--
More complex examples of how you can use measures:
* 5-fold cross-validation (warning: expensive on some model types!)
<[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".001" use="on">
<Option key="folds" value="5"/>
</[[Config:Measure|Measure]]>
* Using a validation set, with its size taken as 20% of the available samples
<[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".001" errorFcn="meanAbsoluteError">
<Option key="percentUsed" value="20"/>
</[[Config:Measure|Measure]]>
* Using a validation set defined in an external file (scattered data)
<[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".001">
* the validation set comes from a file
<Option key="type" value="file"/>
* the test data is scattered data, so we need a scattered sample evaluator
to load the data and evaluate the points. The filename is taken from the
<[[Config:ScatteredDataFile|ScatteredDataFile]]> tag in the simulator xml file.
Optionally you can specify an option with key "id" to select a specific
dataset if there is more than one choice.
<[[Config:SampleEvaluator|SampleEvaluator]]
type="ibbt.sumo.sampleevaluators.datasets.ScatteredDatasetSampleEvaluator"/>
</[[Config:Measure|Measure]]>
* Used for testing optimization problems
* Calculates the (relative) error between the current minimum and a known minimum.
Often one uses this just as a stopping criterion for benchmarking problems.
* trueValue: a known global minimum
<[[Config:Measure|Measure]] type="[[Measure#TestMinimum|TestMinimum]]" errorFcn="relativeError" trueValue="-5.0" target="0.1" use="on" />
-->
</[[Config:Run|Run]]>