Config:Plan

From SUMOwiki
Revision as of 17:09, 17 February 2009 by Icouckuy (talk | contribs)

Generated for SUMO toolbox version 6.1. We are well aware that the documentation is not always complete and is possibly even out of date in some cases. We try to document everything as best we can, but much is limited by the available time and manpower; we are a university research group, after all. The most up-to-date documentation can always be found (if not here) in the default.xml configuration file and, of course, in the source files. If something is unclear, please don't hesitate to ask.

Plan

LevelPlot

Only change this setting if you need to generate level plots

<!--Only change this setting if you need to generate level plots-->
<[[Config:LevelPlot|LevelPlot]]>default</[[Config:LevelPlot|LevelPlot]]>

ContextConfig

ContextConfig should (normally) always be set to 'default'

<!--ContextConfig should (normally) always be set to 'default'-->
<[[Config:ContextConfig|ContextConfig]]>default</[[Config:ContextConfig|ContextConfig]]>

SUMO

SUMO should (normally) always be set to 'default'

<!--SUMO should (normally) always be set to 'default'-->
<[[Config:SUMO|SUMO]]>default</[[Config:SUMO|SUMO]]>

Simulator

This is the problem we are going to model. It refers to the name of a project directory in the examples/ folder. It is also possible to specify an absolute path, or a particular XML file within a project directory.

<!--This is the problem we are going to model. It refers to the name of a project directory in the examples/ folder. It is also possible to specify an absolute path, or a particular XML file within a project directory.-->
<[[Config:Simulator|Simulator]]>Academic2DTwice</[[Config:Simulator|Simulator]]>
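For illustration, the alternatives mentioned above could look like the following; both values are hypothetical and only show the expected shape of the entry:

```xml
<!-- Hypothetical: a particular XML file within a project directory -->
<Simulator>Academic2DTwice/Academic2DTwice.xml</Simulator>

<!-- Hypothetical: an absolute path to a project directory -->
<Simulator>/home/user/projects/MyProblem</Simulator>
```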

Run

Runs can be given a custom name using the name attribute; a repeat attribute is also available to repeat a run multiple times. Placeholders available for run names include: #adaptivemodelbuilder#, #simulator# and #sampleselector#

<!--Runs can be given a custom name using the name attribute; a repeat attribute is also available to repeat a run multiple times. Placeholders available for run names include: #adaptivemodelbuilder#, #simulator# and #sampleselector#-->
<[[Config:Run|Run]] name="" repeat="1">
   <!-- Entries listed here override those defined at the plan level -->
   
   <!--
       The AdaptiveModelBuilder specifies the model type and the hyperparameter optimization
       algorithm (= the algorithm to choose the model parameters, also referred to as the
       modeling algorithm or model builder) to use. The default value 'rational' refers to rational functions.
       'rational' is an id that refers to an AdaptiveModelBuilder tag that is defined below.
   -->
   <[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>rational</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
   
   <!-- What experimental design to use for the very first batch of samples -->
   <[[Config:InitialDesign|InitialDesign]]>lhdWithCornerPoints</[[Config:InitialDesign|InitialDesign]]>
   
   <!--
        The method to use for selecting new samples. Again, 'default' is an id that refers to a
        SampleSelector tag defined below. To switch off sampling, simply remove this tag. -->
   <[[Config:SampleSelector|SampleSelector]]>default</[[Config:SampleSelector|SampleSelector]]>
   
   <!--
    How the simulator is implemented (i.e., where your data comes from):
      - a Matlab script (matlab)
      - a scattered dataset (scatteredDataset)
      - a local executable (local)
      - etc.
      
      Make sure this entry matches what is declared in the simulator XML file
      in the project directory. It makes no sense to put matlab here if you only
      have a scattered dataset to work with.
   -->
   <[[Config:SampleEvaluator|SampleEvaluator]]>matlab</[[Config:SampleEvaluator|SampleEvaluator]]>
   
   <!--
    The default behavior is to model all outputs with separate models and to score models using 
    CrossValidation and MinMax. See below how to override this. Note that cross-validation is a very
    expensive measure and can significantly slow things down when using computationally
    expensive model types (e.g., neural networks)
   -->

   <!-- Define which inputs should be modeled (optional). This setting 
        reduces the dimension of the problem by fixing inputs that were not
        selected at 0. If an <[[Config:Inputs|Inputs]]> tag is not specified, the default behavior is to
        model all inputs.
        In this example, both inputs x and y are selected
   -->
   <[[Config:Inputs|Inputs]]>
      <[[Config:Input|Input]] name="x"/>
      <[[Config:Input|Input]] name="y"/>
      <!-- Setting a simulator input to a constant -->
      <!-- <[[Config:Input|Input]] name="y"  value="14.6"/> -->
   </[[Config:Inputs|Inputs]]>
   
   <!--          
    An example configuration for the Academic2DTwice example used here.
    Each output can be configured to use separate model builders, measures and sample selectors.
    
    Again, it is not necessary to specify an Outputs tag. If you don't, all outputs are modeled
   in parallel.
   -->
   <[[Config:Outputs|Outputs]]>
      <[[Config:Output|Output]] name="out">
         <!--
              You can specify output-specific configuration here
             
         <[[Config:SampleSelector|SampleSelector]]>lola</[[Config:SampleSelector|SampleSelector]]>
         <[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>rational</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
         <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".0001" use="on" />
         -->
      </[[Config:Output|Output]]>
      
      <[[Config:Output|Output]] name="outinverse">
         <!--
         <[[Config:SampleSelector|SampleSelector]]>grid</[[Config:SampleSelector|SampleSelector]]>
         <[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>krigingps</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
         <[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".05" use="on" />
         -->
      </[[Config:Output|Output]]>
   </[[Config:Outputs|Outputs]]>

   <!--   
      Complex example of a modeling run of the InductivePosts example with many different
      output configurations.
   -->
   <!--
   <[[Config:Outputs|Outputs]]>

       Model the modulus of complex output S22 using cross-validation and the default model
       builder and sample selector.
      
      <[[Config:Output|Output]] name="S22" complexHandling="modulus">
         <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".05" />
      </[[Config:Output|Output]]>
      
      
      Model the real part of complex output S22, but introduce some normally-distributed noise
      (variance .01 by default).
      
      <[[Config:Output|Output]] name="S22" complexHandling="real">
         <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".05" />
          * For other types of modifiers, see the datamodifiers subdirectory
         <[[Config:Modifier|Modifier]] type="[[Modifier#Noise|Noise]]" />
      </[[Config:Output|Output]]>
   -->

       <!-- Model selection measure to use for this run (how models are scored).
         If you set a measure to off, its value is printed but not used for modeling. 
         If multiple measures are on, the weighted average value is optimized
         (unless a Pareto-enabled modelbuilder is used). -->
        
   <!--
   Measure examples:

    * 5-fold cross-validation (warning: expensive on some model types!)
   <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".001" use="on">
      <Option key="folds" value="5"/>
   </[[Config:Measure|Measure]]>   

    * Using a validation set whose size is taken as 20% of the available samples
   <[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".001">
      <Option key="percentUsed" value="20"/>
   </[[Config:Measure|Measure]]>

    * Using a validation set defined in an external file (scattered data)
    <[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".001">
       * the validation set comes from a file
       <Option key="type" value="file"/>
       * the test data is scattered data, so we need a scattered sample evaluator
         to load the data and evaluate the points. The filename is taken from the
         <[[Config:ScatteredDataFile|ScatteredDataFile]]> tag in the simulator xml file.
         Optionally, you can specify an option with key "id" to select a specific
         dataset if there is more than one choice.
       <[[Config:SampleEvaluator|SampleEvaluator]]
          type="ibbt.sumo.SampleEvaluators.datasets.ScatteredDatasetSampleEvaluator"/>
    </[[Config:Measure|Measure]]>

   * Used for testing optimization problems
      * Calculates the (relative) error between the current minimum and a known minimum.
        Often one uses this just as a stopping criterion for benchmarking problems.
      * trueValue: a known global minimum
   <[[Config:Measure|Measure]] type="[[Measure#TestMinimum|TestMinimum]]" errorFcn="relativeError" trueValue="-5.0" target="0.1" use="on" />   
   -->
</[[Config:Run|Run]]>
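As an illustrative sketch of the name and repeat attributes described above, a run could combine the documented placeholders with the ids used elsewhere in this file; how the placeholders expand at runtime is an assumption, not documented here:

```xml
<!-- Illustrative sketch only: repeat the run 5 times, with a name
     built from the documented placeholders -->
<Run name="#simulator#-#adaptivemodelbuilder#" repeat="5">
   <AdaptiveModelBuilder>rational</AdaptiveModelBuilder>
   <InitialDesign>lhdWithCornerPoints</InitialDesign>
   <SampleSelector>default</SampleSelector>
   <SampleEvaluator>matlab</SampleEvaluator>
</Run>
```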