== Plan ==

=== LevelPlot ===

Only change this setting if you are using level plots.

<source lang="xml">
<!--Only change if you are using levelplots-->
<[[Config:LevelPlot|LevelPlot]]>default</[[Config:LevelPlot|LevelPlot]]>
</source>

=== ContextConfig ===

ContextConfig should (normally) always be set to 'default'.

<source lang="xml">
<!--ContextConfig should (normally) always be set to 'default'-->
<[[Config:ContextConfig|ContextConfig]]>default</[[Config:ContextConfig|ContextConfig]]>
</source>

=== SUMO ===

SUMO should (normally) always be set to 'default'.

<source lang="xml">
<!--SUMO should (normally) always be set to 'default'-->
<[[Config:SUMO|SUMO]]>default</[[Config:SUMO|SUMO]]>
</source>

=== AdaptiveModelBuilder ===

The AdaptiveModelBuilder specifies the model type and the modeling algorithm to use. The default value 'rational' refers to rational functions; 'rational' is an id that refers to an AdaptiveModelBuilder tag that is defined below.

<source lang="xml">
<!--The AdaptiveModelBuilder specifies the model type and the modeling algorithm to use. The default value 'rational' refers to rational functions; 'rational' is an id that refers to an AdaptiveModelBuilder tag defined below-->
<[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>rational</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
</source>
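To use a different model type, point this tag at another id. As an illustration only, the 'kriging' id that appears further down this page could be selected instead of 'rational', assuming an AdaptiveModelBuilder definition with that id exists in your configuration file.

<source lang="xml">
<!-- Sketch: select the kriging model builder instead of the rational one
     (assumes an AdaptiveModelBuilder definition with id "kriging" is defined below) -->
<[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>kriging</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
</source>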

=== SampleSelector ===

The method to use for selecting new samples. Again, 'gradient' is an id that refers to a SampleSelector tag defined below.

<source lang="xml">
<!--The method to use for selecting new samples. Again 'gradient' is an id that refers to a SampleSelector tag defined below-->
<[[Config:SampleSelector|SampleSelector]]>gradient</[[Config:SampleSelector|SampleSelector]]>
</source>
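The same id mechanism applies here. As an illustration only, the 'grid' id used later on this page could be selected instead of 'gradient', assuming a SampleSelector definition with that id exists in your configuration file.

<source lang="xml">
<!-- Sketch: select the grid sample selector instead of the gradient one
     (assumes a SampleSelector definition with id "grid" is defined below) -->
<[[Config:SampleSelector|SampleSelector]]>grid</[[Config:SampleSelector|SampleSelector]]>
</source>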

=== Run ===

Runs can be given a custom name by adding a name="the_name" attribute; a repeat attribute can also be used to execute a run multiple times. A minimal sketch is shown below, followed by the fully annotated example.

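For instance, a run that is executed three times under a custom name could be declared as follows. The run name is just an illustrative placeholder; the child tags are the same ones used in the annotated example that follows.

<source lang="xml">
<!-- Sketch: a named run executed three times (the name and repeat values are placeholders) -->
<[[Config:Run|Run]] name="academic_repeated" repeat="3">
   <[[Config:Simulator|Simulator]]>Academic2DTwice.xml</[[Config:Simulator|Simulator]]>
   <[[Config:SampleEvaluator|SampleEvaluator]]>matlab</[[Config:SampleEvaluator|SampleEvaluator]]>
</[[Config:Run|Run]]>
</source>
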
<source lang="xml">
<!--Runs can be given a custom name by adding a name="the_name" attribute; a repeat attribute can also be used to execute a run multiple times-->
<[[Config:Run|Run]] name="" repeat="1">
   <!-- Configuration components; these refer to the components defined below.
        Entries listed here override those defined at the plan level. -->
   
   <!-- This is the problem we are going to model; it refers to an XML file in the examples/ directory -->
   <[[Config:Simulator|Simulator]]>Academic2DTwice.xml</[[Config:Simulator|Simulator]]>
   
   <!--
   How is the simulator implemented: 
     - Matlab script (matlab)
     - scattered dataset (scattered), 
     - local executable (local)
     - etc
   -->
   <[[Config:SampleEvaluator|SampleEvaluator]]>matlab</[[Config:SampleEvaluator|SampleEvaluator]]>
   
   <!--
   The default behavior is to model all outputs and score models using
   cross-validation. See below how to override this. Note that
   cross-validation is a very expensive measure and can significantly
   slow things down when using computationally expensive model types
   (e.g. neural networks).
   -->
   
   <!-- Define the inputs that are to be modelled in this run. This optional setting
        reduces the dimension of the problem by keeping inputs that were not
        selected at 0. When this section is not specified, all inputs are used.
        In this example, input x is filtered out (not mentioned) and input z is
        set to a constant, so it plays no role in the modelling process. -->
   <!--
   <[[Config:Inputs|Inputs]]>
      <[[Config:Input|Input]] name="y" />
      <[[Config:Input|Input]] name="z" value="1.5" />
   </[[Config:Inputs|Inputs]]>
   -->
   
   
   
   <!-- A complex example: a modeling run of the InductivePosts problem with many different
        output configurations.
   
   <[[Config:Outputs|Outputs]]>
      
      Model the modulus of complex output S22 using cross-validation and the default model builder
      and sample selector.
      
      <[[Config:Output|Output]] name="S22" complexHandling="modulus">
         <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".05" />
      </[[Config:Output|Output]]>
      
      
      Model the modulus of complex output S22, but introduce some normally-distributed noise
      (variance .01 by default).
      
      <[[Config:Output|Output]] name="S22" complexHandling="modulus">
         <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".05" />
         <[[Config:Modifier|Modifier]] type="[[Modifier#Noise|Noise]]" />
      </[[Config:Output|Output]]>
      
      
      Model the modulus of complex output S22, but introduce normally-distributed noise
      with variance .1. However, when NaN or Inf values are returned from the simulator,
      we ignore these errors and let the toolbox process the samples normally. By default,
      samples with NaN or Inf values are ignored.
      
      <[[Config:Output|Output]] name="S22" ignoreNaN="no" ignoreInf="no">
         <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".05" />
         <[[Config:Modifier|Modifier]] type="[[Modifier#Noise|Noise]]" distribution="normal" variance=".1" />
      </[[Config:Output|Output]]>
   </[[Config:Outputs|Outputs]]>
   -->
   
   
   <!--
   
   An example configuration for the Academic2DTwice example used here.
       
   <[[Config:Outputs|Outputs]]>
      <[[Config:Output|Output]] name="out">
         <[[Config:SampleSelector|SampleSelector]]>gradient</[[Config:SampleSelector|SampleSelector]]>
         <[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>rational</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
         <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".0001" use="on" />
      </[[Config:Output|Output]]>
      
      <[[Config:Output|Output]] name="outinverse">
         <[[Config:SampleSelector|SampleSelector]]>grid</[[Config:SampleSelector|SampleSelector]]>
         <[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>kriging</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
         <[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".05" use="on" />
      </[[Config:Output|Output]]>
   </[[Config:Outputs|Outputs]]>
   -->

   <!--
   Measure examples:

   * 5-fold cross-validation (warning: expensive on some model types, e.g. it takes a long time on neural networks)
   <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".001" use="on">
      <Option key="folds" value="5"/>
   </[[Config:Measure|Measure]]>

   * Using a validation set, with its size taken as 20% of the available samples
   <[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".001">
      <Option key="percentUsed" value="20"/>
   </[[Config:Measure|Measure]]>


   * Using a validation set defined in an external file (scattered data)
   <[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".001">
      * the validation set comes from a file
      <Option key="type" value="file"/>
      * the test data is scattered data, so we need a scattered sample evaluator to load the data
        and evaluate the points. The filename is taken from the <[[Config:ScatteredDataFile|ScatteredDataFile]]> tag in the simulator
        XML file. Optionally you can specify an option with key "id" to select a specific dataset if there
        is more than one choice.
      <[[Config:SampleEvaluator|SampleEvaluator]] type="ibbt.sumo.SampleEvaluators.datasets.ScatteredDatasetSampleEvaluator"/>
   </[[Config:Measure|Measure]]>

   * Used for testing optimization problems
      * Calculates the (relative) error between the current minimum and a known minimum.
        Often one uses this just as a stopping criterion for benchmarking problems.
      * trueValue: a known global minimum
   <[[Config:Measure|Measure]] type="[[Measure#TestMinimum|TestMinimum]]" errorFcn="relativeError" trueValue="-5.0" target="0.1" use="on" />
   
   
   
   * Examples of combined measures:
   Measure the model based on a set of test samples, taken as a subset of the list of evaluated samples.
   This subset is selected to cover the design space as well as possible.
   <[[Config:Measure|Measure]] type="[[Measure#ValidationSet|ValidationSet]]" target=".001">
      
      <Option key="percentUsed" value="20"/>
      <Option key="type" value="gridded"/>
      <Option key="randomThreshold" value="1000"/>
      
      Submeasures can be defined to work on the model produced by the supermeasure.
      In this case, the ValidationSet measure will generate a new model using a subset
      of the entire list of evaluated samples, and will then do an additional
      cross-validation check on this new model.
      <[[Config:Measure|Measure]] type="[[Measure#CrossValidation|CrossValidation]]" target=".001" use="on">
         <Option key="folds" value="5"/>
      </[[Config:Measure|Measure]]>
      
   </[[Config:Measure|Measure]]>
   
   <[[Config:Measure|Measure]] type="[[Measure#ModelDifference|ModelDifference]]" target=".001" use="off">
      <Option key="LHS" value="1000"/>
   
      <[[Config:Measure|Measure]] type="[[Measure#SampleError|SampleError]]" target=".001" use="off">
         
         <[[Config:Measure|Measure]] type="[[Measure#LeaveNOut|LeaveNOut]]" target=".001" use="off">
            <Option key="count" value="5"/>
         </[[Config:Measure|Measure]]>
         
      </[[Config:Measure|Measure]]>
      
   </[[Config:Measure|Measure]]>
   -->
</[[Config:Run|Run]]>
</source>
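
Putting the plan-level defaults and a single run together, the relevant part of a configuration file could look roughly as follows. This composition is a sketch based only on the snippets above; the exact enclosing element and any additional required sections depend on your SUMO Toolbox version.

<source lang="xml">
<!-- Sketch only: plan-level defaults from this page combined with one run.
     This is an assumed composition of the snippets above, not a verbatim
     excerpt from a shipped configuration file. -->
<[[Config:LevelPlot|LevelPlot]]>default</[[Config:LevelPlot|LevelPlot]]>
<[[Config:ContextConfig|ContextConfig]]>default</[[Config:ContextConfig|ContextConfig]]>
<[[Config:SUMO|SUMO]]>default</[[Config:SUMO|SUMO]]>
<[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>rational</[[Config:AdaptiveModelBuilder|AdaptiveModelBuilder]]>
<[[Config:SampleSelector|SampleSelector]]>gradient</[[Config:SampleSelector|SampleSelector]]>

<[[Config:Run|Run]] name="" repeat="1">
   <[[Config:Simulator|Simulator]]>Academic2DTwice.xml</[[Config:Simulator|Simulator]]>
   <[[Config:SampleEvaluator|SampleEvaluator]]>matlab</[[Config:SampleEvaluator|SampleEvaluator]]>
</[[Config:Run|Run]]>
</source>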