Multi-Objective Modeling

From SUMOwiki
THIS PAGE IS UNDER CONSTRUCTION
== Motivation ==
  
Please first read [http://www.sumo.intec.ugent.be/files/techreport-mo-09-08.pdf the technical report available here] and the page about [[Measures]].

Often it makes sense to use multiple measures.  For example, you may be interested in minimizing the average relative error AND the maximum absolute error.  Alternatively, you may have a problem with multiple outputs (see [[Running#Models_with_multiple_outputs]]) and it may make sense to model them together in a multi-objective way.  This section is about those topics.
  
 
== Using Multiple Measures ==
 
To enable multiple measures, simply specify multiple <Measure> tags in your configuration file and make sure the ''use'' attribute is set to ''on''.  For example:

<pre><nowiki>
<Measure type="ValidationSet" errorFcn="rootMeanSquareError" target=".001" use="on"/>
<Measure type="LRMMeasure" target="0" use="on"/>
</nowiki></pre>

What the toolbox then does with this depends on some other settings.
  
 
=== Weighted Single Objective ===
 
If you specify nothing else, the toolbox will simply minimize the sum of both measures (scalarization) and everything continues as normal.  However, if the scales of the measures differ greatly this may not be fair, or you may consider one measure more important than the other.  In that case you can add weights as follows:

<pre><nowiki>
<Measure weight="0.6" type="ValidationSet" errorFcn="rootMeanSquareError" target=".001" use="on"/>
<Measure weight="0.4" type="LRMMeasure" target="0" use="on"/>
</nowiki></pre>

Now the toolbox will minimize:

<pre><nowiki>
0.6*(validation score) + 0.4*(LRM score)
</nowiki></pre>

This gives you more fine-grained control over the importance of each measure.  Note that it is up to you to ensure the weights are normalized.  If no weight is specified, it defaults to 1.
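To make the arithmetic concrete, here is a minimal Python sketch of what the weighted scalarization computes.  The ''scalarize'' helper is hypothetical (it is not part of the SUMO toolbox); it only illustrates the weighted sum described above, including the default weight of 1.

```python
def scalarize(scores, weights=None):
    """Combine several measure scores into one objective (lower is better).

    Hypothetical helper, not part of the SUMO toolbox: each score is
    multiplied by its weight and the products are summed.  If no weights
    are given, every measure gets the default weight of 1.
    """
    if weights is None:
        weights = [1.0] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

# With weights 0.6 and 0.4, as in the configuration above:
# 0.6 * (validation score) + 0.4 * (LRM score)
combined = scalarize([0.05, 0.10], [0.6, 0.4])  # approximately 0.07
```

Note that nothing here normalizes the weights for you, mirroring the toolbox behaviour described above: weights 6 and 4 give a ten-times-larger objective than 0.6 and 0.4.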
  
 
=== Multi-Objective ===
 

Revision as of 00:17, 8 February 2009


== Multi-output modeling ==

The best alternative is ValidationSet, which by default behaves as cross-validation in which only one fold is considered, reducing the cost of the measure by a factor of 5.  A second measure, called MinMax, is also activated by default; it lets the user force the model to remain within certain bounds, which speeds up convergence.  See below for more details on how to set these bounds.

However, in certain situations it can be very effective to use different measures, or to use multiple measures together.  When multiple measures are used, an intelligent Pareto-based method decides which model is the best choice.  Models that score well on one measure but poorly on another are not discarded immediately, but are given a chance to improve in further iterations of the toolbox.  This encourages variety in the models, while still ensuring convergence to the optimal accuracy for each measure.  An often-used combination is CrossValidation with the MinMax measure, to ensure that no poles are present in the model domain.
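The Pareto-based selection described above can be sketched as follows.  This is a generic illustration of Pareto dominance (all measures minimized), not the toolbox's actual implementation: a model's score vector is kept as long as no other model is at least as good on every measure and strictly better on at least one.

```python
def dominates(a, b):
    """True if score vector a Pareto-dominates b (lower is better on all measures)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(score_vectors):
    """Return the non-dominated score vectors: none of them can be discarded yet."""
    return [m for m in score_vectors
            if not any(dominates(o, m) for o in score_vectors if o != m)]

# Four models scored on two measures, e.g. (cross-validation error, MinMax score).
scores = [(0.2, 0.9), (0.5, 0.5), (0.9, 0.1), (0.6, 0.6)]
front = pareto_front(scores)
# (0.6, 0.6) is dominated by (0.5, 0.5); the other three trade off
# the two measures against each other, so all three stay in the front.
```

This is why a model that scores poorly on one measure is not thrown away immediately: as long as it is best (or tied) on some other measure, it stays on the front.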