== Motivation ==
Please first read [http://www.springerlink.com/content/24104526223221u3/ this paper] and the page about [[Measures]].
Often it makes sense to use multiple measures. For example, you may be interested in minimizing the average relative error AND the maximum absolute error. Alternatively, you may have a problem with multiple outputs (see [[Running#Models_with_multiple_outputs]]) and it may make sense to model them together in a multi-objective way. This section is about those topics. Note that everything works with or without sample selection.
'''Note that we are talking about optimization of the model parameters (hyperparameter optimization), NOT optimization of the simulator.'''
== Using Multiple Measures ==
To enable multiple measures, simply specify multiple <Measure> tags in your configuration file and make sure the ''use'' attribute is set to ''on''. For example:
<source lang="xml">
<Measure type="ValidationSet" errorFcn="rootMeanSquareError" target=".001" use="on"/>
<Measure type="LRMMeasure" target="0" use="on"/>
</source>
What the toolbox then does with these measures depends on the settings described below.
=== Selecting the best model ===
The toolbox keeps track of the k best models found so far. Each time a model is found that is better than the previous best model, it is processed (plotted, profiled, ...) and saved to disk. If only one measure is active, the best model is simply the model with the lowest measure score.

However, when multiple measures are used, a Pareto-based method decides which model is the best choice. Models that score well on one measure but poorly on another are not discarded immediately, but are kept and given a chance to improve in future iterations of the toolbox. This encourages variety in the models while still ensuring convergence to the optimal accuracy for each measure. A frequently used combination is CrossValidation together with the MinMax measure, which ensures that no poles are present in the model domain when using rational models.
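For example, such a combination might be configured as follows (a sketch: the measure type names follow the text above, but the exact attributes each measure supports may differ):

<source lang="xml">
<!-- Sketch: attribute values are illustrative. -->
<Measure type="CrossValidation" errorFcn="rootMeanSquareError" target=".001" use="on"/>
<Measure type="MinMax" target="0" use="on"/>
</source>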
=== Weighted Single Objective ===
If you specify nothing else, the toolbox will simply minimize the sum of both measure scores (scalarization) and everything continues as normal. However, if the scales of the measures differ greatly, this sum may be dominated by one measure. Or you may simply consider one measure more important than the other. In that case you can add weights as follows:
<source lang="xml">
<Measure weight="0.6" type="ValidationSet" errorFcn="rootMeanSquareError" target=".001" use="on"/>
<Measure weight="0.4" type="LRMMeasure" target="0" use="on"/>
</source>
Now the toolbox will generate models (by optimizing the model parameters) that minimize:
<source lang="matlab">
0.6*(validation score) + 0.4*(LRM score)
</source>
This gives you more fine-grained control over the importance of each measure. Note that weights are normalized since version 6.2. If no weight is specified, it defaults to 1.
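To make the normalization concrete, here is a small MATLAB sketch of how normalized weights combine two measure scores (the variable names and example numbers are hypothetical; the toolbox performs this computation internally):

<source lang="matlab">
% Sketch: example scores are hypothetical; the toolbox does this internally.
validationScore = 0.02;                 % example validation measure score
lrmScore        = 0.50;                 % example LRM measure score
weights = [0.6 0.4];                    % weights from the <Measure> tags
weights = weights ./ sum(weights);      % normalization (since version 6.2)
globalScore = weights * [validationScore; lrmScore]  % scalarized objective to minimize
</source>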
=== Multi-Objective ===
Sometimes a weighting scheme is not enough and you want to do true multi-objective optimization (e.g., to see the trade-off between the measures). In this case there is no longer a single best model but a set of trade-off models, which is saved every k iterations of the multi-objective optimization routine.

To enable multi-objective hyperparameter optimization, there are two adaptive model builders you can use:
# GeneticModelBuilder (e.g., ''anngenetic''): uses the multi-objective version of the GA implemented in the [http://www.mathworks.com/products/gads/ Matlab GADS Toolbox].
# ParetoModelBuilder (e.g., ''krigingnsga''): uses the standard NSGA-II algorithm (other algorithms could easily be added).
In both cases you must set the option ''paretoMode="true"'' in the configuration of the ModelBuilder. Note that when using the GeneticModelBuilder you should increase the population size and the number of generations in order to get a good Pareto front.
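For example, a Pareto-enabled GeneticModelBuilder configuration might look like this (a sketch: ''paretoMode'' is the documented option, but the surrounding element and the GA option names are assumptions; consult your configuration file for the exact syntax):

<source lang="xml">
<!-- Sketch: only paretoMode is documented above; the other names are assumptions. -->
<AdaptiveModelBuilder type="anngenetic">
  <Option key="paretoMode" value="true"/>
  <!-- A larger population and more generations help obtain a well-spread front. -->
  <Option key="populationSize" value="50"/>
  <Option key="generations" value="30"/>
</AdaptiveModelBuilder>
</source>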
If you want to plot the search trace of a multi-objective run, you can use the ''plotModelParetoFront'' function in the tools directory. To extract the k-th Pareto front, you can use the ''nonDominatedSort'' function.
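A minimal usage sketch (the argument lists shown are assumptions; check the help of both functions in the tools directory):

<source lang="matlab">
% Sketch: argument lists are assumptions, see the function help for details.
plotModelParetoFront('output/myRun');          % plot the search trace of a run

scores = [0.10 0.80; 0.20 0.40; 0.50 0.10];    % one row of measure scores per model
fronts = nonDominatedSort(scores);             % assumed: Pareto front index per model
firstFront = find(fronts == 1);                % models on the first Pareto front
</source>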
== Multi-output modeling ==
Finally, it is also possible to generate models with multiple outputs in a multi-objective way. In this case you would usually use only a single measure but have a simulator with more than one output. You also have to set ''combineOutputs'' to ''true'' (see [[Running#Models_with_multiple_outputs]]).

If you then use one of the multi-objective model builders, the toolbox will attempt to find the Pareto front between models that score well on output 1 and models that score well on output 2. This gives information about the correlation between both outputs in the hyperparameter space. The final models may also be used to generate diverse ensembles.
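A possible configuration is sketched below (hypothetical: ''combineOutputs'' is shown as a generic option, but its exact location is documented at [[Running#Models_with_multiple_outputs]]):

<source lang="xml">
<!-- Sketch: the placement of combineOutputs is hypothetical, see the Running page. -->
<Option key="combineOutputs" value="true"/>
<Measure type="CrossValidation" use="on"/>
<AdaptiveModelBuilder type="krigingnsga">
  <Option key="paretoMode" value="true"/>
</AdaptiveModelBuilder>
</source>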
When combined with the automatic model type selection algorithm, this also allows one to automatically select the best model type for each output without having to perform multiple runs.