Difference between revisions of "General guidelines"
Revision as of 12:32, 9 October 2007
The default.xml file can be used as a starting point for default behaviour for the M3-Toolbox. If you are a new user, you should initially leave most options at their default values: the defaults were chosen because they provide the most robust modelling behaviour and produce good results in most cases.
There are, however, situations in which the best choice of components depends on the problem itself, so the defaults are not necessarily optimal. This page gives general guidelines for deciding which component to use in each situation you may encounter. You are of course free to ignore these rules and experiment with other settings; the guidelines are only meant to offer some sense of direction.
Measures
The default measure is CrossValidation. Although it is an accurate, general-purpose measure, there are three considerations to keep in mind:
- If it is relatively expensive to train a model (for example, with neural networks), cross-validation is also very slow, because it must train one model per fold (five folds by default). If modelling takes too long, consider a faster alternative such as TestSamples.
- Cross-validation might give a biased result when combined with the GridSampleSelector, because the GridSampleSelector tends to cluster samples around one point. The metamodel is then very accurate for all points in that cluster, which inflates the cross-validation score. So when using CrossValidation and GridSampleSelector together, keep in mind that the real accuracy might be slightly lower than the estimated one.
- When using the Polynomial modeller, you might want to manually add a MinMaxMeasure (if you have a rough estimate of the minimum and maximum values of your outputs) and use it together with CrossValidation. The MinMaxMeasure eliminates models with poles in the design space, because such poles always violate the minimum and maximum bounds. This usually results in better models and quicker convergence.
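The two ideas above are generic and can be sketched outside the toolbox. The following Python fragment is not M3-Toolbox code and all function names in it are made up for illustration; it shows why k-fold cross-validation costs k separate model trainings, and how a MinMax-style bound check can reject a model whose predictions escape known output bounds (a symptom of poles inside the design space):

```python
import math

def k_fold_cv(xs, ys, train, predict, k=5):
    """Root-mean-square cross-validation error.

    Each of the k folds requires training a fresh model, which is why
    cross-validation costs roughly k times as much as a single fit.
    """
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]
    sq_errs = []
    for fold in folds:
        held_out = set(fold)
        train_x = [x for i, x in enumerate(xs) if i not in held_out]
        train_y = [y for i, y in enumerate(ys) if i not in held_out]
        model = train(train_x, train_y)          # one training per fold
        sq_errs.extend((predict(model, xs[i]) - ys[i]) ** 2 for i in fold)
    return math.sqrt(sum(sq_errs) / n)

def within_bounds(model, predict, test_points, lo, hi):
    """MinMax-style check: reject a model whose predictions break the
    known output bounds anywhere in the tested region."""
    return all(lo <= predict(model, x) <= hi for x in test_points)

# A deliberately cheap stand-in model (nearest neighbour), so the
# cross-validation mechanics are visible without a real modeller:
def train_nn(xs, ys):
    return list(zip(xs, ys))

def predict_nn(model, x):
    return min(model, key=lambda p: abs(p[0] - x))[1]

xs = [i / 10 for i in range(20)]
ys = [math.sin(x) for x in xs]

err = k_fold_cv(xs, ys, train_nn, predict_nn, k=5)    # trains 5 models
model = train_nn(xs, ys)
ok = within_bounds(model, predict_nn, xs, -1.0, 1.0)  # sine stays in [-1, 1]
```

Note that `k_fold_cv` evaluates every sample exactly once as a held-out point, which is what makes the estimate nearly unbiased for well-spread samples; the GridSampleSelector caveat above arises precisely because clustered samples make the held-out points too easy to predict.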