Difference between revisions of "Whats new"
Revision as of 11:39, 20 February 2008
This page gives a high level overview of the major changes in each toolbox version. For the detailed list of changes please refer to the changelog.
5.0 - Released February 2008
Rebranding to SUMO Toolbox
From now on the M3-Toolbox will be known as the SUrrogate MOdeling (SUMO) Toolbox.
Part of the reason for this rebranding is that the governing institution has changed. All research and development related to the SUMO toolbox is now conducted at Ghent University (UGent) (instead of Antwerp University (UA)).
The sample selection and evaluation backends have seen some major improvements. The number of samples selected each iteration no longer needs to be chosen a priori, but is determined on the fly based on the time needed for modeling, the average duration of the past 'n' simulations and the number of compute nodes (or CPU cores) available. Of course, a user-defined upper bound can still be enforced. It is now also possible to evaluate data points in batches instead of always one-by-one. This is useful if, for example, there is considerable overhead in submitting a single point.
In addition, data points can be assigned priorities by the sample selection algorithm. These priorities are then reflected in the scheduling decisions made by the sample evaluator. It is now also possible to plug in different priority management policies. For example, one could require that interest in sample points be renewed, or else their priorities will degrade over time.
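The renew-or-degrade policy mentioned above can be sketched as a single decay step. The names and the multiplicative decay rule are illustrative assumptions only:

```python
def decay_priorities(pending, decay=0.9, renewed=()):
    """One step of a hypothetical time-based priority policy: points
    whose interest was renewed this iteration keep their priority,
    all other pending points decay toward zero so stale requests
    gradually lose scheduling weight."""
    return {
        point: (prio if point in renewed else prio * decay)
        for point, prio in pending.items()
    }
```

Applied each iteration, points that nobody re-requests eventually drop to the bottom of the evaluation queue.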
A new sample selection algorithm has been added in this version that can use any function as a criterion for where to select new samples. This function can use all the information the surrogate provides to calculate how interesting a certain sample is. Internally, a numeric global optimizer is applied to the criterion to determine the next sample point(s). Several criteria are implemented, mostly for global optimization. For instance, the expected improvement criterion is very efficient for global optimization as it balances optimizing the objective with refining the surrogate.
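For reference, the standard expected improvement criterion (for minimization) can be computed from the surrogate's predicted mean and standard deviation as below. This is the textbook formula; the toolbox's exact implementation may differ:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Textbook expected improvement for minimization.

    mu, sigma: surrogate prediction mean and standard deviation at a
    candidate point; f_best: best objective value observed so far.
    Large values mean the point is promising either because its
    predicted value is low (exploitation) or its uncertainty is high
    (exploration)."""
    if sigma <= 0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # standard normal cdf
    return (f_best - mu) * cdf + sigma * pdf
```

Maximizing this criterion with a global optimizer yields the next sample point(s), which is exactly the balance between optimization and surrogate refinement described above.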
Finally the handling of failed or 'lost' data points has become much more robust. Pending points are automatically removed if their evaluation time exceeds a multiple of the average evaluation time. Failed points can also be re-submitted a number of times before being regarded as permanently failed.
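The timeout-and-retry handling of lost and failed points can be sketched as a classification step over the pending queue. The data layout, thresholds and function name below are illustrative assumptions:

```python
def classify_pending(pending, avg_time, now, timeout_factor=3, max_retries=2):
    """Hypothetical bookkeeping pass: pending maps a point id to
    (submit_time, n_failures). A point is considered lost once it has
    been running longer than timeout_factor times the average
    evaluation time; failed points are resubmitted until max_retries
    is exceeded, after which they are regarded as permanently failed."""
    lost, resubmit, failed = [], [], []
    for pid, (t0, n_fail) in pending.items():
        if now - t0 > timeout_factor * avg_time:
            lost.append(pid)
        elif 0 < n_fail <= max_retries:
            resubmit.append(pid)
        elif n_fail > max_retries:
            failed.append(pid)
    return lost, resubmit, failed
```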
The modeling code has seen some much needed cleanups. Adding new model types and improving the existing ones is now much more straightforward.
Since the default neural network model implementation is quite slow, two additional implementations were added, based on FANN and NNSYSID, which are much faster. The NNSYSID implementation also supports pruning. However, while these two implementations are faster, the Matlab implementation still outperforms them in accuracy.
An intelligent seeding strategy has been enabled. The starting point/population of each new model parameter optimization run is now chosen intelligently so that the model parameter space is searched more effectively. This leads to better models, faster.
Optimization related changes
- The Optimization framework was removed for several reasons
- Added an optimizer class hierarchy for solving subproblems transparently
- Added several criteria for optimization, available through the InfillSamplingCriterion.
Various changes
The default error function is now the root relative square error (= a global relative error) instead of the absolute root mean square error. The memory usage has been drastically reduced when performing many runs with multiple datasets (datasets are loaded only once).
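The difference between the two error functions mentioned above can be made concrete: the root relative square error normalizes the squared error by that of the trivial mean predictor, yielding a scale-free, global relative measure. A minimal sketch (standard definitions, not the toolbox's code):

```python
import math

def rmse(y_true, y_pred):
    """Absolute root-mean-square error: depends on the output scale."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def rrse(y_true, y_pred):
    """Root relative square error: squared error normalized by the
    error of always predicting the mean, so 1.0 means 'no better
    than the mean predictor' regardless of the output scale."""
    mean = sum(y_true) / len(y_true)
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum((t - mean) ** 2 for t in y_true)
    return math.sqrt(num / den)
```

A model that simply predicts the mean of the data scores an RRSE of exactly 1.0, whereas its RMSE could be arbitrarily large or small depending on the output range.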
The default settings have been harmonized and much improved. For example, the SVM parameter space is now searched in log10 scale instead of the natural logarithm. The MinMax measure is now also enabled by default if you do not specify any other measure. This means that if you specify minimum and maximum bounds in the simulator XML file, models which do not respect these bounds are penalized.
Finally this release has seen countless cleanups, bugfixes and feature enhancements.