Difference between revisions of "Whats new"

From SUMOwiki
Revision as of 14:57, 6 June 2008

This page gives a high level overview of the major changes in each toolbox version. For the detailed list of changes please refer to the Changelog page.

5.0 - Released April 2008

SUMO Toolbox

In April 2008, the SUrrogate MOdeling (SUMO) Toolbox saw its first public release.

Sampling related changes

The sample selection and evaluation backends have seen some major improvements.

The number of samples selected each iteration no longer needs to be chosen a priori; it is determined on the fly from the time needed for modeling, the average duration of the past 'n' simulations, and the number of compute nodes (or CPU cores) available. A user-defined upper bound can still be set. It is now also possible to evaluate data points in batches instead of always one by one, which is useful when, for example, submitting a single point carries considerable overhead.
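As a rough illustration of the kind of on-the-fly batch sizing described above, the Python sketch below keeps the available compute nodes busy for roughly one modeling iteration. The formula and names are assumptions for illustration only, not the toolbox's actual rule (the SUMO Toolbox itself is written in Matlab/Java).

```python
def samples_per_iteration(modeling_time, avg_sim_time, n_nodes, upper_bound):
    """Hypothetical heuristic: select enough points to keep all compute
    nodes busy for about the duration of one modeling iteration, capped
    by a user-supplied upper bound."""
    if avg_sim_time <= 0:
        return upper_bound
    # How many simulation "rounds" fit inside one modeling iteration?
    rounds = max(1, round(modeling_time / avg_sim_time))
    return min(rounds * n_nodes, upper_bound)
```

With slow simulations relative to modeling, this degrades gracefully to one point per node; the user bound always wins.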

In addition, data points can be assigned priorities by the sample selection algorithm, and these priorities are reflected in the scheduling decisions made by the sample evaluator. It is now also possible to add different priority-management policies. For example, one could require that 'interest' in sample points be renewed, or their priorities will degrade over time.
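A priority-degradation policy of the kind mentioned above could be sketched as follows; the function and field names are illustrative assumptions, not the toolbox API.

```python
def decay_priorities(priorities, decay=0.9, renewed=()):
    """Hypothetical policy sketch: each pending point's priority shrinks
    by a constant factor every iteration unless 'interest' in that point
    was renewed this iteration."""
    return {
        point: (p if point in renewed else p * decay)
        for point, p in priorities.items()
    }
```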

A new sample selection algorithm has been added that can use any function as a criterion for where to select new samples. This function can use all the information the surrogate provides to calculate how interesting a given sample is. Internally, a numeric global optimizer is applied to the criterion to determine the next sample point(s). Several criteria are implemented, mostly for global optimization. For instance, the 'expected improvement' criterion is very efficient for global optimization, as it balances optimizing the objective itself against refining the surrogate.
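For reference, the standard expected-improvement formula (for minimization) combines the predicted mean and uncertainty of the surrogate at a candidate point. The sketch below shows the textbook closed form for a Gaussian predictor; it is a generic illustration, not the toolbox's implementation.

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Textbook EI for minimization: mu and sigma are the surrogate's
    predicted mean and standard deviation at a candidate point, and
    f_min is the best objective value observed so far."""
    if sigma <= 0.0:
        # No uncertainty: improvement is deterministic.
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    # Exploitation term + exploration term.
    return (f_min - mu) * cdf + sigma * pdf
```

The first term rewards points predicted to beat the incumbent (exploitation); the second rewards points where the surrogate is uncertain (exploration), which is the balance the text describes.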

Finally, the handling of failed or 'lost' data points has become much more robust. Pending points are automatically removed if their evaluation time exceeds a multiple of the average evaluation time, and failed points can be re-submitted a number of times before being regarded as permanently failed.
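The timeout-and-retry policy just described can be sketched as below. The data layout (`submitted_at`, `retries`) and thresholds are illustrative assumptions, not the toolbox's internals.

```python
def triage_pending(pending, avg_time, now, timeout_factor=3, max_retries=2):
    """Hypothetical sketch: drop pending points whose evaluation has run
    longer than `timeout_factor` times the average evaluation time;
    re-submit timed-out points up to `max_retries` times, after which
    they are regarded as permanently failed."""
    keep, resubmit, failed = [], [], []
    for point in pending:
        elapsed = now - point["submitted_at"]
        if elapsed <= timeout_factor * avg_time:
            keep.append(point)              # still within budget
        elif point["retries"] < max_retries:
            point["retries"] += 1
            resubmit.append(point)          # try again
        else:
            failed.append(point)            # permanently failed
    return keep, resubmit, failed
```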

Modeling related changes

The modeling code has seen some much needed cleanups. Adding new model types and improving the existing ones is now much more straightforward.

Since the default Matlab neural network model implementation is quite slow, two additional, much faster implementations were added based on FANN and NNSYSID. The NNSYSID implementation also supports pruning. Although these two implementations are faster, the Matlab implementation still outperforms them in accuracy.

An intelligent seeding strategy has been enabled: the starting point/population of each new model parameter optimization run is now chosen intelligently, giving a more efficient search of the model parameter space. This leads to better models, faster.

Optimization related changes

Various changes

The default error function is now the root relative square error (a global relative error) instead of the absolute root mean square error.
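Concretely, the root relative square error normalizes the model's squared error by the squared error of the trivial mean predictor, which is what makes it a global *relative* measure. A minimal sketch of the standard formula:

```python
import math

def rrse(y_true, y_pred):
    """Root relative square error: the model's root squared error
    divided by that of always predicting the mean of y_true.
    0 means a perfect fit; 1 means no better than the mean."""
    mean_y = sum(y_true) / len(y_true)
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum((t - mean_y) ** 2 for t in y_true)
    return math.sqrt(num / den)
```

Unlike the absolute RMSE, this score is comparable across datasets with different output scales.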

The memory usage has been drastically reduced when performing many runs with multiple datasets (datasets are loaded only once).

The default settings have been harmonized and much improved. For example, the SVM parameter space is now searched in log10 instead of loge. The MinMax measure is also enabled by default if you do not specify any other measure: if you specify minimum and maximum bounds in the simulator XML file, models that do not respect these bounds are penalized.
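The bound-violation penalty behind a MinMax-style measure can be illustrated as follows; the function is a hypothetical sketch of the idea, not the toolbox's scoring code.

```python
def bound_violation(predictions, lo, hi):
    """Hypothetical MinMax-style penalty: total amount by which model
    predictions fall outside the [lo, hi] bounds declared for the
    simulator output. Models that respect the bounds score 0."""
    return sum(max(lo - p, 0.0) + max(p - hi, 0.0) for p in predictions)
```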

Finally, this release has seen countless cleanups, bug fixes, and feature enhancements.