Which minimizers to use with Mantid¶
Below are listed the current recommendations for which minimizers to use with Mantid:
By default Mantid uses Levenberg-Marquardt.
We also recommend Trust Region, in particular where accuracy is important.
The above recommendations are based on the results presented in sections below.
We are expanding the set of fitting problems we test against, which may, for example, provide enough evidence to recommend different minimizers for different subsets of neutron fitting problems in the future. We are also constantly looking for new examples, in particular cases where a user has found a fit difficult or slow.
Also, the fit minimizer benchmarking tool is available for anyone to test new minimizers and modifications to existing minimizers.
For the task of Bayesian probability sampling, use the FABADA minimizer.
Comparing Minimizers¶
Minimizers play a central role when Fitting a model in Mantid. A fit involves the following elements:
a dataset (e.g. a spectrum),
a model or function to fit (e.g. a peak or background function, with parameters),
an initial guess or starting point for the parameters of the function,
a cost function (e.g., squared residuals (fitting errors) weighted by the spectrum errors),
and a minimizer.
The minimizer is the method that adjusts the function parameters so that the model fits the data as closely as possible. The cost function defines the concept of how close a fit is to the data. See the general concept page on Fitting for a broader discussion of how these components interplay when fitting a model with Mantid.
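The elements listed above can be sketched in plain Python. This is an illustrative toy example only (the data, model and parameter names are made up); Mantid assembles the same ingredients internally when you run the Fit algorithm.

```python
# 1. A dataset: x values, observed y values, and y error estimates.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
es = [0.1, 0.1, 0.2, 0.2, 0.3]

# 2. A model (function to fit) with parameters: a straight line y = a*x + b.
def model(x, a, b):
    return a * x + b

# 3. An initial guess (starting point) for the parameters.
a0, b0 = 1.0, 0.0

# 4. A cost function: squared residuals weighted by the spectrum errors.
def cost(a, b):
    return sum(((y - model(x, a, b)) / e) ** 2
               for x, y, e in zip(xs, ys, es))

# 5. The minimizer's job is to find the (a, b) that minimizes cost(a, b).
print(cost(a0, b0))  # cost at the starting point
```

Different minimizers differ only in *how* they search the parameter space for the minimum of this cost function.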
Several minimizers are included with Mantid and can be selected in the Fit Function property browser or when using the algorithm Fit. The following options are available:
BFGS
Conjugate gradient (Fletcher-Reeves imp.)
Conjugate gradient (Polak-Ribiere imp.)
Damping
Levenberg-Marquardt (default)
Levenberg-MarquardtMD
Simplex
SteepestDescent
Trust Region
FABADA
All these algorithms are iterative. The Simplex algorithm, also known as the Nelder-Mead method, belongs to the class of optimization algorithms without derivatives, or derivative-free optimization. Note that here simplex refers to downhill simplex optimization. Steepest descent and the two variants of conjugate gradient included with Mantid (Fletcher-Reeves and Polak-Ribiere) belong to the class of gradient-based minimization algorithms, which use first-order derivatives. The derivatives of the cost function with respect to the parameters drive the iterative process towards a local minimum.
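The gradient-based idea described above can be sketched in a few lines: repeatedly step against the gradient of the cost function. This is an illustrative sketch only (fixed step size, toy quadratic cost, finite-difference gradient); Mantid's minimizers use far more robust line searches and stopping criteria.

```python
def cost(p):
    a, b = p
    # Toy cost function with its minimum at (2, -1).
    return (a - 2.0) ** 2 + 10.0 * (b + 1.0) ** 2

def grad(p, h=1e-6):
    # First-order derivatives estimated by forward finite differences.
    g = []
    for i in range(len(p)):
        q = list(p)
        q[i] += h
        g.append((cost(q) - cost(p)) / h)
    return g

p = [0.0, 0.0]   # starting point
step = 0.05      # fixed step length (a real minimizer adapts this)
for _ in range(200):
    g = grad(p)
    p = [pi - step * gi for pi, gi in zip(p, g)]

print(p)  # close to the true minimum (2, -1)
```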
BFGS and the Levenberg-Marquardt algorithms belong to the second-order class of algorithms, in the sense that they use second-order information about the cost function (the second-order partial derivatives that form its Hessian matrix). Some algorithms, like BFGS, approximate the Hessian from the gradient values of successive iterations. The Levenberg-Marquardt algorithm is a modified Gauss-Newton method that introduces an adaptive damping term to prevent instability when the approximated Hessian is not positive definite. An in-depth description of the methods is beyond the scope of these pages. More information can be found from the links and general references on optimization methods, such as [Kelley1999] and [NocedalAndWright2006].
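The Levenberg-Marquardt idea described above, a Gauss-Newton step damped by an adaptive term, can be sketched on a one-parameter toy problem (so the matrix algebra reduces to scalars). The data, model and update rule here are illustrative, not Mantid's implementation, which relies on GSL.

```python
import math

# Toy dataset generated from y = exp(a*x) with a = 0.7.
xs = [0.0, 0.5, 1.0, 1.5]
ys = [math.exp(0.7 * x) for x in xs]

def residuals(a):
    return [y - math.exp(a * x) for x, y in zip(xs, ys)]

def jacobian(a):
    # d(residual)/da = -x * exp(a*x)
    return [-x * math.exp(a * x) for x in xs]

a, lam = 0.0, 1.0  # initial guess and adaptive damping term
for _ in range(50):
    r, J = residuals(a), jacobian(a)
    jtj = sum(j * j for j in J)                  # 1x1 Gauss-Newton "Hessian"
    jtr = sum(j * ri for j, ri in zip(J, r))
    step = -jtr / (jtj + lam)                    # damped Gauss-Newton step
    if sum(ri ** 2 for ri in residuals(a + step)) < sum(ri ** 2 for ri in r):
        a, lam = a + step, lam * 0.5             # success: reduce damping
    else:
        lam *= 2.0                               # failure: increase damping

print(a)  # close to 0.7
```

Increasing `lam` shifts the step towards small, steepest-descent-like moves, which is what keeps the iteration stable when the Gauss-Newton approximation is poor.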
Finally, FABADA is an algorithm for Bayesian data analysis. It is excluded from the comparison described below, as it is a substantially different algorithm.
In most cases, the implementation of these algorithms is based on the GSL (GNU Scientific Library), and more specifically on the GSL routines for least-squares fitting.
Comparison of relative goodness of fit and run time¶
Here we describe a comparison of minimizers available in Mantid, in terms of how they perform when fitting several benchmark problems. This is a relative comparison in the sense that for every problem the best possible results with Mantid minimizers are given a top score of “1”. The ranking is continuous and the score of a minimizer represents the ratio between its performance and the performance of the best. We compare accuracy and run time.
For example, a ranking of 1.25 for a minimizer for a given problem means:
Referring to the accuracy of a minimizer, it produces a solution with squared residuals 25% larger than the best solution in Mantid.
Referring to the run time, it takes 25% more time than the fastest minimizer.
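The relative ranking described above amounts to dividing each minimizer's value by the best (smallest) value for that problem, so the best always scores exactly 1. A small sketch, with made-up numbers and hypothetical minimizer names:

```python
def rank(values):
    # Score each minimizer relative to the best (smallest) value.
    best = min(values.values())
    return {name: v / best for name, v in values.items()}

# Hypothetical chi-squared values for one test problem:
chi2 = {"MinimizerA": 0.8, "MinimizerB": 1.0, "MinimizerC": 2.4}
print(rank(chi2))  # A scores 1 (best); B about 1.25 (25% worse); C about 3
```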
All the minimizers available in Mantid 3.7 were compared, with the exception of FABADA, which belongs to a different class of methods and could not be compared fairly. For all the minimizers compared here, the algorithm Fit was run using the same initialization or starting points for every test problem, as specified in the test problem definitions.
Accuracy is measured using the sum of squared fitting errors, or “ChiSquared” as defined in Fit, where the fitting errors are the differences between the expected outputs and the outputs calculated by the fitted model: \(\chi_{1}^{2} = \sum_{i} (y_i - f_i)^2\) (see CalculateChiSquared for full details and different variants). Run time is measured as the time it takes to execute the Fit algorithm, i.e. the time it takes to fit one model with one set of initial values of the model parameters against one dataset.
The cost function used in this general comparison is ‘Least squares’ but without using input error estimates (see details below).
Benchmark problems¶
Each test problem included in this comparison is defined by the following information:
Dataset in the form of any number of pairs \(x_i\), \(y_i\) with optional \(y_i\) error estimates
Function to fit, with parameters
Initial values (starting point) of the function parameters
Optional: reference best values for the parameters (some may refer to these as certified values), i.e. target parameter values for the minimizers
The current problems have been obtained from the following sources:
NIST nonlinear regression problems
CUTEst (Constrained and Unconstrained Testing Environment on steroids)
A set of problems extracted from Mantid usage examples and system tests, called here Neutron data. This is a first attempt at evaluating different minimizers using specific neutron datasets with real spectra and observational errors. Significant improvements are expected for future releases of Mantid.
As the NIST and CUTEst test problems do not define observational errors, the comparison shown below does not use the weights of the least-squares cost function. An alternative comparison that uses observational errors as weights in the cost function is also available, with similar results overall.
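The difference between the two cost functions mentioned above can be sketched as follows (toy numbers; see the Fit documentation for the exact definitions Mantid uses):

```python
def chi2_unweighted(ys, fs):
    # Plain sum of squared residuals: every point counts equally.
    return sum((y - f) ** 2 for y, f in zip(ys, fs))

def chi2_weighted(ys, fs, es):
    # Observational errors act as weights: noisy points count less.
    return sum(((y - f) / e) ** 2 for y, f, e in zip(ys, fs, es))

ys = [2.0, 4.1, 6.3]   # observed values
fs = [2.1, 4.0, 6.0]   # model values
es = [0.1, 0.1, 1.0]   # error estimates (the last point is very noisy)

print(chi2_unweighted(ys, fs))   # the large residual dominates
print(chi2_weighted(ys, fs, es)) # the noisy point is down-weighted
```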
Comparison in terms of accuracy¶
Summary, median ranking¶
The summary table shows the median ranking across all the test problems. See detailed results by test problem (accuracy) (also accessible by clicking on the cells of the table).
Alternatively, see the detailed results when using weighted least squares as cost function.
[Summary table: median accuracy ranking of each minimizer (BFGS, Conjugate gradient (Fletcher-Reeves imp.), Conjugate gradient (Polak-Ribiere imp.), Damping, Levenberg-Marquardt, Levenberg-MarquardtMD, Simplex, SteepestDescent, Trust Region) across the CUTEst and Neutron data problem sets.]
Comparison in terms of run time¶
Summary, median ranking¶
The summary table shows the median ranking across all the test problems. See detailed results by test problem (run time).
Alternatively, see the detailed results when using weighted least squares as cost function.
[Summary table: median run-time ranking of each minimizer (BFGS, Conjugate gradient (Fletcher-Reeves imp.), Conjugate gradient (Polak-Ribiere imp.), Damping, Levenberg-Marquardt, Levenberg-MarquardtMD, Simplex, SteepestDescent, Trust Region) across the CUTEst and Neutron data problem sets.]
Technical details for reproducibility¶
Note that fitting results may be sensitive to the platform and versions of the algorithms and underlying libraries used. All the results shown here have been produced using the same version of Mantid and on the same system:
Mantid release 3.8
Debian 8 GNU/Linux system with an Intel Core i7-4790 processor, using GSL version 1.16.
References:
Kelley C.T. (1999). Iterative Methods for Optimization. SIAM Frontiers in Applied Mathematics, vol. 18. ISBN: 978-0-89871-433-3.
Nocedal J., Wright S. (2006). Numerical Optimization, 2nd edition. Springer Series in Operations Research and Financial Engineering. DOI: 10.1007/978-0-387-40065-5.
Category: Concepts