NLopt Algorithms

NLopt includes implementations of a number of different optimization algorithms. These algorithms are listed below, including links to the original source code (if any) and citations to the relevant articles in the literature (see Citing NLopt).

Even where I found available free/open-source code for the various algorithms, I modified the code at least slightly (and in some cases noted below, substantially) for inclusion into NLopt. I apologize in advance to the authors for any new bugs I may have inadvertently introduced into their code.

Nomenclature

Each algorithm in NLopt is identified by a named constant, which is passed to the NLopt routines in the various languages in order to select a particular algorithm. These constants are of the form `NLOPT_{G,L}{N,D}_xxxx`, where `G`/`L` denotes global/local optimization and `N`/`D` denotes derivative-free/gradient-based algorithms, respectively.

For example, the `NLOPT_LN_COBYLA` constant refers to the COBYLA algorithm (described below), which is a local (`L`) derivative-free (`N`) optimization algorithm.
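As a rough sketch of how such a constant is used (this assumes the object-style `nlopt_create`/`nlopt_optimize` C interface of more recent NLopt versions; the objective `myfunc`, the bounds, and the starting point are purely illustrative):

```c
#include <stdio.h>
#include <math.h>
#include <nlopt.h>

/* placeholder objective: f(x) = sqrt(x[1]); the grad argument is unused by
   the derivative-free (N) algorithms, so it can be ignored here */
static double myfunc(unsigned n, const double *x, double *grad, void *data)
{
    (void) n; (void) grad; (void) data;
    return sqrt(x[1]);
}

int main(void)
{
    double lb[2] = { -HUGE_VAL, 0.0 };  /* lower bounds */
    double x[2]  = { 1.234, 5.678 };    /* initial guess */
    double minf;

    nlopt_opt opt = nlopt_create(NLOPT_LN_COBYLA, 2); /* local (L), derivative-free (N) */
    nlopt_set_lower_bounds(opt, lb);
    nlopt_set_min_objective(opt, myfunc, NULL);
    nlopt_set_xtol_rel(opt, 1e-4);

    if (nlopt_optimize(opt, x, &minf) < 0)
        printf("nlopt failed!\n");
    else
        printf("minimum %g at (%g, %g)\n", minf, x[0], x[1]);

    nlopt_destroy(opt);
    return 0;
}
```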

Many of the algorithms have several variants, which are grouped together below.

Global optimization

All of the global-optimization algorithms currently require you to specify bound constraints on all the optimization parameters.
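In the object-style C interface, that means supplying finite lower and upper bounds before optimizing (only a sketch; the algorithm constant and the bounds here are purely illustrative):

```c
/* the global (G) algorithms need finite bounds on every parameter */
double lb[2] = { -10.0, -10.0 };
double ub[2] = {  10.0,  10.0 };
nlopt_opt opt = nlopt_create(NLOPT_GN_DIRECT_L, 2);
nlopt_set_lower_bounds(opt, lb);
nlopt_set_upper_bounds(opt, ub);
```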

DIRECT and DIRECT-L

DIRECT is the DIviding RECTangles algorithm for global optimization, described in:

• D. R. Jones, C. D. Perttunen, and B. E. Stuckman, "Lipschitzian optimization without the Lipschitz constant," J. Optimization Theory and Applications, vol. 79, p. 157 (1993).

and DIRECT-L is the "locally biased" variant proposed by:

• J. M. Gablonsky and C. T. Kelley, "A locally-biased form of the DIRECT algorithm," J. Global Optimization, vol. 21 (1), p. 27-37 (2001).

These are deterministic-search algorithms based on systematic division of the search domain into smaller and smaller hyperrectangles. The Gablonsky version makes the algorithm "more biased towards local search" so that it is more efficient for functions without too many local minima. NLopt contains several implementations of both of these algorithms. I would tend to try `NLOPT_GN_DIRECT_L` first; YMMV.

First, it contains a from-scratch re-implementation of both algorithms, specified by the constants `NLOPT_GN_DIRECT` and `NLOPT_GN_DIRECT_L`, respectively.

Second, there is a slightly randomized variant of DIRECT-L, specified by `NLOPT_GN_DIRECT_L_RAND`, which uses some randomization to help decide which dimension to halve next in the case of near-ties.

The DIRECT and DIRECT-L algorithms start by rescaling the bound constraints to a hypercube, which gives all dimensions equal weight in the search procedure. If your dimensions do not have equal weight, e.g. if you have a "long and skinny" search space and your function varies at about the same speed in all directions, it may be better to use unscaled variants of these algorithms, which are specified as `NLOPT_GN_DIRECT_NOSCAL`, `NLOPT_GN_DIRECT_L_NOSCAL`, and `NLOPT_GN_DIRECT_L_RAND_NOSCAL`, respectively.

Finally, NLopt also includes separate implementations based on the original Fortran code by Gablonsky et al. (1998-2001), which are specified as `NLOPT_GN_ORIG_DIRECT` and `NLOPT_GN_ORIG_DIRECT_L`. These implementations have a number of hard-coded limitations on things like the number of function evaluations; I removed several of these limitations, but some remain.

Most of the above algorithms only handle bound constraints, and in fact require finite bound constraints (they are not applicable to unconstrained problems). They do not handle arbitrary nonlinear constraints. However, the `ORIG` versions by Gablonsky et al. include some support for arbitrary constraints, if you represent constraint violations by having your objective function return NaN or infinity (`HUGE_VAL`, in C).
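A sketch of that approach (the objective and the constraint x0 + x1 ≤ 1 below are purely illustrative):

```c
#include <math.h>  /* HUGE_VAL */

/* objective for NLOPT_GN_ORIG_DIRECT(_L): a violated constraint is signaled
   by returning +infinity instead of a finite objective value */
static double constrained_objective(unsigned n, const double *x, double *grad, void *data)
{
    (void) n; (void) grad; (void) data;
    if (x[0] + x[1] - 1.0 > 0)          /* example constraint: x0 + x1 <= 1 */
        return HUGE_VAL;                /* violated: reject this point */
    return x[0] * x[0] + x[1] * x[1];   /* otherwise, the real objective */
}
```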

Controlled Random Search (CRS) with local mutation

My implementation of the "controlled random search" (CRS) algorithm (in particular, the CRS2 variant) with the "local mutation" modification, as defined by:

• P. Kaelo and M. M. Ali, "Some variants of the controlled random search algorithm for global optimization," J. Optim. Theory Appl. 130 (2), 253-264 (2006).

The original CRS2 algorithm was described by:

• W. L. Price, "A controlled random search procedure for global optimization," in Towards Global Optimization 2, p. 71-84 edited by L. C. W. Dixon and G. P. Szego (North-Holland Press, Amsterdam, 1978).

The CRS algorithms are sometimes compared to genetic algorithms, in that they start with a random "population" of points, and randomly "evolve" these points by heuristic rules. In this case, the "evolution" somewhat resembles a randomized Nelder-Mead algorithm.

Only bound-constrained problems are supported by this algorithm.
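As a sketch, in recent NLopt releases this variant is selected by the constant `NLOPT_GN_CRS2_LM` (treat the name as an assumption if your version predates it), and the size of the initial random population can optionally be overridden; here `n`, `lb`, and `ub` stand for the dimension and the finite bounds:

```c
nlopt_opt opt = nlopt_create(NLOPT_GN_CRS2_LM, n); /* CRS2 with local mutation */
nlopt_set_lower_bounds(opt, lb);  /* finite bounds are required */
nlopt_set_upper_bounds(opt, ub);
nlopt_set_population(opt, 50);    /* 0 (the default) lets NLopt pick a size */
```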

MLSL (Multi-Level Single-Linkage)

This is my implementation of the "Multi-Level Single-Linkage" (MLSL) algorithm for global optimization by a sequence of random local optimizations, proposed by:

• A. H. G. Rinnooy Kan and G. T. Timmer, "Stochastic global optimization methods," Mathematical Programming, vol. 39, p. 27-78 (1987). (Actually 2 papers — part I: clustering methods, p. 27, then part II: multilevel methods, p. 57.)

We also include a modification of MLSL that uses a Sobol low-discrepancy sequence (LDS) instead of pseudorandom numbers, which was argued to improve the convergence rate by:

• Sergei Kucherenko and Yury Sytsko, "Application of deterministic low-discrepancy sequences in global optimization," Computational Optimization and Applications, vol. 30, p. 297-318 (2005).

In either case, MLSL is a "multistart" algorithm: it works by doing a sequence of local optimizations (using some other local optimization algorithm) from random or low-discrepancy starting points. MLSL is distinguished, however, by a "clustering" heuristic that helps it to avoid repeated searches of the same local optima, and it has some theoretical guarantees of finding all local optima in a finite number of local minimizations.

The local-search portion of MLSL can use any of the other algorithms in NLopt, and in particular can use either gradient-based (`D`) or derivative-free (`N`) algorithms. The local search uses the derivative/nonderivative algorithm set by `nlopt_set_local_search_algorithm` (currently defaulting to `NLOPT_LD_MMA` and `NLOPT_LN_COBYLA` for derivative/nonderivative searches, respectively).

LDS-based MLSL with gradient-based local search is specified as `NLOPT_GD_MLSL_LDS`, while LDS-based MLSL with derivative-free local search is specified by `NLOPT_GN_MLSL_LDS`. The non-LDS original MLSL (using pseudo-random numbers, currently via the Mersenne twister algorithm) is indicated by `NLOPT_GD_MLSL` and `NLOPT_GN_MLSL`, respectively.
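As a sketch using the newer object-style C interface, where the local algorithm, its tolerances, and the overall stopping criteria are attached explicitly via `nlopt_set_local_optimizer` rather than `nlopt_set_local_search_algorithm` (the particular choices of `myfunc`, `n`, bounds, and tolerances below are only illustrative):

```c
nlopt_opt opt = nlopt_create(NLOPT_GN_MLSL_LDS, n);  /* LDS-based MLSL, derivative-free */
nlopt_set_lower_bounds(opt, lb);
nlopt_set_upper_bounds(opt, ub);
nlopt_set_min_objective(opt, myfunc, NULL);

nlopt_opt local = nlopt_create(NLOPT_LN_COBYLA, n);  /* local-search algorithm */
nlopt_set_xtol_rel(local, 1e-6);                     /* tolerances for each local search */
nlopt_set_local_optimizer(opt, local);               /* opt stores its own copy */
nlopt_destroy(local);

nlopt_set_maxeval(opt, 10000);                       /* overall stopping criterion */
```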

StoGO

StoGO is a global optimization algorithm by Madsen et al. that works by systematically dividing the search space (which must be bound-constrained) into smaller hyper-rectangles via a branch-and-bound technique, and searching them by a gradient-based local-search algorithm (a BFGS variant), optionally including some randomness (hence the "Sto", which I believe stands for "stochastic").

StoGO is written in C++, which means that it is only included when you compile NLopt with the [[NLopt Installation#NLopt with C++ algorithms|C++ algorithms]] enabled.

StoGO is specified within NLopt by `NLOPT_GD_STOGO`, or `NLOPT_GD_STOGO_RAND` for the randomized variant.

Some references on StoGO are:

• S. Gudmundsson, "Parallel Global Optimization," M.Sc. Thesis, IMM, Technical University of Denmark, 1998.
• K. Madsen, S. Zertchaninov, and A. Zilinskas, "Global Optimization using Branch-and-Bound," unpublished (1998). A preprint of this paper is included in the `stogo` subdirectory of NLopt as `paper.pdf`.
• S. Zertchaninov and K. Madsen, "A C++ Programme for Global Optimization," IMM-REP-1998-04, Department of Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark, 1998. A copy of this report is included in the `stogo` subdirectory of NLopt as `techreport.pdf`.

Local derivative-free optimization

Of these algorithms, only COBYLA currently supports arbitrary nonlinear inequality constraints; the rest of them support bound-constrained or unconstrained problems only.

COBYLA (Constrained Optimization BY Linear Approximations)

This is a derivative of M. J. D. Powell's implementation of the COBYLA (Constrained Optimization BY Linear Approximations) algorithm for derivative-free optimization with nonlinear inequality constraints, described in:

• M. J. D. Powell, "A direct search optimization method that models the objective and constraint functions by linear interpolation," in Advances in Optimization and Numerical Analysis, eds. S. Gomez and J.-P. Hennart (Kluwer Academic: Dordrecht, 1994), p. 51-67.

and reviewed in:

• M. J. D. Powell, "Direct search algorithms for optimization calculations," Acta Numerica 7, 287-336 (1998).

It constructs successive linear approximations of the objective function and constraints via a simplex of n+1 points (in n dimensions), and optimizes these approximations in a trust region at each step.
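For instance, a nonlinear inequality constraint c(x) ≤ 0 can be attached as follows (a sketch with the object-style C interface; `myfunc` and the constraint x0 + x1 ≤ 1 are placeholders):

```c
/* hypothetical constraint function, interpreted as c(x) <= 0 */
static double my_constraint(unsigned n, const double *x, double *grad, void *data)
{
    (void) n; (void) grad; (void) data;  /* COBYLA never requests a gradient */
    return x[0] + x[1] - 1.0;
}

/* ... during setup: */
nlopt_opt opt = nlopt_create(NLOPT_LN_COBYLA, 2);
nlopt_set_min_objective(opt, myfunc, NULL);
nlopt_add_inequality_constraint(opt, my_constraint, NULL, 1e-8); /* tolerance on violation */
```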

The original code itself was written in Fortran by Powell and was converted to C in 2004 by Jean-Sebastien Roy (js@jeannot.org) for the SciPy project. The version in NLopt was based on Roy's C version, downloaded from:

and slightly modified (e.g. to incorporate all the NLopt termination criteria).