NLopt Reference

From AbInitio

Revision as of 17:56, 16 June 2010, by Stevenj


NLopt is a library, not a stand-alone program—it is designed to be called from your own program in C, C++, Fortran, Matlab, GNU Octave, or other languages. This reference section describes the programming interface (API) of NLopt in the C language. The reference manuals for the other languages can be found in the corresponding sections of the NLopt manual.

The old API from versions of NLopt prior to 2.0 is deprecated, but continues to be supported for backwards compatibility. You can find it described in the NLopt Deprecated API Reference.

Another source of information is the Unix man page: on Unix, you can run e.g. man nlopt for documentation of the C API. In Matlab and GNU Octave, the corresponding command is help nlopt_optimize.


Compiling and linking your program to NLopt

An NLopt program in C should include the NLopt header file:

#include <nlopt.h>

For programs in compiled languages like C or Fortran, when you compile your program you will have to link it to the NLopt library. This is in addition to including the header file (#include <nlopt.h> in C or #include <nlopt.hpp> in C++). On Unix, you would normally link with a command something like:

compiler ...source/object files... -lnlopt -lm -o myprogram

where compiler is cc, f77, g++, or whatever is appropriate for your machine/language.

Note: the -lnlopt -lm options, which link to the NLopt library (and the math library, which it requires), must come after your source/object files. In general, the rule is that if A depends upon B, then A must come before B in the link command.

Note: the above example assumes that you have installed the NLopt library in a place where the compiler knows to find it (e.g. in a standard directory like /usr/lib or /usr/local/lib). If you installed somewhere else (e.g. in your home directory if you are not a system administrator), then you will need to use a -L flag to tell the compiler where to find the library. See the installation manual.

The nlopt_opt object

The NLopt API revolves around an "object" of type nlopt_opt (an opaque pointer type). Via this object, all of the parameters of the optimization are specified (dimensions, algorithm, stopping criteria, constraints, objective function, etcetera), and then one finally passes this object to nlopt_optimize in order to perform the optimization. The object is created by calling:

nlopt_opt nlopt_create(nlopt_algorithm algorithm, unsigned n);

which returns a newly allocated nlopt_opt object (or NULL if there was an error, e.g. out of memory), given an algorithm (see NLopt Algorithms for possible values) and the dimensionality of the problem (n, the number of design parameters).

When you are finished with the object, you must deallocate it by calling:

void nlopt_destroy(nlopt_opt opt);

Simple assignment (=) makes two pointers to the same object. To make an independent copy of an object, use:

nlopt_opt nlopt_copy(const nlopt_opt opt);

The algorithm and dimension parameters of the object are immutable (cannot be changed without creating a new object), but you can query them for a given object by calling:

nlopt_algorithm nlopt_get_algorithm(const nlopt_opt opt);
unsigned nlopt_get_dimension(const nlopt_opt opt);

Objective function

The objective function is specified by calling one of:

nlopt_result nlopt_set_min_objective(nlopt_opt opt, nlopt_func f, void* f_data);
nlopt_result nlopt_set_max_objective(nlopt_opt opt, nlopt_func f, void* f_data);

depending on whether one wishes to minimize or maximize the objective function f, respectively. The function f should be of the form:

 double f(unsigned n, const double* x, double* grad, void* f_data);

The return value should be the value of the function at the point x, where x points to an array of length n of the design variables. The dimension n is identical to the one passed to nlopt_create.

In addition, if the argument grad is not NULL, then grad points to an array of length n which should (upon return) be set to the gradient of the function with respect to the design variables at x. That is, grad[i] should upon return contain the partial derivative ∂f/∂x_i, for 0 ≤ i < n, if grad is non-NULL. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the grad argument will always be NULL and need never be computed. (For algorithms that do use gradient information, however, grad may still be NULL for some calls.)

The f_data argument is the same as the one passed to nlopt_set_min_objective or nlopt_set_max_objective, and may be used to pass any additional data through to the function. (That is, it may be a pointer to some caller-defined data structure/type containing information your function needs, which you convert from void* by a typecast.)

Bound constraints

Most of the algorithms in NLopt are designed for minimization of functions with simple bound constraints on the inputs. That is, the input vector x is constrained to lie in a hyperrectangle lb[i] ≤ x[i] ≤ ub[i] for 0 ≤ i < n. These bounds are specified by passing arrays lb and ub of length n (the dimension of the problem, from nlopt_create) to one or both of the functions:

nlopt_result nlopt_set_lower_bounds(nlopt_opt opt, const double* lb);
nlopt_result nlopt_set_upper_bounds(nlopt_opt opt, const double* ub);

(Note that these functions make a copy of the lb and ub arrays, so subsequent changes to the caller's lb and ub arrays have no effect on the opt object.)

If a lower/upper bound is not set, the default is no bound (unconstrained, i.e. a bound of infinity); it is possible to have lower bounds but not upper bounds or vice versa. Alternatively, the user can call one of the above functions and explicitly pass a lower bound of -HUGE_VAL and/or an upper bound of +HUGE_VAL for some design variables to make them have no lower/upper bound, respectively. (HUGE_VAL is the standard C constant for a floating-point infinity, found in the math.h header file.)

Note, however, that some of the algorithms in NLopt, in particular most of the global-optimization algorithms, do not support unconstrained optimization and will return an error in nlopt_optimize if you do not supply finite lower and upper bounds.

For convenience, the following two functions are supplied in order to set the lower/upper bounds for all design variables to a single constant (so that you don’t have to fill an array with a constant value):

nlopt_result nlopt_set_lower_bounds1(nlopt_opt opt, double lb);
nlopt_result nlopt_set_upper_bounds1(nlopt_opt opt, double ub);

Nonlinear constraints

Several of the algorithms in NLopt (MMA, COBYLA, and ORIG_DIRECT) also support arbitrary nonlinear inequality constraints, and some additionally allow nonlinear equality constraints (ISRES and AUGLAG). For these algorithms, you can specify as many nonlinear constraints as you wish by calling the following functions multiple times.

In particular, a nonlinear inequality constraint of the form fc(x) ≤ 0, where the function fc is of the same form as the objective function described above, can be specified by calling:

nlopt_result nlopt_add_inequality_constraint(nlopt_opt opt, nlopt_func fc, void* fc_data, double tol);

Just as for the objective function, fc_data is a pointer to arbitrary user data that will be passed through to the fc function whenever it is called. The parameter tol is a tolerance that is used for the purpose of stopping criteria only: a point x is considered feasible for judging whether to stop the optimization if fc(x) ≤ tol. A tolerance of zero means that NLopt will try not to consider any x to be converged unless fc is strictly non-positive; generally, at least a small positive tolerance is advisable to reduce sensitivity to rounding errors.

(The return value is negative if there was an error, e.g. an invalid argument or an out-of-memory situation.)

Similarly, a nonlinear equality constraint of the form h(x) = 0, where the function h is of the same form as the objective function described above, can be specified by calling:

nlopt_result nlopt_add_equality_constraint(nlopt_opt opt, nlopt_func h, void* h_data, double tol);

Just as for the objective function, h_data is a pointer to arbitrary user data that will be passed through to the h function whenever it is called. The parameter tol is a tolerance that is used for the purpose of stopping criteria only: a point x is considered feasible for judging whether to stop the optimization if |h(x)| ≤ tol. For equality constraints, a small positive tolerance is strongly advised in order to allow NLopt to converge even if the equality constraint is slightly nonzero.

(For any algorithm listed as "derivative-free" below, the grad argument to fc or h will always be NULL and need never be computed.)

To remove all of the inequality and/or equality constraints from a given problem opt, you can call the following functions:

nlopt_result nlopt_remove_inequality_constraints(nlopt_opt opt);
nlopt_result nlopt_remove_equality_constraints(nlopt_opt opt);

Stopping criteria

TO DO.

Return values

TO DO.

Local/subsidiary optimization algorithm

TO DO.

Initial step size

TO DO.

Stochastic population

TO DO.

Pseudorandom numbers

For stochastic optimization algorithms, we use pseudorandom numbers generated by the Mersenne Twister algorithm, based on code from Makoto Matsumoto. By default, the seed for the random numbers is generated from the system time, so that you will get a different sequence of pseudorandom numbers each time you run your program. If you want to use a "deterministic" sequence of pseudorandom numbers, i.e. the same sequence from run to run, you can set the seed by calling:

void nlopt_srand(unsigned long seed);

Some of the algorithms also support using low-discrepancy sequences (LDS), sometimes known as quasi-random numbers. NLopt uses the Sobol LDS, which is implemented for up to 1111 dimensions.

To reset the seed based on the system time, you can call:

void nlopt_srand_time(void);

(Normally, you don't need to call this as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling nlopt_srand to set a deterministic seed.)

Version number

To determine the version number of NLopt at runtime, you can call:

void nlopt_version(int *major, int *minor, int *bugfix);

For example, NLopt version 3.1.4 would return *major=3, *minor=1, and *bugfix=4.
