NLopt Python Reference

NLopt includes an interface callable from the Python programming language.

The main purpose of this section is to document the syntax and unique features of the Python API; for more detail on the underlying features, please refer to the C documentation in the NLopt Reference.

Using the NLopt Python API

To use NLopt in Python, your Python program should include the lines:

import nlopt
from numpy import *

which import the nlopt module and also NumPy, which defines the array data types used for communicating with NLopt.

The nlopt.opt class

The NLopt API revolves around an object of type nlopt.opt. Via methods of this object, all of the parameters of the optimization are specified (dimensions, algorithm, stopping criteria, constraints, objective function, etcetera), and then one finally calls the opt.optimize method in order to perform the optimization. The object should normally be created via the constructor:

opt = nlopt.opt(algorithm, n)

given an algorithm (see NLopt Algorithms for possible values) and the dimensionality of the problem (n, the number of design parameters). Whereas the C algorithms are specified by nlopt_algorithm constants of the form NLOPT_LD_MMA, NLOPT_LN_COBYLA, etcetera, the Python algorithm values are of the form nlopt.LD_MMA, nlopt.LN_COBYLA, etcetera (with the NLOPT_ prefix replaced by the nlopt. namespace).
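
For example, a minimal construction (a sketch, assuming the LD_MMA algorithm and two design variables) might look like:

import nlopt

opt = nlopt.opt(nlopt.LD_MMA, 2)   # gradient-based MMA, n = 2 design variables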

There is also a copy constructor nlopt.opt(opt) to make a copy of a given object (equivalent to nlopt_copy in the C API).

If there is an error in the constructor (or copy constructor, or assignment), a MemoryError exception is thrown.

The algorithm and dimension parameters of the object are immutable (cannot be changed without constructing a new object), but you can query them for a given object by the methods:

opt.get_algorithm()
opt.get_dimension()

You can get a string description of the algorithm via:

opt.get_algorithm_name()
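
For instance, a short sketch of these queries:

opt = nlopt.opt(nlopt.LN_COBYLA, 3)
print(opt.get_dimension())        # 3
print(opt.get_algorithm_name())   # human-readable description of the algorithm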

Objective function

The objective function is specified by calling one of the methods:

opt.set_min_objective(f)
opt.set_max_objective(f)

depending on whether one wishes to minimize or maximize the objective function f, respectively. The function f should be of the form:

def f(x, grad):
   if grad.size > 0:
       ...set grad to gradient, in-place...
   return ...value of f(x)...


The return value should be the value of the function at the point x, where x is a NumPy array of length n of the design variables (the same as the dimension passed to the constructor).

In addition, if the argument grad is not empty [i.e. grad.size > 0], then grad is a NumPy array of length n which should (upon return) be set to the gradient of the function with respect to the design variables at x. That is, grad[i] should upon return contain the partial derivative ∂f/∂x_i, for 0 ≤ i < n, if grad is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as "derivative-free," the grad argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, grad may still be empty for some calls.)

Note that grad must be modified in-place by your function f. Assigning to individual components grad[i], or to a slice such as grad[:], operates in-place, but rebinding the name (e.g. grad = 2*x) creates a new array and does not modify the array seen by NLopt.
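
As a concrete sketch, assuming n = 2 and the quadratic objective f(x) = x[0]**2 + x[1]**2:

def f(x, grad):
    if grad.size > 0:
        grad[:] = 2.0 * x          # in-place: gradient of sum(x**2)
    return float(sum(x**2))        # value of the objective at x

opt.set_min_objective(f)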

Bound constraints

The bound constraints can be specified by calling the methods:

opt.set_lower_bounds(lb)
opt.set_upper_bounds(ub)

where lb and ub are arrays (NumPy arrays or Python lists) of length n (the same as the dimension passed to the nlopt.opt constructor). For convenience, these methods also accept a single number as argument, in order to set the lower/upper bounds for all design variables to a single constant.

To retrieve the values of the lower/upper bounds, you can call one of:

opt.get_lower_bounds()
opt.get_upper_bounds()

both of which return NumPy arrays.

To specify an unbounded dimension, you can use ±float('inf') in Python to specify ±∞.
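
For example, a sketch for a two-dimensional problem, bounding both variables above by 10 and leaving x[1] unbounded below:

opt.set_lower_bounds([0.0, -float('inf')])   # x[0] >= 0, x[1] unbounded below
opt.set_upper_bounds(10.0)                   # all variables <= 10
print(opt.get_lower_bounds())                # returns a NumPy array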

Nonlinear constraints

Just as for nonlinear constraints in C, you can specify nonlinear inequality and equality constraints by the methods:

opt.add_inequality_constraint(fc, tol=0)
opt.add_equality_constraint(h, tol=0)

where the arguments fc and h have the same form as the objective function above. The optional tol arguments specify a tolerance in judging feasibility for the purposes of stopping the optimization, as in C.
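
For instance, a sketch of a hypothetical inequality constraint x[0] + x[1] ≤ 1, written in the required form fc(x) ≤ 0:

def fc(x, grad):
    if grad.size > 0:
        grad[:] = [1.0, 1.0]       # gradient of x[0] + x[1] - 1
    return x[0] + x[1] - 1.0       # feasible when this is <= 0

opt.add_inequality_constraint(fc, 1e-8)   # 1e-8 feasibility tolerance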

To remove all of the inequality and/or equality constraints from a given problem, you can call the following methods:

opt.remove_inequality_constraints()
opt.remove_equality_constraints()

Stopping criteria

As explained in the C API Reference and the Introduction, you have multiple options for different stopping criteria that you can specify. (Unspecified stopping criteria are disabled; i.e., they have innocuous defaults.)

For each stopping criterion, there are (at least) two methods: a set method to specify the stopping criterion, and a get method to retrieve its current value. The meaning of each criterion is exactly the same as in the C API.

opt.set_stopval(stopval)
opt.get_stopval()

Stop when an objective value of at least stopval is found.

opt.set_ftol_rel(tol)
opt.get_ftol_rel()

Set relative tolerance on function value.

opt.set_ftol_abs(tol)
opt.get_ftol_abs()

Set absolute tolerance on function value.

opt.set_xtol_rel(tol)
opt.get_xtol_rel()

Set relative tolerance on design variables.

opt.set_xtol_abs(tol)
opt.get_xtol_abs()

Set absolute tolerances on design variables. The tol input must be an array (NumPy array or Python list) of length n (the dimension specified in the nlopt.opt constructor); alternatively, you can pass a single number in order to set the same tolerance for all design variables. get_xtol_abs() returns the tolerances as a NumPy array.

opt.set_maxeval(maxeval)
opt.get_maxeval()

Stop when the number of function evaluations exceeds maxeval. (0 or negative for no limit.)

opt.set_maxtime(maxtime)
opt.get_maxtime()

Stop when the optimization time (in seconds) exceeds maxtime. (0 or negative for no limit.)
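
For example, a sketch combining two of the criteria above (whichever is satisfied first stops the optimization):

opt.set_xtol_rel(1e-4)    # stop when the relative change in x falls below 1e-4
opt.set_maxeval(1000)     # ... or after at most 1000 function evaluations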

Forced termination

In certain cases, the caller may wish to force the optimization to halt for some reason unknown to NLopt: for example, if the user presses Ctrl-C, or if there is an error of some sort in the objective function. You can do this by raising any exception inside your objective/constraint functions: the optimization will be halted gracefully, and the same exception will be raised to the caller. See Exceptions, below. The Python equivalent of nlopt_forced_stop from the C API is to raise an nlopt.ForcedStop exception.
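
A sketch of a forced stop, assuming a hypothetical runaway condition detected inside the objective:

def f(x, grad):
    val = float(sum(x**2))
    if val > 1e10:                # hypothetical condition for giving up
        raise nlopt.ForcedStop    # halts opt.optimize gracefully
    return val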

Performing the optimization

Once all of the desired optimization parameters have been specified in a given object opt, you can perform the optimization by calling:

xopt = opt.optimize(x)

On input, x is an array (NumPy array or Python list) of length n (the dimension of the problem from the nlopt.opt constructor) giving an initial guess for the design parameters. The return value xopt is a NumPy array containing the optimized values of the design parameters.

You can call the following methods to retrieve the optimized objective function value from the last optimize call, and also the return code (including negative/failure return values) from the last optimize call:

opt_val = opt.last_optimum_value()
result = opt.last_optimize_result()

The return code (see below) is positive on success, indicating the reason for termination. On failure (negative return codes), optimize() throws an exception (see Exceptions, below).
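
Putting it together, a minimal sketch of an optimization run:

x0 = [1.0, 1.0]                    # initial guess of length n
xopt = opt.optimize(x0)
print(xopt)                        # optimized design parameters
print(opt.last_optimum_value())    # objective value at xopt
print(opt.last_optimize_result())  # e.g. nlopt.XTOL_REACHED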

Return values

The possible return values are the same as the return values in the C API, except that the NLOPT_ prefix is replaced with the nlopt. namespace. That is, NLOPT_SUCCESS becomes nlopt.SUCCESS, etcetera.

Exceptions

The Error codes (negative return values) in the C API are replaced in the Python API by thrown exceptions. The following exceptions are thrown by the various routines:

RuntimeError
Generic failure, equivalent to NLOPT_FAILURE.
ValueError
Invalid arguments (e.g. lower bounds are bigger than upper bounds, an unknown algorithm was specified, etcetera), equivalent to NLOPT_INVALID_ARGS.
MemoryError
Ran out of memory (a memory allocation failed), equivalent to NLOPT_OUT_OF_MEMORY.
nlopt.RoundoffLimited (subclass of Exception)
Halted because roundoff errors limited progress, equivalent to NLOPT_ROUNDOFF_LIMITED.
nlopt.ForcedStop (subclass of Exception)
Halted because of a forced termination: the user called opt.force_stop() from the user's objective function or raised an nlopt.ForcedStop exception. Equivalent to NLOPT_FORCED_STOP.

If your objective/constraint functions raise any exception during the execution of opt.optimize, it will be caught by NLopt, the optimization will be halted gracefully, and opt.optimize will re-raise the same exception to its caller.
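
For example, a sketch of catching a roundoff-limited halt while letting other exceptions propagate:

try:
    xopt = opt.optimize([1.0, 1.0])
except nlopt.RoundoffLimited:
    # progress was limited by roundoff; the best value found may still be useful
    print(opt.last_optimum_value())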

Local/subsidiary optimization algorithm

Some of the algorithms, especially MLSL and AUGLAG, use a different optimization algorithm as a subroutine, typically for local optimization. You can change the local search algorithm and its tolerances by calling:

opt.set_local_optimizer(local_opt)

Here, local_opt is another nlopt.opt object whose parameters are used to determine the local search algorithm, its stopping criteria, and other algorithm parameters. (However, the objective function, bounds, and nonlinear-constraint parameters of local_opt are ignored.) The dimension n of local_opt must match that of opt.

This function makes a copy of the local_opt object, so you can freely change your original local_opt afterwards without affecting opt.
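
For example, a sketch pairing the MLSL global algorithm with a BOBYQA local search (the algorithm choices here are illustrative):

opt = nlopt.opt(nlopt.G_MLSL_LDS, 2)       # global multi-level single-linkage
local_opt = nlopt.opt(nlopt.LN_BOBYQA, 2)  # derivative-free local search
local_opt.set_xtol_rel(1e-6)               # local stopping tolerance
opt.set_local_optimizer(local_opt)         # opt stores a copy of local_opt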

Initial step size

Just as in the C API, you can get and set the initial step sizes for derivative-free optimization algorithms. The Python equivalents of the C functions are the following methods:

opt.set_initial_step(dx)
opt.get_initial_step(x)

Here, dx is an array (NumPy array or Python list) of the (nonzero) initial steps for each dimension, or a single number if you wish to use the same initial steps for all dimensions. opt.get_initial_step(x) returns the initial step that will be used for a starting guess of x in opt.optimize(x).
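
For instance, a short sketch:

opt.set_initial_step(0.1)               # same initial step in every dimension
dx = opt.get_initial_step([0.0, 0.0])   # steps that would be used from this x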

Stochastic population

Just as in the C API, you can get and set the initial population for stochastic optimization algorithms, by the methods:

opt.set_population(pop)
opt.get_population()

(A pop of zero implies that the heuristic default will be used.)
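
For example, with a stochastic algorithm such as controlled random search (an illustrative choice):

opt = nlopt.opt(nlopt.GN_CRS2_LM, 2)   # controlled random search
opt.set_population(100)                # 0 would select the heuristic default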

Pseudorandom numbers

For stochastic optimization algorithms, we use pseudorandom numbers generated by the Mersenne Twister algorithm, based on code from Makoto Matsumoto. By default, the seed for the random numbers is generated from the system time, so that you will get a different sequence of pseudorandom numbers each time you run your program. If you want to use a "deterministic" sequence of pseudorandom numbers, i.e. the same sequence from run to run, you can set the seed by calling:

nlopt.srand(seed)

To reset the seed based on the system time, you can call:

nlopt.srand_time()

(Normally, you don't need to call this as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling nlopt.srand to set a deterministic seed.)

Version number

To determine the version number of NLopt at runtime, you can call:

nlopt.version_major()
nlopt.version_minor()
nlopt.version_bugfix()

For example, NLopt version 3.1.4 would return major=3, minor=1, and bugfix=4.
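
For example, a one-line sketch printing the full version:

print("NLopt version %d.%d.%d" % (nlopt.version_major(),
                                  nlopt.version_minor(),
                                  nlopt.version_bugfix()))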
