http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&action=history&feed=atom
NLopt Reference - Revision history (2024-03-29T05:30:23Z)
Revision history for this page on the wiki (MediaWiki 1.7.3)

http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4699&oldid=prev
Stevenj at 20:51, 28 March 2013 (2013-03-28T20:51:32Z)<p></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:51, 28 March 2013</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 8:</strong></td>
<td colspan="2" align="left"><strong>Line 8:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">* [[NLopt Python Reference]]</td><td> </td><td style="background: #eee; font-size: smaller;">* [[NLopt Python Reference]]</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">* [[NLopt Guile Reference]]</td><td> </td><td style="background: #eee; font-size: smaller;">* [[NLopt Guile Reference]]</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">* [https://github.com/stevengj/NLopt.jl NLopt Julia Reference]</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">The old API from versions of NLopt prior to 2.0 is deprecated, but continues to be supported for backwards compatibility. You can find it described in the [[NLopt Deprecated API Reference]].</td><td> </td><td style="background: #eee; font-size: smaller;">The old API from versions of NLopt prior to 2.0 is deprecated, but continues to be supported for backwards compatibility. You can find it described in the [[NLopt Deprecated API Reference]].</td></tr>
</table>
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4450&oldid=prev
Stevenj: /* Preconditioning with approximate Hessians */ (2012-07-20T20:46:49Z)<p><span class="autocomment">Preconditioning with approximate Hessians</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:46, 20 July 2012</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 326:</strong></td>
<td colspan="2" align="left"><strong>Line 326:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"> void pre(unsigned n, const double *x, const double *v, double *vpre, void *f_data);</td><td> </td><td style="background: #eee; font-size: smaller;"> void pre(unsigned n, const double *x, const double *v, double *vpre, void *f_data);</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">This function <span style="color: red; font-weight: bold;">takes </span>a vector ''v'' and should compute ''vpre = H(x) v'' where ''H'' is an approximate second derivative at ''x''. The CCSAQ algorithm '''requires''' that your matrix ''H'' be [[w:Positive-definite_matrix#Positive-semidefinite|positive semidefinite]], i.e. that it be real-symmetric with nonnegative eigenvalues.</td><td>+</td><td style="background: #cfc; font-size: smaller;">This function <span style="color: red; font-weight: bold;">should take </span>a vector ''v'' and should compute ''vpre = H(x) v'' where ''H'' is an approximate second derivative at ''x''. The CCSAQ algorithm '''requires''' that your matrix ''H'' be [[w:Positive-definite_matrix#Positive-semidefinite|positive semidefinite]], i.e. that it be real-symmetric with nonnegative eigenvalues.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td></tr>
</table>
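The diff above pins down the contract of the `pre` callback: given ''v'', write ''H(x) v'' into `vpre`, where ''H'' must be positive semidefinite for CCSAQ. As a minimal sketch of a callback satisfying that contract (not taken from the source), the following uses a diagonal approximate Hessian, which is real-symmetric with nonnegative eigenvalues whenever its entries are nonnegative; the `precond_data` struct and `diag` field are illustrative names, not part of NLopt:

```c
#include <stddef.h>

/* Illustrative user data: a diagonal approximation to the Hessian.
   A diagonal matrix with nonnegative entries is trivially positive
   semidefinite, so it meets CCSAQ's requirement on H. */
typedef struct {
    const double *diag;  /* diag[i] approximates d^2 f / dx_i^2 */
} precond_data;

/* Matches the nlopt_precond signature quoted in the diff:
   compute vpre = H(x) v for the approximate Hessian H. */
void pre(unsigned n, const double *x, const double *v,
         double *vpre, void *f_data)
{
    const precond_data *d = (const precond_data *) f_data;
    (void) x;  /* this simple H does not depend on x */
    for (unsigned i = 0; i < n; ++i)
        vpre[i] = d->diag[i] * v[i];
}
```

Because the callback uses only plain C types, it compiles without `nlopt.h`; only the registration call (shown in the later revisions) requires the library.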
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4449&oldid=prev
Stevenj: /* Preconditioning with approximate Hessians */ (2012-07-20T20:46:36Z)<p><span class="autocomment">Preconditioning with approximate Hessians</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:46, 20 July 2012</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 317:</strong></td>
<td colspan="2" align="left"><strong>Line 317:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">If you know the Hessian (second-derivative) matrix of your objective function, i.e. the matrix ''H'' with <math>H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}</math> for an objective ''f'', then in principle this could be used to accelerate local optimization. In fact, even a reasonable ''approximation'' for ''H'' could be useful if it captures information about the largest eigenvalues of ''H'' and the corresponding eigenvectors. Such an approximate Hessian is often called a ''preconditioner'' in the context of iterative solvers, so we adopt that terminology here.</td><td> </td><td style="background: #eee; font-size: smaller;">If you know the Hessian (second-derivative) matrix of your objective function, i.e. the matrix ''H'' with <math>H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}</math> for an objective ''f'', then in principle this could be used to accelerate local optimization. In fact, even a reasonable ''approximation'' for ''H'' could be useful if it captures information about the largest eigenvalues of ''H'' and the corresponding eigenvectors. Such an approximate Hessian is often called a ''preconditioner'' in the context of iterative solvers, so we adopt that terminology here.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;"><span style="color: red; font-weight: bold;">T</span></td><td>+</td><td style="background: #cfc; font-size: smaller;"><span style="color: red; font-weight: bold;">Currently, support for preconditioners in NLopt is somewhat experimental, and is only used in the <code>NLOPT_LD_CCSAQ</code> algorithm. You specify a preconditioned objective function by calling one of:</span></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"> </td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"><span style="color: red; font-weight: bold;"> nlopt_result nlopt_set_precond_min_objective(nlopt_opt opt, nlopt_func f, nlopt_precond pre, void *f_data);</span></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"><span style="color: red; font-weight: bold;"> nlopt_result nlopt_set_precond_min_objective(nlopt_opt opt, nlopt_func f, nlopt_precond pre, void *f_data);</span></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"> </td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"><span style="color: red; font-weight: bold;">which are identical to <code>nlopt_set_min_objective</code> and <code>nlopt_set_max_objective</code>, respectively, except that they additionally specify a preconditioner <code>pre</code>, which is a function of the form:</span></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"> </td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"><span style="color: red; font-weight: bold;"> void pre(unsigned n, const double *x, const double *v, double *vpre, void *f_data);</span></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"> </td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"><span style="color: red; font-weight: bold;">This function takes a vector ''v'' and should compute ''vpre = H(x) v'' where ''H'' is an approximate second derivative at ''x''. The CCSAQ algorithm '''requires''' that your matrix ''H'' be [[w:Positive-definite_matrix#Positive-semidefinite|positive semidefinite]], i.e. that it be real-symmetric with nonnegative eigenvalues.</span></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td></tr>
</table>
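This revision adds the full recovered section: a preconditioned objective pairs an `nlopt_func` with an `nlopt_precond`. The sketch below (illustrative weights, not from the source) shows a quadratic objective whose exact Hessian is diagonal and constant, so the same array serves as the preconditioner; the registration call is left in a comment because it needs `nlopt.h` from an installed NLopt:

```c
#include <stddef.h>

/* Objective f(x) = sum_i w[i] * x[i]^2 / 2, with gradient w[i] * x[i].
   The weights w are illustrative. */
static const double w[2] = {4.0, 1.0};

double myfunc(unsigned n, const double *x, double *grad, void *f_data)
{
    (void) f_data;
    double f = 0.0;
    for (unsigned i = 0; i < n; ++i) {
        if (grad) grad[i] = w[i] * x[i];
        f += 0.5 * w[i] * x[i] * x[i];
    }
    return f;
}

/* For this f the exact Hessian is H = diag(w), constant in x and
   positive semidefinite since every w[i] >= 0. */
void mypre(unsigned n, const double *x, const double *v,
           double *vpre, void *f_data)
{
    (void) x; (void) f_data;
    for (unsigned i = 0; i < n; ++i)
        vpre[i] = w[i] * v[i];
}

/* With NLopt installed, both would be registered together, e.g.:
   nlopt_opt opt = nlopt_create(NLOPT_LD_CCSAQ, 2);
   nlopt_set_precond_min_objective(opt, myfunc, mypre, NULL);       */
```

An exact Hessian is the limiting case of a preconditioner; in practice any PSD approximation capturing the largest eigenvalues of ''H'' can be supplied the same way.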
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4448&oldid=prev
Stevenj: /* Vector storage for limited-memory quasi-Newton algorithms */ (2012-07-20T20:41:03Z)<p><span class="autocomment">Vector storage for limited-memory quasi-Newton algorithms</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:41, 20 July 2012</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 312:</strong></td>
<td colspan="2" align="left"><strong>Line 312:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Passing ''M''=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets ''M'' to 10 or at most 10&nbsp;[[W:Mebibyte|MiB]] worth of vectors, whichever is larger.</td><td> </td><td style="background: #eee; font-size: smaller;">Passing ''M''=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets ''M'' to 10 or at most 10&nbsp;[[W:Mebibyte|MiB]] worth of vectors, whichever is larger.</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">==Preconditioning with approximate Hessians==</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">If you know the Hessian (second-derivative) matrix of your objective function, i.e. the matrix ''H'' with <math>H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}</math> for an objective ''f'', then in principle this could be used to accelerate local optimization. In fact, even a reasonable ''approximation'' for ''H'' could be useful if it captures information about the largest eigenvalues of ''H'' and the corresponding eigenvectors. Such an approximate Hessian is often called a ''preconditioner'' in the context of iterative solvers, so we adopt that terminology here.</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">T</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td></tr>
</table>
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4447&oldid=prev
Stevenj: /* Vector-valued constraints */ (2012-07-20T20:34:31Z)<p><span class="autocomment">Vector-valued constraints</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:34, 20 July 2012</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 133:</strong></td>
<td colspan="2" align="left"><strong>Line 133:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"> nlopt_mfunc c, void* c_data, const double *tol);</td><td> </td><td style="background: #eee; font-size: smaller;"> nlopt_mfunc c, void* c_data, const double *tol);</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">Here, <code>m</code> is the dimensionality of the constraint result and <code>tol</code> points to an array of length <code>m</code> of the tolerances in each constraint dimension. The constraint function must be of the form:</td><td>+</td><td style="background: #cfc; font-size: smaller;">Here, <code>m</code> is the dimensionality of the constraint result and <code>tol</code> points to an array of length <code>m</code> of the tolerances in each constraint dimension <span style="color: red; font-weight: bold;">(or <code>NULL</code> for zero tolerances)</span>. The constraint function must be of the form:</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"> void c(unsigned m, double *result, unsigned n, const double* x, double* grad, void* f_data);</td><td> </td><td style="background: #eee; font-size: smaller;"> void c(unsigned m, double *result, unsigned n, const double* x, double* grad, void* f_data);</td></tr>
</table>
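The diff above fixes the `tol` description (`NULL` means zero tolerances) for vector-valued constraints. As a sketch of the `nlopt_mfunc` form it quotes, here is an illustrative constraint with ''m''&nbsp;=&nbsp;2 components in ''n''&nbsp;=&nbsp;2 variables (the specific constraint functions are made up for illustration); NLopt documents `grad` as an ''m''×''n'' row-major array, `grad[i*n + j]` = ∂`result[i]`/∂`x[j]`:

```c
/* Vector constraint with two components:
     result[0] = x0 + x1 - 1  <= 0
     result[1] = x0^2 - x1    <= 0
   grad, if non-NULL, holds the m-by-n Jacobian in row-major order. */
void c(unsigned m, double *result, unsigned n,
       const double *x, double *grad, void *f_data)
{
    (void) m; (void) f_data;  /* this example assumes m == 2, n == 2 */
    result[0] = x[0] + x[1] - 1.0;
    result[1] = x[0] * x[0] - x[1];
    if (grad) {
        grad[0 * n + 0] = 1.0;        grad[0 * n + 1] = 1.0;
        grad[1 * n + 0] = 2.0 * x[0]; grad[1 * n + 1] = -1.0;
    }
}

/* Registration (requires nlopt.h; per the diff, tol may be NULL
   for zero tolerances in every dimension):
   const double tol[2] = {1e-8, 1e-8};
   nlopt_add_inequality_mconstraint(opt, 2, c, NULL, tol);        */
```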
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4416&oldid=prev
Stevenj: /* Vector storage for limited-memory quasi-Newton methods */ (2011-05-26T18:29:04Z)<p><span class="autocomment">Vector storage for limited-memory quasi-Newton methods</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 18:29, 26 May 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 304:</strong></td>
<td colspan="2" align="left"><strong>Line 304:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">(Normally, you don't need to call this as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling <code>nlopt_srand</code> to set a deterministic seed.)</td><td> </td><td style="background: #eee; font-size: smaller;">(Normally, you don't need to call this as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling <code>nlopt_srand</code> to set a deterministic seed.)</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">==Vector storage for limited-memory quasi-Newton <span style="color: red; font-weight: bold;">methods</span>==</td><td>+</td><td style="background: #cfc; font-size: smaller;">==Vector storage for limited-memory quasi-Newton <span style="color: red; font-weight: bold;">algorithms</span>==</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">Some of the NLopt algorithms are limited-memory "quasi-Newton" <span style="color: red; font-weight: bold;">methods</span>, which "remember" the gradients from a finite number ''M'' of the previous optimization steps in order to construct an approximate 2nd derivative matrix. The bigger ''M'' is, the more storage the algorithms require, but on the other hand they ''may'' converge faster for larger ''M''. By default, NLopt chooses a heuristic value of ''M'', but this can be changed/retrieved by calling:</td><td>+</td><td style="background: #cfc; font-size: smaller;">Some of the NLopt algorithms are limited-memory "quasi-Newton" <span style="color: red; font-weight: bold;">algorithms</span>, which "remember" the gradients from a finite number ''M'' of the previous optimization steps in order to construct an approximate 2nd derivative matrix. The bigger ''M'' is, the more storage the algorithms require, but on the other hand they ''may'' converge faster for larger ''M''. By default, NLopt chooses a heuristic value of ''M'', but this can be changed/retrieved by calling:</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"> nlopt_result nlopt_set_vector_storage(nlopt_opt opt, unsigned M);</td><td> </td><td style="background: #eee; font-size: smaller;"> nlopt_result nlopt_set_vector_storage(nlopt_opt opt, unsigned M);</td></tr>
</table>
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4405&oldid=prev
Stevenj: /* Vector storage for limited-memory quasi-Newton methods */ (2011-05-26T18:16:10Z)<p><span class="autocomment">Vector storage for limited-memory quasi-Newton methods</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 18:16, 26 May 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 311:</strong></td>
<td colspan="2" align="left"><strong>Line 311:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"> unsigned nlopt_get_vector_storage(const nlopt_opt opt);</td><td> </td><td style="background: #eee; font-size: smaller;"> unsigned nlopt_get_vector_storage(const nlopt_opt opt);</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">Passing ''M''=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets ''M'' to 10 or at most 10&nbsp;[[<span style="color: red; font-weight: bold;">WP</span>:Mebibyte|MiB]] worth of vectors, whichever is larger.</td><td>+</td><td style="background: #cfc; font-size: smaller;">Passing ''M''=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets ''M'' to 10 or at most 10&nbsp;[[<span style="color: red; font-weight: bold;">W</span>:Mebibyte|MiB]] worth of vectors, whichever is larger.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td></tr>
</table>
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4403&oldid=prev
Stevenj: /* Vector storage for limited-memory quasi-Newton methods */ (2011-05-26T18:13:22Z)<p><span class="autocomment">Vector storage for limited-memory quasi-Newton methods</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 18:13, 26 May 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 306:</strong></td>
<td colspan="2" align="left"><strong>Line 306:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">==Vector storage for limited-memory quasi-Newton methods==</td><td> </td><td style="background: #eee; font-size: smaller;">==Vector storage for limited-memory quasi-Newton methods==</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">Some of the NLopt algorithms are limited-memory "quasi-Newton" methods, which "remember" the gradients from a finite number ''M'' of the previous optimization steps in order to construct an approximate 2nd derivative matrix. The bigger ''M'' is, the more storage the algorithms require, but on the other hand they ''may'' converge faster for larger ''M''. By default, NLopt chooses a heuristic value of ''M'', but this can be changed by calling:</td><td>+</td><td style="background: #cfc; font-size: smaller;">Some of the NLopt algorithms are limited-memory "quasi-Newton" methods, which "remember" the gradients from a finite number ''M'' of the previous optimization steps in order to construct an approximate 2nd derivative matrix. The bigger ''M'' is, the more storage the algorithms require, but on the other hand they ''may'' converge faster for larger ''M''. By default, NLopt chooses a heuristic value of ''M'', but this can be changed<span style="color: red; font-weight: bold;">/retrieved </span>by calling:</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"> nlopt_result nlopt_set_vector_storage(nlopt_opt opt, unsigned M);</td><td> </td><td style="background: #eee; font-size: smaller;"> nlopt_result nlopt_set_vector_storage(nlopt_opt opt, unsigned M);</td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;"> nlopt_get_vector_storage(const nlopt_opt opt);</td><td>+</td><td style="background: #cfc; font-size: smaller;"> <span style="color: red; font-weight: bold;">unsigned </span>nlopt_get_vector_storage(const nlopt_opt opt);</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Passing ''M''=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets ''M'' to 10 or at most 10&nbsp;[[WP:Mebibyte|MiB]] worth of vectors, whichever is larger.</td><td> </td><td style="background: #eee; font-size: smaller;">Passing ''M''=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets ''M'' to 10 or at most 10&nbsp;[[WP:Mebibyte|MiB]] worth of vectors, whichever is larger.</td></tr>
</table>
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4402&oldid=prev
Stevenj: /* Vector storage */ (2011-05-26T18:12:58Z)<p><span class="autocomment">Vector storage</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 18:12, 26 May 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 303:</strong></td>
<td colspan="2" align="left"><strong>Line 303:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">(Normally, you don't need to call this as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling <code>nlopt_srand</code> to set a deterministic seed.)</td><td> </td><td style="background: #eee; font-size: smaller;">(Normally, you don't need to call this as it is called automatically. However, it might be useful if you want to "re-randomize" the pseudorandom numbers after calling <code>nlopt_srand</code> to set a deterministic seed.)</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">==Vector storage for limited-memory quasi-Newton methods==</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">Some of the NLopt algorithms are limited-memory "quasi-Newton" methods, which "remember" the gradients from a finite number ''M'' of the previous optimization steps in order to construct an approximate 2nd derivative matrix. The bigger ''M'' is, the more storage the algorithms require, but on the other hand they ''may'' converge faster for larger ''M''. By default, NLopt chooses a heuristic value of ''M'', but this can be changed by calling:</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"> nlopt_result nlopt_set_vector_storage(nlopt_opt opt, unsigned M);</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"> nlopt_get_vector_storage(const nlopt_opt opt);</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">Passing ''M''=0 (the default) tells NLopt to use a heuristic value. By default, NLopt currently sets ''M'' to 10 or at most 10&nbsp;[[WP:Mebibyte|MiB]] worth of vectors, whichever is larger.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td><td> </td><td style="background: #eee; font-size: smaller;">==Version number==</td></tr>
</table>
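This revision introduces the vector-storage section: ''M'' remembered gradient vectors trade memory for convergence speed, with a default described as "10 or at most 10&nbsp;MiB worth of vectors, whichever is larger". The sketch below is an illustration of the arithmetic behind that description, under the assumption that each stored vector costs ''n'' doubles and ignoring bookkeeping overhead; it is one plausible reading of the heuristic, not NLopt's actual internal formula:

```c
#include <stddef.h>

/* Rough memory footprint of the vector storage: M vectors of n
   doubles each (internal bookkeeping ignored). Illustration only. */
size_t vector_storage_bytes(unsigned M, unsigned n)
{
    return (size_t) M * n * sizeof(double);
}

/* One reading of the default (M = 0) heuristic in the text: keep at
   least 10 vectors, but allow as many as fit in 10 MiB when that is
   more than 10. Assumption for illustration, not NLopt source code. */
unsigned heuristic_M(unsigned n)
{
    size_t fit = ((size_t) 10 << 20) / ((size_t) n * sizeof(double));
    return fit > 10 ? (unsigned) fit : 10u;
}

/* With NLopt installed, the value is set or queried via:
   nlopt_set_vector_storage(opt, M);      // M = 0 -> heuristic
   unsigned M = nlopt_get_vector_storage(opt);                    */
```

For very high-dimensional problems the 10&nbsp;MiB budget holds fewer than 10 vectors, so the floor of 10 dominates and memory use grows with ''n''.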
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Reference&diff=4401&oldid=prev
Stevenj: /* Bound constraints */ (2011-05-26T18:03:39Z)<p><span class="autocomment">Bound constraints</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 18:03, 26 May 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 83:</strong></td>
<td colspan="2" align="left"><strong>Line 83:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">If a lower/upper bound is not set, the default is no bound (unconstrained, i.e. a bound of infinity); it is possible to have lower bounds but not upper bounds or vice versa. Alternatively, the user can call one of the above functions and explicitly pass a lower bound of <code>-HUGE_VAL</code> and/or an upper bound of <code>+HUGE_VAL</code> for some optimization parameters to make them have no lower/upper bound, respectively. (<code>HUGE_VAL</code> is the standard C constant for a floating-point infinity, found in the <code>math.h</code> header file.)</td><td> </td><td style="background: #eee; font-size: smaller;">If a lower/upper bound is not set, the default is no bound (unconstrained, i.e. a bound of infinity); it is possible to have lower bounds but not upper bounds or vice versa. Alternatively, the user can call one of the above functions and explicitly pass a lower bound of <code>-HUGE_VAL</code> and/or an upper bound of <code>+HUGE_VAL</code> for some optimization parameters to make them have no lower/upper bound, respectively. (<code>HUGE_VAL</code> is the standard C constant for a floating-point infinity, found in the <code>math.h</code> header file.)</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">It is permitted to set <code>lb[i] == ub[i]</code> in one or more dimensions; this is equivalent to fixing the corresponding <code>x[i]</code> parameter, eliminating it from the optimization.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Note, however, that some of the algorithms in NLopt, in particular most of the global-optimization algorithms, do not support unconstrained optimization and will return an error in <code>nlopt_optimize</code> if you do not supply finite lower and upper bounds.</td><td> </td><td style="background: #eee; font-size: smaller;">Note, however, that some of the algorithms in NLopt, in particular most of the global-optimization algorithms, do not support unconstrained optimization and will return an error in <code>nlopt_optimize</code> if you do not supply finite lower and upper bounds.</td></tr>
</table>
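The diff above adds the note that `lb[i] == ub[i]` fixes `x[i]` and removes it from the optimization. A small sketch of bound arrays combining the cases described in the section (the particular values are illustrative); `HUGE_VAL` comes from the standard `math.h` header, as the text says:

```c
#include <math.h>  /* HUGE_VAL: the standard C floating-point infinity */

enum { N = 3 };

/* Bounds for a 3-parameter problem:
   x[0] unconstrained in both directions (infinite bounds),
   x[1] fixed at 1.0 (lb == ub eliminates it from the optimization),
   x[2] constrained to [0, 10]. */
void make_bounds(double *lb, double *ub)
{
    lb[0] = -HUGE_VAL;  ub[0] = +HUGE_VAL;
    lb[1] = 1.0;        ub[1] = 1.0;
    lb[2] = 0.0;        ub[2] = 10.0;
}

/* With NLopt installed these would be applied via:
   nlopt_set_lower_bounds(opt, lb);
   nlopt_set_upper_bounds(opt, ub);                               */
```

Note that the infinite bounds on `x[0]` make this problem unsuitable for the global-optimization algorithms mentioned above, which require finite lower and upper bounds.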
Stevenj