NLopt Introduction - Revision history
Feed URL: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&action=history&feed=atom
Last updated 2024-03-29T15:38:03Z · Revision history for this page on the wiki · MediaWiki 1.7.3

Revision by Stevenj, 2011-05-26T18:02:17Z: /* Optimization problems */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4400&oldid=prev
<p><span class="autocomment">Optimization problems</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 18:02, 26 May 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 13:</strong></td>
<td colspan="2" align="left"><strong>Line 13:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">:<math>lb_i \leq x_i \leq ub_i</math> for <math>i=1,\ldots,n</math></td><td> </td><td style="background: #eee; font-size: smaller;">:<math>lb_i \leq x_i \leq ub_i</math> for <math>i=1,\ldots,n</math></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">given lower bounds ''lb'' and upper bounds ''ub'' (which may be &minus;&infin; and/or +&infin;, respectively, for partially or totally unconstrained problems). One may also optionally have ''m'' '''nonlinear inequality constraints''' (sometimes called a '''nonlinear programming''' problem):</td><td>+</td><td style="background: #cfc; font-size: smaller;">given lower bounds ''lb'' and upper bounds ''ub'' (which may be &minus;&infin; and/or +&infin;, respectively, for partially or totally unconstrained problems). <span style="color: red; font-weight: bold;">(If <math>lb_i = ub_i</math>, that parameter is effectively eliminated.) </span>One may also optionally have ''m'' '''nonlinear inequality constraints''' (sometimes called a '''nonlinear programming''' problem):</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">:<math>fc_i(\mathbf{x}) \leq 0</math> for <math>i=1,\ldots,m</math></td><td> </td><td style="background: #eee; font-size: smaller;">:<math>fc_i(\mathbf{x}) \leq 0</math> for <math>i=1,\ldots,m</math></td></tr>
</table>
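The formulation above (bound constraints plus nonlinear inequality constraints) can be made concrete with a small sketch. This is plain Python rather than the NLopt API; the bounds `lb`, `ub` and the constraint list `fcs` are hypothetical example data chosen to illustrate the definitions:

```python
import math

def is_feasible(x, lb, ub, fcs, tol=0.0):
    """Check the standard NLP feasibility conditions:
    lb_i <= x_i <= ub_i for every i, and fc_i(x) <= 0 for every i."""
    bounds_ok = all(l <= xi <= u for xi, l, u in zip(x, lb, ub))
    ineq_ok = all(fc(x) <= tol for fc in fcs)
    return bounds_ok and ineq_ok

# Example: x in [0, +inf)^2 with one inequality constraint x0 + x1 <= 1.
lb = [0.0, 0.0]
ub = [math.inf, math.inf]          # +inf upper bound = unconstrained above
fcs = [lambda x: x[0] + x[1] - 1.0]  # fc(x) <= 0 encodes x0 + x1 <= 1

print(is_feasible([0.3, 0.4], lb, ub, fcs))  # True: inside the feasible region
print(is_feasible([0.9, 0.9], lb, ub, fcs))  # False: violates x0 + x1 <= 1
```

Note how an infinite bound simply makes that side of the box constraint vacuous, matching the "partially or totally unconstrained" remark in the text.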
Revision by Stevenj, 2011-03-07T21:54:13Z: /* Function value and parameter tolerances */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4392&oldid=prev
<p><span class="autocomment">Function value and parameter tolerances</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 21:54, 7 March 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 112:</strong></td>
<td colspan="2" align="left"><strong>Line 112:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Similarly, you can specify a fractional tolerance xtol_rel and an absolute tolerance xtol_abs<sub>''i''</sub> on the parameters '''x'''. Again, it is practically impossible for these to be tolerances on the actual error &Delta;'''x''' compared to the (unknown) minimum point, so in practice &Delta;'''x''' is usually some measure of how much '''x''' changes by from one iteration to the next, or the diameter of a search region, or something like that. Then the algorithm stops when either |&Delta;''x''<sub>''i''</sub>| &lt; xtol_abs<sub>''i''</sub> or when |&Delta;''x''<sub>''i''</sub>|/|''x''<sub>''i''</sub>| &lt; xtol_rel. However, there is some variation in how the different algorithms implement '''x''' tolerance tests; for example, some of them check instead whether |&Delta;'''x'''|/|'''x'''| &lt; xtol_rel, where |&bull;| is some norm.</td><td> </td><td style="background: #eee; font-size: smaller;">Similarly, you can specify a fractional tolerance xtol_rel and an absolute tolerance xtol_abs<sub>''i''</sub> on the parameters '''x'''. Again, it is practically impossible for these to be tolerances on the actual error &Delta;'''x''' compared to the (unknown) minimum point, so in practice &Delta;'''x''' is usually some measure of how much '''x''' changes by from one iteration to the next, or the diameter of a search region, or something like that. Then the algorithm stops when either |&Delta;''x''<sub>''i''</sub>| &lt; xtol_abs<sub>''i''</sub> or when |&Delta;''x''<sub>''i''</sub>|/|''x''<sub>''i''</sub>| &lt; xtol_rel. However, there is some variation in how the different algorithms implement '''x''' tolerance tests; for example, some of them check instead whether |&Delta;'''x'''|/|'''x'''| &lt; xtol_rel, where |&bull;| is some norm.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">* <b>Note:</b> generally, you can only ask for about ''half'' as many decimal places in the xtol as in the ftol. The reason is that, near the minimum, <math>\Delta f \approx f'' (\Delta x)^2 / 2</math> from the [[w:Taylor expansion|Taylor expansion]], so <span style="color: red; font-weight: bold;">for </span><math>f'' \approx 1</math> <span style="color: red; font-weight: bold;">then </span>a change in ''x'' by 10<sup>–7</sup> gives a change in ''f'' by around 10<sup>–14</sup>. In particular, this means that it is generally hopeless to request an xtol_rel much smaller than the ''square root'' of [[w:Machine epsilon|machine precision]].</td><td>+</td><td style="background: #cfc; font-size: smaller;">* <b>Note:</b> generally, you can only ask for about ''half'' as many decimal places in the xtol as in the ftol. The reason is that, near the minimum, <math>\Delta f \approx f'' (\Delta x)^2 / 2</math> from the [[w:Taylor expansion|Taylor expansion]], <span style="color: red; font-weight: bold;">and </span>so <span style="color: red; font-weight: bold;">(assuming </span><math>f'' \approx 1</math> <span style="color: red; font-weight: bold;">for simplicity) </span>a change in ''x'' by 10<sup>–7</sup> gives a change in ''f'' by around 10<sup>–14</sup>. In particular, this means that it is generally hopeless to request an xtol_rel much smaller than the ''square root'' of [[w:Machine epsilon|machine precision]].</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">In most cases, the fractional tolerance (tol_rel) is the most useful one to specify, because it is independent of any absolute scale factors or units. Absolute tolerance (tol_abs) is mainly useful if you think that the minimum function value or parameters might occur at or close to zero.</td><td> </td><td style="background: #eee; font-size: smaller;">In most cases, the fractional tolerance (tol_rel) is the most useful one to specify, because it is independent of any absolute scale factors or units. Absolute tolerance (tol_abs) is mainly useful if you think that the minimum function value or parameters might occur at or close to zero.</td></tr>
</table>
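The "half as many decimal places" rule in the note above is easy to verify numerically. A minimal sketch, taking f(x) = x²/2 so that f'' = 1 and the minimum sits at x = 0:

```python
import math
import sys

# Near a minimum x*, Delta f ~ f''(x*) * (Delta x)^2 / 2 (Taylor expansion).
f = lambda x: x * x / 2.0

dx = 1e-7                  # an xtol-sized step away from the minimum
df = f(0.0 + dx) - f(0.0)  # the resulting change in f

print(df)  # ~5e-15: moving x by 1e-7 moves f by only ~1e-14

# Hence requesting xtol_rel far below sqrt(machine epsilon) is hopeless:
# f then changes by less than one ulp and the optimizer cannot see it.
print(math.sqrt(sys.float_info.epsilon))  # ~1.49e-8 for double precision
```

This is exactly why the note says xtol can buy only about half the digits of ftol.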
Revision by Stevenj, 2011-03-07T21:53:23Z: /* Function value and parameter tolerances */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4391&oldid=prev
<p><span class="autocomment">Function value and parameter tolerances</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 21:53, 7 March 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 112:</strong></td>
<td colspan="2" align="left"><strong>Line 112:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Similarly, you can specify a fractional tolerance xtol_rel and an absolute tolerance xtol_abs<sub>''i''</sub> on the parameters '''x'''. Again, it is practically impossible for these to be tolerances on the actual error &Delta;'''x''' compared to the (unknown) minimum point, so in practice &Delta;'''x''' is usually some measure of how much '''x''' changes by from one iteration to the next, or the diameter of a search region, or something like that. Then the algorithm stops when either |&Delta;''x''<sub>''i''</sub>| &lt; xtol_abs<sub>''i''</sub> or when |&Delta;''x''<sub>''i''</sub>|/|''x''<sub>''i''</sub>| &lt; xtol_rel. However, there is some variation in how the different algorithms implement '''x''' tolerance tests; for example, some of them check instead whether |&Delta;'''x'''|/|'''x'''| &lt; xtol_rel, where |&bull;| is some norm.</td><td> </td><td style="background: #eee; font-size: smaller;">Similarly, you can specify a fractional tolerance xtol_rel and an absolute tolerance xtol_abs<sub>''i''</sub> on the parameters '''x'''. Again, it is practically impossible for these to be tolerances on the actual error &Delta;'''x''' compared to the (unknown) minimum point, so in practice &Delta;'''x''' is usually some measure of how much '''x''' changes by from one iteration to the next, or the diameter of a search region, or something like that. Then the algorithm stops when either |&Delta;''x''<sub>''i''</sub>| &lt; xtol_abs<sub>''i''</sub> or when |&Delta;''x''<sub>''i''</sub>|/|''x''<sub>''i''</sub>| &lt; xtol_rel. However, there is some variation in how the different algorithms implement '''x''' tolerance tests; for example, some of them check instead whether |&Delta;'''x'''|/|'''x'''| &lt; xtol_rel, where |&bull;| is some norm.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">* <b>Note:</b> generally, you can only ask for about ''half'' as many decimal places in the xtol as in the ftol. The reason is that, near the minimum, <math>\Delta f \approx f'' (\Delta x)^2 / 2</math> from the [[w:Taylor expansion|Taylor expansion]], so for <math>f'' \approx 1</math> then a change in ''x'' by 10<sup>–7</sup> gives a change in ''f'' by around 10<sup>–14</sup>. In particular, this means that it is generally hopeless to request an xtol_rel much smaller than the square root of [[w:Machine epsilon|machine precision]].</td><td>+</td><td style="background: #cfc; font-size: smaller;">* <b>Note:</b> generally, you can only ask for about ''half'' as many decimal places in the xtol as in the ftol. The reason is that, near the minimum, <math>\Delta f \approx f'' (\Delta x)^2 / 2</math> from the [[w:Taylor expansion|Taylor expansion]], so for <math>f'' \approx 1</math> then a change in ''x'' by 10<sup>–7</sup> gives a change in ''f'' by around 10<sup>–14</sup>. In particular, this means that it is generally hopeless to request an xtol_rel much smaller than the <span style="color: red; font-weight: bold;">''</span>square root<span style="color: red; font-weight: bold;">'' </span>of [[w:Machine epsilon|machine precision]].</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">In most cases, the fractional tolerance (tol_rel) is the most useful one to specify, because it is independent of any absolute scale factors or units. Absolute tolerance (tol_abs) is mainly useful if you think that the minimum function value or parameters might occur at or close to zero.</td><td> </td><td style="background: #eee; font-size: smaller;">In most cases, the fractional tolerance (tol_rel) is the most useful one to specify, because it is independent of any absolute scale factors or units. Absolute tolerance (tol_abs) is mainly useful if you think that the minimum function value or parameters might occur at or close to zero.</td></tr>
</table>
Revision by Stevenj, 2011-03-07T21:52:48Z: /* Function value and parameter tolerances */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4390&oldid=prev
<p><span class="autocomment">Function value and parameter tolerances</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 21:52, 7 March 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 112:</strong></td>
<td colspan="2" align="left"><strong>Line 112:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Similarly, you can specify a fractional tolerance xtol_rel and an absolute tolerance xtol_abs<sub>''i''</sub> on the parameters '''x'''. Again, it is practically impossible for these to be tolerances on the actual error &Delta;'''x''' compared to the (unknown) minimum point, so in practice &Delta;'''x''' is usually some measure of how much '''x''' changes by from one iteration to the next, or the diameter of a search region, or something like that. Then the algorithm stops when either |&Delta;''x''<sub>''i''</sub>| &lt; xtol_abs<sub>''i''</sub> or when |&Delta;''x''<sub>''i''</sub>|/|''x''<sub>''i''</sub>| &lt; xtol_rel. However, there is some variation in how the different algorithms implement '''x''' tolerance tests; for example, some of them check instead whether |&Delta;'''x'''|/|'''x'''| &lt; xtol_rel, where |&bull;| is some norm.</td><td> </td><td style="background: #eee; font-size: smaller;">Similarly, you can specify a fractional tolerance xtol_rel and an absolute tolerance xtol_abs<sub>''i''</sub> on the parameters '''x'''. Again, it is practically impossible for these to be tolerances on the actual error &Delta;'''x''' compared to the (unknown) minimum point, so in practice &Delta;'''x''' is usually some measure of how much '''x''' changes by from one iteration to the next, or the diameter of a search region, or something like that. Then the algorithm stops when either |&Delta;''x''<sub>''i''</sub>| &lt; xtol_abs<sub>''i''</sub> or when |&Delta;''x''<sub>''i''</sub>|/|''x''<sub>''i''</sub>| &lt; xtol_rel. However, there is some variation in how the different algorithms implement '''x''' tolerance tests; for example, some of them check instead whether |&Delta;'''x'''|/|'''x'''| &lt; xtol_rel, where |&bull;| is some norm.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">* <b>Note:</b> generally, you can only ask for about ''half'' as many decimal places in the xtol as in the ftol. The reason is that, near the minimum, <math>\Delta f \approx f'' (\Delta x)^2 / 2</math> from the [[w:Taylor expansion|Taylor expansion]], so for <math>f'' \approx 1</math> then a change in ''x'' by 10<sup>–7</<span style="color: red; font-weight: bold;">math</span>> gives a change in ''f'' by around 10<sup>–14</<span style="color: red; font-weight: bold;">math</span>>. In particular, this means that it is generally hopeless to request an xtol_rel much smaller than the square root of [[w:Machine epsilon|machine precision]].</td><td>+</td><td style="background: #cfc; font-size: smaller;">* <b>Note:</b> generally, you can only ask for about ''half'' as many decimal places in the xtol as in the ftol. The reason is that, near the minimum, <math>\Delta f \approx f'' (\Delta x)^2 / 2</math> from the [[w:Taylor expansion|Taylor expansion]], so for <math>f'' \approx 1</math> then a change in ''x'' by 10<sup>–7</<span style="color: red; font-weight: bold;">sup</span>> gives a change in ''f'' by around 10<sup>–14</<span style="color: red; font-weight: bold;">sup</span>>. In particular, this means that it is generally hopeless to request an xtol_rel much smaller than the square root of [[w:Machine epsilon|machine precision]].</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">In most cases, the fractional tolerance (tol_rel) is the most useful one to specify, because it is independent of any absolute scale factors or units. Absolute tolerance (tol_abs) is mainly useful if you think that the minimum function value or parameters might occur at or close to zero.</td><td> </td><td style="background: #eee; font-size: smaller;">In most cases, the fractional tolerance (tol_rel) is the most useful one to specify, because it is independent of any absolute scale factors or units. Absolute tolerance (tol_abs) is mainly useful if you think that the minimum function value or parameters might occur at or close to zero.</td></tr>
</table>
Revision by Stevenj, 2011-03-07T21:52:33Z: /* Function value and parameter tolerances */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4389&oldid=prev
<p><span class="autocomment">Function value and parameter tolerances</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 21:52, 7 March 2011</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 111:</strong></td>
<td colspan="2" align="left"><strong>Line 111:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Similarly, you can specify a fractional tolerance xtol_rel and an absolute tolerance xtol_abs<sub>''i''</sub> on the parameters '''x'''. Again, it is practically impossible for these to be tolerances on the actual error &Delta;'''x''' compared to the (unknown) minimum point, so in practice &Delta;'''x''' is usually some measure of how much '''x''' changes by from one iteration to the next, or the diameter of a search region, or something like that. Then the algorithm stops when either |&Delta;''x''<sub>''i''</sub>| &lt; xtol_abs<sub>''i''</sub> or when |&Delta;''x''<sub>''i''</sub>|/|''x''<sub>''i''</sub>| &lt; xtol_rel. However, there is some variation in how the different algorithms implement '''x''' tolerance tests; for example, some of them check instead whether |&Delta;'''x'''|/|'''x'''| &lt; xtol_rel, where |&bull;| is some norm.</td><td> </td><td style="background: #eee; font-size: smaller;">Similarly, you can specify a fractional tolerance xtol_rel and an absolute tolerance xtol_abs<sub>''i''</sub> on the parameters '''x'''. Again, it is practically impossible for these to be tolerances on the actual error &Delta;'''x''' compared to the (unknown) minimum point, so in practice &Delta;'''x''' is usually some measure of how much '''x''' changes by from one iteration to the next, or the diameter of a search region, or something like that. Then the algorithm stops when either |&Delta;''x''<sub>''i''</sub>| &lt; xtol_abs<sub>''i''</sub> or when |&Delta;''x''<sub>''i''</sub>|/|''x''<sub>''i''</sub>| &lt; xtol_rel. However, there is some variation in how the different algorithms implement '''x''' tolerance tests; for example, some of them check instead whether |&Delta;'''x'''|/|'''x'''| &lt; xtol_rel, where |&bull;| is some norm.</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">* <b>Note:</b> generally, you can only ask for about ''half'' as many decimal places in the xtol as in the ftol. The reason is that, near the minimum, <math>\Delta f \approx f'' (\Delta x)^2 / 2</math> from the [[w:Taylor expansion|Taylor expansion]], so for <math>f'' \approx 1</math> then a change in ''x'' by 10<sup>–7</math> gives a change in ''f'' by around 10<sup>–14</math>. In particular, this means that it is generally hopeless to request an xtol_rel much smaller than the square root of [[w:Machine epsilon|machine precision]].</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">In most cases, the fractional tolerance (tol_rel) is the most useful one to specify, because it is independent of any absolute scale factors or units. Absolute tolerance (tol_abs) is mainly useful if you think that the minimum function value or parameters might occur at or close to zero.</td><td> </td><td style="background: #eee; font-size: smaller;">In most cases, the fractional tolerance (tol_rel) is the most useful one to specify, because it is independent of any absolute scale factors or units. Absolute tolerance (tol_abs) is mainly useful if you think that the minimum function value or parameters might occur at or close to zero.</td></tr>
</table>
Revision by Stevenj, 2010-07-16T04:58:49Z: /* Equality constraints */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4299&oldid=prev
<p><span class="autocomment">Equality constraints</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 04:58, 16 July 2010</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 72:</strong></td>
<td colspan="2" align="left"><strong>Line 72:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">:<math>h_i(\mathbf{x}) = 0</math>.</td><td> </td><td style="background: #eee; font-size: smaller;">:<math>h_i(\mathbf{x}) = 0</math>.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">In ''principle'', each equality constraint can be expressed by two inequality constraints <math>h_i(\mathbf{x}) \leq 0</math> and <math>-h_i(\mathbf{x}) \leq 0</math>, so you might think that any code that can handle inequality constraints can automatically handle equality constraints. In practice, this is not true&mdash;if you try to express an equality constraint as a pair of nonlinear inequality constraints, <span style="color: red; font-weight: bold;">many </span>algorithms will fail to converge.</td><td>+</td><td style="background: #cfc; font-size: smaller;">In ''principle'', each equality constraint can be expressed by two inequality constraints <math>h_i(\mathbf{x}) \leq 0</math> and <math>-h_i(\mathbf{x}) \leq 0</math>, so you might think that any code that can handle inequality constraints can automatically handle equality constraints. In practice, this is not true&mdash;if you try to express an equality constraint as a pair of nonlinear inequality constraints, <span style="color: red; font-weight: bold;">some </span>algorithms will fail to converge.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Equality constraints sometimes require special handling because they reduce the ''dimensionality'' of the feasible region, and not just its size as for an inequality constraint. Only some of the NLopt algorithms (AUGLAG, COBYLA, and ISRES) currently support nonlinear equality constraints.</td><td> </td><td style="background: #eee; font-size: smaller;">Equality constraints sometimes require special handling because they reduce the ''dimensionality'' of the feasible region, and not just its size as for an inequality constraint. Only some of the NLopt algorithms (AUGLAG, COBYLA, and ISRES) currently support nonlinear equality constraints.</td></tr>
</table>
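The two-inequality rewriting discussed above is mechanical, which is worth seeing explicitly (with the text's caveat in mind that some algorithms fail to converge on such pairs). A plain-Python sketch; the constraint `h` is a hypothetical example:

```python
def equality_as_inequalities(h):
    """Rewrite the equality h(x) = 0 as the pair of inequalities
    h(x) <= 0 and -h(x) <= 0 (satisfied together only when h(x) = 0)."""
    return [h, lambda x: -h(x)]

# Example equality constraint: x0 + x1 - 1 = 0.
h = lambda x: x[0] + x[1] - 1.0
fc_pos, fc_neg = equality_as_inequalities(h)

x = [0.25, 0.75]                          # satisfies the equality
print(fc_pos(x) <= 0 and fc_neg(x) <= 0)  # True: both inequalities hold
x = [0.5, 0.9]                            # violates the equality
print(fc_pos(x) <= 0 and fc_neg(x) <= 0)  # False: fc_pos(x) > 0
```

Geometrically the pair pinches the feasible region down to the zero set of h, which is the dimensionality reduction the paragraph above warns about.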
Revision by Stevenj, 2010-07-07T20:28:30Z: /* Penalty functions */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4251&oldid=prev
<p><span class="autocomment">Penalty functions</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:28, 7 July 2010</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 95:</strong></td>
<td colspan="2" align="left"><strong>Line 95:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">Another popular approach to equality constraints (and inequality constraints, for that matter) is to include some sort of '''penalty function''' in the objective function, which penalizes '''x''' values that violate the constraints. A standard technique of this sort is known as the '''augmented Lagrangian''' approach, and a variant of this approach is implemented in NLopt's [[NLopt Algorithms#Augmented Lagrangian algorithm|AUGLAG algorithm]].</td><td> </td><td style="background: #eee; font-size: smaller;">Another popular approach to equality constraints (and inequality constraints, for that matter) is to include some sort of '''penalty function''' in the objective function, which penalizes '''x''' values that violate the constraints. A standard technique of this sort is known as the '''augmented Lagrangian''' approach, and a variant of this approach is implemented in NLopt's [[NLopt Algorithms#Augmented Lagrangian algorithm|AUGLAG algorithm]].</td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;"></td></tr>
<tr><td colspan="2"> </td><td>+</td><td style="background: #cfc; font-size: smaller;">(For inequality constraints, a variant of the penalty idea is a '''barrier method''': this is simply a penalty that diverges as you approach the constraint, which forces the optimization to stay within the feasible region.)</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">==Termination conditions==</td><td> </td><td style="background: #eee; font-size: smaller;">==Termination conditions==</td></tr>
</table>
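The penalty and barrier ideas above can be sketched in a few lines. These are the standard textbook quadratic-penalty and log-barrier forms, not NLopt's actual AUGLAG implementation; `f` and `fcs` are hypothetical example data:

```python
import math

def penalized(f, fcs, mu):
    """Quadratic penalty: add mu * sum(max(0, fc(x))^2) to the objective,
    so infeasible points (fc(x) > 0) cost more as mu grows; feasible
    points are left untouched."""
    return lambda x: f(x) + mu * sum(max(0.0, fc(x)) ** 2 for fc in fcs)

def log_barrier(f, fcs, t):
    """Log barrier: -(1/t) * log(-fc(x)) diverges to +inf as fc(x) -> 0
    from below, forcing iterates to stay strictly feasible."""
    return lambda x: f(x) - (1.0 / t) * sum(math.log(-fc(x)) for fc in fcs)

f = lambda x: (x[0] - 2.0) ** 2      # unconstrained minimum at x0 = 2
fcs = [lambda x: x[0] - 1.0]         # constraint x0 <= 1

p = penalized(f, fcs, mu=100.0)
print(p([0.5]) == f([0.5]))   # True: feasible points pay no penalty
print(p([2.0]) > f([2.0]))    # True: infeasible points do

b = log_barrier(f, fcs, t=1.0)
print(b([0.999]) > b([0.5]))  # True: cost blows up near the boundary
```

In penalty methods one typically re-solves with increasing mu; the augmented-Lagrangian variant mentioned above additionally carries Lagrange-multiplier estimates so that mu need not grow without bound.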
Revision by Stevenj, 2010-07-07T20:19:40Z: /* Equivalent formulations of optimization problems */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4250&oldid=prev
<p><span class="autocomment">Equivalent formulations of optimization problems</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:19, 7 July 2010</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 51:</strong></td>
<td colspan="2" align="left"><strong>Line 51:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">There are many equivalent ways to formulate a given optimization problem, even within the framework defined above, and finding the best formulation can be something of an art.</td><td> </td><td style="background: #eee; font-size: smaller;">There are many equivalent ways to formulate a given optimization problem, even within the framework defined above, and finding the best formulation can be something of an art.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">To begin with a trivial example, suppose that you want to ''maximize'' the function ''g''('''x'''). This is equivalent to minimizing the function ''f''('''x''')=&minus;''g''('''x'''). Because of this, there is no need for NLopt to provide separate maximization routines in addition to its minimization routines&mdash;the user can just flip the sign to do maximization. As a convenience, however, NLopt provides a maximization interface.</td><td>+</td><td style="background: #cfc; font-size: smaller;">To begin with a trivial example, suppose that you want to ''maximize'' the function ''g''('''x'''). This is equivalent to minimizing the function ''f''('''x''')=&minus;''g''('''x'''). Because of this, there is no need for NLopt to provide separate maximization routines in addition to its minimization routines&mdash;the user can just flip the sign to do maximization. As a convenience, however, NLopt provides a maximization interface <span style="color: red; font-weight: bold;">(which performs the necessary sign flips for you, internally)</span>.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">A more interesting example is that of a '''minimax''' optimization problem, where the objective function ''f''('''x''') is the maximum of ''N'' functions:</td><td> </td><td style="background: #eee; font-size: smaller;">A more interesting example is that of a '''minimax''' optimization problem, where the objective function ''f''('''x''') is the maximum of ''N'' functions:</td></tr>
</table>
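The sign-flip equivalence above takes one line in code. A minimal sketch, with a brute-force grid search standing in for a real NLopt minimizer (`g` and the grid `xs` are hypothetical example data):

```python
def minimize(f, candidates):
    """Stand-in for a real minimizer: return the candidate with smallest f."""
    return min(candidates, key=f)

def maximize(g, candidates):
    """Maximization via minimization: the x minimizing f(x) = -g(x)
    is exactly the x maximizing g(x)."""
    return minimize(lambda x: -g(x), candidates)

g = lambda x: -(x - 3.0) ** 2 + 5.0  # peak of height 5 at x = 3
xs = [i * 0.5 for i in range(13)]    # grid 0.0, 0.5, ..., 6.0

print(maximize(g, xs))     # 3.0: the maximizer of g
print(g(maximize(g, xs)))  # 5.0: the maximum value
```

An NLopt-style maximization interface can work the same way internally: wrap the user's objective in a sign flip, minimize, and flip the sign of the returned optimum value.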
Revision by Stevenj, 2010-07-07T20:16:44Z: /* Global versus local optimization */
Diff: http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4249&oldid=prev
<p><span class="autocomment">Global versus local optimization</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:16, 7 July 2010</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 33:</strong></td>
<td colspan="2" align="left"><strong>Line 33:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">'''Global optimization''' is the problem of finding the feasible point '''x''' that minimizes the objective ''f''('''x''') over the ''entire'' feasible region. In general, this can be a ''very difficult'' problem, becoming exponentially harder as the number ''n'' of parameters increases. In fact, unless special information about ''f'' is known, it is not even possible to be certain whether one has found the true global optimum, because there might be a sudden dip of ''f'' hidden somewhere in the parameter space you haven't looked at yet. However, NLopt includes several global optimization algorithms that work well on reasonably well-behaved problems, if the dimension ''n'' is not too large.</td><td> </td><td style="background: #eee; font-size: smaller;">'''Global optimization''' is the problem of finding the feasible point '''x''' that minimizes the objective ''f''('''x''') over the ''entire'' feasible region. In general, this can be a ''very difficult'' problem, becoming exponentially harder as the number ''n'' of parameters increases. In fact, unless special information about ''f'' is known, it is not even possible to be certain whether one has found the true global optimum, because there might be a sudden dip of ''f'' hidden somewhere in the parameter space you haven't looked at yet. However, NLopt includes several global optimization algorithms that work well on reasonably well-behaved problems, if the dimension ''n'' is not too large.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">'''Local optimization''' is the ''much easier'' problem of finding a feasible point '''x''' that is only a ''local'' minimum: ''f''('''x''') is less than or equal to the value of ''f'' for all nearby feasible points (the intersection of the feasible region with at least some small neighborhood of '''x'''). In general, a nonlinear optimization problem may have ''many'' local minima, and which one is located by an algorithm typically depends upon the starting point that the user supplies to the algorithm. Local optimization algorithms, on the other hand, can often quickly locate a local minimum even in very high-dimensional problems (especially using gradient-based algorithms). (An algorithm that is guaranteed to find some ''local'' minimum from any feasible starting <span style="color: red; font-weight: bold;">'''x''' </span>is, somewhat confusingly, called ''globally convergent''.)</td><td>+</td><td style="background: #cfc; font-size: smaller;">'''Local optimization''' is the ''much easier'' problem of finding a feasible point '''x''' that is only a ''local'' minimum: ''f''('''x''') is less than or equal to the value of ''f'' for all nearby feasible points (the intersection of the feasible region with at least some small neighborhood of '''x'''). In general, a nonlinear optimization problem may have ''many'' local minima, and which one is located by an algorithm typically depends upon the starting point that the user supplies to the algorithm. Local optimization algorithms, on the other hand, can often quickly locate a local minimum even in very high-dimensional problems (especially using gradient-based algorithms). (An algorithm that is guaranteed to find some ''local'' minimum from any feasible starting <span style="color: red; font-weight: bold;">point </span>is, somewhat confusingly, called ''globally convergent''.)</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">In the special class of '''convex optimization''' problems, for which both the objective and inequality constraint functions are [[w:Convex function|convex]] (and the equality constraints are [[w:Affine transformation|affine]] or in any case have [[w:Convex set|convex]] [[w:Level set|level sets]]), there is only ''one'' local minimum value of ''f'', so that a ''local'' optimization method finds a ''global'' optimum. <nowiki>[</nowiki>There may, however, be more than one point '''x''' that yields the same minimum ''f''('''x'''), with the optimum points forming a convex subset of the (convex) feasible region.<nowiki>]</nowiki> Typically, convex problems arise from functions of special analytical forms, such as linear programming problems, semidefinite programming, quadratic programming, and so on, and specialized techniques are available to solve these problems very efficiently. NLopt includes only general methods that do not assume convexity; if you have a provably convex problem, you may be better off with a different software package, such as the [http://www.stanford.edu/~boyd/cvx/ CVX package] from Stanford.</td><td> </td><td style="background: #eee; font-size: smaller;">In the special class of '''convex optimization''' problems, for which both the objective and inequality constraint functions are [[w:Convex function|convex]] (and the equality constraints are [[w:Affine transformation|affine]] or in any case have [[w:Convex set|convex]] [[w:Level set|level sets]]), there is only ''one'' local minimum value of ''f'', so that a ''local'' optimization method finds a ''global'' optimum. <nowiki>[</nowiki>There may, however, be more than one point '''x''' that yields the same minimum ''f''('''x'''), with the optimum points forming a convex subset of the (convex) feasible region.<nowiki>]</nowiki> Typically, convex problems arise from functions of special analytical forms, such as linear programming problems, semidefinite programming, quadratic programming, and so on, and specialized techniques are available to solve these problems very efficiently. NLopt includes only general methods that do not assume convexity; if you have a provably convex problem, you may be better off with a different software package, such as the [http://www.stanford.edu/~boyd/cvx/ CVX package] from Stanford.</td></tr>
</table>
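The revised paragraph above notes that which local minimum an algorithm locates typically depends on the starting point the user supplies. A toy pure-Python sketch makes this concrete; the double-well function, step size, and iteration count here are illustrative choices for demonstration, not anything from the NLopt API.

```python
def f(x):
    # Double-well objective with two local minima, near x ~ -1.30 and x ~ +1.13.
    return x**4 - 3*x**2 + x

def fprime(x):
    # Analytic derivative of f.
    return 4*x**3 - 6*x + 1

def local_minimize(x0, lr=0.01, steps=5000):
    """Plain gradient descent from the starting point x0."""
    x = x0
    for _ in range(steps):
        x -= lr * fprime(x)
    return x

# Different feasible starting points converge to different local minima:
left = local_minimize(-2.0)   # settles near x ~ -1.30 (the deeper well)
right = local_minimize(+2.0)  # settles near x ~ +1.13
```

A global method would have to report the deeper well regardless of the start, which is exactly what makes global optimization so much harder than the local problem described here.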
Stevenj
http://ab-initio.mit.edu/wiki/index.php?title=NLopt_Introduction&diff=4248&oldid=prev
Stevenj: /* Global versus local optimization */
2010-07-07T20:15:59Z
<p><span class="autocomment">Global versus local optimization</span></p>
<table border='0' width='98%' cellpadding='0' cellspacing='4' style="background-color: white;">
<tr>
<td colspan='2' width='50%' align='center' style="background-color: white;">←Older revision</td>
<td colspan='2' width='50%' align='center' style="background-color: white;">Revision as of 20:15, 7 July 2010</td>
</tr>
<tr><td colspan="2" align="left"><strong>Line 33:</strong></td>
<td colspan="2" align="left"><strong>Line 33:</strong></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">'''Global optimization''' is the problem of finding the feasible point '''x''' that minimizes the objective ''f''('''x''') over the ''entire'' feasible region. In general, this can be a ''very difficult'' problem, becoming exponentially harder as the number ''n'' of parameters increases. In fact, unless special information about ''f'' is known, it is not even possible to be certain whether one has found the true global optimum, because there might be a sudden dip of ''f'' hidden somewhere in the parameter space you haven't looked at yet. However, NLopt includes several global optimization algorithms that work well on reasonably well-behaved problems, if the dimension ''n'' is not too large.</td><td> </td><td style="background: #eee; font-size: smaller;">'''Global optimization''' is the problem of finding the feasible point '''x''' that minimizes the objective ''f''('''x''') over the ''entire'' feasible region. In general, this can be a ''very difficult'' problem, becoming exponentially harder as the number ''n'' of parameters increases. In fact, unless special information about ''f'' is known, it is not even possible to be certain whether one has found the true global optimum, because there might be a sudden dip of ''f'' hidden somewhere in the parameter space you haven't looked at yet. However, NLopt includes several global optimization algorithms that work well on reasonably well-behaved problems, if the dimension ''n'' is not too large.</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td>-</td><td style="background: #ffa; font-size: smaller;">'''Local optimization''' is the ''much easier'' problem of finding a feasible point '''x''' that is only a ''local'' minimum: ''f''('''x''') is less than or equal to the value of ''f'' for all nearby feasible points (the intersection of the feasible region with at least some small neighborhood of '''x'''). In general, a nonlinear optimization problem may have ''many'' local minima, and which one is located by an algorithm typically depends upon the starting point that the user supplies to the algorithm. Local optimization algorithms, on the other hand, can often quickly locate a local minimum even in very high-dimensional problems (especially using gradient-based algorithms). (An algorithm that is guaranteed to find <span style="color: red; font-weight: bold;">a </span>''local'' minimum from any feasible starting '''x''' is, somewhat confusingly, called ''globally convergent''.)</td><td>+</td><td style="background: #cfc; font-size: smaller;">'''Local optimization''' is the ''much easier'' problem of finding a feasible point '''x''' that is only a ''local'' minimum: ''f''('''x''') is less than or equal to the value of ''f'' for all nearby feasible points (the intersection of the feasible region with at least some small neighborhood of '''x'''). In general, a nonlinear optimization problem may have ''many'' local minima, and which one is located by an algorithm typically depends upon the starting point that the user supplies to the algorithm. Local optimization algorithms, on the other hand, can often quickly locate a local minimum even in very high-dimensional problems (especially using gradient-based algorithms). (An algorithm that is guaranteed to find <span style="color: red; font-weight: bold;">some </span>''local'' minimum from any feasible starting '''x''' is, somewhat confusingly, called ''globally convergent''.)</td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;"></td><td> </td><td style="background: #eee; font-size: smaller;"></td></tr>
<tr><td> </td><td style="background: #eee; font-size: smaller;">In the special class of '''convex optimization''' problems, for which both the objective and inequality constraint functions are [[w:Convex function|convex]] (and the equality constraints are [[w:Affine transformation|affine]] or in any case have [[w:Convex set|convex]] [[w:Level set|level sets]]), there is only ''one'' local minimum value of ''f'', so that a ''local'' optimization method finds a ''global'' optimum. <nowiki>[</nowiki>There may, however, be more than one point '''x''' that yields the same minimum ''f''('''x'''), with the optimum points forming a convex subset of the (convex) feasible region.<nowiki>]</nowiki> Typically, convex problems arise from functions of special analytical forms, such as linear programming problems, semidefinite programming, quadratic programming, and so on, and specialized techniques are available to solve these problems very efficiently. NLopt includes only general methods that do not assume convexity; if you have a provably convex problem, you may be better off with a different software package, such as the [http://www.stanford.edu/~boyd/cvx/ CVX package] from Stanford.</td><td> </td><td style="background: #eee; font-size: smaller;">In the special class of '''convex optimization''' problems, for which both the objective and inequality constraint functions are [[w:Convex function|convex]] (and the equality constraints are [[w:Affine transformation|affine]] or in any case have [[w:Convex set|convex]] [[w:Level set|level sets]]), there is only ''one'' local minimum value of ''f'', so that a ''local'' optimization method finds a ''global'' optimum. <nowiki>[</nowiki>There may, however, be more than one point '''x''' that yields the same minimum ''f''('''x'''), with the optimum points forming a convex subset of the (convex) feasible region.<nowiki>]</nowiki> Typically, convex problems arise from functions of special analytical forms, such as linear programming problems, semidefinite programming, quadratic programming, and so on, and specialized techniques are available to solve these problems very efficiently. NLopt includes only general methods that do not assume convexity; if you have a provably convex problem, you may be better off with a different software package, such as the [http://www.stanford.edu/~boyd/cvx/ CVX package] from Stanford.</td></tr>
</table>
Stevenj