used last time using the Nelder-Mead solver implemented in the scipy.optimize.fmin function.
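For reference, the scipy call from that earlier post would look roughly like this (a sketch; the exact code there may differ):

import numpy as np
from scipy.optimize import fmin

def rosen(x):
    # same Rosenbrock objective; scipy's fmin takes no grad argument
    return np.sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
xopt = fmin(rosen, x0, xtol=1.0e-4, maxfun=500)
print(xopt)  # fmin returns the optimized vector directly

The nlopt-based version of the same minimization: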
import nlopt
from numpy import *

# Rosenbrock objective in nlopt's callback form f(x, grad).
def nlopt_rosen_fg(x, grad):
    return sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
n = len(x0)
grad = array([0] * n)  # unused: nlopt manages the grad array itself

opt = nlopt.opt(nlopt.LN_NELDERMEAD, n)
opt.set_min_objective(nlopt_rosen_fg)
opt.set_xtol_rel(1.0e-4)
opt.set_maxeval(500)
opt.optimize(x0)

opt_val = opt.last_optimum_value()
opt_result = opt.last_optimize_result()
print
print opt_val
print x0
print opt_result
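The grad argument is required by nlopt's objective-function signature but is never used here: nlopt passes a zero-length grad to derivative-free algorithms such as LN_NELDERMEAD. If one later switched to a gradient-based algorithm such as nlopt.LD_LBFGS (not used in this post; shown only as a sketch), the callback would have to fill grad in place whenever grad.size > 0. A sketch with the standard Rosenbrock gradient, reusing the imports above:

def nlopt_rosen_with_grad(x, grad):
    # Gradient-based algorithms require grad to be filled in place;
    # derivative-free ones pass a grad of size 0, so this branch is skipped.
    if grad.size > 0:
        grad[1:-1] = (200.0*(x[1:-1] - x[:-2]**2)
                      - 400.0*(x[2:] - x[1:-1]**2)*x[1:-1]
                      - 2.0*(1 - x[1:-1]))
        grad[0] = -400.0*x[0]*(x[1] - x[0]**2) - 2.0*(1 - x[0])
        grad[-1] = 200.0*(x[-1] - x[-2]**2)
    return sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)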
When the above program runs, the following output is obtained:
python rosenbrock-nlopt.py

1.22270197534e-08
[1.3, 0.69999999999999996, 0.80000000000000004, 1.8999999999999999, 1.2]
4
The return code of 4 means that the xtol criterion was reached (NLOPT_XTOL_REACHED). At first glance the printed vector is far from all ones even though the function value is near zero, but the explanation is in the program itself: opt.optimize returns the optimized point rather than updating x0 in place, so the program prints the untouched initial guess. Doubtless we still have much to learn in using nlopt for this simple optimization without derivatives and we welcome any insightful remarks.
The docs do not state how to find the number of function evaluations actually taken at the point of termination (newer NLopt releases add an opt.get_numevals() method for this). More importantly, the optimal vector is the return value of opt.optimize, not the final value of the initial vector variable, so it must be captured explicitly.
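A minimal corrected ending for the script, assuming the standard nlopt Python bindings in which opt.optimize returns the optimized point:

xopt = opt.optimize(x0)            # capture the optimum; x0 is left unchanged
print(opt.last_optimum_value())    # the minimum found, ~1.2e-08 above
print(xopt)                        # should now be close to all ones
print(opt.last_optimize_result())  # 4 == nlopt.XTOL_REACHED

With the return value captured, the printed vector should agree with the near-zero objective value.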