Class mmin_bfgs2 (o2scl)


template<class func_t = multi_funct, class vec_t = boost::numeric::ublas::vector<double>, class dfunc_t = grad_funct, class auto_grad_t = gradient<multi_funct, boost::numeric::ublas::vector<double>>, class def_auto_grad_t = gradient_gsl<multi_funct, boost::numeric::ublas::vector<double>>>
class mmin_bfgs2 : public o2scl::mmin_base<multi_funct, grad_funct, boost::numeric::ublas::vector<double>>

Multidimensional minimization by the BFGS algorithm (GSL)

The functions mmin() and mmin_de() minimize a given function until the gradient is smaller than the value of mmin::tol_rel (which defaults to \( 10^{-4} \)).

See the Multidimensional minimizer example for the usage of this class.

This class includes the optimizations from the GSL minimizer vector_bfgs2.

Default template arguments

  • func_t - multi_funct

  • vec_t - boost::numeric::ublas::vector<double>

  • dfunc_t - grad_funct

  • auto_grad_t - gradient<func_t, boost::numeric::ublas::vector<double>>

  • def_auto_grad_t - gradient_gsl<func_t, boost::numeric::ublas::vector<double>>

    Todo:

    While BFGS does well in the ex_mmin example with the initial guess of \( (1,0,7\pi) \), it seems to converge more poorly for the spring function than the other minimizers with other initial guesses, and I think this will happen in the GSL versions too. I need to examine this more closely with some code designed to show it clearly.

    Idea for Future:

    When the bfgs2 line minimizer returns a zero status, the minimization fails. When err_nonconv is false, the minimizer is unable to update the x vector, so the mmin() function does not return the best minimum obtained so far. This is a bit confusing and could be improved.

The original variables from the GSL state structure

int iter
double step
double g0norm
double pnorm
double delta_f
double fp0
vec_t x0
vec_t g0
vec_t p
vec_t dx0
vec_t dg0
mmin_wrapper_gsl<func_t, vec_t, dfunc_t, auto_grad_t> wrap
double rho
double sigma
double tau1
double tau2
double tau3
int order
mmin_linmin_gsl lm

The line minimizer.

Store the arguments to set() so they can be used by iterate().

vec_t *st_x
vec_t st_dx
vec_t st_grad
double st_f
size_t dim

Memory size.

auto_grad_t *agrad

Automatic gradient object.

double step_size

The size of the first trial step (default 0.01)

double lmin_tol

The tolerance for the 1-dimensional minimizer.

def_auto_grad_t def_grad

Default automatic gradient object.

inline mmin_bfgs2()
inline virtual ~mmin_bfgs2()
inline virtual int iterate()

Perform an iteration.

inline virtual const char *type()

Return string denoting type ("mmin_bfgs2").

inline virtual int allocate(size_t n)

Allocate the memory.

inline virtual int free()

Free the allocated memory.

inline int restart()

Reset the minimizer to use the current point as a new starting point.

inline virtual int set(vec_t &x, double u_step_size, double tol_u, func_t &ufunc)

Set the function and initial guess.

inline virtual int set_de(vec_t &x, double u_step_size, double tol_u, func_t &ufunc, dfunc_t &udfunc)

Set the function, the gradient, and the initial guess.

inline virtual int mmin(size_t nn, vec_t &xx, double &fmin, func_t &ufunc)

Calculate the minimum fmin of the function ufunc with respect to the array xx of size nn.

inline virtual int mmin_de(size_t nn, vec_t &xx, double &fmin, func_t &ufunc, dfunc_t &udfunc)

Calculate the minimum fmin of the function ufunc, using the gradient udfunc, with respect to the array xx of size nn.

mmin_bfgs2(const mmin_bfgs2<func_t, vec_t, dfunc_t, auto_grad_t, def_auto_grad_t>&)
mmin_bfgs2<func_t, vec_t, dfunc_t, auto_grad_t, def_auto_grad_t> &operator=(const mmin_bfgs2<func_t, vec_t, dfunc_t, auto_grad_t, def_auto_grad_t>&)