fatiando.inversion.optimization

Methods to optimize a given objective function.
All solvers are Python iterators. This means that they should be used in a for loop, like so:
solver = newton(hess_func, grad_func, value_func, initial)
for i, p, stats in solver:
    ... do something or 'continue' to step through the iterations ...
    # 'p' is the current estimate for the parameter vector at the 'i'th
    # iteration.
    # 'stats' is a dictionary with some information about the optimization
    # process so far (number of attempted steps, value of objective
    # function per step, total number of iterations so far, etc).
# At the end, 'p' is the final estimate and 'stats' will contain the
# statistics for the whole iteration process.
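For instance, a minimal runnable sketch of this pattern on a made-up quadratic objective (the functions, true parameter vector, and starting point below are purely illustrative, not part of the module):

import numpy as np
from fatiando.inversion.optimization import newton

# Toy quadratic objective with its minimum at [1, -3] (illustrative only)
p_true = np.array([1.0, -3.0])
value_func = lambda p: np.sum((p - p_true)**2)
grad_func = lambda p: 2*(p - p_true)
hess_func = lambda p: 2*np.eye(p.size)

solver = newton(hess_func, grad_func, value_func, initial=np.zeros(2))
for i, p, stats in solver:
    print(i, value_func(p))  # watch the objective function drop per iteration
# After the loop, 'p' holds the final estimate and 'stats' the full history.
print(p)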
Gradient descent

linear : Solver for a linear problem
newton : Newton’s method
levmarq : Levenberg-Marquardt algorithm
steepest : Steepest Descent method

Heuristic methods

acor : ACO-R: Ant Colony Optimization for Continuous Domains (Socha and Dorigo, 2008)

References

Socha, K., and M. Dorigo (2008), Ant colony optimization for continuous domains, European Journal of Operational Research, 185(3), 1155-1173, doi:10.1016/j.ejor.2006.06.046.
fatiando.inversion.optimization.acor(value, bounds, nparams, nants=None, archive_size=None, maxit=1000, diverse=0.5, evap=0.85, seed=None)

Minimize the objective function using ACO-R.
ACO-R stands for Ant Colony Optimization for Continuous Domains (Socha and Dorigo, 2008).
Parameters:

value : function
    Returns the value of the objective function at a given parameter vector.
bounds : list
    The bounds of the search space. If only two values are given, they are interpreted as the minimum and maximum, respectively, for all parameters. Alternatively, you can give a minimum and maximum for each parameter, e.g., for a problem with 3 parameters you could give bounds = [min1, max1, min2, max2, min3, max3].
nparams : int
    The number of parameters that the objective function takes.
nants : int
    The number of ants to use in the search. Defaults to the number of parameters.
archive_size : int
    The number of solutions to keep in the solution archive. Defaults to 10 * nants.
maxit : int
    The number of iterations to run.
diverse : float
    Scalar from 0 to 1, non-inclusive, that controls how much better solutions are favored when constructing new ones.
evap : float
    The pheromone evaporation rate (evap > 0). Controls how spread out the search is.
seed : None or int
    Seed for the random number generator.
Yields:

i : int
    The current iteration number.
estimate : 1d-array
    The current best estimated parameter vector.
stats : dict
    Statistics about the optimization so far. The keys record:
    - The name of the optimization algorithm.
    - The total number of iterations so far.
    - The value of the objective function corresponding to the best estimate per iteration.
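As a rough sketch of how acor is called (the objective function, bounds, and seed below are made up for illustration):

import numpy as np
from fatiando.inversion.optimization import acor

# Made-up objective with a known minimum at [1, 2] (illustrative only)
def value(p):
    return np.sum((p - np.array([1.0, 2.0]))**2)

# Search the box [-10, 10] for both parameters; a fixed seed makes the
# random search reproducible.
for i, estimate, stats in acor(value, [-10, 10], nparams=2, maxit=500, seed=0):
    pass
print(estimate)  # the best parameter vector found by the ant colony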
fatiando.inversion.optimization.levmarq(hessian, gradient, value, initial, maxit=30, maxsteps=20, lamb=10, dlamb=2, tol=1e-05, precondition=True)

Minimize an objective function using the Levenberg-Marquardt algorithm.
Parameters:

hessian : function
    A function that returns the Hessian matrix of the objective function when given a parameter vector.
gradient : function
    A function that returns the gradient vector of the objective function when given a parameter vector.
value : function
    A function that returns the value of the objective function evaluated at a given parameter vector.
initial : 1d-array
    The initial estimate for the gradient descent.
maxit : int
    The maximum number of iterations allowed.
maxsteps : int
    The maximum number of times to try to take a step before giving up.
lamb : float
    Initial amount of step regularization. The larger this is, the more the algorithm will resemble Steepest Descent in the initial iterations.
dlamb : float
    Factor by which lamb is divided or multiplied when taking steps.
tol : float
    The convergence criterion. The lower it is, the more steps are permitted.
precondition : True or False
    If True, will use Jacobi preconditioning.
Yields:

i : int
    The current iteration number.
estimate : 1d-array
    The current estimated parameter vector.
stats : dict
    Statistics about the optimization so far. The keys record:
    - The name of the optimization method.
    - The total number of iterations so far.
    - The value of the objective function per iteration. The first value corresponds to the initial estimate.
    - The number of attempts at taking a step per iteration. The first number is zero, reflecting the initial estimate.
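A sketch of how the callables fit together, posing a straight-line fit as a misfit function (the data and model below are invented for illustration):

import numpy as np
from fatiando.inversion.optimization import levmarq

# phi(p) = |A p - d|^2 for a straight line d = p[0] + p[1]*x (illustrative only)
x = np.linspace(0, 1, 10)
A = np.transpose([np.ones_like(x), x])
d = 1.0 + 2.0*x  # data generated with intercept=1, slope=2

value = lambda p: np.sum((A.dot(p) - d)**2)
gradient = lambda p: 2*A.T.dot(A.dot(p) - d)
hessian = lambda p: 2*A.T.dot(A)

for i, p, stats in levmarq(hessian, gradient, value, initial=[0.0, 0.0]):
    pass
print(p)  # should approach [1, 2]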
fatiando.inversion.optimization.linear(hessian, gradient, precondition=True)

Find the parameter vector that minimizes a linear objective function.
The parameter vector \(\bar{p}\) that minimizes this objective function \(\phi\) is the one that solves the linear system

\[\bar{\bar{H}} \bar{p} = -\bar{g}\]

where \(\bar{\bar{H}}\) is the Hessian matrix of \(\phi\) and \(\bar{g}\) is the gradient vector of \(\phi\).
Parameters:

hessian : 2d-array
    The Hessian matrix of the objective function.
gradient : 1d-array
    The gradient vector of the objective function.
precondition : True or False
    If True, will use Jacobi preconditioning.
Yields:

i : int
    The current iteration number.
estimate : 1d-array
    The current estimated parameter vector.
stats : dict
    Statistics about the optimization so far.

Linear solvers have only a single step, so i will be 0 and stats will only have the method name.
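For instance, posed as a linear least-squares fit (the matrix and data below are invented for this sketch; note that hessian and gradient are passed as an array and a vector here, not as functions):

import numpy as np
from fatiando.inversion.optimization import linear

# Fit a straight line d = p[0] + p[1]*x by least squares (illustrative only)
x = np.array([0.0, 1.0, 2.0])
A = np.transpose([np.ones_like(x), x])
d = 1.0 + 2.0*x  # data generated with intercept=1, slope=2

hessian = 2*A.T.dot(A)    # Hessian of phi(p) = |A p - d|^2
gradient = -2*A.T.dot(d)  # gradient of phi evaluated at the null vector

for i, p, stats in linear(hessian, gradient):
    pass
print(p)  # solves H p = -g, giving approximately [1, 2]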
fatiando.inversion.optimization.newton(hessian, gradient, value, initial, maxit=30, tol=1e-05, precondition=True)

Minimize an objective function using Newton’s method.
Newton’s method searches for the minimum of an objective function \(\phi(\bar{p})\) by successively incrementing the initial estimate. The increment is the solution of the linear system

\[\bar{\bar{H}}(\bar{p}^k) \Delta\bar{p}^k = -\bar{g}(\bar{p}^k)\]

where \(\bar{\bar{H}}\) is the Hessian matrix of \(\phi\) and \(\bar{g}\) is the gradient vector of \(\phi\). Both are evaluated at the previous estimate \(\bar{p}^k\). The new estimate is then \(\bar{p}^{k+1} = \bar{p}^k + \Delta\bar{p}^k\).
Parameters:

hessian : function
    A function that returns the Hessian matrix of the objective function when given a parameter vector.
gradient : function
    A function that returns the gradient vector of the objective function when given a parameter vector.
value : function
    A function that returns the value of the objective function evaluated at a given parameter vector.
initial : 1d-array
    The initial estimate for the gradient descent.
maxit : int
    The maximum number of iterations allowed.
tol : float
    The convergence criterion. The lower it is, the more steps are permitted.
precondition : True or False
    If True, will use Jacobi preconditioning.
Yields:

i : int
    The current iteration number.
estimate : 1d-array
    The current estimated parameter vector.
stats : dict
    Statistics about the optimization so far. The keys record:
    - The name of the optimization method.
    - The total number of iterations so far.
    - The value of the objective function per iteration. The first value corresponds to the initial estimate.
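As a sketch, minimizing a made-up smooth objective whose Hessian and gradient are re-evaluated at every new estimate (all functions below are illustrative):

import numpy as np
from fatiando.inversion.optimization import newton

# Made-up separable objective with its minimum at [1, -2] (illustrative only)
def value(p):
    return np.cosh(p[0] - 1) + np.cosh(p[1] + 2)

def gradient(p):
    return np.array([np.sinh(p[0] - 1), np.sinh(p[1] + 2)])

def hessian(p):
    return np.diag([np.cosh(p[0] - 1), np.cosh(p[1] + 2)])  # diagonal Hessian

for i, p, stats in newton(hessian, gradient, value, initial=[0.0, 0.0]):
    pass
print(p)  # should be very close to [1, -2]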
fatiando.inversion.optimization.steepest(gradient, value, initial, maxit=1000, linesearch=True, maxsteps=30, beta=0.1, tol=1e-05)

Minimize an objective function using the Steepest Descent method.
The increment to the initial estimate of the parameter vector \(\bar{p}\) is calculated by (Kelley, 1999)

\[\Delta\bar{p} = -\lambda\bar{g}\]

where \(\lambda\) is the step size and \(\bar{g}\) is the gradient vector.

The step size can be determined through a line search algorithm using the Armijo rule (Kelley, 1999). In this case,

\[\lambda = \beta^m\]

where \(1 > \beta > 0\) and \(m \ge 0\) is an integer that controls the step size. The line search finds the smallest \(m\) that satisfies the Armijo rule

\[\phi(\bar{p} + \Delta\bar{p}) - \phi(\bar{p}) < \alpha\bar{g}^T\Delta\bar{p}\]

where \(\phi(\bar{p})\) is the objective function evaluated at \(\bar{p}\) and \(\alpha = 10^{-4}\).
Parameters:

gradient : function
    A function that returns the gradient vector of the objective function when given a parameter vector.
value : function
    A function that returns the value of the objective function evaluated at a given parameter vector.
initial : 1d-array
    The initial estimate for the gradient descent.
maxit : int
    The maximum number of iterations allowed.
linesearch : True or False
    Whether or not to perform the line search to determine an optimal step size.
maxsteps : int
    The maximum number of times to try to take a step before giving up.
beta : float
    The base factor used to determine the step size in the line search algorithm. Must satisfy 1 > beta > 0.
tol : float
    The convergence criterion. The lower it is, the more steps are permitted.
Yields:

i : int
    The current iteration number.
estimate : 1d-array
    The current estimated parameter vector.
stats : dict
    Statistics about the optimization so far. The keys record:
    - The name of the optimization algorithm.
    - The total number of iterations so far.
    - The value of the objective function per iteration. The first value corresponds to the initial estimate.
    - The number of attempts at taking a step per iteration. The first number is zero, reflecting the initial estimate. Will be empty if linesearch==False.
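A minimal sketch of a call to steepest on a made-up quadratic misfit (the objective and starting point are illustrative only):

import numpy as np
from fatiando.inversion.optimization import steepest

# Toy quadratic misfit with its minimum at [2, -1] (illustrative only)
p_true = np.array([2.0, -1.0])
value = lambda p: np.sum((p - p_true)**2)
gradient = lambda p: 2*(p - p_true)

for i, p, stats in steepest(gradient, value, initial=[0.0, 0.0]):
    pass
print(p)  # the Armijo line search picks the step size; p approaches [2, -1]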
References:
Kelley, C. T. (1999), Iterative methods for optimization, Raleigh: SIAM.