The fatiando package has been deprecated. Please check out the new tools in the Fatiando a Terra website: www.fatiando.org

# Optimization routines (fatiando.inversion.optimization)

Methods to optimize a given objective function.

All solvers are Python iterators. This means that they should be used in a for loop, like so:

solver = newton(hess_func, grad_func, value_func, initial)
for i, p, stats in solver:
    # 'p' is the current estimate for the parameter vector at the 'i'th
    # iteration.
    # 'stats' is a dictionary with some information about the optimization
    # process so far (number of attempted steps, value of the objective
    # function per step, total number of iterations so far, etc.).
    continue  # or do something with the current estimate
# At the end, 'p' is the final estimate and 'stats' contains the
# statistics for the whole iteration process.


Gradient descent methods

• linear: Solver for a linear problem
• newton: Newton's method
• levmarq: Levenberg-Marquardt algorithm
• steepest: Steepest Descent method

Heuristic methods

• acor: ACO-R: Ant Colony Optimization for Continuous Domains (Socha and Dorigo, 2008)

References

Socha, K., and M. Dorigo (2008), Ant colony optimization for continuous domains, European Journal of Operational Research, 185(3), 1155-1173, doi:10.1016/j.ejor.2006.06.046.

fatiando.inversion.optimization.acor(value, bounds, nparams, nants=None, archive_size=None, maxit=1000, diverse=0.5, evap=0.85, seed=None)

Minimize the objective function using ACO-R.

ACO-R stands for Ant Colony Optimization for Continuous Domains (Socha and Dorigo, 2008).

Parameters:

• value
: function

Returns the value of the objective function at a given parameter vector

• bounds
: list

The bounds of the search space. If only two values are given, they will be interpreted as the minimum and maximum, respectively, for all parameters. Alternatively, you can give a minimum and maximum for each parameter, e.g., for a problem with 3 parameters you could give bounds = [min1, max1, min2, max2, min3, max3].

• nparams
: int

The number of parameters that the objective function takes.

• nants
: int

The number of ants to use in the search. Defaults to the number of parameters.

• archive_size
: int

The number of solutions to keep in the solution archive. Defaults to 10 x nants.

• maxit
: int

The number of iterations to run.

• diverse
: float

Scalar from 0 to 1, non-inclusive, that controls how much better solutions are favored when constructing new ones.

• evap
: float

The pheromone evaporation rate (evap > 0). Controls how spread out the search is.

• seed
: None or int

Seed for the random number generator.

Yields:

• i, estimate, stats:
• i
: int

The current iteration number

• estimate
: 1d-array

The current best estimated parameter vector

• stats
: dict

Statistics about the optimization so far. Keys:

• method
: str

The name of the optimization algorithm

• iterations
: int

The total number of iterations so far

• objective
: list

Value of the objective function corresponding to the best estimate per iteration.
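
As a quick illustration of driving this solver, here is a minimal sketch; the sphere objective, the bounds, and the seed are illustrative choices, not part of the original documentation:

import numpy as np

def sphere(p):
    # Illustrative objective: sum of squared parameters, minimum at the origin.
    return np.sum(p**2)

# Search a 2-parameter space with every parameter bounded in [-10, 10].
solver = acor(sphere, [-10, 10], nparams=2, maxit=200, seed=42)
for i, p, stats in solver:
    continue  # 'p' tracks the best estimate found so far
print(p, stats['objective'][-1])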

fatiando.inversion.optimization.levmarq(hessian, gradient, value, initial, maxit=30, maxsteps=20, lamb=10, dlamb=2, tol=1e-05, precondition=True)

Minimize an objective function using the Levenberg-Marquardt algorithm.

Parameters:

• hessian
: function

A function that returns the Hessian matrix of the objective function when given a parameter vector.

• gradient
: function

A function that returns the gradient vector of the objective function when given a parameter vector.

• value
: function

A function that returns the value of the objective function evaluated at a given parameter vector.

• initial
: 1d-array

The initial estimate for the gradient descent.

• maxit
: int

The maximum number of iterations allowed.

• maxsteps
: int

The maximum number of times to try to take a step before giving up.

• lamb
: float

Initial amount of step regularization. The larger this is, the more the algorithm will resemble Steepest Descent in the initial iterations.

• dlamb
: float

Factor by which lamb is divided or multiplied when taking steps.

• tol
: float

The convergence criterion. The lower it is, the more steps are permitted.

• precondition
: True or False

If True, will use Jacobi preconditioning.

Yields:

• i, estimate, stats:
• i
: int

The current iteration number

• estimate
: 1d-array

The current estimated parameter vector

• stats
: dict

Statistics about the optimization so far. Keys:

• method
: str

The name of the optimization method

• iterations
: int

The total number of iterations so far

• objective
: list

Value of the objective function per iteration. First value corresponds to the initial estimate.

• step_attempts
: list

Number of attempts at taking a step per iteration. First number is zero, reflecting the initial estimate.
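
A minimal sketch of calling this solver on a toy least-squares objective; the matrices and the objective itself are illustrative assumptions:

import numpy as np

# Toy objective phi(p) = ||A p - d||^2 (illustrative matrices).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
d = np.array([1.0, 2.0])

def value(p):
    residuals = A.dot(p) - d
    return residuals.dot(residuals)

def gradient(p):
    return 2 * A.T.dot(A.dot(p) - d)

def hessian(p):
    return 2 * A.T.dot(A)

solver = levmarq(hessian, gradient, value, initial=np.zeros(2))
for i, p, stats in solver:
    continue
print(p, stats['objective'][-1])  # final estimate and objective value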

fatiando.inversion.optimization.linear(hessian, gradient, precondition=True)

Find the parameter vector that minimizes a linear objective function.

The parameter vector $$\bar{p}$$ that minimizes this objective function $$\phi$$ is the one that solves the linear system

$\bar{\bar{H}} \bar{p} = -\bar{g}$

where $$\bar{\bar{H}}$$ is the Hessian matrix of $$\phi$$ and $$\bar{g}$$ is the gradient vector of $$\phi$$.

Parameters:

• hessian
: 2d-array

The Hessian matrix of the objective function.

• gradient
: 1d-array

The gradient vector of the objective function.

• precondition
: True or False

If True, will use Jacobi preconditioning.

Yields:

• i, estimate, stats:
• i
: int

The current iteration number

• estimate
: 1d-array

The current estimated parameter vector

• stats
: dict

Statistics about the optimization so far

Linear solvers have only a single step, so i will be 0 and stats will only have the method name.
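
Since the solver yields exactly once, a single next() call is enough. A minimal sketch, reusing the toy quadratic phi(p) = ||A p - d||^2 from above (the matrices are illustrative assumptions):

import numpy as np

# For phi(p) = ||A p - d||^2, the Hessian is 2 A^T A and the gradient
# evaluated at p = 0 is -2 A^T d.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
d = np.array([1.0, 2.0])
hess = 2 * A.T.dot(A)
grad = -2 * A.T.dot(d)

i, p, stats = next(linear(hess, grad))
print(i, p)  # i is 0; p solves hess * p = -grad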

fatiando.inversion.optimization.newton(hessian, gradient, value, initial, maxit=30, tol=1e-05, precondition=True)

Minimize an objective function using Newton’s method.

Newton’s method searches for the minimum of an objective function $$\phi(\bar{p})$$ by successively incrementing the initial estimate. The increment is the solution of the linear system

$\bar{\bar{H}}(\bar{p}^k) \bar{\Delta p}^k = -\bar{g}(\bar{p}^k)$

where $$\bar{\bar{H}}$$ is the Hessian matrix of $$\phi$$ and $$\bar{g}$$ is the gradient vector of $$\phi$$. Both are evaluated at the previous estimate $$\bar{p}^k$$.

Parameters:

• hessian
: function

A function that returns the Hessian matrix of the objective function when given a parameter vector.

• gradient
: function

A function that returns the gradient vector of the objective function when given a parameter vector.

• value
: function

A function that returns the value of the objective function evaluated at a given parameter vector.

• initial
: 1d-array

The initial estimate for the gradient descent.

• maxit
: int

The maximum number of iterations allowed.

• tol
: float

The convergence criterion. The lower it is, the more steps are permitted.

• precondition
: True or False

If True, will use Jacobi preconditioning.

Yields:

• i, estimate, stats:
• i
: int

The current iteration number

• estimate
: 1d-array

The current estimated parameter vector

• stats
: dict

Statistics about the optimization so far. Keys:

• method
: str

The name of the optimization method

• iterations
: int

The total number of iterations so far

• objective
: list

Value of the objective function per iteration. First value corresponds to the initial estimate.
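
A minimal sketch of running Newton's method on a simple quadratic; the objective and its derivatives are illustrative assumptions:

import numpy as np

# Illustrative quadratic with minimum at p = [1, -2].
def value(p):
    return (p[0] - 1)**2 + 2 * (p[1] + 2)**2

def gradient(p):
    return np.array([2 * (p[0] - 1), 4 * (p[1] + 2)])

def hessian(p):
    return np.array([[2.0, 0.0], [0.0, 4.0]])

for i, p, stats in newton(hessian, gradient, value, initial=np.zeros(2)):
    continue
print(p)  # converges to approximately [1, -2]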

fatiando.inversion.optimization.steepest(gradient, value, initial, maxit=1000, linesearch=True, maxsteps=30, beta=0.1, tol=1e-05)

Minimize an objective function using the Steepest Descent method.

The increment to the initial estimate of the parameter vector $$\bar{p}$$ is calculated by (Kelley, 1999)

$\Delta\bar{p} = -\lambda\bar{g}$

where $$\lambda$$ is the step size and $$\bar{g}$$ is the gradient vector.

The step size can be determined through a line search algorithm using the Armijo rule (Kelley, 1999). In this case,

$\lambda = \beta^m$

where $$1 > \beta > 0$$ and $$m \ge 0$$ is an integer that controls the step size. The line search finds the smallest $$m$$ that satisfies the Armijo rule

$\phi(\bar{p} + \Delta\bar{p}) - \phi(\bar{p}) < \alpha\beta^m ||\bar{g}(\bar{p})||^2$

where $$\phi(\bar{p})$$ is the objective function evaluated at $$\bar{p}$$ and $$\alpha = 10^{-4}$$.

Parameters:

• gradient
: function

A function that returns the gradient vector of the objective function when given a parameter vector.

• value
: function

A function that returns the value of the objective function evaluated at a given parameter vector.

• initial
: 1d-array

The initial estimate for the gradient descent.

• maxit
: int

The maximum number of iterations allowed.

• linesearch
: True or False

Whether or not to perform the line search to determine an optimal step size.

• maxsteps
: int

The maximum number of times to try to take a step before giving up.

• beta
: float

The base factor used to determine the step size in line search algorithm. Must be 1 > beta > 0.

• tol
: float

The convergence criterion. The lower it is, the more steps are permitted.

Yields:

• i, estimate, stats:
• i
: int

The current iteration number

• estimate
: 1d-array

The current estimated parameter vector

• stats
: dict

Statistics about the optimization so far. Keys:

• method
: str

The name of the optimization algorithm

• iterations
: int

The total number of iterations so far

• objective
: list

Value of the objective function per iteration. First value corresponds to the initial estimate.

• step_attempts
: list

Number of attempts at taking a step per iteration. First number is zero, reflecting the initial estimate. Will be empty if linesearch==False.
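
A minimal sketch, reusing the illustrative quadratic from the newton example; steepest needs only the gradient and value functions:

import numpy as np

def value(p):
    return (p[0] - 1)**2 + 2 * (p[1] + 2)**2

def gradient(p):
    return np.array([2 * (p[0] - 1), 4 * (p[1] + 2)])

for i, p, stats in steepest(gradient, value, initial=np.zeros(2)):
    continue
print(p, stats['iterations'])  # final estimate and iteration count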

References:

Kelley, C. T. (1999), Iterative Methods for Optimization, SIAM, Philadelphia.