fatiando.inversion.regularization

Ready-made classes for regularization.

Each class represents a regularizing function. They can be used by adding them to a Misfit derivative (all inversions in Fatiando are derived from Misfit). The regularization parameter is set by multiplying the regularization instance by a scalar, e.g., solver = misfit + 0.1*regularization.

See fatiando.gravmag.eqlayer.EQLGravity for an example.
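As a minimal sketch of the idea (plain NumPy, not the fatiando API), here is how a damping term enters a linear least-squares inversion: minimizing \(\|d - Gp\|^2 + \mu\|p\|^2\) leads to the damped normal equations \((G^TG + \mu I)p = G^Td\), where \(\mu\) plays the role of the scalar regularization parameter above. The Jacobian and data below are hypothetical:

```python
import numpy as np

# Hypothetical, nearly singular Jacobian and noise-free synthetic data.
G = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
p_true = np.array([1.0, 2.0])
d = G @ p_true

mu = 0.1  # regularization parameter (the scalar multiplying the instance)
# Plain least squares vs. damped (0th order Tikhonov) normal equations.
p_plain = np.linalg.solve(G.T @ G, G.T @ d)
p_damped = np.linalg.solve(G.T @ G + mu * np.eye(2), G.T @ d)
```

Increasing mu shrinks the norm of the estimate at the cost of a larger data misfit; choosing mu is the usual trade-off when combining a Misfit with a regularization instance.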
List of classes:

- Damping: Damping regularization (or 0th order Tikhonov regularization).
- Smoothness: Generic smoothness regularization (or 1st order Tikhonov regularization). Requires a finite difference matrix to specify the parameter derivatives to minimize.
- Smoothness1D: Smoothness for 1D problems. Automatically builds a finite difference matrix based on the number of parameters.
- Smoothness2D: Smoothness for 2D grid-based problems. Automatically builds a finite difference matrix of derivatives in the two spatial dimensions based on the shape of the parameter grid.
- TotalVariation: Generic total variation regularization (enforces sharpness of the solution). Requires a finite difference matrix to specify the parameter derivatives.
- TotalVariation1D: Total variation for 1D problems. Similar to Smoothness1D.
- TotalVariation2D: Total variation for 2D grid-based problems. Similar to Smoothness2D.

fatiando.inversion.regularization.Damping(nparams)
Bases: fatiando.inversion.regularization.Regularization
Damping (0th order Tikhonov) regularization.
Imposes the minimum norm of the parameter vector.
The regularizing function is of the form

\[\theta(\bar{p}) = \bar{p}^T \bar{p}\]

Its gradient and Hessian matrices are, respectively,

\[\bar{g}(\bar{p}) = 2\bar{p}, \qquad \bar{\bar{H}} = 2\bar{\bar{I}}\]

Parameters:
nparams : The number of parameters.
Examples:
>>> import numpy
>>> damp = Damping(3)
>>> p = numpy.array([0, 0, 0])
>>> damp.value(p)
0.0
>>> damp.hessian(p).todense()
matrix([[ 2., 0., 0.],
        [ 0., 2., 0.],
        [ 0., 0., 2.]])
>>> damp.gradient(p)
array([ 0., 0., 0.])
>>> p = numpy.array([1, 0, 0])
>>> damp.value(p)
1.0
>>> damp.hessian(p).todense()
matrix([[ 2., 0., 0.],
        [ 0., 2., 0.],
        [ 0., 0., 2.]])
>>> damp.gradient(p)
array([ 2., 0., 0.])
The Hessian matrix is cached so that it is only generated on the first call to damp.hessian (unlike the gradient, which is calculated every time).
>>> damp.hessian(p) is damp.hessian(p)
True
>>> damp.gradient(p) is damp.gradient(p)
False
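This caching can be sketched with a small stand-alone class (a hypothetical illustration, not fatiando's actual implementation): because the damping Hessian is the constant \(2\bar{\bar{I}}\), it is built once and the same object is returned on every later call, while the gradient depends on p and is recomputed each time.

```python
import numpy as np

class CachedDamping:
    """Sketch of a damping regularizer that caches its constant Hessian."""

    def __init__(self, nparams):
        self.nparams = nparams
        self._hessian = None  # filled on the first call to hessian()

    def hessian(self, p):
        # The damping Hessian is the constant 2*I: build once, reuse.
        if self._hessian is None:
            self._hessian = 2 * np.eye(self.nparams)
        return self._hessian

    def gradient(self, p):
        # The gradient depends on p, so a new array is made on every call.
        return 2 * np.asarray(p, dtype=float)
```

With this sketch, hessian(p) is hessian(p) holds (same cached object) while gradient(p) is gradient(p) does not, mirroring the doctest above.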
copy(deep=False)
Make a copy of me together with all the cached methods.

gradient(p)
Calculate the gradient vector.
Parameters:
p : The parameter vector. If None, will return 0.
Returns:
The gradient.

hessian(p)
Calculate the Hessian matrix.
Parameters:
p : The parameter vector.
Returns:
The Hessian.

regul_param
The regularization parameter (scale factor) for the objective function. Defaults to 1.
fatiando.inversion.regularization.Smoothness(fdmat)
Bases: fatiando.inversion.regularization.Regularization
Smoothness (1st order Tikhonov) regularization.
Imposes that adjacent parameters have values close to each other.
The regularizing function is of the form

\[\theta(\bar{p}) = \bar{p}^T \bar{\bar{R}}^T \bar{\bar{R}} \bar{p}\]

Its gradient and Hessian matrices are, respectively,

\[\bar{g}(\bar{p}) = 2\bar{\bar{R}}^T \bar{\bar{R}} \bar{p}, \qquad \bar{\bar{H}} = 2\bar{\bar{R}}^T \bar{\bar{R}}\]

in which \(\bar{\bar{R}}\) is a finite difference matrix. Each of its rows takes the difference between a pair of parameters, so it is what defines which parameters are "adjacent".
Parameters:
fdmat : The finite difference matrix.
Examples:
>>> import numpy as np
>>> fd = np.array([[1, -1, 0],
... [0, 1, -1]])
>>> smooth = Smoothness(fd)
>>> p = np.array([0, 0, 0])
>>> smooth.value(p)
0.0
>>> smooth.gradient(p)
array([0, 0, 0])
>>> smooth.hessian(p)
array([[ 2, -2, 0],
       [-2, 4, -2],
       [ 0, -2, 2]])
>>> p = np.array([1, 0, 1])
>>> smooth.value(p)
2.0
>>> smooth.gradient(p)
array([ 2, -4, 2])
>>> smooth.hessian(p)
array([[ 2, -2, 0],
       [-2, 4, -2],
       [ 0, -2, 2]])
The Hessian matrix is cached so that it is only generated on the first call to hessian (unlike the gradient, which is calculated every time).
>>> smooth.hessian(p) is smooth.hessian(p)
True
>>> smooth.gradient(p) is smooth.gradient(p)
False
copy(deep=False)
Make a copy of me together with all the cached methods.

gradient(p)
Calculate the gradient vector.
Parameters:
p : The parameter vector. If None, will return 0.
Returns:
The gradient.

hessian(p)
Calculate the Hessian matrix.
Parameters:
p : The parameter vector.
Returns:
The Hessian.

regul_param
The regularization parameter (scale factor) for the objective function. Defaults to 1.
fatiando.inversion.regularization.Smoothness1D(npoints)
Bases: fatiando.inversion.regularization.Smoothness
Smoothness regularization for 1D problems.
Extends the generic Smoothness class by automatically building the finite difference matrix.
Parameters:
npoints : The number of parameters.
Examples:
>>> import numpy as np
>>> s = Smoothness1D(3)
>>> p = np.array([0, 0, 0])
>>> s.value(p)
0.0
>>> s.gradient(p)
array([0, 0, 0])
>>> s.hessian(p).todense()
matrix([[ 2, -2, 0],
        [-2, 4, -2],
        [ 0, -2, 2]])
>>> p = np.array([1, 0, 1])
>>> s.value(p)
2.0
>>> s.gradient(p)
array([ 2, -4, 2])
>>> s.hessian(p).todense()
matrix([[ 2, -2, 0],
        [-2, 4, -2],
        [ 0, -2, 2]])
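The automatic construction can be sketched in plain NumPy (a hypothetical reimplementation, consistent with the doctest output above, not fatiando's own code): each row of the finite difference matrix differences one adjacent pair of parameters, and the Hessian is \(2\bar{\bar{R}}^T\bar{\bar{R}}\).

```python
import numpy as np

def fd_matrix_1d(npoints):
    """First-difference matrix for npoints parameters, (npoints-1) x npoints."""
    R = np.zeros((npoints - 1, npoints))
    for i in range(npoints - 1):
        R[i, i] = 1      # each row differences parameters i and i+1
        R[i, i + 1] = -1
    return R

R = fd_matrix_1d(3)
hessian = 2 * R.T @ R  # dense version of the Hessian shown in the doctest
```

For three parameters this reproduces the [[2, -2, 0], [-2, 4, -2], [0, -2, 2]] Hessian above.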
copy(deep=False)
Make a copy of me together with all the cached methods.

gradient(p)
Calculate the gradient vector.
Parameters:
p : The parameter vector. If None, will return 0.
Returns:
The gradient.

hessian(p)
Calculate the Hessian matrix.
Parameters:
p : The parameter vector.
Returns:
The Hessian.

regul_param
The regularization parameter (scale factor) for the objective function. Defaults to 1.

value(p)
Calculate the value of this function.
Parameters:
p : The parameter vector.
Returns:
The value of this function evaluated at p.
fatiando.inversion.regularization.Smoothness2D(shape)
Bases: fatiando.inversion.regularization.Smoothness
Smoothness regularization for 2D problems.
Extends the generic Smoothness class by automatically building the finite difference matrix.
Parameters:
shape : The shape of the parameter grid: the number of parameters in the y and x (or z and x, time and offset, etc.) dimensions.
Examples:
>>> import numpy as np
>>> s = Smoothness2D((2, 2))
>>> p = np.array([[0, 0],
... [0, 0]]).ravel()
>>> s.value(p)
0.0
>>> s.gradient(p)
array([0, 0, 0, 0])
>>> s.hessian(p).todense()
matrix([[ 4, -2, -2, 0],
        [-2, 4, 0, -2],
        [-2, 0, 4, -2],
        [ 0, -2, -2, 4]])
>>> p = np.array([[1, 0],
... [2, 3]]).ravel()
>>> s.value(p)
12.0
>>> s.gradient(p)
array([ 0, -8, 0, 8])
>>> s.hessian(p).todense()
matrix([[ 4, -2, -2, 0],
        [-2, 4, 0, -2],
        [-2, 0, 4, -2],
        [ 0, -2, -2, 4]])
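A 2D version of the finite difference matrix can be sketched the same way (again a hypothetical reimplementation, checked against the doctest above): one row per horizontally or vertically adjacent pair of parameters in the row-major flattened grid.

```python
import numpy as np

def fd_matrix_2d(shape):
    """Difference matrix for a 2D grid: one row per adjacent (horizontal
    or vertical) pair of parameters in the row-major flattened vector."""
    ny, nx = shape
    rows = []
    for i in range(ny):
        for j in range(nx):
            k = i * nx + j
            if j + 1 < nx:  # horizontal neighbor to the right
                row = np.zeros(ny * nx)
                row[k], row[k + 1] = 1, -1
                rows.append(row)
            if i + 1 < ny:  # vertical neighbor below
                row = np.zeros(ny * nx)
                row[k], row[k + nx] = 1, -1
                rows.append(row)
    return np.array(rows)

R = fd_matrix_2d((2, 2))
hessian = 2 * R.T @ R  # dense version of the Hessian shown in the doctest
```

For the (2, 2) grid this reproduces the doctest Hessian, the value 12.0, and the gradient [0, -8, 0, 8] for p = [1, 0, 2, 3].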
copy(deep=False)
Make a copy of me together with all the cached methods.

gradient(p)
Calculate the gradient vector.
Parameters:
p : The parameter vector. If None, will return 0.
Returns:
The gradient.

hessian(p)
Calculate the Hessian matrix.
Parameters:
p : The parameter vector.
Returns:
The Hessian.

regul_param
The regularization parameter (scale factor) for the objective function. Defaults to 1.

value(p)
Calculate the value of this function.
Parameters:
p : The parameter vector.
Returns:
The value of this function evaluated at p.
fatiando.inversion.regularization.TotalVariation(beta, fdmat)
Bases: fatiando.inversion.regularization.Regularization
Total variation regularization.
Imposes that adjacent parameters have few, sharp transitions.
The regularizing function is of the form

\[\theta(\bar{p}) = \sum_{k=1}^{L} |v_k|\]

where vector \(\bar{v} = \bar{\bar{R}}\bar{p}\). See Smoothness for the definition of the \(\bar{\bar{R}}\) matrix.
This function is not differentiable at the null vector, so the following differentiable approximation is used to calculate the gradient and Hessian:

\[\theta_\beta(\bar{p}) = \sum_{k=1}^{L} \sqrt{v_k^2 + \beta}\]

Its gradient and Hessian matrices are, respectively,

\[\bar{g}(\bar{p}) = \bar{\bar{R}}^T \bar{q}, \qquad q_k = \frac{v_k}{\sqrt{v_k^2 + \beta}}\]

and

\[\bar{\bar{H}}(\bar{p}) = \bar{\bar{R}}^T \bar{\bar{Q}} \bar{\bar{R}}, \qquad Q_{kk} = \frac{\beta}{(v_k^2 + \beta)^{3/2}}\]

Parameters:
beta : The beta parameter for the differentiable approximation. The larger it is, the closer total variation is to Smoothness. Should be a small, positive value.
fdmat : The finite difference matrix.
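A minimal NumPy sketch of the differentiable approximation (using the q and Q definitions above; this is an illustration of the math, not fatiando's implementation):

```python
import numpy as np

def tv_value(p, R, beta):
    # theta_beta(p) = sum_k sqrt(v_k^2 + beta), with v = R p
    v = R @ p
    return np.sum(np.sqrt(v**2 + beta))

def tv_gradient(p, R, beta):
    # g = R^T q, with q_k = v_k / sqrt(v_k^2 + beta)
    v = R @ p
    return R.T @ (v / np.sqrt(v**2 + beta))

def tv_hessian(p, R, beta):
    # H = R^T Q R, with Q diagonal and Q_kk = beta / (v_k^2 + beta)^(3/2)
    v = R @ p
    return R.T @ np.diag(beta / (v**2 + beta) ** 1.5) @ R
```

The gradient can be verified against finite differences of tv_value, and as beta grows the Hessian approaches that of Smoothness up to a 1/sqrt(beta) scale factor, which is the sense in which large beta makes total variation behave like smoothness.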
copy(deep=False)
Make a copy of me together with all the cached methods.

gradient(p)
Calculate the gradient vector.
Parameters:
p : The parameter vector.
Returns:
The gradient.

hessian(p)
Calculate the Hessian matrix.
Parameters:
p : The parameter vector.
Returns:
The Hessian.

regul_param
The regularization parameter (scale factor) for the objective function. Defaults to 1.
fatiando.inversion.regularization.TotalVariation1D(beta, npoints)
Bases: fatiando.inversion.regularization.TotalVariation
Total variation regularization for 1D problems.
Extends the generic TotalVariation class by automatically building the finite difference matrix.
Parameters:
beta : The beta parameter for the differentiable approximation. The larger it is, the closer total variation is to Smoothness. Should be a small, positive value.
npoints : The number of parameters.
copy(deep=False)
Make a copy of me together with all the cached methods.

gradient(p)
Calculate the gradient vector.
Parameters:
p : The parameter vector.
Returns:
The gradient.

hessian(p)
Calculate the Hessian matrix.
Parameters:
p : The parameter vector.
Returns:
The Hessian.

regul_param
The regularization parameter (scale factor) for the objective function. Defaults to 1.

value(p)
Calculate the value of this function.
Parameters:
p : The parameter vector.
Returns:
The value of this function evaluated at p.
fatiando.inversion.regularization.TotalVariation2D(beta, shape)
Bases: fatiando.inversion.regularization.TotalVariation
Total variation regularization for 2D problems.
Extends the generic TotalVariation class by automatically building the finite difference matrix.
Parameters:
beta : The beta parameter for the differentiable approximation. The larger it is, the closer total variation is to Smoothness. Should be a small, positive value.
shape : The shape of the parameter grid: the number of parameters in the y and x (or z and x, time and offset, etc.) dimensions.
copy(deep=False)
Make a copy of me together with all the cached methods.

gradient(p)
Calculate the gradient vector.
Parameters:
p : The parameter vector.
Returns:
The gradient.

hessian(p)
Calculate the Hessian matrix.
Parameters:
p : The parameter vector.
Returns:
The Hessian.

regul_param
The regularization parameter (scale factor) for the objective function. Defaults to 1.

value(p)
Calculate the value of this function.
Parameters:
p : The parameter vector.
Returns:
The value of this function evaluated at p.