FixedValuePrior

class lightkurve.prf.FixedValuePrior(value, name=None)[source]

Bases: oktopus.prior.Prior

An improper prior whose negative log probability is 0 at a single fixed value and inf elsewhere. It is similar to a Dirac delta function, except that it does not peak at infinity, which makes it usable in numerical optimization routines. As a consequence it does not integrate to one and is therefore an "improper distribution".

Examples

>>> from lightkurve.prf import FixedValuePrior
>>> fp = FixedValuePrior(1)
>>> fp(1)
-0.0
>>> fp(0.5)
inf

Attributes:
value : int or array-like of ints

The fixed value.

Attributes Summary

mean       Returns the fixed value.
name       A name associated with the prior.
variance   Returns zero.

Methods Summary

__call__(params)    Calls evaluate().
evaluate(params)    Returns the negative log pdf.
fit([optimizer])    Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().
gradient(params)    Returns the gradient of the loss function evaluated at params.
hessian(params)     Returns the Hessian matrix of the loss function evaluated at params.

Attributes Documentation

mean

Returns the fixed value.

name

A name associated with the prior.

variance

Returns zero.

Methods Documentation

__call__(params)

Calls evaluate().

evaluate(params)[source]

Returns the negative log pdf.
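
The actual implementation lives in the lightkurve source; the following is a minimal sketch of the documented behavior only, assuming a NumPy comparison of ``params`` against the stored ``value`` (the helper below is hypothetical, not the library's code)::

    import numpy as np

    def evaluate_sketch(params, value):
        """Hypothetical re-implementation: the negative log pdf is
        -log(1) = -0.0 at the fixed value and +inf everywhere else."""
        if np.all(np.asarray(params) == value):
            return -np.log(1.0)  # evaluates to -0.0, matching the example above
        return np.inf

    evaluate_sketch(1, 1)    # -0.0
    evaluate_sketch(0.5, 1)  # inf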

fit(optimizer='minimize', **kwargs)

Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().

Parameters:
optimizer : str

Optimization algorithm. Options are:

- ``'minimize'`` uses :func:`scipy.optimize.minimize`

- ``'differential_evolution'`` uses :func:`scipy.optimize.differential_evolution`

- ``'basinhopping'`` uses :func:`scipy.optimize.basinhopping`

- ``'gp_minimize'`` uses :func:`skopt.gp.gp_minimize`

``'minimize'`` is usually robust enough and is therefore recommended whenever a good initial guess can be provided. The remaining options are global optimizers, which may provide better results precisely in those cases where a close enough initial guess cannot be obtained trivially.

kwargs : dict

Additional keyword arguments passed on to the chosen optimization function.

Returns:
opt_result : scipy.optimize.OptimizeResult object

Object containing the results of the optimization process. Note: this is also stored in self.opt_result.
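
For illustration, a hedged sketch of the dispatch pattern described above, applied to a smooth stand-in loss; ``fit_like`` and the quadratic loss are hypothetical, and the ``skopt`` backend is omitted for brevity::

    from scipy import optimize

    def fit_like(loss, optimizer='minimize', **kwargs):
        # Hypothetical re-implementation of the optimizer dispatch;
        # extra keyword arguments are forwarded to the chosen backend.
        backends = {
            'minimize': optimize.minimize,
            'differential_evolution': optimize.differential_evolution,
            'basinhopping': optimize.basinhopping,
        }
        return backends[optimizer](loss, **kwargs)

    # A smooth quadratic loss with its minimum at p = 1; 'minimize'
    # requires an initial guess x0, passed through **kwargs.
    opt_result = fit_like(lambda p: (p[0] - 1.0) ** 2, x0=[0.5])
    print(opt_result.x)  # ~ array([1.])

Note that FixedValuePrior's own evaluate() is infinite everywhere except at the fixed value, so a gradient-based optimizer gains little from this prior in isolation; fit() is most useful on composite losses that include this prior as one term.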

gradient(params)[source]

Returns the gradient of the loss function evaluated at params.

Parameters:
params : ndarray

Parameter vector of the model.
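
Since FixedValuePrior is flat except at a single point, its gradient carries little information on its own; as a purely illustrative sketch of what "the gradient of the loss evaluated at params" means, a hypothetical central finite-difference helper applied to a smooth loss::

    import numpy as np

    def numerical_gradient(loss, params, eps=1e-6):
        # Central finite differences; purely illustrative and not
        # the library's implementation.
        params = np.asarray(params, dtype=float)
        grad = np.zeros_like(params)
        for i in range(params.size):
            step = np.zeros_like(params)
            step[i] = eps
            grad[i] = (loss(params + step) - loss(params - step)) / (2 * eps)
        return grad

    # d/dp (p - 1)^2 = 2 (p - 1) = -1 at p = 0.5
    print(numerical_gradient(lambda p: (p[0] - 1.0) ** 2, [0.5]))  # ~ [-1.]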

hessian(params)

Returns the Hessian matrix of the loss function evaluated at params.

Parameters:
params : ndarray

Parameter vector of the model.
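
Analogously, a hypothetical finite-difference sketch of a Hessian evaluated at ``params``, again on a smooth stand-in loss rather than this prior::

    import numpy as np

    def numerical_hessian(loss, params, eps=1e-4):
        # Second central differences; purely illustrative.
        params = np.asarray(params, dtype=float)
        n = params.size
        hess = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei = np.zeros(n)
                ej = np.zeros(n)
                ei[i] = eps
                ej[j] = eps
                hess[i, j] = (loss(params + ei + ej) - loss(params + ei - ej)
                              - loss(params - ei + ej) + loss(params - ei - ej)) / (4 * eps ** 2)
        return hess

    # d^2/dp^2 (p - 1)^2 = 2 everywhere
    print(numerical_hessian(lambda p: (p[0] - 1.0) ** 2, [0.5]))  # ~ [[2.]]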