FixedValuePrior

class lightkurve.prf.FixedValuePrior(value, name=None)

Bases: oktopus.prior.Prior

An improper prior whose negative log probability is 0 at a single fixed value and inf everywhere else. It is similar to a Dirac delta function, except that it does not peak at infinity, which allows it to be used in numerical optimization routines. As a consequence it does not integrate to one and is therefore an “improper distribution”.

Examples

>>> from lightkurve.prf import FixedValuePrior
>>> fp = FixedValuePrior(1)
>>> fp(1)
-0.0
>>> fp(0.5)
inf
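
The same behaviour can be mimicked with a few lines of plain numpy. The snippet below is only an illustrative sketch (the helper name ``toy_fixed_value_neg_log_pdf`` and its element-wise comparison are assumptions, not the lightkurve/oktopus source):

>>> import numpy as np
>>> def toy_fixed_value_neg_log_pdf(params, value):
...     # 0 when every parameter equals the fixed value, inf otherwise
...     return 0.0 if np.all(np.asarray(params) == np.asarray(value)) else np.inf
>>> toy_fixed_value_neg_log_pdf(1, 1)
0.0
>>> toy_fixed_value_neg_log_pdf(0.5, 1)
inf
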
Attributes
value : int or array-like of ints

The fixed value.

Attributes Summary

mean

Returns the fixed value.

name

A name associated with the prior

variance

Returns zero.

Methods Summary

__call__(self, params)

Calls evaluate()

evaluate(self, params)

Returns the negative log pdf.

fit(self[, optimizer])

Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().

gradient(self, params)

Returns the gradient of the loss function evaluated at params

hessian(self, params)

Returns the Hessian matrix of the loss function evaluated at params

Attributes Documentation

mean

Returns the fixed value.

name

A name associated with the prior

variance

Returns zero.

Methods Documentation

__call__(self, params)

Calls evaluate()

evaluate(self, params)

Returns the negative log pdf.

fit(self, optimizer='minimize', **kwargs)

Minimizes the evaluate() function using scipy.optimize.minimize(), scipy.optimize.differential_evolution(), scipy.optimize.basinhopping(), or skopt.gp.gp_minimize().

Parameters
optimizer : str

Optimization algorithm. Options are:

- ``'minimize'`` uses :func:`scipy.optimize.minimize`

- ``'differential_evolution'`` uses :func:`scipy.optimize.differential_evolution`

- ``'basinhopping'`` uses :func:`scipy.optimize.basinhopping`

- ``'gp_minimize'`` uses :func:`skopt.gp.gp_minimize`

'minimize' is usually robust enough and is therefore recommended whenever a good initial guess can be provided. The remaining options are global optimizers, which may give better results precisely in those cases where a close enough initial guess cannot be obtained trivially (a toy comparison is sketched after this method's documentation).

kwargs : dict

Dictionary of additional keyword arguments passed on to the chosen optimizer.

Returns
opt_result : scipy.optimize.OptimizeResult object

Object containing the results of the optimization process. Note: this is also stored in self.opt_result.
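
Because FixedValuePrior is flat (inf) everywhere except at its single fixed value, there is little for an optimizer to work with directly; the snippet below is only a sketch that uses a toy quadratic loss (an assumption for illustration, not this class's evaluate()) to contrast the local 'minimize' backend with the global 'differential_evolution' backend:

>>> from scipy.optimize import minimize, differential_evolution
>>> neg_log_pdf = lambda p: (p[0] - 3.0) ** 2             # toy smooth loss, not FixedValuePrior
>>> res_local = minimize(neg_log_pdf, x0=[0.0])           # local search: needs an initial guess
>>> res_global = differential_evolution(neg_log_pdf, bounds=[(-10, 10)])  # global: needs bounds
>>> print(round(res_local.x[0], 2), round(res_global.x[0], 2))
3.0 3.0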

gradient(self, params)

Returns the gradient of the loss function evaluated at params

Parameters
params : ndarray

Parameter vector of the model.

hessian(self, params)

Returns the Hessian matrix of the loss function evaluated at params

Parameters
params : ndarray

Parameter vector of the model.
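
For a prior that is flat everywhere except at one point, the gradient and Hessian are not very informative; the sketch below instead uses a toy differentiable loss and central finite differences (both assumptions for illustration, not the oktopus implementation) to show what gradient() and hessian() conceptually return:

>>> import numpy as np
>>> loss = lambda p: (p[0] - 3.0) ** 2 + 2.0 * (p[1] + 1.0) ** 2   # toy loss, not FixedValuePrior
>>> def approx_gradient(p, h=1e-6):
...     # central finite differences, one parameter at a time
...     e = np.eye(len(p)) * h
...     return np.array([(loss(p + e[i]) - loss(p - e[i])) / (2 * h) for i in range(len(p))])
>>> def approx_hessian(p, h=1e-4):
...     # finite differences applied to the gradient
...     e = np.eye(len(p)) * h
...     return np.array([(approx_gradient(p + e[i]) - approx_gradient(p - e[i])) / (2 * h)
...                      for i in range(len(p))])
>>> np.allclose(approx_gradient([4.0, 0.0]), [2.0, 4.0], atol=1e-3)
True
>>> np.allclose(approx_hessian([4.0, 0.0]), [[2.0, 0.0], [0.0, 4.0]], atol=1e-3)
True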

