mushi.loss_functions
Loss functions for measuring goodness of fit.
Each loss function takes an expected data matrix \(\mathbb{E}[\mathbf X]\) and an observed data matrix \(\mathbf X\), and returns a loss value. Higher loss means worse fit between \(\mathbf X\) and \(\mathbb{E}[\mathbf X]\).
Examples
>>> import mushi.loss_functions as lf
>>> import numpy as np
Define an expected data matrix \(\mathbb{E}[\mathbf X]\) and an observed data matrix \(\mathbf X\), each a \(10\times 10\) array of ones (for this trivial example):
>>> E = np.ones((10, 10))
>>> X = np.ones((10, 10))
Compute various losses:
Poisson random field
>>> lf.prf(E, X)
DeviceArray(100., dtype=float64)
Generalized Kullback-Leibler divergence
>>> lf.dkl(E, X)
DeviceArray(0., dtype=float64)
Least squares
>>> lf.lsq(E, X)
0.0
Functions
dkl — Generalized Kullback-Leibler divergence, a Bregman divergence (ignores constant term)

lsq — Least-squares loss

prf — Poisson random field loss
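The three losses can be sketched in plain NumPy as follows. This is a minimal illustration, not mushi's implementation (which uses JAX arrays, hence the ``DeviceArray`` outputs above); in particular, the constant term dropped by each loss and the scaling of ``lsq`` are assumptions here.

```python
import numpy as np

def prf(E, X):
    """Poisson random field loss: sum of E - X*log(E) over entries.

    Assumes the X-dependent constant log(X!) of the Poisson
    log-likelihood is dropped.
    """
    return np.sum(E - X * np.log(E))

def dkl(E, X):
    """Generalized Kullback-Leibler divergence:
    sum of X*log(X/E) - X + E over entries."""
    return np.sum(X * np.log(X / E) - X + E)

def lsq(E, X):
    """Least-squares loss; the 1/2 scaling here is an assumption."""
    return 0.5 * np.sum((E - X) ** 2)

# Reproduce the doctest values with all-ones matrices
E = np.ones((10, 10))
X = np.ones((10, 10))
print(prf(E, X))  # 100.0: log(1) = 0, so each entry contributes 1
print(dkl(E, X))  # 0.0: identical matrices have zero divergence
print(lsq(E, X))  # 0.0: identical matrices have zero squared error
```

Note that ``prf`` is not zero at a perfect fit because the constant term is dropped; only differences in this loss are meaningful, which is all that matters for optimization.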