# PYME.Deconv.richardsonLucyMVM module

class PYME.Deconv.richardsonLucyMVM.dec_conv

Classical deconvolution with a stationary PSF

Methods

- Afunc(f) - Forward transform: convolve with the PSF
- Ahfunc(f) - Conjugate transform: convolve with the conjugate PSF
- Lfunc(f) - Convolve with an approximate 2nd derivative likelihood operator in 3D
- Lhfunc(f) - Convolve with an approximate 2nd derivative likelihood operator in 3D
- deconv(views, lamb[, num_iters, weights, ...]) - This is what you actually call to do the deconvolution
- deconvp(args) - Convenience function for deconvolving in parallel using processing.Pool.map
- dlHd(data, g, velx, tvals)
- dlHd2(data, g, velx, tvals)
- pd(g, velx, vely, tvals)
- psf_calc(psf, data_size) - Precalculate the OTF etc.
- startGuess(data) - Starting guess for deconvolution; can be overridden in derived classes
- updateVX(data, g, velx, tvals[, beta, nIterates])
Afunc(f)

Forward transform - convolve with the PSF

Ahfunc(f)

Conjugate transform - convolve with conj. PSF

Lfunc(f)

convolve with an approximate 2nd derivative likelihood operator in 3D, i.e. the kernel:

[[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
 [[0, 1, 0], [1, -6, 1], [0, 1, 0]],
 [[0, 0, 0], [0, 1, 0], [0, 0, 0]]]
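The kernel above can be built programmatically; a minimal NumPy sketch (not PYME's actual code) constructing the same 3D Laplacian stencil:

```python
import numpy as np

# Build the 3x3x3 discrete Laplacian used by Lfunc/Lhfunc: -6 at the
# centre, +1 at each of the six face neighbours, 0 elsewhere.
kernel = np.zeros((3, 3, 3))
kernel[1, 1, 1] = -6
for axis in range(3):
    for offset in (0, 2):
        idx = [1, 1, 1]
        idx[axis] = offset
        kernel[tuple(idx)] = 1
```

Convolving with this kernel penalises curvature, acting as a second-derivative smoothness prior on the estimate.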

Lhfunc(f)

convolve with an approximate 2nd derivative likelihood operator in 3D, i.e. the kernel:

[[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
 [[0, 1, 0], [1, -6, 1], [0, 1, 0]],
 [[0, 0, 0], [0, 1, 0], [0, 0, 0]]]

psf_calc(psf, data_size)

Precalculate the OTF etc...
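A hypothetical sketch of what such an OTF precalculation could look like (psf_calc's real implementation may differ; the centring and normalisation below are assumptions):

```python
import numpy as np

def psf_calc_sketch(psf, data_size):
    """Embed a small PSF in a data_size array, normalise it, and take
    its FFT to obtain the OTF; the conjugate OTF is what a conjugate
    transform (Ahfunc) would multiply by in Fourier space."""
    g = np.zeros(data_size)
    sl = tuple(slice((ds - ps) // 2, (ds - ps) // 2 + ps)
               for ds, ps in zip(data_size, psf.shape))
    g[sl] = psf                           # centre the PSF in the full array
    g /= g.sum()                          # unit total intensity
    H = np.fft.fftn(np.fft.ifftshift(g))  # the OTF
    return H, np.conj(H)
```

Precomputing H once means each forward/adjoint application is just an elementwise multiply in Fourier space.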

class PYME.Deconv.richardsonLucyMVM.dec_conv_slow

Classical deconvolution with a stationary PSF

Methods

- Afunc(f) - Forward transform: convolve with the PSF
- Ahfunc(f) - Conjugate transform: convolve with the conjugate PSF
- Lfunc(f) - Convolve with an approximate 2nd derivative likelihood operator in 3D
- Lhfunc(f) - Convolve with an approximate 2nd derivative likelihood operator in 3D
- deconv(views, lamb[, num_iters, weights, ...]) - This is what you actually call to do the deconvolution
- deconvp(args) - Convenience function for deconvolving in parallel using processing.Pool.map
- dlHd(data, g, velx, tvals)
- dlHd2(data, g, velx, tvals)
- pd(g, velx, vely, tvals)
- psf_calc(psf, data_size) - Precalculate the OTF etc.
- startGuess(data) - Starting guess for deconvolution; can be overridden in derived classes
- updateVX(data, g, velx, tvals[, beta, nIterates])
Afunc(f)

Forward transform - convolve with the PSF

Ahfunc(f)

Conjugate transform - convolve with conj. PSF

Lfunc(f)

convolve with an approximate 2nd derivative likelihood operator in 3D, i.e. the kernel:

[[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
 [[0, 1, 0], [1, -6, 1], [0, 1, 0]],
 [[0, 0, 0], [0, 1, 0], [0, 0, 0]]]

Lhfunc(f)

convolve with an approximate 2nd derivative likelihood operator in 3D, i.e. the kernel:

[[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
 [[0, 1, 0], [1, -6, 1], [0, 1, 0]],
 [[0, 0, 0], [0, 1, 0], [0, 0, 0]]]

psf_calc(psf, data_size)

Precalculate the OTF etc...

PYME.Deconv.richardsonLucyMVM.rand(d0, d1, ..., dn)

Random values in a given shape.

Create an array of the given shape and populate it with random samples from a uniform distribution over [0, 1).

Parameters:

- d0, d1, ..., dn : int, optional - The dimensions of the returned array; should all be positive. If no argument is given, a single Python float is returned.

Returns:

- out : ndarray, shape (d0, d1, ..., dn) - Random values.

See also

random

Notes

This is a convenience function. If you want an interface that takes a shape-tuple as the first argument, refer to np.random.random_sample.

Examples

>>> np.random.rand(3,2)
array([[ 0.14022471,  0.96360618],  # random
       [ 0.37601032,  0.25528411],  # random
       [ 0.49313049,  0.94909878]]) # random

PYME.Deconv.richardsonLucyMVM.randn(d0, d1, ..., dn)

Return a sample (or samples) from the “standard normal” distribution.

If positive, int_like or int-convertible arguments are provided, randn generates an array of shape (d0, d1, ..., dn), filled with random floats sampled from a univariate "normal" (Gaussian) distribution of mean 0 and variance 1 (if any of the d_i are floats, they are first converted to integers by truncation). A single float randomly sampled from the distribution is returned if no argument is provided.

This is a convenience function. If you want an interface that takes a tuple as the first argument, use numpy.random.standard_normal instead.

Parameters:

- d0, d1, ..., dn : int, optional - The dimensions of the returned array; should all be positive. If no argument is given, a single Python float is returned.

Returns:

- Z : ndarray or float - A (d0, d1, ..., dn)-shaped array of floating-point samples from the standard normal distribution, or a single such float if no parameters were supplied.

See also

random.standard_normal : Similar, but takes a tuple as its argument.

Notes

For random samples from $$N(\mu, \sigma^2)$$, use:

sigma * np.random.randn(...) + mu

Examples

>>> np.random.randn()
2.1923875335537315 #random

Two-by-four array of samples from N(3, 6.25):

>>> 2.5 * np.random.randn(2, 4) + 3
array([[-4.49401501,  4.00950034, -1.81814867,  7.29718677],  # random
       [ 0.39924804,  4.68456316,  4.99394529,  4.84057254]]) # random

Classical deconvolution using non-FFT convolution - potentially faster for very small PSFs. Note that the PSF must be symmetric.
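A plausible reason for the symmetry requirement: with a symmetric PSF, convolution and correlation coincide, so a single direct kernel pass is self-adjoint and can serve as both the forward transform (Afunc) and its conjugate transpose (Ahfunc). A quick 1D check with NumPy:

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0, 2.0, 0.0])
psf = np.array([0.25, 0.5, 0.25])         # symmetric PSF
conv = np.convolve(x, psf, mode='same')   # forward transform
corr = np.correlate(x, psf, mode='same')  # adjoint (correlation)
# For a symmetric PSF the two results are identical.
```

With an asymmetric PSF the two would differ, and the adjoint would need the flipped kernel.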

Methods

- Afunc(f) - Forward transform: convolve with the PSF
- Ahfunc(f) - Conjugate transform: convolve with the conjugate PSF
- deconv(views, lamb[, num_iters, weights, ...]) - This is what you actually call to do the deconvolution
- deconvp(args) - Convenience function for deconvolving in parallel using processing.Pool.map
- dlHd(data, g, velx, tvals)
- dlHd2(data, g, velx, tvals)
- pd(g, velx, vely, tvals)
- psf_calc(psf, data_size)
- startGuess(data) - Starting guess for deconvolution; can be overridden in derived classes
- updateVX(data, g, velx, tvals[, beta, nIterates])
Afunc(f)

Forward transform - convolve with the PSF

Ahfunc(f)

Conjugate transform - convolve with conj. PSF

psf_calc(psf, data_size)

class PYME.Deconv.richardsonLucyMVM.rldec

Deconvolution class, implementing a variant of the Richardson-Lucy algorithm.

Derived classes should additionally define the following methods:

- AFunc - the forward mapping (computes $$Af$$)
- AHFunc - conjugate transpose of the forward mapping (computes $$\bar{A}^T f$$)
- LFunc - the likelihood function
- LHFunc - conjugate transpose of the likelihood function

See dec_conv for an implementation of conventional image deconvolution with a measured, spatially invariant PSF.
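As a rough illustration of the multiplicative Richardson-Lucy update such a class implements (a minimal 1D NumPy sketch, not PYME's actual code; regularisation, weights, and the multi-view machinery are omitted):

```python
import numpy as np

def rl_deconv(data, psf, num_iters=50):
    """Plain Richardson-Lucy: f <- f * Ahfunc(data / Afunc(f)),
    with Afunc = circular convolution by a unit-sum PSF."""
    otf = np.fft.fft(np.fft.ifftshift(psf))
    Afunc = lambda f: np.real(np.fft.ifft(np.fft.fft(f) * otf))
    Ahfunc = lambda f: np.real(np.fft.ifft(np.fft.fft(f) * np.conj(otf)))
    f = data.copy()  # startGuess: the data itself
    for _ in range(num_iters):
        # back-project the ratio of data to the current forward model
        f = f * Ahfunc(data / np.maximum(Afunc(f), 1e-12))
    return f
```

Each iteration rescales the estimate by the back-projected data/model ratio, which keeps a non-negative starting guess non-negative throughout.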

Methods

- deconv(views, lamb[, num_iters, weights, ...]) - This is what you actually call to do the deconvolution
- deconvp(args) - Convenience function for deconvolving in parallel using processing.Pool.map
- dlHd(data, g, velx, tvals)
- dlHd2(data, g, velx, tvals)
- pd(g, velx, vely, tvals)
- startGuess(data) - Starting guess for deconvolution; can be overridden in derived classes
- updateVX(data, g, velx, tvals[, beta, nIterates])
deconv(views, lamb, num_iters=10, weights=1, bg=0, vx=0, vy=0)

This is what you actually call to do the deconvolution. Parameters are:

- data - the raw data
- lamb - the regularisation parameter
- num_iters - number of iterations (note that convergence is fast compared to many algorithms - e.g. Richardson-Lucy - and the default of 10 will usually already give a reasonable result)
- alpha - PSF phase, hacked in for variable-phase 4Pi deconvolution; should really be refactored out into the dec_4pi classes
deconvp(args)

Convenience function for deconvolving in parallel using processing.Pool.map.

dlHd(data, g, velx, tvals)
dlHd2(data, g, velx, tvals)
pd(g, velx, vely, tvals)
startGuess(data)

Starting guess for deconvolution - can be overridden in derived classes, but the data itself is usually a pretty good guess.
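For example, a derived class might replace the data-based guess with a flat one (FlatStartMixin is an illustrative name; a real subclass would derive from one of the dec_* classes):

```python
import numpy as np

class FlatStartMixin:
    """Hypothetical override: start from a uniform image carrying the
    same mean intensity as the data, instead of the data itself."""
    def startGuess(self, data):
        return np.full_like(data, data.mean())
```

A flat start can be preferable when the raw data is very noisy, at the cost of a few extra iterations to converge.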

updateVX(data, g, velx, tvals, beta=-0.9, nIterates=1)