Python API

Stochastic Bayesian inference of a nonlinear model

Infers:
  • Posterior mean values of model parameters
  • A posterior covariance matrix (which may be diagonal or a full positive-definite matrix)
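As an illustration only, a minimal sketch of how such a posterior might be held in TensorFlow, assuming W parameter vertices and P parameters (the variable names are hypothetical, not the package's actual API):

    import tensorflow as tf

    W, P = 1000, 3   # number of parameter vertices, number of parameters

    # Posterior mean for each parameter at each vertex: shape [W, P]
    post_mean = tf.Variable(tf.zeros([W, P]))

    # Full covariance via a per-vertex Cholesky factor: shape [W, P, P].
    # A diagonal ("mean-field") posterior would keep only the diagonal entries.
    post_chol = tf.Variable(tf.eye(P, batch_shape=[W]))
    post_cov = tf.matmul(post_chol, post_chol, transpose_b=True)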
The general order for tensor dimensions is:
  • Voxel indexing (V=number of voxels / W=number of parameter vertices)
  • Parameter indexing (P=number of parameters)
  • Sample indexing (S=number of samples)
  • Data point indexing (B=batch size, i.e. the number of time points being trained on; in some cases T=total number of time points in the full data)

This ordering is chosen to allow the use of TensorFlow batch matrix operations. However, it is inconvenient for the model, which needs to index its input by parameter. For this reason we transpose when calling the model's evaluate function so that the P dimension comes first.
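For illustration, a hedged sketch of this ordering and the transpose, using hypothetical tensor names and a toy exponential-decay model rather than the package's actual model interface:

    import tensorflow as tf

    W, P, S, B = 1000, 3, 20, 10   # vertices, parameters, samples, batch time points

    # Posterior samples follow the [W, P, S] ordering described above
    samples = tf.random.normal([W, P, S])

    # The model wants to index by parameter, so put the P dimension first
    samples_p_first = tf.transpose(samples, perm=[1, 0, 2])   # shape [P, W, S]

    # Toy model evaluation: offset + amplitude * exp(-rate * t)
    t = tf.reshape(tf.linspace(0.0, 1.0, B), [1, 1, B])       # shape [1, 1, B]
    amp = tf.expand_dims(samples_p_first[0], -1)              # shape [W, S, 1]
    rate = tf.expand_dims(samples_p_first[1], -1)             # shape [W, S, 1]
    offset = tf.expand_dims(samples_p_first[2], -1)           # shape [W, S, 1]
    prediction = offset + amp * tf.exp(-rate * t)             # shape [W, S, B]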

The parameter vertices, W, are the set of points on which parameters are defined and will be output. They may be voxel centres or surface element vertices. The data voxels, V, on the other hand, are the points on which the data being fitted is defined. Typically these will be volumetric voxels, since that is what most imaging experiments produce as raw data.

In many cases, W will be the same as V since we are inferring volumetric parameter maps from volumetric data. However, we might alternatively want to infer surface-based parameter maps while still comparing against the measured volumetric data; in this case V and W will differ. The key point at which this difference is handled is the model evaluation, which takes parameters defined on W and outputs a prediction defined on V.

In the current implementation V and W are identical, but this may change in the future, for example if we want to estimate parameters on a surface (W=number of surface vertices) using data defined on a volume (V=number of voxels).
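A minimal sketch of this step, under the assumption that the vertex-to-voxel mapping can be expressed as a [V, W] projection matrix (the projector here is a random placeholder, not a real surface-to-volume mapping):

    import tensorflow as tf

    V, W, S, B = 5000, 1200, 20, 10   # data voxels, parameter vertices, samples, time points

    # Model prediction defined on the parameter vertices: shape [W, S, B]
    prediction_w = tf.random.normal([W, S, B])

    # Hypothetical [V, W] projection matrix mapping vertex space to voxel space.
    # When V == W this is simply the identity and the projection is a no-op.
    projector = tf.random.uniform([V, W])
    projector = projector / tf.reduce_sum(projector, axis=1, keepdims=True)

    # Prediction in voxel space, ready for comparison with the measured data: [V, S, B]
    prediction_v = tf.einsum("vw,wsb->vsb", projector, prediction_w)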

Ideas for per voxel/vertex convergence (a sketch of the masking idea follows this list):

  • Maintain vertex_mask as a member, initially all ones
  • Mask vertices when generating samples and evaluating the model. The latent cost will then be computed over unmasked vertices only.
  • PROBLEM: the reconstruction cost must be defined over the full voxel set, so the model evaluation needs to be projected onto all voxels. Masked vertices therefore still need to keep their previous model evaluation output.
  • Define criteria for masking vertices after each epoch
  • PROBLEM: spatial interactions make per-voxel convergence difficult. Maybe only do full-set convergence in this case (as in Fabber)
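A minimal sketch of the masking idea above, assuming a simple cost-change convergence criterion (names and criteria are assumptions, not the package's implementation):

    import tensorflow as tf

    W, S, B = 1000, 20, 10   # vertices, samples, batch time points

    # Hypothetical per-vertex convergence mask: 1.0 where a vertex is still training
    vertex_mask = tf.Variable(tf.ones([W]), trainable=False)

    # Cache of the last model evaluation so that masked (converged) vertices still
    # contribute their previous output to the reconstruction cost
    last_prediction = tf.Variable(tf.zeros([W, S, B]), trainable=False)

    def masked_outputs(new_prediction, latent_cost_per_vertex):
        """Blend new and cached predictions; restrict latent cost to active vertices.

        new_prediction has shape [W, S, B]; latent_cost_per_vertex has shape [W].
        """
        mask = tf.reshape(vertex_mask, [W, 1, 1])
        prediction = mask * new_prediction + (1.0 - mask) * last_prediction
        last_prediction.assign(prediction)
        latent_cost = tf.reduce_sum(vertex_mask * latent_cost_per_vertex)
        return prediction, latent_cost

    def update_mask(cost_change, tol=1e-4):
        """After each epoch, mask vertices whose cost change fell below tol.

        The convergence criterion here is an assumption, not the package's rule.
        """
        still_training = tf.cast(tf.abs(cost_change) >= tol, tf.float32)
        vertex_mask.assign(vertex_mask * still_training)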
