Extending VisGUI

The VisGUI pipeline

VisGUI utilises a pipeline to process incoming point position data prior to visualization. The pipeline is responsible for filtering to reject bad fits, as well as for other operations such as re-mapping data, munging column names into our desired format, clumping grouped localizations, and applying various data corrections and calibrations. The pipeline consists of three key parts: a file import adapter, a variable section, and a fixed section. Each section consists of a number of cascaded classes which each implement the tabular data model (see also Tabular filters and PYME.IO.tabular). As a general rule, these classes access the data lazily and do not cache results. They are typically ‘look through’ for any variables which aren’t altered. The pipeline object itself also implements the tabular interface and exposes the output of the fixed section.
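In outline, each pipeline stage behaves like a dict of columns that wraps the stage before it. The sketch below is a simplified, hypothetical illustration of the ‘look through’ pattern (plain Python, not PYME’s actual classes):

```python
# Simplified sketch of a look-through tabular chain. The class names and
# data here are made up for illustration; this is not PYME's implementation.

class TabularSource:
    """Head of the chain: holds columns as name -> list of values."""
    def __init__(self, columns):
        self._columns = columns

    def keys(self):
        return list(self._columns)

    def __getitem__(self, key):
        return self._columns[key]


class ScaleStage:
    """A stage that rescales 'x' lazily on access and is 'look through'
    (delegates to its source) for every other column."""
    def __init__(self, source, factor):
        self.source = source
        self.factor = factor

    def keys(self):
        return self.source.keys()

    def __getitem__(self, key):
        if key == 'x':
            # Computed on access, never cached.
            return [v * self.factor for v in self.source['x']]
        return self.source[key]   # unaltered columns fall through


raw = TabularSource({'x': [1.0, 2.0], 't': [0, 1]})
output = ScaleStage(raw, 70.0)    # e.g. a hypothetical pixels -> nm scaling
print(output['x'])                # [70.0, 140.0]
print(output['t'])                # [0, 1] -- passed through unchanged
```

Because nothing is cached, changing the head of the chain (or a stage’s parameters) is immediately reflected in the output on the next access.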

_images/pipeline_new.png

The VisGUI pipeline showing a hypothetical configuration of the recipe section which might be used for clumping repeated observations of molecules.

The input adapter

This is specific to the file format being loaded, and typically consists of an input filter which loads data into tabular form, and a mapping filter which adapts column names and scaling to fit VisGUI's standard requirements (see VisGUI column names).
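The mapping step can be sketched as follows. The vendor column names and pixel size below are invented for the example; only the output names (‘x’, ‘y’ in nm, ‘t’ in frames) follow the conventions described under VisGUI column names:

```python
# Hypothetical input-adapter sketch: translate format-specific column names
# and units into VisGUI's conventions. Not PYME's actual adapter classes.

class MappingAdapter:
    def __init__(self, source, name_map, scale=None):
        self.source = source          # dict-like tabular input
        self.name_map = name_map      # output name -> source column name
        self.scale = scale or {}      # output name -> multiplicative factor

    def keys(self):
        return list(self.name_map)

    def __getitem__(self, key):
        vals = self.source[self.name_map[key]]
        factor = self.scale.get(key, 1.0)
        return [v * factor for v in vals]


# Raw localizations with made-up vendor-specific names, positions in pixels.
raw = {'X_px': [1.0, 2.0], 'Y_px': [3.0, 4.0], 'frame': [0, 1]}

adapted = MappingAdapter(raw,
                         name_map={'x': 'X_px', 'y': 'Y_px', 't': 'frame'},
                         scale={'x': 100.0, 'y': 100.0})  # 100 nm pixels

print(adapted['x'])   # [100.0, 200.0] -- now in nm
```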

The variable section

This is implemented as a PYME recipe (see also Writing a Recipe Module). It takes the output of the input adapter, and performs any additional manipulation that might be sample-, microscope-, or dataset-specific. This should include tasks such as event clumping and any calibrations or corrections. The recipe's namespace replaces the previous .dataSources attribute of the pipeline.

Warning

The variable section is very new, and as such there will still be bugs. Notably, a lot of functionality which should be in the variable section is currently accomplished by hard-to-follow circular code paths within the fixed section.

The fixed section

This is responsible for filtering events prior to display or rendering, and for selecting which colour channels to display. It can use the output of any recipe block in the variable section as its input, but will usually point to the tail (or last output) of the variable section. What the fixed section uses for input can be set by calling PYME.LMVis.pipeline.Pipeline.selectDataSource(). [1]

Note

Those who are familiar with the old configuration of the pipeline will recognize this as being the bulk of the old pipeline. At present, the fixed section still contains a mappingFilter, which is still available as pipeline.mapping, and much of the functionality still revolves around manipulating it, inserting new data sources [2], and a circular flow [3]. Moving forward, much of this logic should be moved into the variable section, and the flow linearized. The use of the ``.mapping`` attribute of the pipeline is deprecated.

Tabular filters

Tabular filters are classes which take tabular data as an input and themselves expose the tabular interface. They are used extensively for the manipulation of data within the pipeline, and are generally look-through and lazily evaluated on column access. (This reduces the memory footprint and latency of maintaining a large pipeline, at the expense of some computation when results need to be recomputed. Computationally intensive tabular filters will often cache results, but usually on a column-by-column basis, with lazy computation on first access.)

The two archetypal tabular filters are PYME.IO.tabular.resultsFilter and PYME.IO.tabular.mappingFilter, which implement filtering and re-mapping of data respectively. The mapping filter in particular permits new columns to be derived from a functional manipulation of existing columns.
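A toy version of this derived-column idea is shown below. The class and its set_mapping method are illustrative only and do not reproduce PYME.IO.tabular's actual API:

```python
import math

# Toy mapping filter: new columns are defined as functions of existing
# columns and evaluated lazily on access. Illustrative API only; not the
# actual PYME.IO.tabular.mappingFilter interface.

class ToyMappingFilter:
    def __init__(self, source):
        self.source = source
        self.mappings = {}   # column name -> callable(source) -> values

    def set_mapping(self, name, func):
        self.mappings[name] = func

    def __getitem__(self, key):
        if key in self.mappings:
            return self.mappings[key](self.source)  # computed on access
        return self.source[key]                     # look-through


data = {'x': [3.0, 6.0], 'y': [4.0, 8.0]}
mf = ToyMappingFilter(data)

# Derive a radial distance column from the existing x and y columns.
mf.set_mapping('r', lambda s: [math.hypot(xi, yi)
                               for xi, yi in zip(s['x'], s['y'])])

print(mf['r'])   # [5.0, 10.0]
print(mf['x'])   # [3.0, 6.0] -- unchanged, looked through
```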

Writing VisGUI Plugins

Writing VisGUI plugins is very similar to writing plugins for dsviewer. A plugin is a python file which implements a Plug(visfr) method. In contrast to dsviewer plugins, Plug() usually takes a PYME.LMVis.VisGUI.VisGUIFrame instance as its argument [4]. Like dsviewer plugins, the visfr object exposes a .AddMenuItem() method. Unlike dsviewer plugins, visfr will have a .pipeline attribute which is an instance of the current pipeline object.

The other main difference to dsviewer plugins is the location where plugins will be discovered. VisGUI plugins will be automatically found in PYME.LMVis.Extras or PYMEnf.LMVis.Extras [5].

Note

A more flexible method for discovering VisGUI plugins is on the TODO list.

Plugins which use the output of the pipeline

These are plugins which use the output of the pipeline, but don't modify the pipeline itself. Examples are PYME.LMVis.Extras.photophysics, PYME.LMVis.Extras.vibration, and PYME.LMVis.Extras.shiftmapGenerator.

These are relatively trivial to write - just use the output of the pipeline by accessing the visfr.pipeline object as though it were a dictionary. e.g.

visfr.pipeline['x']

See also VisGUI column names for details on what column names can be used.

Occasionally you might also want to use the colour filter to switch between colour channels. PYME.LMVis.Extras.photophysics has a good example of this.

Plugins which modify the pipeline

These are a little harder. The general procedure (alpha) is as follows:

  1. Find or write recipe module(s) which perform the desired task.

  2. For each of the modules:

     1. Create an instance of the recipe module, using pipeline.recipe as the parent, and either the currently selected data source key or the outputName of the previous module as the inputName.
     2. Add the module instance to pipeline.recipe.modules.

  3. Execute the recipe.

  4. Update the selected data source to point to the output of the last module.

An example below:

def OnDBSCANCluster(visfr):
    from PYME.recipes.tablefilters import DBSCANClustering

    # Create the recipe module with the pipeline's recipe as parent, taking
    # the currently selected data source as input.
    clumper = DBSCANClustering(visfr.pipeline.recipe,
                               inputName=visfr.pipeline.selectedDataSourceKey,
                               outputName='dbscanClumps')

    # Let the user set the clustering parameters; if they confirm, run the
    # recipe and point the fixed section at the new output.
    if clumper.configure_traits(kind='modal'):
        visfr.pipeline.recipe.modules.append(clumper)
        visfr.pipeline.recipe.execute()
        visfr.pipeline.selectDataSource('dbscanClumps')

def Plug(visfr):
    visfr.AddMenuItem('Extras', 'Find DBSCAN clusters',
                      lambda e: OnDBSCANCluster(visfr))

Warning

This is exceptionally new and might not currently work as expected. There are several things yet to be done:

  • Make the recipe re-execute when parameters etc. change.
  • Add convenience functions for adding recipe modules to reduce the boilerplate.
  • Refactor existing code to use the new scheme.

VisGUI column names

The core column names that are defined in VisGUI, and which you can rely on in the pipeline output, are as follows:

Name    Units    Description
x       nm       x position of points
y       nm       y position of points
z       nm       z position (focus and offset combined)
t       frames   frame number at which a point was observed

New Rendering Modules

Footnotes

[1] selectDataSource() effectively allows you to ‘walk’ the recipe namespace.
[2] You can still technically inject a new data source using pipeline.addDataSource, but it is now injected into the recipe namespace. New code should avoid this and use the variable section instead.
[3] The classic example of this is/was event clumping. You took the output of the pipeline, used it to determine and extract clumped positions, and then injected these upstream in the data sources and ran them through the pipeline again.
[4] The exception to this is when a VisGUI plugin is loaded from within dsviewer, by way of the visgui plugin. In either case, the argument to Plug(...) is guaranteed to have .pipeline and .AddMenuItem(...) attributes.
[5] PYMEnf is a module used internally within the Baddeley and Soeller groups; it contains code that we cannot distribute due to licensing restrictions, that contains sensitive information, or that is otherwise not ready for public release.