PYME.ParallelTasks.HDFTaskQueue module

class PYME.ParallelTasks.HDFTaskQueue.HDFResultsTaskQueue(name, resultsFilename, initialTasks=[], onEmpty=<function doNix>, fTaskToPop=<function popZero>)

Bases: PYME.ParallelTasks.taskQueue.TaskQueue

Task queue which saves its results to an HDF file

Methods

addQueueEvents(events)
checkTimeouts()
cleanup()
fileResult(res) Called remotely from workers to file / save results
fileResults(ress) File/save the results of fitting multiple frames
flushMetaData()
getCompletedTask()
getNumQueueEvents()
getNumberTasksCompleted()
getQueueData(fieldName, *args) Get data, defined by fieldName and potentially additional arguments, associated with the queue
getQueueMetaData(fieldName)
getQueueMetaDataKeys()
prepResultsFile()
purge()
setQueueMetaData(fieldName, value)
setQueueMetaDataEntries(mdh)

Generate a task queue which saves results to an HDF5 file using pytables

NOTE: This is only ever used as a base class

Args:
name : string
the queue name by which this set of tasks is identified
resultsFilename : string
the name of the output file
initialTasks : list
tasks to populate the queue with initially - not used in practice
onEmpty :
what to do when the list of tasks is empty (nominally for closing output files etc., but unused)
fTaskToPop :
a callback function which decides which task to give a worker. It returns the index of the task to hand out, based on information about the current worker. An initial attempt at load balancing, now not really used.
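The default popZero simply hands out the task at the front of the queue; a load-balancing callback could instead pick a task based on the worker's index. A minimal pure-Python sketch of both policies (the signature and the popStriped variant are illustrative assumptions, not PYME's actual code):

```python
# Illustrative sketches of fTaskToPop-style callbacks. The assumed
# signature (task list, worker number, total workers) mirrors the
# workerN/NWorkers arguments documented for getTask below.

def popZero(tasks, workerN=0, NWorkers=1):
    """Default policy: always hand out the task at the front of the queue."""
    return 0

def popStriped(tasks, workerN=0, NWorkers=1):
    """Hypothetical load-balancing policy: prefer a task whose index
    maps to this worker (index % NWorkers == workerN), falling back
    to the front of the queue."""
    for i in range(len(tasks)):
        if i % NWorkers == workerN:
            return i
    return 0
```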

addQueueEvents(events)
checkTimeouts()
cleanup()
fileResult(res)

Called remotely from workers to file / save results

Adds incoming results to an internal buffer and calls fileResults once enough time (5 s) has elapsed

Args:
res: a fitResults object, as defined in ParallelTasks.remFitBuf

Returns:
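The buffer-then-flush behaviour described above can be sketched in pure Python. This is an illustrative stand-in, not PYME's code; only the 5-second window comes from the docstring:

```python
import time
import threading

class BufferedResultsSink:
    """Sketch of fileResult's pattern: results are queued and written
    in a batch once 5 s have passed since the last flush."""

    FLUSH_INTERVAL = 5.0  # seconds, per the fileResult docstring

    def __init__(self, writeBatch):
        self.writeBatch = writeBatch  # callable taking a list of results
        self.pending = []
        self.lastFlush = time.time()
        self.lock = threading.Lock()

    def fileResult(self, res):
        with self.lock:
            self.pending.append(res)
            if time.time() - self.lastFlush >= self.FLUSH_INTERVAL:
                self._flush()

    def _flush(self):
        if self.pending:
            self.writeBatch(self.pending)
            self.pending = []
        self.lastFlush = time.time()
```

Batching like this amortises the cost of opening and appending to the pytables results table over many fit results.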

fileResults(ress)

File/save the results of fitting multiple frames

Args:
ress: list of fit results

Returns:

flushMetaData()
getCompletedTask()
getNumQueueEvents()
getNumberTasksCompleted()
getQueueData(fieldName, *args)

Get data, defined by fieldName and potentially additional arguments, associated with the queue

getQueueMetaData(fieldName)
getQueueMetaDataKeys()
prepResultsFile()
purge()
setQueueMetaData(fieldName, value)
setQueueMetaDataEntries(mdh)
class PYME.ParallelTasks.HDFTaskQueue.HDFTaskQueue(name, dataFilename=None, resultsFilename=None, onEmpty=<function doNix>, fTaskToPop=<function popZero>, startAt='guestimate', frameSize=(-1, -1), complevel=6, complib='zlib', resultsURI=None)

Bases: PYME.ParallelTasks.HDFTaskQueue.HDFResultsTaskQueue

Task queue which, when initialised with an HDF image filename, automatically generates tasks. Should also (eventually) include support for dynamically adding to the data file for on-the-fly analysis.

Methods

addQueueEvents(events)
checkTimeouts()
cleanup()
fileResult(res) Called remotely from workers to file / save results
fileResults(ress) File/save the results of fitting multiple frames
flushMetaData()
getCompletedTask()
getNumQueueEvents()
getNumberOpenTasks([exact])
getNumberTasksCompleted()
getQueueData(fieldName, *args) Get data, defined by fieldName and potentially additional arguments, associated with the queue
getQueueMetaData(fieldName)
getQueueMetaDataKeys()
getTask([workerN, NWorkers]) get task from front of list, blocks
getTasks([workerN, NWorkers]) get task from front of list, blocks
logQueueEvent(event)
postTask(task)
postTasks(tasks)
prepResultsFile()
purge()
releaseTasks([startingAt])
setQueueMetaData(fieldName, value)
setQueueMetaDataEntries(mdh)
cleanup()
flushMetaData()
getNumberOpenTasks(exact=True)
getQueueData(fieldName, *args)

Get data, defined by fieldName and potentially additional arguments, associated with the queue

getTask(workerN=0, NWorkers=1)

get task from front of list, blocks

getTasks(workerN=0, NWorkers=1)

get task from front of list, blocks
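The blocking behaviour of getTask/getTasks (a worker waits until a task is available) is conventionally implemented with a condition variable. A minimal sketch of that pattern, not PYME's actual implementation:

```python
import threading
from collections import deque

class BlockingTaskList:
    """Sketch of a task list whose pop blocks until a task is posted,
    mirroring the documented 'get task from front of list, blocks'."""

    def __init__(self):
        self._tasks = deque()
        self._cond = threading.Condition()

    def postTask(self, task):
        with self._cond:
            self._tasks.append(task)
            self._cond.notify()

    def getTask(self, timeout=None):
        with self._cond:
            # Wait until a task is available (or the timeout expires).
            if not self._cond.wait_for(lambda: self._tasks, timeout=timeout):
                raise TimeoutError('no task available')
            return self._tasks.popleft()
```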

logQueueEvent(event)
postTask(task)
postTasks(tasks)
prepResultsFile()
releaseTasks(startingAt=0)
setQueueMetaData(fieldName, value)
setQueueMetaDataEntries(mdh)
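Automatic task generation from the data file amounts to turning each frame (or chunk of frames) into a fitting task. A hypothetical sketch of that chunking, purely to illustrate the idea; the function name and chunk size are not PYME's:

```python
def framesToTasks(nFrames, chunkSize=50):
    """Hypothetical sketch: split nFrames frames from the data file
    into (start, stop) index ranges, one fitting task per range."""
    return [(start, min(start + chunkSize, nFrames))
            for start in range(0, nFrames, chunkSize)]
```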
class PYME.ParallelTasks.HDFTaskQueue.dataBuffer(dataSource, bLen=1000)

Methods

getSlice(ind)
getSlice(ind)
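dataBuffer caches recently requested slices of the underlying data source, with bLen bounding the number of cached slices. An illustrative LRU-cache sketch under that assumption, not the class's actual code:

```python
from collections import OrderedDict

class DataBufferSketch:
    """Illustrative stand-in for dataBuffer: caches up to bLen slices
    of a data source, evicting the least recently used slice."""

    def __init__(self, dataSource, bLen=1000):
        self.dataSource = dataSource  # anything supporting getSlice(ind)
        self.bLen = bLen
        self._cache = OrderedDict()

    def getSlice(self, ind):
        if ind in self._cache:
            self._cache.move_to_end(ind)  # mark as recently used
            return self._cache[ind]
        sl = self.dataSource.getSlice(ind)
        self._cache[ind] = sl
        if len(self._cache) > self.bLen:
            self._cache.popitem(last=False)  # evict the oldest entry
        return sl
```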
class PYME.ParallelTasks.HDFTaskQueue.myLock(lock=<thread.lock object>)

Methods

acquire()
release()
acquire()
release()
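myLock wraps a plain thread lock behind acquire()/release(); such wrappers are typically used to add instrumentation around locking. A hypothetical sketch in that spirit (the acquisition counter is illustrative, not PYME's behaviour):

```python
import threading

class MyLockSketch:
    """Hypothetical wrapper in the spirit of myLock: delegates to a
    plain threading.Lock while counting acquisitions for debugging."""

    def __init__(self, lock=None):
        self._lock = lock if lock is not None else threading.Lock()
        self.nAcquired = 0

    def acquire(self):
        self._lock.acquire()
        self.nAcquired += 1

    def release(self):
        self._lock.release()
```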
class PYME.ParallelTasks.HDFTaskQueue.rwlock2

Bases: object
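No methods are documented for rwlock2, but the name suggests a readers-writer lock: any number of concurrent readers, exclusive writers. A conventional sketch of that pattern, not necessarily PYME's implementation:

```python
import threading

class RWLockSketch:
    """Conventional readers-writer lock: many readers may hold the
    lock concurrently, while a writer requires exclusive access."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            self._cond.wait_for(lambda: not self._writer)
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            self._cond.wait_for(
                lambda: not self._writer and self._readers == 0)
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```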