Name | Direction | Type | Default | Description |
---|---|---|---|---|
FromNow | Input | boolean | True | Process live data starting from the current time only. |
FromStartOfRun | Input | boolean | False | Record live data, but go back to the start of the run and process all data since then. |
FromTime | Input | boolean | False | Record live data, but go back to a specific time and process all data since then. You must specify the StartTime property if this is checked. |
UpdateEvery | Input | number | 60 | Frequency of updates, in seconds. If you specify 0, MonitorLiveData will not launch and you will get only one chunk. |
SpectraList | Input | int list | | An optional list of spectra to load. If blank, all available spectra will be loaded. Applies to ISIS histogram data only. |
PeriodList | Input | int list | | An optional list of periods to load. If blank, all available periods will be loaded. Applies to ISIS histogram data only. |
Instrument | Input | string | Mandatory | Name of the instrument to monitor. Allowed values: ['ALF', 'CRISP', 'ENGIN-X', 'ENGIN-X_EVENT', 'GEM', 'HET', 'HRPD', 'IMAT', 'INES', 'INTER', 'IRIS', 'LARMOR', 'LOQ', 'MAPS', 'MARI', 'MERLIN', 'MERLIN_EVENT', 'OSIRIS', 'PEARL', 'POLARIS', 'SANDALS', 'SURF', 'SXD', 'TOSCA', 'VESUVIO', 'LET', 'LET_EVENT', 'NIMROD', 'OFFSPEC', 'OFFSPEC_EVENT', 'POLREF', 'SANS2D', 'SANS2D_EVENT', 'WISH', 'HIFI', 'MUSR', 'EMU', 'ARGUS', 'CHRONUS'] |
StartTime | Input | string | | Absolute start time, if you selected FromTime. Specify the date/time in UTC, in ISO8601 format, e.g. 2010-09-14T04:20:12.95 |
ProcessingAlgorithm | Input | string | | Name of the algorithm that will be run to process each chunk of data. Optional. If blank, no processing will occur. |
ProcessingProperties | Input | string | | The properties to pass to the ProcessingAlgorithm, as a single string. The format is propName=value;propName=value |
ProcessingScript | Input | string | | A Python script that will be run to process each chunk of data. Only for command-line usage; does not appear in the user interface. |
ProcessingScriptFilename | Input | string | | A file containing a Python script that will be run to process each chunk of data. Only for command-line usage; does not appear in the user interface. Allowed values: ['py'] |
AccumulationMethod | Input | string | Add | Method to use for accumulating each chunk of live data. Add: the processed chunk is summed into the previous output (default). Replace: the processed chunk replaces the previous output. Append: the spectra of the chunk are appended to the output workspace, increasing its size. Allowed values: ['Add', 'Replace', 'Append'] |
PreserveEvents | Input | boolean | False | Preserve events after performing the Processing step. This only applies if the ProcessingAlgorithm produces an EventWorkspace. It is strongly recommended to keep this unchecked, because preserving events may cause significant slowdowns when the run becomes large! |
PostProcessingAlgorithm | Input | string | | Name of the algorithm that will be run to process the accumulated data. Optional. If blank, no post-processing will occur. |
PostProcessingProperties | Input | string | | The properties to pass to the PostProcessingAlgorithm, as a single string. The format is propName=value;propName=value |
PostProcessingScript | Input | string | | A Python script that will be run to process the accumulated data. |
PostProcessingScriptFilename | Input | string | | A file containing a Python script that will be run to process the accumulated data. Allowed values: ['py'] |
RunTransitionBehavior | Input | string | Restart | What to do at run start/end boundaries. Restart: the previously accumulated data is discarded. Stop: live data monitoring ends. Rename: the previous workspaces are renamed, and monitoring continues with cleared ones. Allowed values: ['Restart', 'Stop', 'Rename'] |
AccumulationWorkspace | Output | Workspace | | Name of the intermediate accumulation workspace: the workspace after accumulation but before the post-processing step. Optional, unless performing post-processing. |
OutputWorkspace | Output | Workspace | Mandatory | Name of the processed output workspace. |
LastTimeStamp | Output | string | | The time stamp of the last event, frame or pulse recorded. Date/time is in UTC, in ISO8601 format, e.g. 2010-09-14T04:20:12.95 |
MonitorLiveData | Output | IAlgorithm | | A handle to the MonitorLiveData algorithm instance that continues to read live data after this algorithm completes. |
The StartLiveData algorithm launches a background job that monitors and processes live data.
The background algorithm started is MonitorLiveData v1, which simply calls LoadLiveData v1 at a fixed interval.
Note
For details on the way to specify the data processing steps, see LoadLiveData.
Instructions for setting up a “fake” data stream are found here.
Once live data monitoring has started, you can open a plot in MantidPlot. For example, you can right-click a workspace and choose “Plot Spectra”.
As the data is acquired, this plot updates automatically.
Another way to start plots is to use Python commands in MantidPlot. The StartLiveData algorithm returns after the first chunk of data has been loaded and processed, which makes it simple to write a script that opens a live plot. For example:
StartLiveData(UpdateEvery='1.0', Instrument='FakeEventDataListener',
              ProcessingAlgorithm='Rebin', ProcessingProperties='Params=10e3,1000,60e3;PreserveEvents=1',
              OutputWorkspace='live')
plotSpectrum('live', [0,1])
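A post-processing step can be added in the same way. The following is only a rough sketch (the fake listener instrument, binning parameters and workspace names are illustrative, not a recommended set-up); note that AccumulationWorkspace must be given a name whenever a PostProcessingAlgorithm is specified:

StartLiveData(UpdateEvery=1, Instrument='FakeEventDataListener',
              ProcessingAlgorithm='Rebin', ProcessingProperties='Params=10e3,1000,60e3',
              AccumulationMethod='Add',
              PostProcessingAlgorithm='SumSpectra',   # applied to the accumulated data, not each chunk
              AccumulationWorkspace='live_accum',     # required because a post-processing step is used
              OutputWorkspace='live_summed')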
It is possible to have multiple live data sessions running at the same time. Simply call StartLiveData more than once, but make sure to specify unique names for the OutputWorkspace.
Please note that you may be limited in how much simultaneous processing you can do by your available memory and CPUs.
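For instance, two independent sessions might be started as follows (a minimal sketch; the listener instrument, update intervals and workspace names are placeholders):

# Each session must write to its own, uniquely named OutputWorkspace.
StartLiveData(UpdateEvery=1, Instrument='FakeEventDataListener',
              ProcessingAlgorithm='Rebin', ProcessingProperties='Params=10e3,1000,60e3',
              OutputWorkspace='live_binned')
StartLiveData(UpdateEvery=5, Instrument='FakeEventDataListener',
              ProcessingAlgorithm='SumSpectra',
              OutputWorkspace='live_total')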
Example 1:
from threading import Thread
import time

from mantid.simpleapi import *
from mantid.api import AlgorithmManager
from mantid.kernel import ConfigService

def startFakeDAE():
    # This will generate roughly 1000 events every 20 ms, so about 50,000 events/sec.
    # They will be randomly shared across the 100 spectra
    # and have a time of flight between 10,000 and 20,000.
    try:
        FakeISISEventDAE(NPeriods=1, NSpectra=100, Rate=20, NEvents=1000)
    except RuntimeError:
        pass

def captureLive():
    ConfigService.setFacility("TEST_LIVE")
    # Start a live data listener updating every second, that rebins each chunk
    # and adds the result to the accumulated output.
    StartLiveData(Instrument='ISIS_Event', OutputWorkspace='wsOut', UpdateEvery=1,
                  ProcessingAlgorithm='Rebin', ProcessingProperties='Params=10000,1000,20000;PreserveEvents=1',
                  AccumulationMethod='Add', PreserveEvents=True)
    # give it a couple of seconds before stopping it
    time.sleep(2)
    # This will cancel both algorithms;
    # you can do the same in the GUI
    # by clicking on the details button at the bottom right.
    AlgorithmManager.newestInstanceOf("MonitorLiveData").cancel()
    AlgorithmManager.newestInstanceOf("FakeISISEventDAE").cancel()

#--------------------------------------------------------------------------------------------------

oldFacility = ConfigService.getFacility().name()
thread = Thread(target=startFakeDAE)
thread.start()
time.sleep(2)  # give it a small amount of time to get ready
if not thread.is_alive():
    raise RuntimeError("Unable to start FakeDAE")

try:
    captureLive()
except Exception as exc:
    print("Error occurred starting live data")
finally:
    thread.join()  # this must get hit
    # put back the facility
    ConfigService.setFacility(oldFacility)

# get the output workspace
wsOut = mtd["wsOut"]
print("The workspace contains %i events" % wsOut.getNumberEvents())
Output:
The workspace contains ... events
Example 2:
from threading import Thread
import time

from mantid.simpleapi import *
from mantid.api import AlgorithmManager
from mantid.kernel import ConfigService

def startFakeDAE():
    # This will generate 5 periods of histogram data, 10 spectra in each period,
    # 100 bins in each spectrum.
    try:
        FakeISISHistoDAE(NPeriods=5, NSpectra=10, NBins=100)
    except RuntimeError:
        pass

def captureLive():
    ConfigService.setFacility("TEST_LIVE")
    # Start a live data listener updating every second,
    # that replaces the results each time with those of the last second.
    # Load only spectra 2, 4 and 6 from periods 1 and 3.
    StartLiveData(Instrument='ISIS_Histogram', OutputWorkspace='wsOut', UpdateEvery=1,
                  AccumulationMethod='Replace', PeriodList=[1,3], SpectraList=[2,4,6])
    # give it a couple of seconds before stopping it
    time.sleep(2)
    # This will cancel both algorithms;
    # you can do the same in the GUI
    # by clicking on the details button at the bottom right.
    AlgorithmManager.newestInstanceOf("MonitorLiveData").cancel()
    AlgorithmManager.newestInstanceOf("FakeISISHistoDAE").cancel()

#--------------------------------------------------------------------------------------------------

oldFacility = ConfigService.getFacility().name()
thread = Thread(target=startFakeDAE)
thread.start()
time.sleep(2)  # give it a small amount of time to get ready
if not thread.is_alive():
    raise RuntimeError("Unable to start FakeDAE")

try:
    captureLive()
except Exception as exc:
    print("Error occurred starting live data")
finally:
    thread.join()  # this must get hit
    # put back the facility
    ConfigService.setFacility(oldFacility)

# get the output workspace
wsOut = mtd["wsOut"]
print("The workspace contains %i periods" % wsOut.getNumberOfEntries())
print("Each period contains %i spectra" % wsOut.getItem(0).getNumberHistograms())
time.sleep(1)
Output:
The workspace contains ... periods
Each period contains ... spectra
Categories: Algorithms | DataHandling\LiveData