Customizing Program Readout
Readout capture describes the process used to extract data from the QPU control system as it runs a program. Currently, the data available from Quil programs run on QCS includes:

- Raw Analog-to-Digital Converter (ADC) samples captured by a Quil `RAW-CAPTURE` instruction, one for each cycle of the ADC for the duration of the `RAW-CAPTURE`.
- Complex unclassified values created by integrating the raw data over the filter specified in a `CAPTURE` instruction. This yields one value per `CAPTURE` instruction.
See the Quil-T documentation for further information about these instructions. Each of them operates on a Quil frame.
Following capture, readout transformations may be applied to the two data streams above to further process them before they are returned to the user. Today, such transformations include linear classification and reduction, as described below.
In Quil, a `MEASURE` instruction is expanded using a `DEFCAL MEASURE` calibration as part of compilation. Users can inspect the latest calibration set by calling `GetQuiltCalibrations`, which might yield a snippet like the following:
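The exact calibration varies by device; an illustrative expansion for qubit 0, consistent with the filter-node format described below (durations, waveform parameters, and the classification threshold are made-up values, not real device settings), might look like:

```
DEFCAL MEASURE 0 addr:
    FENCE 0
    DECLARE q0_unclassified REAL[2]
    PULSE 0 "ro_tx" flat(duration: 1.68e-06, iq: 1.0)
    CAPTURE 0 "ro_rx" boxcar_kernel(duration: 1.68e-06) q0_unclassified[0]
    PRAGMA FILTER-NODE q0_unclassified "{ 'filter_type': 'DataBuffer', 'source': 'q0_ro_rx/filter', 'publish': true, 'params': {} }"
    PRAGMA FILTER-NODE q0_classified "{ 'filter_type': 'SingleQLinear', 'source': 'q0_ro_rx/filter', 'publish': true, 'params': { 'a': [1.0, 0.0], 'threshold': 0.5 } }"
    FENCE 0
```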
This `CAPTURE` instruction returns data as complex IQ values. The `MEASURE` operation then sends that data to two filter nodes:

- A `DataBuffer`, which collects those IQ values as-is and stores them in the job result readout values under the key `q0_unclassified`.
- A `SingleQLinear`, which projects each complex IQ value into a bit value (`0` or `1`) based on the parameters of the `SingleQLinear` function, then stores those bit values in the job result readout values under the key `q0_classified`.
Each transformation node in the graph is declared by a `PRAGMA FILTER-NODE` instruction with an ID and a configuration.
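For example, a declaration might look like this (the ID and parameter values are illustrative, and the `source` follows the capture-node naming convention described below):

```
PRAGMA FILTER-NODE q0_classified "{ 'filter_type': 'SingleQLinear', 'source': 'q0_ro_rx/filter', 'params': { 'a': [1.0, 0.0], 'threshold': 0.5 }, 'publish': true }"
```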
Note: throughout this document, *filter* and *transformation* are used interchangeably. We favor the latter because it is more specific, but *filter* remains in use in our systems.
The string ID must uniquely identify this transformation node within the readout transformation processor graph. By convention, the ID of a `CAPTURE` node data source is `q{qubit}_ro_rx/filter` and that of a `RAW-CAPTURE` node is `q{qubit}_ro_rx/raw`, where `{qubit}` is the sole qubit in the capture frame identifier. The IDs of transformation nodes declared with `PRAGMA FILTER-NODE` may be any string, provided each is unique within the graph.
Provide the parameters in pseudo-JSON format (which allows single quotes `'...'` instead of double quotes for strings). Each specific transformation node type may also require additional configuration within `params`, described here:
Assume the following Quil program is run with a shot count of 5:
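A minimal program of the sort described here, measuring qubit 0 on each shot, might be:

```
DECLARE ro BIT[1]
RX(pi) 0
MEASURE 0 ro[0]
```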
The flow of data is represented by the following graph, where each node's output can be sent to multiple downstream consumers and any FILTER-NODE pragma can publish results to readout data.
Here, `q0_ro_rx/filter` is the `CAPTURE` output source ID, which cannot be published directly (hence the `DataBuffer` filter node).
In this example, the execution result returned to the client would take the following shape; here is a sample:
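The exact payload format depends on the client library; as a sketch, the readout values for this 5-shot run might look like the following, keyed by the published filter-node IDs (all numeric values here are illustrative, not real device output):

```python
# Hypothetical readout values for the 5-shot example: one entry per shot
# under each published filter-node ID.
readout_values = {
    "q0_classified": [1, 1, 0, 1, 1],  # bits from the SingleQLinear node
    "q0_unclassified": [               # complex IQ values from the DataBuffer node
        0.31 + 0.12j,
        0.29 + 0.15j,
        0.12 - 0.04j,
        0.33 + 0.10j,
        0.30 + 0.11j,
    ],
}

print(len(readout_values["q0_classified"]))    # one bit per shot
print(len(readout_values["q0_unclassified"]))  # one IQ value per shot
```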
| Parameter | Description |
|---|---|
| `filter_type` and `params` | Processor type and parameters, enumerated below |
| `source` | A reference to the string ID of another readout node, which must be present within the graph. This is the node's input data source. |
| `publish` | If `true`, the output from this node will be present in the job result readout values returned to the user, keyed under the node's ID. If `false`, this node exists only as an intermediate step within the pipeline, feeding input to other nodes. |
| `_type` and `module` | Deprecated |
`DataBuffer` publishes its input data directly. This is useful for returning readout capture data without modification.

Params: none
`SingleQLinear` performs linear binary classification.

Params:

- `a: [float, float]`: classification axis
- `threshold: float`: classification threshold
The input is a stream of complex values, and the output is a single bit, `1` or `0`, per value. If the dot product of the complex IQ value and `a` is greater than or equal to `threshold`, the output will be `1`; otherwise `0`.
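The classification rule can be sketched in plain Python (a hypothetical helper for illustration, not the QCS implementation; the axis and threshold values are made up):

```python
def classify(iq: complex, a: list[float], threshold: float) -> int:
    """Project a complex IQ sample onto the axis a = [a_I, a_Q] and
    threshold the dot product, as SingleQLinear does."""
    dot = a[0] * iq.real + a[1] * iq.imag
    return 1 if dot >= threshold else 0

# With axis [1.0, 0.0] the decision depends only on the I component.
print(classify(0.7 + 0.2j, [1.0, 0.0], 0.5))  # -> 1
print(classify(0.3 - 0.1j, [1.0, 0.0], 0.5))  # -> 0
```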
`Reducer` reshapes an input stream and reduces the resulting data along an axis.

Params:

- `function: 'mean' | 'count'`
- `axis: 0 | 1` (default `0`)
  - `0`: reduce each "row"
  - `1`: reduce each "column"
Data is collected into axis-0 "rows" first. For example, assuming `reshape=[3, 2]` and the data stream `[ 0, 1, 2, 3, 4, 5 ]`:

- For `axis=0`, the output looks like: `[ reduce(0, 1, 2), reduce(3, 4, 5) ]`
- For `axis=1`, the output looks like: `[ reduce(0, 3), reduce(1, 4), reduce(2, 5) ]`
- `reshape: [int, int]` (default `[-1, -1]`): similar to `numpy`, this describes how the input data should be interpreted as rows and columns. Each value must be `-1` (unbounded) or greater than `0` (bounded).
  - The first number describes the size of axis 0.
  - The second number does not change how data is collected, but is used as a size hint, when provided, to improve performance; e.g. `[5, -1]`, `[5, 2]`, and `[5, 1000]` all produce the same output.
Using an unbounded value for the axis-0 shape (e.g. `[-1, -1]`) means that all data will be collected into a single row. Assuming the data stream `[ 0, 1, 2, 3, 4 ]`:

- For `axis=0`, the output looks like: `[ reduce(0, 1, 2, 3, 4) ]`
- For `axis=1`, the output looks like: `[ reduce(0), reduce(1), reduce(2), reduce(3), reduce(4) ]`