With Common Vision Blox 13.3, new image and data acquisition interfaces were introduced in 2020. After the original IGrab and IPingPong interfaces from 1997 and the IGrab2 interface from 2003, this is now the 3rd generation software interface for data acquisition in Common Vision Blox. Unlike its predecessors, the 3rd generation interface also supports acquisition of multiple streams from one device, acquisition of structured data streams that contain multiple logical parts (enabling the acquisition of 3D point clouds directly from devices that support this) and acquisition into user-specified destination buffers.
Preconditions
The sole precondition for using the 3rd generation acquisition stack is to work with a so-called GenTL (generic transport layer). If you have already been using GigE Vision or USB3 Vision devices you have already relied - unwittingly - on the services of the underlying GenTLs that were installed with Common Vision Blox. These days, many more devices (e.g. frame grabbers) support this software standard and can be used with all the features of the 3rd generation acquisition stack. Whether your device is among them can be checked with the GenICam Browser: if you have Common Vision Blox and the software package provided by your hardware manufacturer installed, simply open the GenICam Browser, which will search for and display all accessible transport layers on your computer.
The Acquisition Stack so far (2nd generation)
The 2nd generation acquisition stack (based on the IGrab2 interface) allows the acquisition of streams that provide regular image buffers (with the possible option to make use of chunk data) one after the other. The buffers handed back to the caller are always allocated and owned by the driver. It is not possible to open multiple streams, to receive structured buffers consisting of more than one image buffer, or to receive buffers containing other types of data. The use of a caller-allocated buffer is also not possible with the 2nd generation stack.
To summarize the 2nd generation stack's functionality:
Use cases for the new Acquisition Stack (3rd generation)
The GenICam standard has been extended over time to support more capabilities than those implemented in the 2nd generation acquisition stack, in order to keep up with the evolving requirements of camera users and camera vendors. Recently introduced features include:
To prevent conflicts, it is not possible to use both acquisition stacks within the same process. For example, once a device has been opened with the 3rd generation stack, another device cannot be opened with the 2nd generation stack in the same process.
Architecture of the 3rd generation Acquisition Stack
To accommodate these new use cases, the acquisition engine of CVB has been modified. Where before the distinction between device and image was not immediately visible on the C-API level and only modeled into the object oriented wrappers, the new acquisition stack introduces the following set of entities in the CVB hardware abstraction layer already in the C-API:
Device
Stream
A stream provides a time series of the objects that its owning device can provide. When accessing a stream provided by a device, the developer has to be aware of the device capabilities and specify whether the stream delivers images, point clouds or composites. These three cases are further explained in Stream Types.
Depending on the hardware used for streaming, the caller can choose between three different stream options that return the corresponding object type during acquisition:
Image Stream
This stream type matches the vast majority of applications, where a standard camera is used for acquiring mono or color images. The objects acquired can be interpreted as traditional image buffers (i.e. in terms of width, height, color format). When using Image Streams to acquire from devices that supply multi part images, the stream will just provide the first image encountered in the returned composite.
Point Cloud Stream
As the name implies, this stream type delivers the data received from a 3D acquisition device. This stream type allows an indexed access to the different parts of the point cloud data (typically coordinates in x, y and z; potentially also additional data like confidence, color, or normal vectors).
Composite Stream
If none of the above stream types properly matches what the device provides, this generic stream type offers a dynamic payload interpretation based on the other interpretable object types. For example, this object type can combine buffers holding an image with buffers holding a point cloud, at the cost of increased complexity when accessing the structured data.
Cancellation Tokens
The newly introduced cancellation tokens make it possible to stop individual wait calls rather than stopping the entire acquisition. This is useful because restarting the whole acquisition takes significantly longer than reinitiating a wait for the next data.
A typical use for this would be a scenario where an external stop signal is received by the application. The fastest way to stop the acquisition and restart it once the (externally defined) conditions are right would be to abort the wait function with a cancellation token. The wait call will then return with the wait status "abort".
What has changed?
The function to open a device accepts an access token provided by the device discovery as well as the preferred acquisition stack. It is possible to still use the 2nd generation acquisition stack (Vin) or to move to the 3rd generation acquisition stack (GenTL), depending on whether or not the new features like multi stream, multi part or composite streams should be used.
The following acquisition stack settings can be selected:
Vin: Use the Vin acquisition stack or fail.
PreferVin: Prefer the Vin acquisition stack. If the Vin stack cannot be loaded, try opening the non-streamable device interface before failing.
GenTL: Use the GenTL acquisition stack or fail.
PreferGenTL: Prefer the GenTL acquisition stack. If the GenTL stack cannot be loaded, first try opening the Vin stack, then try opening the non-streamable device interface before failing.

Important Note
The discover method returns a list of device properties on which the functions for discovery properties can be used for retrieving device information such as MAC address or device model name. The list also includes a unique access token that is used for instantiating the corresponding device. The ignore vins flag may be passed as discovery flag to exclude CVB *.vin drivers other than the GenICam.vin.
Code Examples
2nd generation stack
(1) Discover all devices, ignoring all *.vin drivers except CVB's GenICam.vin.
(2) The Vin flag is responsible for opening the 2nd generation stack. To open non-streamable devices as well, use PreferVin instead.
3rd generation stack
(1) The GenTL flag is responsible for opening the new stack. To open non-streamable devices as well, use PreferGenTL instead.
What has changed?
In the 2nd generation acquisition stack the stream from the device was assumed to always yield images. However, depending on the hardware, a stream can also contain point clouds or any other type of data. Therefore we provide the generic stream type composite. A composite can be an image, a multi part image or a point cloud.
From the device, a data stream should be instantiated with the expected stream type or with the generic stream type composite. The different stream types are described in the chapter Stream Types. The stream type composite allows for a dynamic payload interpretation as it is composed of the other interpretable object types. For example, this object type can combine buffers holding both an image and a point cloud in the same object.
Starting the stream and waiting for the result has not changed much. If this offers advantages (like synchronously starting all streams), the stream start may now be split into starting the engine and the device separately, but in most cases this won't be needed.
The next difference is in the wait function that returns the streamed objects. The result consists of three components: the actual composite, a status code indicating the success of the wait, and optionally a node map enumerator, which can hold node maps delivering information about the received object parts. It is recommended to check the status first. If the status is not "Ok" one must assume that something went wrong and the composite does not actually hold a valid handle to newly acquired data.
Please note that with the 3rd generation acquisition stack the DeviceImage supported by the 2nd generation stack will no longer be usable and will always be null.
Code Examples
2nd generation stack
(1) In the 2nd generation stack the stream is always an image stream. Each device can have only one stream.
(2) The acquisition stream is started.
(3) The wait function waits until the globally set timeout for the next acquired image and (unless a timeout occurred) returns it. The function returns a result structure that combines the wait status and the acquired image. The wait status informs about the validity of the acquired image and should be checked prior to working with the image.
(4) The abort operation immediately stops the ongoing streaming. Restarting the stream will take a significant amount of time, typically in the 100 to 200 ms range.
With the 3rd generation acquisition stack it is also possible to access a set of buffer node maps that provide extended diagnostic information:
3rd generation stack
(1) A data stream is instantiated with index zero (default; when working with devices with more than one stream, the stream index could be passed as parameter). The stream type is defined by the template parameter of the stream query function. The returned object is a shared pointer to the stream object.
(2) The stream acquisition is simplified to the combined start, which advises the driver to start the acquisition engine and the stream control mechanism automatically. The user does not have to call these streaming components separately unless needed. How to call them separately is described in Multiple Streams. By default, infinite acquisition is started.
After starting the stream, the stream engine and control are running in the background until the stream is stopped, sending a continuous flow of images or composites.
(3) Each composite needs to be proactively waited for by means of a call to wait on the stream. It is possible to pass a timeout to this function, which defines the maximum time to wait for the next piece of data. The returned triple consists of three components: the actual composite, a status code and a node map enumerator, which are described in the introduction of this chapter.
(4) The abort operation immediately stops the ongoing stream.
The following map helps printing the return code of the reported wait status as a readable string in case of errors:
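A possible C++ sketch of such a map (the Cvb::WaitStatus enumerators shown are assumptions; check the headers of your CVB version):

    #include <map>
    #include <string>

    // Map wait status values that indicate an error to readable strings.
    const std::map<Cvb::WaitStatus, std::string> WAIT_ERROR_STATES =
    {
      { Cvb::WaitStatus::Timeout, "timeout" },
      { Cvb::WaitStatus::Abort,   "abort"   }
    };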
If the wait status returns Ok, a new composite has been acquired and can be processed:
(1) As our simple application assumes a single composite element, we extract this first element. In the simplest of cases, the newly acquired composite will simply point to an image object. For composites which hold multiple elements, check out Multi Part Image.
(2) This assumption can (and should) be verified by the holds_alternative function.
(3) The next step is to check the correct interpretation of this first element as an image.
(4) The received image now provides convenient access to the image’s properties, such as, in this case, the buffer memory address.
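Putting steps (1) to (4) together, a hedged C++ sketch; the element accessor ItemAt() and the variant helpers Cvb::holds_alternative / Cvb::get are assumptions based on the CVB++ naming, as is the structured-bindings-compatible triple returned by WaitFor():

    // Hedged sketch: interpret the first composite element as an image.
    auto [composite, waitStatus, nodeMaps] = stream->WaitFor(std::chrono::seconds(10));
    if (waitStatus == Cvb::WaitStatus::Ok)
    {
      auto element = composite->ItemAt(0);                  // (1) extract the first element
      if (Cvb::holds_alternative<Cvb::ImagePtr>(element))   // (2) verify the assumption
      {
        auto image = Cvb::get<Cvb::ImagePtr>(element);      // (3) interpret it as an image
        auto access = image->Plane(0).LinearAccess();       // (4) e.g. the buffer memory address
      }
    }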
(1) The composite's purpose can be determined by the purpose function.
Summary
What has changed?
With the 3rd generation stack it is possible to use buffers (organized in a structure named "flow set pool") that are either managed by the driver or allocated and passed to the driver by the user/caller. The option to use user-allocated memory is useful if, for example, the image data buffers need to satisfy conditions such as address alignment (if extensions like SSE or AVX are going to be used on the data) or placement in a particular block of memory (if the image is to be used by a GPU).
If the objective is simply to change the size of the memory pool, but it is irrelevant where on the heap the flow sets are created, then the function to register a managed flow set pool provides an easy alternative to the fully user-allocated pool.
Code Examples
2nd generation stack
(1) The presence or absence of ringbuffer capability can (and should) be verified by checking the return value of the ringbuffer access function.
(2) Changes the number of buffers in the stream's ring buffer. Calling the change count function in mode update device image will discard all buffers with which the device was created and free the memory before reallocating the new buffers.
(3) Activates the lock mode of the ring buffer. The buffer in this case is unlocked automatically when running out of scope.
(4) Like in the simple Single Stream case, the Wait() function returns with the wait result that combines the stream image with the wait status.
(5) Calling the Unlock() method for freeing the ring buffer element is necessary in RingBufferLockMode::On so that the buffer may be re-used for newly acquired images.
3rd generation stack
(1) With the function to register a managed flow set pool, it is possible to create an internal flow set pool with the specified size. A previously registered flow set pool will be detached from the acquisition engine after the new flow set pool was created.
Large Buffer Number Change
3rd generation stack
(1) Calling the DeregisterFlowSetPool() function between flow set pool registrations helps keep the memory consumption of the software low (otherwise memory usage would spike for a short moment to the sum of the currently used pool and the new pool). Note that a running stream must be stopped prior to registering or deregistering a flow set pool.
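A hedged C++ sketch of exchanging a managed pool; RegisterManagedFlowSetPool() is assumed to be the registration function referred to above:

    stream->Stop();                           // a running stream must be stopped first
    stream->DeregisterFlowSetPool();          // release the currently registered pool
    stream->RegisterManagedFlowSetPool(200);  // register a new, internally managed pool
    stream->Start();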
User-Allocated Memory (External Flow Set Pool)
Buffer Layout
When passing user-allocated buffers to a stream to be used as the destination memory for image acquisition, these buffers are organized in flow set pools, one of which is to be set per stream by means of the registration function for external flow set pools.
Each flow set pool is effectively a list of flow sets. The required minimum number of flow sets per flow set pool can be queried with the function for min required flow set count. This minimum pool size must be observed when constructing a flow set pool. A maximum pool size is not explicitly defined and is normally up to the user and the amount of available suitable memory in a given system.
The flow sets in turn are lists of flows. Flows can simply be thought of as buffers. However, it is to a certain extent up to the camera how these buffers will be used, and therefore the simple equation 1 flow = 1 image buffer is not necessarily true. The size of these flows is device-specific information that needs to be queried with the size function on the flow set info.
This example shows a helper class for flow sets, derived from FlowSetPool. At the core of the class is a vector of flow sets, which simply holds the buffers that have been allocated for the flow set pool.
(1) The first step is to query the required layout of the flow sets from the current stream.
(2) Then create the custom flow set pool.
(3) The flow buffers are allocated with the size and alignment information of the flow set info and stored into the flow set pool. The buffers will later be released in the destructor of the user flow set pool ((4) in previous snippet).
(4) Then the flow set pool is passed to the stream. Note that this transfers ownership of the flow set pool from the local scope to the stream on which the pool is registered. Once the stream ceases to exist, it no longer needs the registered flow set pool and the destructor is called. The stream also takes care of flow sets still in use by the user. Thus the user flow set pool will be deleted only when the last multi part image, point cloud or composite is deleted.
(5) Deregister the user flow set pool to free the buffers.
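Condensed into a hedged C++ sketch of steps (1) to (5); UserFlowSetPool and AllocateAlignedFlows() are hypothetical helpers standing in for the class described above, and FlowSetInfo(), MinRequiredFlowSetCount() and RegisterExternalFlowSetPool() are assumptions modelled on the description:

    auto flowSetInfo = stream->FlowSetInfo();                   // (1) query the required layout
    auto pool = UserFlowSetPool::Create(flowSetInfo);           // (2) create the custom pool
    for (int i = 0; i < stream->MinRequiredFlowSetCount(); ++i)
      pool->AddFlowSet(AllocateAlignedFlows(flowSetInfo));      // (3) allocate aligned flow buffers
    stream->RegisterExternalFlowSetPool(std::move(pool));       // (4) ownership moves to the stream
    // ... acquire data ...
    stream->DeregisterFlowSetPool();                            // (5) free the buffers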
Summary
Using user-allocated memory for data acquisition is possible, but properly generating the pool of memory that a stream can draw upon is somewhat complex, as it will be necessary to...
To that end, the creation of an object hierarchy that takes care of these intricacies is recommended.
If the objective is simply to change the size of the memory pool, but it is irrelevant where on the heap the flows are created, then the function to register a managed flow set pool provides an easy alternative to the fully user-allocated pool.
What has changed?
The procedure for configuring a camera has not changed with the introduction of the new acquisition stack. However, as the memory buffer structure has been changed to flow set pools, (re)allocating the buffers works differently. This has to be considered when changing the camera configuration after streaming, i.e. after the buffers have been allocated.
Code Examples
Before Streaming
If camera settings that affect the buffer layout are changed before streaming, the buffers are automatically allocated correctly. The procedure is the same for the 2nd and 3rd generation stacks.
2nd generation stack
(1) Get the required node map, on which the parameter needs to be changed.
(2) Get the node to be read or written.
(3) Set the new value. In this example the pixel format is changed and the buffers will be allocated with the correct size automatically.
3rd generation stack
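The same three steps apply to the 3rd generation stack. A hedged C++ sketch, in which the node accessor Node<Cvb::EnumerationNode> and SetValue taking a string literal are assumptions to be verified against the API reference:

    auto nodeMap = device->NodeMap(CVB_LIT("Device"));                               // (1) get the node map
    auto pixelFormat = nodeMap->Node<Cvb::EnumerationNode>(CVB_LIT("PixelFormat"));  // (2) get the node
    pixelFormat->SetValue(CVB_LIT("Mono10"));                                        // (3) set the new value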
After Streaming
If camera settings that affect the buffers are changed after streaming, the buffers have to be reallocated with the new layout. As the memory structure has changed, updating the buffers to the new size is done differently in the 3rd generation stack.
2nd generation stack
(1) The camera setting "PixelFormat" is changed, which means that the buffers will need to be resized.
(2) This update function call resizes the buffers. The result of this operation is that the currently active device image is moved to the new device image internally. As a side effect, the current image buffer is lost and replaced by an empty buffer.
3rd generation stack
(1) The camera setting "PixelFormat" is changed, which means, that the buffers will need to be resized.
(2) In the 3rd generation stack the buffers are organized in a flow set pool. To reallocate the buffers, the existing flow set pool has to be removed from the acquisition engine. Find more information on flow set pools in Ringbuffer vs Flow Set Pool.
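A hedged C++ sketch of that sequence; RegisterManagedFlowSetPool() and the enumeration node access are assumptions as above:

    stream->Abort();                                      // stop the running stream
    nodeMap->Node<Cvb::EnumerationNode>(CVB_LIT("PixelFormat"))
           ->SetValue(CVB_LIT("Mono10"));                 // (1) change the setting
    stream->DeregisterFlowSetPool();                      // (2) remove the old pool from the engine
    stream->RegisterManagedFlowSetPool(10);               // allocate buffers with the new layout
    stream->Start();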
This example is pretty much the same as in Single Stream, only the stream type is a point cloud stream in this case.
Code Example
3rd generation stack
(1) From the device a data stream is instantiated. In our assumed scenario, only one data stream is involved, so by default, the stream at index zero is returned. The stream type is defined by the template specialization of the stream query function. With the 3rd generation GenTL acquisition stack it is possible to get different stream types, which are described in Stream Types.
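A hedged C++ sketch, assuming a Cvb::PointCloudStream specialization and a structured-bindings-compatible triple returned by WaitFor():

    auto stream = device->Stream<Cvb::PointCloudStream>();   // (1) stream at index zero
    stream->Start();
    auto [cloud, status, nodeMaps] = stream->WaitFor(std::chrono::seconds(10));
    if (status == Cvb::WaitStatus::Ok)
    {
      // work with the acquired point cloud parts (x, y, z, confidence, ...)
    }
    stream->Abort();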
Multi part basically means that the stream returns a composite instead of an image, which may itself consist of one or more images, planes of point clouds, etc. In other words: a composite is a way to iterate over and interpret the delivered data, and it will in fact be necessary to do so in order to make sense of the acquired data.
The use of a composite element's data is represented by the composite purpose, which can be:
Custom: an unspecified custom composite - consult the source of the image for an interpretation.
Image: one image, potentially with extra data.
ImageList: multiple images grouped together.
MultiAoi: multiple images that represent multiple AOIs in one frame.
RangeMap: a 2.5D image, potentially with extra data.
PointCloud: 3D data, potentially with extra data.
ImageCube: a (hyperspectral) image cube.

The different part types are:
Code Examples
There are multiple ways to check for multi part images. One possibility is to check the incoming composite's elements for image data:
3rd generation stack
(1) Extract the number of elements.
(2) Get each element from this composite.
(3) Verify the element's type using the holds_alternative function.
An alternative approach would be to build a MultiPartImage object from the composite:
3rd generation stack
(1) Converting the composite to a MultiPartImage makes the image data accessible. Please note that the device needs to deliver at least one composite element that may be interpreted as a CVB image for this approach to be viable.
(2) Retrieve the number of parts within this multi part image.
(3) Get the element from the multi part image.
(4) The type of this element can be verified by the holds_alternative function.
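A hedged C++ sketch of this second approach; MultiPartImage::FromComposite(), NumParts() and GetPart() are assumptions modelled on the description above:

    auto multiPartImage = Cvb::MultiPartImage::FromComposite(composite);  // (1) convert the composite
    for (int i = 0; i < multiPartImage->NumParts(); ++i)                  // (2) number of parts
    {
      auto part = multiPartImage->GetPart(i);                             // (3) get each part
      if (Cvb::holds_alternative<Cvb::ImagePtr>(part))                    // (4) verify the type
      {
        auto image = Cvb::get<Cvb::ImagePtr>(part);
        // process the image part
      }
    }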
Summary
As has been shown, acquiring multi part data from a camera is not actually any more complex than acquiring a composite - the actual effort with multi part data goes into parsing the content of the received composite. Usually this will be done with the properties of the hardware in mind, i.e. like in the example above, the code usually contains assumptions about the content of the composite (that should, of course, be verified) and therefore tends to be somewhat device-specific.
This example expands the use case of the single data stream example to multi stream devices, i.e. devices that potentially deliver more than one data stream, possibly with different data types per stream and asynchronously delivered data. An example of such a device is a multispectral camera providing multiple streams transferring different spectral information.
Code Examples
3rd generation stack
(1) Where previously only one data stream was accessed, now all available streams on the device are accessed. Therefore, index based stream fetching is used. The number of available streams is queried with the stream count function on the device; the queried streams are collected as image stream type. This enables parallel streaming over all streams.
(2) The approach to starting the acquisition and acquiring the data basically remains the same in the multi stream case – the only difference being that it is now necessary to start the acquisition in a loop for all streams.
(3) The multi stream wait also requires separate processing of the streams. For simplicity, in this code snippet the Wait() function is called sequentially for all streams - a more reasonable real-life implementation would of course be to work with multiple threads to facilitate true asynchronous processing of the Wait() calls.
(4) Again, the only difference to the single stream example is that the stop/abort function needs to be called on each of the streams. Following the reverse order compared to starting the stream, now the stream control is stopped before the acquisition engines for each stream.
When working in a multi stream scenario, it might make sense to split the stream start into its two separate steps (engine start and device start). This is because the sum of both steps is significantly longer than the DeviceStart step alone - which can lead to notable latencies between the streams: when working with Stream::Start() the first stream might already have acquired some three to five images before the next one is up and running. Coding it as follows will drastically reduce this effect:
3rd generation stack
(1) The acquisition engine on all streams is started.
(2) The device acquisition is started in all streams.
(3) The device acquisition has to be either stopped or aborted. Following the reverse order compared to starting the stream: now the stream control needs to be stopped before the acquisition engine for each stream.
(4) Finally the acquisition engine is stopped / aborted on all streams.
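A hedged C++ sketch of steps (1) to (4); StreamCount(), EngineStart()/DeviceStart() and their abort counterparts are assumptions based on the description above:

    std::vector<Cvb::ImageStreamPtr> streams;
    for (int i = 0; i < device->StreamCount(); ++i)
      streams.push_back(device->Stream<Cvb::ImageStream>(i));

    for (auto& stream : streams) stream->EngineStart();   // (1) start all acquisition engines first
    for (auto& stream : streams) stream->DeviceStart();   // (2) then start the device acquisition
    // ... wait for data on each stream, ideally in dedicated threads ...
    for (auto& stream : streams) stream->DeviceAbort();   // (3) stop in reverse order
    for (auto& stream : streams) stream->EngineAbort();   // (4)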
Summary
When using multi stream devices, the sequence of actions necessary on a single stream device simply needs to be extended from 1 stream to N streams. This means that start, stop and wait need to be called in a loop over all the required streams. When working with asynchronous streams, consider putting the Wait() calls into dedicated threads to make sure that the streams don't stall each other.
The cancellation token makes it possible to cancel individual wait calls instead of the whole acquisition. Restarting the whole acquisition takes much longer than simply calling the wait function again. A typical use case: if an external stop signal is received by the application, the fastest way to stop the acquisition and restart it later is to abort the wait function with a cancellation token. The wait call returns with the wait status "Abort" in this case. The token itself can be checked to see whether it has been canceled.
Code Example
3rd generation stack
(1) Create a cancellation token source object, which provides tokens and the signal to cancel.
(2) Get a cancellation token.
(3) Pass the cancellation token to the wait function.
(4) Cancel the wait function by using the cancellation token.
(5) Returns whether or not the cancellation token has been canceled.
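A hedged C++ sketch of steps (1) to (5); Cvb::CancellationTokenSource, its Token()/Cancel()/IsCanceled() members and the WaitFor() overload taking a token are assumptions modelled on the description above:

    Cvb::CancellationTokenSource cancellationTokenSource;   // (1) provides tokens and the cancel signal
    auto token = cancellationTokenSource.Token();            // (2) get a cancellation token

    // acquisition side:
    auto [composite, status, nodeMaps] =
      stream->WaitFor(std::chrono::seconds(10), token);      // (3) pass the token to the wait
    // status == Cvb::WaitStatus::Abort indicates the wait (not the acquisition) was canceled

    // control side, e.g. on an external stop signal:
    cancellationTokenSource.Cancel();                         // (4) cancel the pending wait
    auto canceled = token->IsCanceled();                      // (5) query the token state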
What has changed?
The function to open a device accepts an access token provided by the device discovery as well as the preferred acquisition stack. It is possible to still use the 2nd generation acquisition stack (Vin) or to move to the 3rd generation acquisition stack (GenTL), depending on whether or not the new features like multi stream, multi part or composite streams should be used.
The following acquisition stack settings can be selected:
Vin: Use the Vin acquisition stack or fail.
PreferVin: Prefer the Vin acquisition stack. If the Vin stack cannot be loaded, try opening the non-streamable device interface before failing.
GenTL: Use the GenTL acquisition stack or fail.
PreferGenTL: Prefer the GenTL acquisition stack. If the GenTL stack cannot be loaded, first try opening the Vin stack, then try opening the non-streamable device interface before failing.

Important Note
The discover method returns a list of device properties on which the functions for discovery properties can be used for retrieving device information such as MAC address or device model name. The list also includes a unique access token that is used for instantiating the corresponding device. The ignore vins flag may be passed as discovery flag to exclude CVB *.vin drivers other than the GenICam.vin.
Code Examples
2nd generation stack
(1) Discover all devices, ignoring all *.vin drivers except CVB's GenICam.vin.
(2) The Vin flag is responsible for opening the 2nd generation stack. To open non-streamable devices as well, use PreferVin instead.
3rd generation stack
(1) The GenTL flag is responsible for opening the new stack. To open non-streamable devices as well, use PreferGenTL instead.
What has changed?
In the 2nd generation acquisition stack the stream from the device was assumed to always yield images. However, depending on the hardware, a stream can also contain point clouds or any other type of data. Therefore we provide the generic stream type composite. A composite can be an image, a multi part image or a point cloud.
From the device, a data stream should be instantiated with the expected stream type or with the generic stream type composite. The different stream types are described in the chapter Stream Types. The stream type composite allows for a dynamic payload interpretation as it is composed of the other interpretable object types. For example, this object type can combine buffers holding both an image and a point cloud in the same object.
Starting the stream and waiting for the result has not changed much. If this offers advantages (like synchronously starting all streams), the stream start may now be split into starting the engine and the device separately, but in most cases this won't be needed.
The next difference is in the wait function that returns the streamed objects. The result consists of three components: the actual composite, a status code indicating the success of the wait, and optionally a node map enumerator, which can hold node maps delivering information about the received object parts. It is recommended to check the status first. If the status is not "Ok" one must assume that something went wrong and the composite does not actually hold a valid handle to newly acquired data.
Please note that with the 3rd generation acquisition stack the DeviceImage supported by the 2nd generation stack will no longer be usable and will always be null.
Code Examples
2nd generation stack
(1) In the 2nd generation stack the stream is always an image stream. Each device can have only one stream.
(2) The acquisition stream is started.
(3) The wait function waits until the globally set timeout for the next acquired image and (unless a timeout occurred) returns it. The function returns a result structure that combines the wait status and the acquired image. The wait status informs about the validity of the acquired image and should be checked prior to working with the image.
(4) The abort operation immediately stops the ongoing streaming. Restarting the stream will take a significant amount of time, typically in the 100 to 200 ms range.
With the 3rd generation acquisition stack it is also possible to access a set of buffer node maps that provide extended diagnostic information:
3rd generation stack
(1) A data stream is instantiated with index zero (default; when working with devices with more than one stream, the stream index could be passed as parameter). The stream type is defined by the template parameter of the stream query function. The returned object is a shared pointer to the stream object.
(2) The stream acquisition is simplified to the combined start, which advises the driver to start the acquisition engine and the stream control mechanism automatically. The user does not have to call these streaming components separately unless needed. How to call them separately is described in Multiple Streams. By default, infinite acquisition is started.
After starting the stream, the stream engine and control are running in the background until the stream is stopped, sending a continuous flow of images or composites.
(3) Each composite needs to be proactively waited for by means of a call to wait on the stream. It is possible to pass a timeout to this function, which defines the maximum time to wait for the next piece of data. The returned triple consists of three components: the actual composite, a status code and a node map enumerator, which are described in the introduction of this chapter.
(4) The abort operation immediately stops the ongoing stream. Restarting the stream will take a significant amount of time, typically in the 100 to 200 ms range.
(1) The composite's purpose can be determined by the purpose function.
Summary
What has changed?
With the 3rd generation stack the ring buffer is replaced by the managed flow set pool.
Code Examples
2nd generation stack
(1) Changes the number of buffers in the stream's ring buffer. Calling the ChangeCount function in mode update device image will discard all buffers with which the device was created and free the memory before reallocating the new buffers. Note that the RingBuffer property may be null if the device does not support ring buffers - check before accessing it if you are uncertain.
(2) Activates the lock mode of the ring buffer. The buffer is unlocked automatically when running out of scope and the image is not stored.
(3) Wait returns with the wait result, also containing the stream image, when the ring buffer interface is available on the device.
(4) This unlocks the ringbuffer if the lock mode is on, so the buffer is returned into the acquisition queue.
3rd generation stack
(1) With the function to register a managed flow set pool, you can create an internally managed flow set pool with the given size. Any previously registered flow set pool will be removed from the acquisition engine after the new flow set pool was created.
Large Buffer Number Change
3rd generation stack
(1) Calling the DeregisterFlowSetPool() function between flow set pool registrations helps keep the memory consumption of the software low (otherwise memory usage would spike for a short moment to the sum of the currently used pool and the new pool). Note that a running stream must be stopped prior to registering or deregistering a flow set pool.
User-Allocated Memory (External Flow Set Pool)
The external flow set pool is only available in C++: Ringbuffer vs Flow Set Pool
What has changed?
The procedure for configuring a camera has not changed with the introduction of the new acquisition stack. However, as the memory buffer structure has been changed to flow set pools, (re)allocating the buffers works differently. This has to be considered when changing the camera configuration after streaming, i.e. after the buffers have been allocated.
Code Examples
Before Streaming
If camera settings that affect the buffer layout are changed before streaming, the buffers are automatically allocated correctly. The procedure is the same for the 2nd and 3rd generation stacks.
3rd generation stack
(1) Get the required node map, on which the parameter needs to be changed.
(2) Get the node to be read or written.
(3) Set the new value. In this example the pixel format is changed and the buffers will be allocated with the correct size automatically.
After Streaming
If camera settings that affect the buffers are changed after streaming, the buffers have to be reallocated with the new layout. As the memory structure has changed, updating the buffers to the new size is done differently in the 3rd generation stack.
3rd generation stack
(1) The camera setting "PixelFormat" is changed, which means, that the buffers will need to be resized.
(2) In the 3rd generation stack the buffers are organized in a flow set pool. To reallocate the buffers, the existing flow set pool has to be removed from the acquisition engine. Find more information on flow set pools in Ringbuffer vs Flow Set Pool.
This example is pretty much the same as in Single Stream, only the stream type is a point cloud stream in this case.
Code Example
3rd generation stack
(1) From the device a data stream is instantiated. In our assumed scenario, only one data stream is involved, so by default, the stream at index zero is returned. The stream type is defined by the template specialization of the stream query function. With the 3rd generation GenTL acquisition stack it is possible to get different stream types, which are described in Stream Types.
Multi part basically means that the stream returns a composite instead of an image, which may itself consist of one or more images, planes of point clouds, etc. In other words: a composite is a way to iterate over and interpret the delivered data, and it will in fact be necessary to do so in order to make sense of the acquired data.
The use of a composite element's data is represented by the composite purpose, which can be:
Custom: an unspecified custom composite - consult the source of the image for an interpretation.
Image: one image, potentially with extra data.
ImageList: multiple images grouped together.
MultiAoi: multiple images that represent multiple AOIs in one frame.
RangeMap: a 2.5D image, potentially with extra data.
PointCloud: 3D data, potentially with extra data.
ImageCube: a (hyperspectral) image cube.

The different part types are:
Code Examples
3rd generation stack
(1) With the wait function we await a multi part image.
(2) Get the parts from the multi part image.
(3) The type of each element can be verified and then handled accordingly.
Summary
As has been shown, acquiring multi part data from a camera is not actually any more complex than acquiring a composite - the actual effort with multi part data goes into parsing the content of the received composite. Usually this will be done with the properties of the hardware in mind, i.e. like in the example above, the code usually contains assumptions about the content of the composite (that should, of course, be verified) and therefore tends to be somewhat device-specific.
This example expands the use case of the single data stream example to multi stream devices, i.e. devices that potentially deliver more than one data stream, possibly with different data types per stream and asynchronously delivered data. An example of such a device is a multispectral camera providing multiple streams transferring different spectral information.
Code Examples
When working in a multi stream scenario, it often makes sense to split the stream start into its two separate steps (engine start and device start). This is because the sum of both steps is significantly longer than the DeviceStart step alone - which can lead to notable latencies between the streams: when working with Stream::Start() the first stream might already have acquired some three to five images before the next one is up and running. Coding it as follows will drastically reduce this effect:
3rd generation stack
(1) Where previously only one data stream was accessed, now all available streams on the device are accessed. Therefore, index based stream fetching is used. The number of available streams is queried with the stream count function on the device; the queried streams are collected as image stream type. This enables parallel streaming over all streams.
(2) The acquisition engine on all streams is started.
(3) The device acquisition is started in all streams.
(4) The device acquisition has to be either stopped or aborted. Following the reverse order compared to starting the stream, now the stream control is stopped before the acquisition engines for each stream.
(5) Finally the acquisition engine is stopped / aborted on all streams.
Summary
When using multi stream devices, the sequence of actions necessary on a single stream device simply needs to be extended from 1 stream to N streams. This means that start, stop and wait need to be called in a loop over all the required streams. When working with asynchronous streams, consider putting the Wait() calls into dedicated threads to make sure that the streams don't stall each other.
The cancellation token makes it possible to cancel individual wait calls instead of the whole acquisition. Restarting the whole acquisition takes much longer than simply calling the wait function again. A typical use case: if an external stop signal is received by the application, the fastest way to stop the acquisition and restart it later is to abort the wait function with a cancellation token. The wait call returns with the wait status "Abort" in this case. The token itself can be checked to see whether it has been canceled.
Code Example
3rd generation stack
(1) Create a cancellation token source object, which provides tokens and the signal to cancel.
(2) Get a cancellation token.
(3) Pass the cancellation token to the wait function.
(4) Cancel the wait function by using the cancellation token.
What has changed?
The function to open a device accepts an access token provided by the device discovery as well as the preferred acquisition stack. It is possible to still use the 2nd generation acquisition stack (Vin) or to move to the 3rd generation acquisition stack (GenTL), depending on whether or not the new features like multi stream, multi part or composite streams should be used.
The following acquisition stack settings can be selected:
Vin: Use the Vin acquisition stack or fail.
PreferVin: Prefer the Vin acquisition stack. If the Vin stack cannot be loaded, try opening the non-streamable device interface before failing.
GenTL: Use the GenTL acquisition stack or fail.
PreferGenTL: Prefer the GenTL acquisition stack. If the GenTL stack cannot be loaded, first try opening the Vin stack, then try opening the non-streamable device interface before failing.

Important Note
The discover method returns a list of device properties on which the functions for discovery properties can be used for retrieving device information such as MAC address or device model name. The list also includes a unique access token that is used for instantiating the corresponding device. The ignore vins flag may be passed as discovery flag to exclude CVB *.vin drivers other than the GenICam.vin.
Code Examples
2nd generation stack
(1) Discover all devices, ignoring all *.vin drivers except CVB's GenICam.vin.
(2) The Vin flag is responsible for opening the 2nd generation stack. To open non-streamable devices as well, use PreferVin instead.
3rd generation stack
(1) The GenTL flag is responsible for opening the new stack. To open non-streamable devices as well, use PreferGenTL instead.
What has changed?
In the 2nd generation acquisition stack the stream from the device was assumed to always yield images. However, depending on the hardware, a stream can also contain point clouds or any other type of data. Therefore we provide the generic stream type composite. A composite can be an image, a multi part image or a point cloud.
From the device, a data stream should be instantiated with the expected stream type or with the generic stream type composite. The different stream types are described in the chapter Stream Types. The stream type composite allows for a dynamic payload interpretation as it is composed of the other interpretable object types. For example, this object type can combine buffers holding both an image and a point cloud in the same object.
Starting the stream and waiting for the result has not changed much. If this offers advantages (like synchronously starting all streams), the stream start may now be split into starting the engine and the device separately, but in most cases this won't be needed.
The next difference is in the wait function that returns the streamed objects. The result consists of three components: the actual composite, a status code indicating the success of the wait, and optionally a node map enumerator, which can hold node maps delivering information about the received object parts. It is recommended to check the status first. If the status is not "Ok" one must assume that something went wrong and the composite does not actually hold a valid handle to newly acquired data.
Please note that with the 3rd generation acquisition stack the DeviceImage supported by the 2nd generation stack will no longer be usable and will always be null.
Code Examples
2nd generation stack
(1) In the 2nd generation stack the stream is always an image stream. Each device can have only one stream.
(2) The acquisition stream is started.
(3) The wait function waits until the globally set timeout for the next acquired image and (unless a timeout occurred) returns it. The function returns a result structure that combines the wait status and the acquired image. The wait status informs about the validity of the acquired image and should be checked prior to working with the image.
(4) The abort operation immediately stops the ongoing streaming.
3rd generation stack
(1) A data stream is instantiated with index zero (default; when working with devices with more than one stream, the stream index could be passed as parameter). The stream type is defined by the template parameter of the stream query function. The returned object is a shared pointer to the stream object.
(2) The stream acquisition is simplified to the combined start, which advises the driver to start the acquisition engine and the stream control mechanism automatically. The user does not have to call these streaming components separately unless needed. How to call them separately is described in Multiple Streams. By default, infinite acquisition is started.
After starting the stream, the stream engine and control are running in the background until the stream is stopped, sending a continuous flow of images or composites.
(3) Each composite needs to be proactively waited for by means of a call to wait on the stream. It is possible to pass a timeout to this function, which defines the maximum time to wait for the next piece of data. The returned triple consists of three components: the actual composite, a status code and a node map enumerator, which are described in the introduction of this chapter.
(4) The abort operation immediately stops the ongoing stream.
Summary
What has changed?
In the 3rd generation stack the ring buffer is replaced by the managed flow set pool.
Code Examples
2nd generation stack
(1) Changes the number of buffers in this ring buffer. Calling the change count function in mode update device image will discard all buffers with which the device was created and free the memory. Note that if the device does not support ring buffers the value of ring_buffer is None - when in doubt it's a good idea to check before accessing it.
(2) Activates the lock mode of the ring buffer. The buffer is unlocked automatically when running out of scope and the image is not stored.
(3) wait_for returns with the wait result, also containing the stream image, when the ring buffer interface is available on the device.
(4) This unlocks the ringbuffer if the lock mode is on, so the buffer is returned into the acquisition queue.
3rd generation stack
(1) With the function to register a managed flow set pool, it is possible to create an internal flow set pool with the specified size. A previously registered flow set pool will be detached from the acquisition engine after the new flow set pool was created.
User-Allocated Memory (External Flow Set Pool)
The external flow set pool is only available in C++: Ringbuffer vs Flow Set Pool
Large Buffer Number Change
3rd generation stack
(1) Calling the deregister flow set pool function between consecutive registrations reduces the memory consumption. The stream must be stopped beforehand.
What has changed?
The procedure for configuring a camera has not changed with the introduction of the new acquisition stack. However, as the memory buffer structure has been changed to flow set pools, (re)allocating the buffers works differently. This has to be considered when changing the camera configuration after streaming, i.e. after the buffers have been allocated.
Code Examples
Before Streaming
If camera settings that affect the buffer layout are changed before streaming, the buffers are automatically allocated correctly. The procedure is the same for the 2nd and 3rd generation stacks.
2nd generation stack
(1) Get the required node map, on which the parameter needs to be changed.
(2) Get the node to be read or written.
(3) Set the new value. In this example the pixel format is changed and the buffers will be allocated with the correct size automatically.
3rd generation stack
After Streaming
If camera settings that affect the buffers are changed after streaming, the buffers have to be reallocated with the new layout. As the memory structure has changed, updating the buffers to the new size is done differently in the 3rd generation stack.
2nd generation stack
(1) The camera setting "PixelFormat" is changed, which means, that the buffers will need to be resized.
(2) This update function call resizes the buffers. The result of this operation is that the currently active device image is moved to the new device image internally. As a side effect, the current image buffer is lost and replaced by an empty buffer.
3rd generation stack
(1) The camera setting "PixelFormat" is changed, which means that the buffers will need to be resized.
(2) In the 3rd generation stack the buffers are organized in a flow set pool. To reallocate the buffers, the existing flow set pool has to be removed from the acquisition engine. Find more information on flow set pools in Ringbuffer vs Flow Set Pool.
This example is pretty much the same as in Single Stream, only the stream type is a point cloud stream in this case.
Code Example
3rd generation stack
(1) From the device a data stream is instantiated. In our assumed scenario, only one data stream is involved, so by default, the stream at index zero is returned. The stream type is defined by the template specialization of the stream query function. With the 3rd generation GenTL acquisition stack it is possible to get different stream types, which are described in Stream Types.
Multi part basically means that the stream returns a composite instead of an image, which may itself consist of one or more images, planes of point clouds, etc. In other words: a composite is a way to iterate over and interpret the delivered data, and it will in fact be necessary to do so in order to make sense of the acquired data.
The use of a composite element's data is represented by the composite purpose, which can be:
Custom: an unspecified custom composite - consult the source of the image for an interpretation.
Image: one image, potentially with extra data.
ImageList: multiple images grouped together.
MultiAoi: multiple images that represent multiple AOIs in one frame.
RangeMap: a 2.5D image, potentially with extra data.
PointCloud: 3D data, potentially with extra data.
ImageCube: a (hyperspectral) image cube.

The different part types are:
Code Examples
3rd generation stack
(1) Get the parts of the image.
(2) The type of this element can be verified by the isinstance function.
Summary
As has been shown, acquiring multi part data from a camera is not actually any more complex than acquiring a composite - the actual effort with multi part data goes into parsing the content of the received composite. Usually this will be done with the properties of the hardware in mind, i.e. like in the example above, the code usually contains assumptions about the content of the composite (that should, of course, be verified) and therefore tends to be somewhat device-specific.
This example expands the use case of the single data stream example to multi stream devices, i.e. devices that potentially deliver more than one data stream, possibly with different data types per stream and asynchronously delivered data. An example of such a device is a multispectral camera providing multiple streams transferring different spectral information.
Code Examples
3rd generation stack
(1) Where previously only one data stream was accessed, now all available streams on the device are accessed. Therefore, index based stream fetching is used. The number of available streams is queried with the stream count function on the device; the queried streams are collected as image stream type. This enables parallel streaming over all streams.
(2) The acquisition engine on all streams is started.
(3) The device acquisition is started in all streams.
(4) The device acquisition has to be either stopped or aborted. Following the reverse order compared to starting the stream, now the stream control is stopped before the acquisition engines for each stream.
(5) Finally the acquisition engine is stopped / aborted on all streams.
Summary
When using multi stream devices, the sequence of actions necessary on a single stream device simply needs to be extended from 1 stream to N streams. This means that start, stop and wait need to be called in a loop over all the required streams. When working with asynchronous streams, consider putting the wait() calls into dedicated threads to make sure that the streams don't stall each other.
The cancellation token makes it possible to cancel individual wait calls instead of the whole acquisition. Restarting the whole acquisition takes much longer than simply calling the wait function again. A typical use case: if an external stop signal is received by the application, the fastest way to stop the acquisition and restart it later is to abort the wait function with a cancellation token. The wait call returns with the wait status abort in this case. The token itself can be checked to see whether it has been canceled.
Code Example
3rd generation stack
(1) Create a cancellation token source object, which provides tokens and the signal to cancel.
(2) Get a cancellation token.
(3) Pass the cancellation token to the wait function.
(4) Cancel the wait function by using the cancellation token.
(5) Check whether the cancellation token has been canceled.