CVB++ 15.0
3rd Generation Acquisition Interfaces

Objective

The tutorials described here aim to give Common Vision Blox (CVB) users an introduction to the acquisition functions and features introduced in Common Vision Blox 13.4.

This document is intended to be read from top to bottom, as the later examples build upon what has been described previously.

The New Acquisition Stack of CVB

To stay in line with recent developments in the GenICam standard, the acquisition engine of CVB has been modified to accommodate some of the recently introduced use cases:

  • Acquisition from devices that provide more than just one data stream
  • Acquisition from devices that provide a structured buffer with multiple distinct parts
  • Acquisition from devices that do not send a “traditional” image buffer but data like 3D point clouds or HSI cubes
  • Acquisition into buffers that have been allocated by the user (as opposed to buffers that have been allocated by the driver itself)

For that, the following set of new objects has been implemented in the CVB hardware abstraction layer:

Device

  • The Device acts as the representative of the camera/sensor in the CVB driver stack.
  • On this device, one or several Streams can be accessed that control the data transfer from the remote device.

Stream

  • A stream is tied to its use case through the type of objects it can handle. When accessing a stream provided by the device, the developer has to define whether the stream delivers Images, PointClouds or Composites. These three use cases are further described in Stream Types.
  • Each stream owns a FlowSetPool which defines a list of buffers into which the data coming from the stream will be acquired. These FlowSetPools can be provided by the user or allocated by the driver.
  • For each device and each stream, an AcquisitionEngine can be instantiated that handles the acquisition for its stream, specifically the passing of the received buffers to the user.

Flows, Flow Sets, and Flow Set Pools

The term Flow originally comes from the GenICam GenTL standard, which defines it as follows:

The flows are independent channels within the data stream, responsible to transfer individual components of the acquisition data.

For a better distinction, let's temporarily and explicitly call it a GenTL Flow. On the other hand, CVB redefines the term as follows:

A Flow is a data buffer that is supposed to be attached to a GenTL Flow to store a single information unit such as image data, or other auxiliary information, as part of a single Composite.

In the following diagram, the bucket represents a buffer and the yellow ball represents a piece of component data of an acquired Composite:

CVB defines two more relevant terms. The terms are a Flow Set and a Flow Set Pool. A Flow Set is defined as follows:

A Flow Set is a set of CVB Flows.

In the following diagram, the set of Flows in the red rounded rectangle represents a single Flow Set that consists of M Flows, where M is a positive integer; note that the differences in object shape represent differences in acquired component data types:

Note that the number of Flows, M, is not necessarily equal to the number of data components in a single Composite. The remaining term, Flow Set Pool, is defined as follows:

A Flow Set Pool is a set of Flow Sets.

In the following diagram, the set of Flow Sets in the blue rounded rectangle represents a single Flow Set Pool that consists of N Flow Sets, where N is also a positive integer:

The number of Flow Sets, N, needs to be chosen depending on the relationship between the mean processing time on the consumer side and the acquisition time in the given image acquisition model. If the processing time exceeds the acquisition time, subsequently acquired images will keep being discarded until a Flow Set is queued again. The idea of a Flow Set Pool is to resolve this situation by adjusting the number of Flow Sets so that the system can guarantee the image acquisition throughput of a continuous image acquisition task.
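
As a back-of-the-envelope illustration (plain arithmetic, not CVB API; the timing figures are made up for this example), the required number of Flow Sets can be estimated from the frame interval and the worst-case processing time:

#include <cmath>

// Hypothetical timing figures for illustration only: a camera delivering a
// frame every 10 ms and a consumer that needs up to 35 ms per frame.
constexpr double acquisitionIntervalMs = 10.0;
constexpr double worstCaseProcessingMs = 35.0;

// While one flow set is being processed, ceil(35 / 10) = 4 further frames can
// arrive; one extra set keeps a free buffer available to the driver at all times.
const int flowSetCountN =
  static_cast<int>(std::ceil(worstCaseProcessingMs / acquisitionIntervalMs)) + 1;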

Stream Types

Depending on the hardware used for streaming, the developer can decide between three different options:

ImageStream

This stream type matches the vast majority of applications, where a standard camera is used for mono or color image acquisition. The objects acquired can be interpreted as traditional image buffers. Furthermore, when a MultiPartImage is received through this stream type, this object is composed of one or multiple images.

PointCloudStream

As the name suggests, this stream type delivers the components associated with a 3D acquisition device. It allows indexed access to the different parts of the data, which is typically organized in planes. The most basic example of a point cloud holds three coordinate planes x, y and z. Depending on the manufacturer, additional planes can be streamed and are easily accessed as appended planes.

CompositeStream

If none of the above stream types fits the application's needs, this generic stream type offers a dynamic payload interpretation, composed of the other interpretable object types. For example, this object type can combine buffers holding an Image as well as a PointCloud at the same time. Developers should be aware of the increased complexity when using this stream type.
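
The stream type is selected via the template parameter of the device's Stream function, as shown in full in the tutorials further down; a minimal sketch, assuming an already opened device:

// pick the interpretation matching the device's payload
auto imageStream = device->Stream<Cvb::ImageStream>();           // traditional (multi part) images
auto pointCloudStream = device->Stream<Cvb::PointCloudStream>(); // 3D point cloud components
auto compositeStream = device->Stream<Cvb::CompositeStream>();   // generic composites

Of course, a real application would only request the one stream type that matches its use case.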

Received Object Types

The objects returned from an acquisition or extracted from a composite can be represented by the following classes:

  1. Cvb::MultiPartImage - This container represents a buffer partitionable into one or multiple images.
  2. Cvb::PointCloud - If a buffer represents 3D point cloud data, this container holds the multi-plane access to the manufacturer dependent streamed representation.
  3. Cvb::Composite - a heterogeneous object container for specifically composed data.

In order to share ownership of the data with the driver, each object is represented by a corresponding pointer type named with the suffix "Ptr" after the class name:

  1. MultiPartImagePtr - std::shared_ptr to the Cvb::MultiPartImage object
  2. PointCloudPtr - std::shared_ptr to the Cvb::PointCloud object
  3. CompositePtr - std::shared_ptr to the Cvb::Composite object

Data Acquisition with a Single Data Stream

(Tutorial: CppSingleStreamComposite)

Use Case to Be Covered

This tutorial covers data acquisition from a camera with a single data stream. The buffer allocation will be managed by the stream itself. This use case is a straightforward and basic scenario; other, more complex scenarios are covered by the other tutorials described here.

In this tutorial, mainly the classes Cvb::DeviceFactory, Cvb::GenICamDevice, Cvb::CompositeStream and Cvb::Composite are invoked for acquisition (see the source code below).

Furthermore, the following map helps printing the failure-indicating code of the reported WaitStatus as a readable string:

static std::map<Cvb::WaitStatus, std::string> WAIT_ERROR_STATES
{
  { Cvb::WaitStatus::Timeout, "timeout" }, // a timeout occurred, no image buffer has been returned
  { Cvb::WaitStatus::Abort, "abort" }      // the acquisition has been stopped asynchronously, there is no image buffer
};

Required Steps

The data acquisition described here goes through the following steps:

  1. Enumerate available devices.
  2. Instantiate the target device.
  3. Instantiate a data stream.
  4. Start data acquisition.
  5. Acquire data.
  6. Stop data acquisition.

Source Code

The following code block shows the code of the entire tutorial. The code streams from a single stream camera, executes 10 subsequent waits and stops acquisition. Each logical block will be explained in detail in the sections further down.

#include <iostream>
#include <string>
#include <vector>
#include <map>
#include <tuple>
#include <chrono>
#include <cvb/device_factory.hpp>
#include <cvb/global.hpp>
#include <cvb/driver/composite_stream.hpp>
#include <cvb/genapi/node_map_enumerator.hpp>
static const constexpr auto TIMEOUT = std::chrono::milliseconds(3000);
static const constexpr int NUM_ELEMENTS_TO_ACQUIRE = 10; // number of elements to be acquired
// maps the non-Ok wait states to readable strings
static std::map<Cvb::WaitStatus, std::string> WAIT_ERROR_STATES
{
  { Cvb::WaitStatus::Timeout, "timeout" },
  { Cvb::WaitStatus::Abort, "abort" }
};
int main()
{
  try
  {
    // discover transport layers
    auto infoList = Cvb::DeviceFactory::Discover(Cvb::DiscoverFlags::IgnoreVins);
    // can't continue the demo if there's no available device
    if (infoList.empty())
      throw std::runtime_error("There is no available device for this demonstration.");
    // instantiate the first device in the discovered list
    auto device = Cvb::DeviceFactory::Open<Cvb::GenICamDevice>(infoList[0].AccessToken(), Cvb::AcquisitionStack::GenTL);
    // access the first data stream that belongs to the device:
    auto dataStream = device->Stream<Cvb::CompositeStream>();
    // start the data acquisition
    dataStream->Start();
    // acquire data
    for (auto i = 0; i < NUM_ELEMENTS_TO_ACQUIRE; i++)
    {
      Cvb::CompositePtr composite;
      Cvb::WaitStatus waitStatus;
      Cvb::NodeMapEnumerator enumerator; // node maps with information about the received parts
      std::tie(composite, waitStatus, enumerator) = dataStream->WaitFor(TIMEOUT);
      switch (waitStatus)
      {
        default:
          std::cout << "wait status unknown.\n";
          // fall through into the generic error handling
        case Cvb::WaitStatus::Timeout:
        case Cvb::WaitStatus::Abort:
        {
          std::cout << "wait status not ok: " << WAIT_ERROR_STATES[waitStatus] << "\n";
          continue;
        }
        case Cvb::WaitStatus::Ok:
        {
          break;
        }
      }
      // assume the composite's first element is an image
      auto firstElement = composite->ItemAt(0);
      if (!Cvb::holds_alternative<Cvb::ImagePtr>(firstElement))
      {
        std::cout << "composite does not contain an image at the first element\n";
        continue;
      }
      auto image = Cvb::get<Cvb::ImagePtr>(firstElement);
      auto linearAccess = image->Plane(0).LinearAccess();
      std::cout << "acquired image: " << i << " at memory location: " << linearAccess.BasePtr() << "\n";
    }
    // stop the data acquisition, ignore errors
    dataStream->Stop();
  }
  catch (const std::exception& e)
  {
    std::cout << e.what() << std::endl;
  }
  return 0;
}

Steps explained

Enumerating Available Devices

To get a list of devices the function Cvb::DeviceFactory::Discover needs to be called as follows:

auto infoList = Cvb::DeviceFactory::Discover(Cvb::DiscoverFlags::IgnoreVins);
if (infoList.empty())
throw std::runtime_error("There is no available device for this demonstration.");

The Discover method returns a list of discovery information entries from which device properties (see Cvb::Driver::DiscoveryProperties) such as the MAC address or the device model name can be retrieved. Each entry also includes a unique access token that is used for instantiating the corresponding device. In the code above, Cvb::DiscoverFlags::IgnoreVins is passed as the discovery flag to exclude all CVB *.vin drivers other than the GenICam.vin.
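
As a sketch, the discovery entries can be inspected before opening a device. The ReadProperty accessor and the DeviceModel key used here are assumptions and should be checked against the Cvb::Driver::DiscoveryProperties enum of your CVB version:

for (const auto& info : infoList)
{
  // read the model name from the discovery entry (property key assumed)
  std::cout << info.ReadProperty(Cvb::Driver::DiscoveryProperties::DeviceModel) << "\n";
}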

Instantiating a Device

The next step is to instantiate the device through its access token by using the DeviceFactory::Open function as follows:

auto device = Cvb::DeviceFactory::Open<Cvb::GenICamDevice>(infoList[0].AccessToken(), Cvb::AcquisitionStack::GenTL);

After the function has been called successfully, device holds a shared pointer to the newly opened device.

Instantiating a typed Data Stream

From the device, a data stream can (and should) now be instantiated. In our assumed scenario, only one data stream is involved, so by default, the stream at index zero is queried. The stream type is defined by the template specialization (Cvb::CompositeStream) of the Stream query function:

auto dataStream = device->Stream<Cvb::CompositeStream>();

The returned object is a shared pointer to the stream object.

Starting Data Acquisition

In this simple example, stream acquisition is reduced to the combined Start, which advises the driver to start the acquisition engine and the stream control mechanism automatically; the user does not have to start these streaming components separately. By default, infinite acquisition is triggered.

dataStream->Start();

Acquiring Data

After starting the stream, the stream engine and control are running in the background until the stream is stopped, sending a continuous flow of images or composites.

Each composite will need to be proactively waited for by means of a call to WaitFor(TIMEOUT) on the stream. Passing a timeout to this function defines the maximum time to wait for the next piece of data.

Cvb::CompositePtr composite;
Cvb::WaitStatus waitStatus;
Cvb::NodeMapEnumerator enumerator;
std::tie(composite, waitStatus, enumerator) = dataStream->WaitFor(TIMEOUT);
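
With C++17, the same call can be written more compactly using structured bindings instead of std::tie:

// C++17 structured bindings; equivalent to the std::tie variant above
auto [composite, waitStatus, enumerator] = dataStream->WaitFor(TIMEOUT);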

The returned tuple consists of three components: the actual composite, a status code indicating the success of the wait, and a node map enumerator, which can hold node maps delivering information about the received object parts.
Once WaitFor returns, the first thing to do is to have a look at the returned status value. If the status is not Ok, one must assume that something went wrong and the composite does not actually hold a valid handle to newly acquired data:

switch (waitStatus)
{
  default:
    std::cout << "wait status unknown.\n";
    // fall through into the generic error handling
  case Cvb::WaitStatus::Timeout:
  case Cvb::WaitStatus::Abort:
  {
    std::cout << "wait status not ok: " << WAIT_ERROR_STATES[waitStatus] << "\n";
    continue;
  }
  ...
}

The WAIT_ERROR_STATES map defined above translates the error states into readable error reasons.

If the wait status is Ok, a new composite has been acquired and we can turn our attention to the received data. As our simple application assumes a single composite element, we extract this first element by:

auto firstElement = composite->ItemAt(0);

In the simplest of cases, the newly acquired composite will simply point to an image object. This can be verified by the holds_alternative function:

if (!Cvb::holds_alternative<Cvb::ImagePtr>(firstElement))
{
std::cout << "composite does not contain an image at the first element\n";
continue;
}

In order to interpret this first element as an image, a call to Cvb::get follows this check:

auto image = Cvb::get<Cvb::ImagePtr>(firstElement);

The received shared_ptr to a Cvb::Image now provides convenient access to the image's properties, such as, in this case, the buffer memory address:

auto linearAccess = image->Plane(0).LinearAccess();
std::cout << "acquired image: " << i << " at memory location: " << linearAccess.BasePtr() << "\n";

Stopping Data Acquisition

Once no more data from the device is needed, it will be necessary to stop data acquisition. There are generally two approaches to this: "stop" and "abort". "Stop" waits until the ongoing data acquisition process has been completed, while "abort" cancels any ongoing operations immediately.
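
If immediate termination is preferred, the stream offers Abort as the asynchronous counterpart; a minimal sketch:

// cancel any ongoing acquisition immediately instead of draining it
dataStream->Abort();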

In this tutorial, we stop data acquisition by advising the driver to perform a combined Stop of the streaming components via the convenience function:

dataStream->Stop();

Summary

  • A device can be identified through a Cvb::Driver::DiscoveryInformation object.
  • Start data acquisition - managed by the driver - with the Start function. This safely starts the acquisition engine and the data stream module, in that order.
  • The abort operation immediately terminates the ongoing data acquisition process while the stop operation waits until the ongoing data acquisition process is completed.

Further Reading

  • Basic Data Acquisition with Multiple Data Streams

(see further down)

This tutorial explains how to acquire data from devices that support multiple data streams.

  • Data Acquisition with User Allocated Memory

(see further down)

This tutorial explains how user-definable memory (as opposed to memory allocated by the driver) can be used as the delivery destination for acquired data.

  • Basic Data Acquisition with Multi Part Streams

(see further down)

This tutorial explains how to acquire structured data from devices that send multiple logical image parts in one go.

Data Acquisition with Multiple Data Streams

Use Case to be Covered

This tutorial expands the use case of the single data stream example to multi stream devices, i.e. devices that potentially deliver more than just one data stream, with potentially different data types per stream. Examples of such devices are a multispectral camera providing multiple streams transferring different spectral information (e.g. the JAI FS series) or a device streaming a depth image and an RGB color image at the same time (e.g. the Intel RealSense series).

Steps for acquiring Multi Stream Data

The sequence of logical steps is only a slight modification of the previously described single stream tutorial:

  1. Enumerate available devices.
  2. Instantiate the target device.
  3. Instantiate all available data streams.
  4. Start data acquisition.
  5. Acquire data.
  6. Process data.
  7. Stop data acquisition.

Helper function

This tutorial interprets the received data as a MultiPartImage. The following helper prints which type of buffer each of the parts embodies; the possible part types are:

  • ImagePtr - a standard image
  • PlanePtr - a single plane of data, holding e.g. confidence data for point clouds
  • PlaneEnumeratorPtr - a collection of grouped planes, e.g. R, G and B planes
  • BufferPtr - a generic raw buffer
  • PFNCBufferPtr - a raw buffer following the Pixel Format Naming Convention (PFNC), for e.g. packed data that is not representable as an Image
void PrintOutAllPartTypes(const Cvb::MultiPartImage& multiPartImage)
{
  const auto numParts = multiPartImage.NumParts();
  for (auto partIndex = 0; partIndex < numParts; ++partIndex)
  {
    const auto part = multiPartImage.GetPartAt(partIndex);
    std::cout << "part ";
    if (Cvb::holds_alternative<Cvb::ImagePtr>(part))
      std::cout << partIndex + 1 << " of " << numParts << " is an image\n";
    if (Cvb::holds_alternative<Cvb::PlanePtr>(part))
      std::cout << partIndex + 1 << " is a plane\n";
    if (Cvb::holds_alternative<Cvb::PlaneEnumeratorPtr>(part))
      std::cout << partIndex + 1 << " is a plane enumerator\n";
    if (Cvb::holds_alternative<Cvb::BufferPtr>(part))
      std::cout << partIndex + 1 << " is a buffer\n";
    if (Cvb::holds_alternative<Cvb::PFNCBufferPtr>(part))
      std::cout << partIndex + 1 << " is a pfnc buffer\n";
  }
}

Source Code

The source code for the multi stream case differs only slightly from the single stream case.

#include <iostream>
#include <chrono>
#include <algorithm>
#include <vector>
#include <map>
#include <tuple>
#include <cvb/device_factory.hpp>
#include <cvb/image.hpp>
#include <cvb/global.hpp>
#include <cvb/driver/image_stream.hpp>
#include <cvb/genapi/node_map_enumerator.hpp>
// constants
static const constexpr auto TIMEOUT = std::chrono::milliseconds(3000);
static const constexpr int NUM_ELEMENTS_TO_ACQUIRE = 10;
void PrintOutAllPartTypes(const Cvb::MultiPartImage& multiPartImage);
// maps the non-Ok wait states to readable strings
static std::map<Cvb::WaitStatus, std::string> WAIT_ERROR_STATES
{
  { Cvb::WaitStatus::Timeout, "timeout" },
  { Cvb::WaitStatus::Abort, "abort" }
};
int main()
{
  try
  {
    // discover transport layers
    auto infoList = Cvb::DeviceFactory::Discover(Cvb::DiscoverFlags::IgnoreVins);
    // can't continue the demo if there's no available device:
    if (infoList.empty())
      throw std::runtime_error("There is no available device for this demonstration.");
    // instantiate the first device in the discovered list
    auto device = Cvb::DeviceFactory::Open<Cvb::GenICamDevice>(infoList[0].AccessToken(), Cvb::AcquisitionStack::GenTL);
    // get all streams
    std::vector<std::shared_ptr<Cvb::ImageStream>> streams;
    std::generate_n(std::back_inserter(streams), device->StreamCount(), [&device, i = 0]() mutable
    {
      return device->Stream<Cvb::ImageStream>(i++);
    });
    // start the data acquisition for all streams
    for (const auto& stream : streams)
    {
      stream->Start();
    }
    // acquire data
    // note: getting the data is sequential here for simplicity;
    // concurrent wait on different streams is the more likely use case
    for (auto index = 0; index < NUM_ELEMENTS_TO_ACQUIRE; ++index)
    {
      for (auto streamIndex = 0u; streamIndex < streams.size(); ++streamIndex)
      {
        const auto stream = streams[streamIndex];
        Cvb::MultiPartImagePtr multiPartImage;
        Cvb::WaitStatus waitStatus;
        Cvb::NodeMapEnumerator enumerator; // node maps with information about the received parts
        std::tie(multiPartImage, waitStatus, enumerator) = stream->WaitFor(TIMEOUT);
        switch (waitStatus)
        {
          default:
            std::cout << "unknown wait status\n";
            // fall through into the generic error handling
          case Cvb::WaitStatus::Timeout:
          case Cvb::WaitStatus::Abort:
          {
            std::cout << "wait status not ok: " << WAIT_ERROR_STATES[waitStatus] << "\n";
            continue;
          }
          case Cvb::WaitStatus::Ok: break;
        }
        PrintOutAllPartTypes(*multiPartImage);
        auto element1 = multiPartImage->GetPartAt(0);
        if (!Cvb::holds_alternative<Cvb::ImagePtr>(element1))
        {
          std::cout << "multi part image does not contain an image as first element\n";
          continue;
        }
        auto image = Cvb::get<Cvb::ImagePtr>(element1);
        auto linearAccess = image->Plane(0).LinearAccess();
        std::cout << "Acquired image: #" << index + 1 << ", at address " << linearAccess.BasePtr() << " from stream " << streamIndex << "\n";
      }
    }
    // synchronous stop
    for (const auto& stream : streams)
    {
      stream->Stop();
    }
  }
  catch (const std::exception& error)
  {
    std::cout << error.what() << std::endl;
  }
  return 0;
}
void PrintOutAllPartTypes(const Cvb::MultiPartImage& multiPartImage)
{
  const auto numParts = multiPartImage.NumParts();
  for (auto partIndex = 0; partIndex < numParts; ++partIndex)
  {
    const auto part = multiPartImage.GetPartAt(partIndex);
    std::cout << "part ";
    if (Cvb::holds_alternative<Cvb::ImagePtr>(part))
      std::cout << partIndex + 1 << " of " << numParts << " is an image\n";
    if (Cvb::holds_alternative<Cvb::PlanePtr>(part))
      std::cout << partIndex + 1 << " is a plane\n";
    if (Cvb::holds_alternative<Cvb::PlaneEnumeratorPtr>(part))
      std::cout << partIndex + 1 << " is a plane enumerator\n";
    if (Cvb::holds_alternative<Cvb::BufferPtr>(part))
      std::cout << partIndex + 1 << " is a buffer\n";
    if (Cvb::holds_alternative<Cvb::PFNCBufferPtr>(part))
      std::cout << partIndex + 1 << " is a pfnc buffer\n";
  }
}

Instantiating the Data Stream Modules

Where previously only one data stream was accessed, now all available streams on the device are accessed. Therefore, index-based stream fetching is used. The number of available streams is queried with StreamCount() on the device, and the queried streams are collected into a std::vector of ImageStream pointers. This enables parallel streaming over all streams.

std::vector<std::shared_ptr<Cvb::ImageStream>> streams;
std::generate_n(std::back_inserter(streams), device->StreamCount(), [&device, i = 0]() mutable
{
  return device->Stream<Cvb::ImageStream>(i++);
});

Starting Data Acquisition

The approach to starting the acquisition and acquiring the data basically remains the same in the multi stream case – the only difference being that it is now necessary to start the acquisition in a loop for all streams.

for (const auto& stream : streams)
{
  stream->Start();
}

Waiting over all streams

The multi stream wait also requires separate processing of the data per stream. In this simple demonstration this is done by sequential evaluation of the WaitFor function for each stream. For performance reasons, a more reasonable approach would be evaluation in separate parallel threads, as sketched below.

for (auto index = 0; index < NUM_ELEMENTS_TO_ACQUIRE; ++index)
{
  for (auto streamIndex = 0u; streamIndex < streams.size(); ++streamIndex)
  {
    const auto stream = streams[streamIndex];
    Cvb::MultiPartImagePtr multiPartImage;
    Cvb::WaitStatus waitStatus;
    Cvb::NodeMapEnumerator enumerator;
    std::tie(multiPartImage, waitStatus, enumerator) = stream->WaitFor(TIMEOUT);
    //...
  }
}
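
A sketch of the thread-per-stream variant mentioned above (simplified: no error handling, and the stream pointers are assumed to outlive the threads):

#include <thread>

std::vector<std::thread> workers;
for (const auto& stream : streams)
{
  workers.emplace_back([stream]()
  {
    for (auto index = 0; index < NUM_ELEMENTS_TO_ACQUIRE; ++index)
    {
      Cvb::MultiPartImagePtr multiPartImage;
      Cvb::WaitStatus waitStatus;
      Cvb::NodeMapEnumerator enumerator;
      std::tie(multiPartImage, waitStatus, enumerator) = stream->WaitFor(TIMEOUT);
      if (waitStatus != Cvb::WaitStatus::Ok)
        continue; // skip timeouts/aborts in this sketch
      // process multiPartImage here - each stream is handled concurrently
    }
  });
}
for (auto& worker : workers)
  worker.join();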

The processing for the received image is done according to the helper PrintOutAllPartTypes described above.

Stopping Data Acquisition

Again, the only difference to the single stream example is that the stop function needs to be called on each of the streams. Following the reverse order compared to starting the stream, the stream control is now stopped before the acquisition engine for each stream. As in the single stream case, stop rather than abort is used:

for (const auto& stream : streams)
{
  stream->Stop();
}

Summary

When using multi stream devices, the sequence of actions necessary on a single stream device simply needs to be extended from 1 stream to N streams. This means that stream->Start, stream->Stop and stream->WaitFor need to be called in a loop over all the required streams.

Further Reading

  • Data Acquisition with User Allocated Memory
    (see further down)
    This tutorial explains how user-definable memory (as opposed to memory allocated by the driver) can be used as the delivery destination for acquired data.
  • Basic Data Acquisition with Multi Part Streams
    (see further down)
    This tutorial explains how to acquire structured data from devices that send multiple logical image parts in one go.

User-Allocated Memory

Use Case

This tutorial demonstrates how to pass user-allocated memory as the target buffer(s) for image acquisition from a camera. This option is useful if, for example, the image data buffers need to satisfy special conditions such as address alignment (if extensions like SSE or AVX are going to be used on the data) or residence in a particular block of memory (if the image is to be used by a GPU).

For the explanation of how user-allocated memory may be used for image and/or composite acquisition, it is easier to refer to the slightly simpler case of single stream acquisition. Of course, user-allocated memory may be used in the same manner for multi stream acquisition by simply extending the concept from one stream (as demonstrated here) to multiple streams, looping over the streams as in the multi stream case.

In this tutorial, mainly the classes Cvb::FlowSetPool and Cvb::FlowInfo are used for handling user-allocated memory.

Required Steps

The sequence of steps basically matches the one described for single streams, the sole difference being an additional step for allocating the necessary buffers:

  1. Enumerate available devices.
  2. Instantiate the target device.
  3. Instantiate a data stream.
  4. Allocate and pass the buffers for that stream.
  5. Start data acquisition.
  6. Acquire data.
  7. Abort data acquisition.

Source Code

Again, this section will only show the code that differs from the single stream case with the individual blocks being explained further down.

...
class UserFlowSetPool final : public Cvb::FlowSetPool
{
  using UserFlowSetPoolPtr = std::shared_ptr<UserFlowSetPool>;

public:
  UserFlowSetPool(const std::vector<Cvb::FlowInfo>& flowInfo) noexcept
    : Cvb::FlowSetPool(flowInfo, Cvb::FlowSetPool::ProtectedTag{})
  {
  }

  virtual ~UserFlowSetPool()
  {
    for (auto& flowSet : *this)
      for (auto& flow : flowSet)
        Tutorial::aligned_free(flow.Buffer);
  }

  static UserFlowSetPoolPtr Create(const std::vector<Cvb::FlowInfo>& flowInfos)
  {
    return std::make_shared<UserFlowSetPool>(flowInfos);
  }
};
int main()
{
  try
  {
    ...
    // get the flow set information that is needed for the current stream
    auto flowSetInfo = stream->FlowSetInfo();
    // create a subclass of FlowSetPool to store the created buffer
    auto flowSetPoolPtr = Tutorial::UserFlowSetPool::Create(flowSetInfo);
    std::generate_n(std::back_inserter(*flowSetPoolPtr), NUM_BUFFERS, [&flowSetInfo]()
    {
      auto flows = std::vector<void*>(flowSetInfo.size());
      std::transform(flowSetInfo.begin(), flowSetInfo.end(), flows.begin(), [](Cvb::Driver::FlowInfo info)
      {
        return Tutorial::aligned_malloc(info.Size, info.Alignment);
      });
      return flows;
    });
    // register the user flow set pool
    stream->RegisterExternalFlowSetPool(std::move(flowSetPoolPtr));
    // acquire images
    ...
    // deregister the user flow set pool to free the buffers (releaseCallback)
    stream->DeregisterFlowSetPool();
    ...
  }
}

Steps explained

Buffer Layout

When passing user-allocated buffers to a stream to be used as the destination memory for image acquisition, these buffers are organized in so-called "Flow Set Pools", one of which is to be set per stream by means of the RegisterExternalFlowSetPool function.

Each Flow Set Pool is effectively a list of "Flow Sets". The minimum number of Flow Sets per Flow Set Pool can be queried with the function MinRequiredFlowSetCount. This minimum pool size must be observed when constructing a flow set pool. A maximum pool size is not explicitly defined and is normally up to the user and the amount of available memory in a given system.

The Flow Sets in turn are lists of "Flows". Flows can simply be thought of as buffers. However, it is to a certain extent up to the camera how this buffer will be used, and therefore the simple equation 1 Flow = 1 image buffer is not necessarily true. The size of these Flows is device-specific information that needs to be queried via the Size member of the FlowSetInfo result.
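
Both pieces of information can be queried on the stream before the pool is built; a sketch (MinRequiredFlowSetCount is assumed here to be a parameterless query on the stream object - check the signature in your CVB version):

// per-flow size and alignment requirements for this stream
auto flowSetInfo = stream->FlowSetInfo();
// lower bound for the number of flow sets the pool must contain
auto minFlowSetCount = stream->MinRequiredFlowSetCount();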

Instantiating a Device

The code for instantiating a device is effectively the same as in the single stream case.

Flow Set Generation

The logic for generating and assigning the flow set pool is located in the main code part. UserFlowSetPool itself is a helper class for flow sets, derived from FlowSetPool. At the core of the class is a std::vector, flowSets_, which simply holds the buffers that have been allocated for the flow set pool.

class UserFlowSetPool final : public Cvb::FlowSetPool
{
  using UserFlowSetPoolPtr = std::shared_ptr<UserFlowSetPool>;

public:
  UserFlowSetPool(const std::vector<Cvb::FlowInfo>& flowInfo) noexcept
    : Cvb::FlowSetPool(flowInfo, Cvb::FlowSetPool::ProtectedTag{})
  {
  }

  virtual ~UserFlowSetPool()
  {
    for (auto& flowSet : *this)
      for (auto& flow : flowSet)
        Tutorial::aligned_free(flow.Buffer);
  }

  static UserFlowSetPoolPtr Create(const std::vector<Cvb::FlowInfo>& flowInfos)
  {
    return std::make_shared<UserFlowSetPool>(flowInfos);
  }
};

Building the Flow Set Pool

The UserFlowSetPool class provides a destructor for releasing the allocated buffers.

The first things to do are to query the flow set information from the stream via FlowSetInfo and to create the UserFlowSetPool from it:

auto flowSetInfo = stream->FlowSetInfo();
auto flowSetPoolPtr = Tutorial::UserFlowSetPool::Create(flowSetInfo);

The flow buffers are allocated with the size and alignment information from the FlowSetInfo and stored in the FlowSetPool.

std::generate_n(std::back_inserter(*flowSetPoolPtr), NUM_BUFFERS, [&flowSetInfo]()
{
  auto flows = std::vector<void*>(flowSetInfo.size());
  std::transform(flowSetInfo.begin(), flowSetInfo.end(), flows.begin(), [](Cvb::Driver::FlowInfo info)
  {
    return Tutorial::aligned_malloc(info.Size, info.Alignment);
  });
  return flows;
});
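
Tutorial::aligned_malloc and Tutorial::aligned_free are helpers of the tutorial project whose implementation is not shown here. A portable sketch (hypothetical, not part of the CVB API) could look like this:

#include <cstdlib>
#ifdef _WIN32
#include <malloc.h>
#endif

namespace Tutorial
{
  // Allocate size bytes aligned to alignment (a power of two; on POSIX also
  // a multiple of sizeof(void*), which typical buffer alignments satisfy).
  inline void* aligned_malloc(std::size_t size, std::size_t alignment)
  {
#ifdef _WIN32
    return _aligned_malloc(size, alignment);
#else
    void* ptr = nullptr;
    return (posix_memalign(&ptr, alignment, size) == 0) ? ptr : nullptr;
#endif
  }

  inline void aligned_free(void* ptr)
  {
#ifdef _WIN32
    _aligned_free(ptr);
#else
    std::free(ptr);
#endif
  }
}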

At the end, the flowSetPoolPtr holds the entire Flow Set Pool. The buffers have to be released within the destructor of the UserFlowSetPool.

What is missing now is passing the Flow Set Pool to the stream:

stream->RegisterExternalFlowSetPool(std::move(flowSetPoolPtr));

One thing to point out here is that ownership of the Flow Set Pool passes from the local scope to the stream on which the pool is registered.

Once the stream itself ceases to exist, it will of course no longer need the registered Flow Set Pool, and the destructor is called. The stream also takes care of flow sets still in use by the user: the UserFlowSetPool will be deleted only when the last MultiPartImage, PointCloud or Composite referencing it is deleted.

virtual ~UserFlowSetPool()
{
  for (auto& flowSet : *this)
    for (auto& flow : flowSet)
      Tutorial::aligned_free(flow.Buffer);
}

Summary

Using user-allocated memory for data acquisition is possible, but properly generating the pool of memory that a stream can draw upon is somewhat complex as it will be necessary to...

  • ... make sure the stream accepts user-allocated memory in the first place
  • ... make sure that the Flow Set Pool contains enough Flow Sets
  • ... make sure that each Flow Set contains the right amount of Flows
  • ... make sure that each Flow in a set is appropriately sized
  • ... make sure that the memory is valid when passed to the stream
  • ... make sure that the entire data structure is disposed of correctly under all circumstances

To that end, the creation of an object hierarchy that takes care of these intricacies is recommended.

When using multi stream devices, the sequence of actions necessary on a single stream device simply needs to be extended from 1 stream to N streams. This means that the start, abort/stop and buffer registration calls need to be made in a loop over all the required streams, with one Flow Set Pool registered per stream.

If the objective is simply to change the size of the memory pool but it is irrelevant where on the heap the flows are created, then the function RegisterManagedFlowSetPool provides an easy alternative to the fully user-allocated pool.
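
A sketch of that variant (the exact signature is an assumption here; it is assumed to accept the desired number of flow sets):

// let the driver allocate NUM_BUFFERS flow sets internally;
// only the pool size is controlled by the user
stream->RegisterManagedFlowSetPool(NUM_BUFFERS);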

Further Reading

  • Basic Data Acquisition with Multi Part Streams
    (see further down)
    This tutorial explains how to acquire structured data from devices that send multiple logical image parts in one go.

Multi Part Acquisition

Use Case

This fourth step concludes the group of 3rd generation image acquisition interface tutorials and demonstrates multi part acquisition. Multi part basically means that the stream returns a composite instead of an image, which may itself contain one or more images, planes of point clouds etc. In other words: a composite is a composed, iterable way of accessing and interpreting the delivered data. To find out what the result of a multi part acquisition actually contains, it is necessary to iterate over the returned composite's content. Furthermore, analysing the composite's purpose will give a clue about its use case (see AnalyseCompositeAt or PrintOutCompositePurpose). All use cases are represented by the Cvb::CompositePurpose enum.

If a developer wants convenient access to the composite in the simplest of all composite streaming use cases, the following functions help with converting (a combined sketch follows this list):

  • If it is verified that a device sends an image, a call to
    // composite is a Cvb::Composite object
    auto image = Cvb::MultiPartImage::FromComposite(composite);
    will make the image data available through the known Cvb::MultiPartImage. Please be aware that this function returns a MultiPartImage only if the device delivers at least one element that is interpretable as a CVB Image.
  • The same restriction also holds for a Cvb::PointCloud: if the camera definitely delivers point clouds, call
    // composite is a Cvb::Composite object
    auto pointCloud = Cvb::PointCloud::FromComposite<Cvb::DensePointCloud>(composite);
    in order to further work with a point cloud object.
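
Combining the purpose query with these conversions gives a compact guard; a sketch:

// only convert when the composite actually represents an image payload
if (composite->Purpose() == Cvb::CompositePurpose::Image)
{
  auto multiPartImage = Cvb::MultiPartImage::FromComposite(composite);
  // ... work with multiPartImage
}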

In general, when working with composites, it is always a good idea to further analyse the contents for interpreting and processing the data. Steps towards this approach are demonstrated in this tutorial: if the inner structure of the composite is known from the camera's specification, the developer has indexed access to each item of the composite container and is able to analyse it:

void AnalyseCompositeAt(Cvb::CompositePtr composite, int index)
{
  auto numberOfElements = composite->ItemCount();
  if (numberOfElements < 2)
  {
    std::cout << "Number of elements in composite #" << index << " is less than expected (no multipart)\n";
    return;
  }
  std::cout << "Multipart data with " << numberOfElements << " elements\n";
  for (auto j = 0; j < numberOfElements; j++)
  {
    auto element = composite->ItemAt(j);
    if (Cvb::holds_alternative<Cvb::ImagePtr>(element))
    {
      auto image = Cvb::get<Cvb::ImagePtr>(element);
      auto linearAccess = image->Plane(0).LinearAccess();
      std::cout << "acquired image: " << index << " at memory location: " << linearAccess.BasePtr() << "\n";
    }
    else
      std::cout << "Element " << j << " in composite " << index << " is not an image" << "\n";
  }
}

The number of elements inside the composite can be extracted by composite->ItemCount().

In this demonstration the composite's content is further inspected for possibly held images by a call to Cvb::holds_alternative<ImagePtr>.

Furthermore, the composite's purpose can be queried via composite->Purpose(). The following code helps listing and printing the different values it can take:

void PrintOutCompositePurpose(Cvb::CompositePtr composite)
{
  auto purpose = composite->Purpose();
  std::cout << "Composite purpose: ";
  switch (purpose)
  {
    case Cvb::CompositePurpose::Custom: std::cout << "Custom\n"; break;
    case Cvb::CompositePurpose::Image: std::cout << "Image\n"; break;
    case Cvb::CompositePurpose::ImageList: std::cout << "Image List\n"; break;
    case Cvb::CompositePurpose::MultiAoi: std::cout << "MultiAoi\n"; break;
    case Cvb::CompositePurpose::RangeMap: std::cout << "RangeMap\n"; break;
    case Cvb::CompositePurpose::PointCloud: std::cout << "PointCloud\n"; break;
    case Cvb::CompositePurpose::ImageCube: std::cout << "ImageCube\n"; break;
    default: std::cout << "Unknown\n"; break;
  }
}

This demonstration will only yield interesting results when used with a device that actually sends multi part data. For the moment, let's assume that the sample runs with a (hypothetical) camera that delivers a list of two images, one being a color image and the other containing raw Bayer data.

Required Steps

The data acquisition described here goes through the following steps:

  1. Enumerate available devices.
  2. Instantiate the target device.
  3. Instantiate a data stream.
  4. Start data acquisition.
  5. Acquire data.
  6. Stop data acquisition.

This is basically the same as in the single stream case because the only difference is how the result is processed.

Source Code

As in the previous sections, only the difference between the single stream case and the Multi Part case will be shown:

#include <iostream>
#include <string>
#include <vector>
#include <map>
#include <tuple>
#include <chrono>
#include <cvb/device_factory.hpp>
#include <cvb/global.hpp>
#include <cvb/driver/composite_stream.hpp>
#include <cvb/genapi/node_map_enumerator.hpp>
static const constexpr auto TIMEOUT = std::chrono::milliseconds(3000);
static const constexpr int NUM_ELEMENTS_TO_ACQUIRE = 10;
void PrintOutCompositePurpose(Cvb::CompositePtr composite);
void AnalyseCompositeAt(Cvb::CompositePtr composite, int index);
// maps the non-Ok wait states to readable strings
static std::map<Cvb::WaitStatus, std::string> WAIT_ERROR_STATES
{
  { Cvb::WaitStatus::Timeout, "timeout" },
  { Cvb::WaitStatus::Abort, "abort" }
};
int main()
{
  try
  {
    // discover transport layers
    auto infoList = Cvb::DeviceFactory::Discover(Cvb::DiscoverFlags::IgnoreVins);
    // can't continue the demo if there's no available device
    if (infoList.empty())
      throw std::runtime_error("There is no available device for this demonstration.");
    // instantiate the first device in the discovered list
    auto device = Cvb::DeviceFactory::Open<Cvb::GenICamDevice>(infoList[0].AccessToken(), Cvb::AcquisitionStack::GenTL);
    // access the first data stream that belongs to the device and start
    auto dataStream = device->Stream<Cvb::CompositeStream>();
    dataStream->Start();
    // acquire data
    for (auto i = 0; i < NUM_ELEMENTS_TO_ACQUIRE; i++)
    {
      Cvb::CompositePtr composite;
      Cvb::WaitStatus waitStatus;
      Cvb::NodeMapEnumerator enumerator; // node maps with information about the received parts
      std::tie(composite, waitStatus, enumerator) = dataStream->WaitFor(TIMEOUT);
      switch (waitStatus)
      {
        default:
        {
          std::cout << "wait status unknown.\n";
          continue;
        }
        case Cvb::WaitStatus::Timeout:
        case Cvb::WaitStatus::Abort:
        {
          std::cout << "wait status not ok: " << WAIT_ERROR_STATES[waitStatus] << "\n";
          continue;
        }
        case Cvb::WaitStatus::Ok:
          break;
      }
      PrintOutCompositePurpose(composite);
      AnalyseCompositeAt(composite, i);
    }
    dataStream->Stop();
  }
  catch (const std::exception& e)
  {
    std::cout << e.what() << std::endl;
  }
  return 0;
}
// implementation of the helpers see above.

As the device instantiation and acquisition code remains unchanged relative to the single stream tutorial, it won't be touched upon any more. All that needs attention now is how the result data is processed, which happens after breaking out of the "Ok" branch of the switch statement that evaluates the result of stream->WaitFor.

Result Evaluation

If the device did deliver the expected composite, then the assumption of this sample program that the composite is in fact a list of images needs to be verified. A list of images is a predefined possibility that can be queried by means of the helper PrintOutCompositePurpose. Once a list of images has been received, the expectation that it is (at least) two entries long should be verified before accessing the individual images inside AnalyseCompositeAt. Of course, a more interesting form of processing could (and should) be inserted at this point.

Summary

As has been shown, acquiring multi part data from a camera is not actually any more complex than acquiring a composite - the actual effort with multi part data goes into parsing the content of the received composite. Usually this will be done with the properties of the hardware in mind, i.e., as in the example above, the code usually contains assumptions about the content of the composite (that should, of course, be verified) and therefore tends to be somewhat device-specific.