Common Vision Blox 15.0
GigE Vision


Introduction

GigE Vision (Gigabit Ethernet Vision) is a machine vision technology standard for discovering, controlling and streaming Ethernet vision devices. Hosted by A3, it offers fast, standardized, scalable and low-cost streaming over Ethernet. The A3 page is the right place to look for all standard-related material (documents, applications, legal topics).

Hardware requirements and scope

A machine vision system using GEV typically requires the following components:

  • GEV-Compliant Camera – The camera must be compatible with the standard.
  • Gigabit Ethernet (GigE) Network Adapter – A 1 Gbps (or higher) network interface card (NIC) is required for data transfer.
  • Ethernet Cable (standardized CAT with RJ-45 connectors, up to CAT6a) – Use high-quality cables for reliable data transmission over long distances.
  • GigE Vision compliant software – see the CVB GEV stack.
  • GEV RDMA capable interfaces and cameras (optional) – for high data rates.
  • PoE (Power over Ethernet) Support (optional) – If the camera requires PoE, ensure your switch or injector provides sufficient power.
  • Network Switch (optional) – A Gigabit Ethernet switch is needed for complex camera setups.
  • Fiber based setup (optional) – Fiber optic cables and SFP modules for higher data rates, lower energy consumption and longer cable lengths.
    Note
    Contact our support for more help on hardware setup.

CVB implements most aspects of GEV. The standard defines three major areas of Ethernet based vision, which are represented as follows:

  1. Device Discovery: The specifics of discovering and connecting to devices can be found in the dedicated topic here.
  2. Device Configuration: Known as the GEV Control Protocol (GVCP), CVB implements means for device configuration, the control channel, and GEV specific configuration.
  3. Streaming: Regarding GVSP (GEV Streaming Protocol), CVB acts as a receiver on host systems connected to any GEV compliant streaming device. The implementation of the GEV driver is comparable to other transport layers at a higher level. It is described in the section about Acquisition and Streaming. More specific information is written here.

Driver Structure In Common Vision Blox

From a technical perspective, the GEV stack comes in three different forms, each serving a different purpose and use case.

On top of all flavors of GEV streaming in CVB there is a common interface that fulfills the GenTL (transport layer) and GEV specifications. GevSD in its standard configuration is the most basic implementation: it uses the OS networking stack and, compared to the others, offers the least performance. The RDMA configuration of GevSD uses remote direct memory access and brings significant performance improvements by leveraging hardware offloading. On Windows there is a dedicated kernel driver called GevFD that skips part of the OS networking stack, offering a speedup compared to the standard GevSD.

Comparison of Implementations

Note
All three implementations are accessible through the same acquisition interface; there is no difference in the required SDK modules.

The following table helps to find the right driver for an application:

Topic                | GevSD (RDMA)                           | GevSD (GVSP)                                              | GevFD (GVSP)
GEV Implementation   | Uses direct NIC access                 | Uses OS for GVSP transfer                                 | Uses packet filtering and bypasses OS kernel stack
Data Rate            | Highest (bypasses OS networking stack) | Medium (limited by UDP stack and OS overhead)             | Higher (through kernel driver)
Latency              | Lowest (direct memory access to NIC)   | Moderate (UDP introduces some jitter)                     | Moderate (depends on filtering efficiency)
CPU Load             | Very low (offloaded to NIC hardware)   | High (software-based packet processing)                   | Medium (kernel driver offloads some processing)
Firewall Interaction | Not affected (direct NIC access)       | Needs open UDP ports                                      | Bypasses firewall via kernel driver
Reliability          | Highest (guaranteed memory transfers)  | Moderate (UDP may drop packets, resends take resources)   | High (controlled packet filtering and fast resends at high data rates)
Use Case             | High-speed, low-latency applications   | Vision systems using standard GVSP, no dedicated hardware | Systems needing controlled firewall bypassing on Windows
OS Compatibility     | Any                                    | Any                                                       | Windows only
Hardware Requirement | RDMA capable NIC and camera            | Any                                                       | Any

Ethernet Topologies

One of the major benefits of GEV is its flexibility in setting up different network topologies. They are - as usual in IP networks - configured by the subnet masks and IP addresses of the different participants:

Simple single camera, single NIC setup

Description

This is the most straightforward case: no other components are involved and the camera is set up as a simple point-to-point connection. IP addresses are assigned either by DHCP or manually. This is the most performant setup, as the full bandwidth is available for GVSP.

Multiple cameras connected through switch setup

Description

In order to build more complex setups, switches allow combining networks and enable one-to-N or N-to-M connections between cameras and hosts. This requires a compatible and correctly configured switch.

Multiple cameras, Multiple NICs setup

Description

As soon as multiple network interfaces are involved, the traffic needs to be distributed by subnets. Careful configuration of one subnet per NIC ensures correct behavior. Compared to running multiple cameras on a single NIC, this improves performance as the load is balanced across interfaces.
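As a sketch of such a configuration (example addresses, not taken from the CVB documentation), each NIC can be given its own non-overlapping subnet, which Python's standard ipaddress module can verify:

```python
import ipaddress

# One dedicated subnet per NIC: cameras on NIC 1 use 192.168.1.x,
# cameras on NIC 2 use 192.168.2.x (example values)
nic1_subnet = ipaddress.ip_network("192.168.1.0/24")
nic2_subnet = ipaddress.ip_network("192.168.2.0/24")

# The subnets must not overlap, otherwise routing of the stream traffic
# to the intended interface becomes ambiguous
print(nic1_subnet.overlaps(nic2_subnet))  # False
```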

Configuration

Note
All settings can be made via the GenICam Browser. In this example the software defined approach is explained.

The overall performance of a GEV system setup depends on the network hardware components (network interface card, switch, camera, cable) as well as their appropriate configuration. Additionally, there are some camera dependent GigE network features.

IP Settings

To establish a connection to any GEV device, both the NIC and the GEV device must have a valid IP in the same subnet range. Addresses can be assigned both dynamically and statically.
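Whether two addresses share a subnet can be checked with a few lines of Python (example addresses, not part of the CVB API; only the standard ipaddress module is used):

```python
import ipaddress

# Example NIC configuration: adapter address plus subnet mask
nic = ipaddress.ip_interface("192.168.1.10/255.255.255.0")
camera = ipaddress.ip_address("192.168.1.42")

# The camera is only reachable if its address lies within the NIC's subnet
print(camera in nic.network)  # True
```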

Static IP

Note
Usage of static IP addresses is recommended to ensure deterministic behavior, both in NIC and in Camera.

Static addresses guarantee the expected IP assignment for all network participants. This is available in two different forms:

  • addresses can be either assigned in a persistent manner (Static IP) or
  • only until the next power cycle of the camera device (Force IP).
Note
In both cases, addresses must be set for the NIC and in the camera.

DHCP

Devices within the network that have DHCP enabled obtain their addresses from the DHCP server. GEV devices try to obtain an address over DHCP by default. Dynamically assigned addresses are random and can vary between connections.

LLA

If there is no DHCP server present, the LLA (link local address) method is used. The NIC automatically assigns an IP address in the 169.254.X.Y subnet range.
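A link local address always falls into the 169.254.0.0/16 range, which can be confirmed with Python's standard ipaddress module (illustration only, example address):

```python
import ipaddress

# LLA addresses are always in 169.254.0.0/16
addr = ipaddress.ip_address("169.254.17.23")
print(addr.is_link_local)  # True
```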

Warning
Only one NIC on the PC should use LLA, otherwise it can lead to IP conflicts.
It is crucial that DHCP is enabled even when LLA is the preferred configuration mode.

See here for a guide in code

Performance Settings

Note
All settings can be set via the GenICam Browser. Here only the software defined approach is explained.

There are configuration properties which influence the performance of the system and the stability of the data transfer over the network card. The optimal settings depend on the operating system in use; the NIC should be optimized according to the corresponding vendor's best practices. Refer to the manufacturer's manual for specifics.

Receive Buffers

Sets the number of buffers used by the driver when copying data to the protocol memory. Increasing this value can enhance the receive performance, but also consumes system memory.

Flow Control

Enables adapters to generate or respond to flow control frames, which help regulate network traffic.  

Interrupt Moderation Rate

Sets the Interrupt Throttle Rate (ITR), the rate at which the controller moderates interrupts. The default setting is optimized for common configurations. Changing this setting can improve network performance on certain network and system configurations: a higher ITR setting coalesces more packets per interrupt and thus reduces CPU load.

Note
A higher ITR setting also means the driver has more latency when handling packets. If the adapter is handling many small packets, lower the ITR so the driver is more responsive to incoming and outgoing packets.
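The trade-off can be illustrated with rough numbers (assumed example values, not vendor data):

```python
# Assumed example: a GVSP stream generating 80,000 packets per second,
# with the NIC moderating down to 4,000 interrupts per second
packet_rate = 80_000      # packets per second
interrupt_rate = 4_000    # interrupts per second after moderation

# More packets handled per interrupt lowers CPU load...
packets_per_interrupt = packet_rate / interrupt_rate
# ...but a packet may wait up to one interrupt period before being processed
worst_case_wait_ms = 1000 / interrupt_rate

print(packets_per_interrupt)  # 20.0
print(worst_case_wait_ms)     # 0.25
```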

Jumbo Frames/Packets (large MTU size)

Sets the capability. Jumbo frames are packets larger than the standard Ethernet MTU (Maximum Transmission Unit) of 1500 bytes. If large packets make up the majority of traffic and additional latency can be tolerated, jumbo frames reduce CPU utilization and improve transmission efficiency. They have a strong impact on the performance of a GEV system.

Note
Jumbo frames can appear to cause lost packets or frames when activated on the network card and the camera. This is normally a side effect of the other settings mentioned before, which only becomes visible with jumbo frames. If you have trouble using jumbo frames, look for the cause in the previously mentioned settings instead of deactivating jumbo frames.

Activating jumbo frames on the network card alone is not sufficient: the change only takes effect when the packet size is also adjusted on the device. A packet size of 9174 bytes is recommended if supported by camera and network card.

#include <cvb/device_factory.hpp>
#include <string>
auto devices = Cvb::DeviceFactory::Discover(Cvb::DiscoveryFlags::IgnoreVins);
auto desired = devices[0]; // for simplicity take the first
const auto packetSize = 9174; // Bytes
desired.SetParameter(CVB_LIT("PacketSize"), std::to_string(packetSize));

var devices = DeviceFactory.Discover(DiscoverFlags.IgnoreVins);
var packetSize = 9174; // Bytes
devices[0].SetParameter("PacketSize", $"{packetSize}");

import cvb
discover = cvb.DeviceFactory.discover_from_root(cvb.DiscoverFlags.IgnoreVins)
info: cvb.DiscoveryInformation = discover[0]
packet_size = 9174 # Bytes
info.set_parameter("PacketSize", f"{packet_size}")
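To see why the packet size matters, consider how many GVSP packets one image needs. The per-packet header overhead below is an assumption (Ethernet, IPv4, UDP and GVSP headers); the exact value depends on the setup, but the trend is the same:

```python
# Assumed per-packet header overhead: Ethernet (14) + IPv4 (20) + UDP (8) + GVSP (8)
HEADER_OVERHEAD = 14 + 20 + 8 + 8

def packets_per_frame(frame_bytes: int, packet_size: int) -> int:
    """Number of GVSP packets needed to transfer one image frame."""
    payload = packet_size - HEADER_OVERHEAD
    return -(-frame_bytes // payload)  # ceiling division

frame = 5 * 1024 * 1024  # 5 MiB image
print(packets_per_frame(frame, 1500))  # standard MTU
print(packets_per_frame(frame, 9174))  # jumbo frames: far fewer packets and interrupts
```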

Firewall

Firewalls can block GEV packets and slow down system performance. Therefore it is important to disable all firewalls within the camera connection setup if there are connection problems. Alternatively, this can be handled via firewall incoming/outgoing rules.

Inter Packet Delay

Inter Packet Delay is a parameter to control the data rate from cameras in multi-device setups. With IPD, cameras can be slowed down by adding delays between the sent packets. This can help balance network traffic between multiple devices. The specific delay values have to be tuned depending on the network capacity, the device configuration and the data rates.

#include <cvb/device_factory.hpp>
#include <string>
auto devices = Cvb::DeviceFactory::Discover(Cvb::DiscoveryFlags::IgnoreVins);
auto desired = devices[0]; // for simplicity take the first
const auto interPacketDelay = 900; // ms
desired.SetParameter(CVB_LIT("InterPacketDelay"), std::to_string(interPacketDelay));

var devices = DeviceFactory.Discover(DiscoverFlags.IgnoreVins);
var interPacketDelay = 900; // ms
devices[0].SetParameter("InterPacketDelay", $"{interPacketDelay}");

import cvb
discover = cvb.DeviceFactory.discover_from_root(cvb.DiscoverFlags.IgnoreVins)
info: cvb.DiscoveryInformation = discover[0]
inter_packet_delay = 900 # ms
info.set_parameter("InterPacketDelay", f"{inter_packet_delay}")
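As a back-of-the-envelope sketch (assumed numbers, not a CVB formula), the delay effectively caps each camera's share of the link; the budget each camera must stay below can be estimated as:

```python
LINK_BPS = 1_000_000_000  # shared 1 Gbps link (assumed)

def max_rate_per_camera(n_cameras: int, headroom: float = 0.9) -> float:
    """Rough per-camera budget in bits per second, keeping 10% headroom for protocol overhead."""
    return LINK_BPS * headroom / n_cameras

# With three cameras on one link, each should be throttled (e.g. via the
# inter packet delay) to stay below roughly this rate:
print(max_rate_per_camera(3) / 1e6, "Mbit/s")
```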

Packet Resend

Packet resend can be used (when supported) to request the retransmission of packets that have been lost. In high data rate scenarios, the use of UDP can lead to dropped packets and thus corrupted frames. To recover those lost packets, GEV implements an asynchronous resend mechanism.

#include <cvb/device_factory.hpp>
#include <string>
auto devices = Cvb::DeviceFactory::Discover(Cvb::DiscoveryFlags::IgnoreVins);
auto desired = devices[0]; // for simplicity take the first
desired.SetParameter(CVB_LIT("PacketResend"), "1");

var devices = DeviceFactory.Discover(DiscoverFlags.IgnoreVins);
devices[0].SetParameter("PacketResend", "1");

import cvb
discover = cvb.DeviceFactory.discover_from_root(cvb.DiscoverFlags.IgnoreVins)
info: cvb.DiscoveryInformation = discover[0]
info.set_parameter("PacketResend", "1")
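The principle behind a resend request can be sketched in a few lines (illustration only, not the CVB implementation): the receiver tracks which packet IDs of a frame arrived and derives the IDs to request again:

```python
def missing_packet_ids(received, expected_count):
    """Return the GVSP packet IDs (1..expected_count) that never arrived."""
    seen = set(received)
    return [pid for pid in range(1, expected_count + 1) if pid not in seen]

# Packets 3 and 6 of a 7-packet frame were lost and would be requested again
print(missing_packet_ids([1, 2, 4, 5, 7], 7))  # [3, 6]
```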

RDMA

To enable RDMA on supported devices, the driver needs to be instructed accordingly. For further information on RDMA, see this page.

Advanced Streaming

The GEV standard offers dedicated ways to handle special devices. In the context of CVB these are hidden from the user, but for completeness their support is mentioned here:

GEV MultiPart

The GEV specific container format offers a way to pack various payload types into one container. In CVB, those multipart payloads are represented as parts of a Cvb::Composite.

GEV MultiStream

In certain scenarios camera vendors offer devices with multiple parallel image streams instead of just one. This is often the case when two independent signals are acquired simultaneously. In CVB the streams can be selected as follows:

using namespace Cvb;
auto streamIndex = 1; // for the second stream
auto stream = device.Stream<ImageStream>(streamIndex);

var streamIndex = 1; // for the second stream
var stream = device.GetStream<ImageStream>(streamIndex);

import cvb
stream_index = 1 # for the second stream
stream = device.stream(cvb.ImageStream, stream_index)

Further Reading

Discovering a GEV Device
Setting device IPs
Getting started with RDMA

GEV Standards Host: A3
CVB User Forum