GigE means Gigabit Ethernet and is the general Ethernet network technology combined with the bandwidth specification Gigabit.
GigE Vision (GEV) is the name of a network protocol optimized for machine vision, maintained by the AIA (Automated Imaging Association), and designed to control machine vision devices (cameras) and to transfer their data (images) as effectively as possible across IP networks.
Although GigE Vision contains GigE in its name, the protocol also works at lower bandwidths; at least Gigabit Ethernet is recommended, however.
No, there is no reference implementation. The standard is available only as a PDF document. GEV products are bound to a self-certification process. However, compliance test software will be available.
The protocol used on 10 GigE is no different from the one used on slower connections.
Therefore, 10 GigE should simply work. The only problem might be the performance.
You cannot simply multiply the throughput by 10, since you will reach the limitations of the PC.
The other factor is the cost, since 10 GigE is still somewhat expensive.
To get exact numbers we would have to investigate, and that has not yet been done.
DHCP (Dynamic Host Configuration Protocol) is a protocol which allows a device on the network to request a free (not used by another device) IP address from a DHCP server.
See https://en.wikipedia.org/wiki/DHCP.
If a device is configured to use DHCP but cannot find a DHCP server, it will, after a certain time, automatically fall back to a mode called link-local address (LLA).
On IPv4 networks (which is what we usually have) this means that the device assigns itself an IP in the range 169.254.x.x, where the x.x are picked randomly, with a subnet mask of 255.255.0.0.
It then broadcasts to the network to find out whether that IP is already taken. If the IP is free, it will be used.
If it is already taken, another address will be tried (https://en.wikipedia.org/wiki/Link-local_address).
The IPv4 address range for LLA is 169.254.1.0 through 169.254.254.255 according to RFC 3927.
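The self-assignment described above can be sketched in a few lines (a simplified model; the helper names are illustrative, and the actual conflict probing via ARP broadcasts is omitted):

```python
import ipaddress
import random

def random_lla_candidate() -> ipaddress.IPv4Address:
    """Pick a random candidate from the usable link-local range
    169.254.1.0 - 169.254.254.255 (RFC 3927 reserves the first
    and last /24 of 169.254.0.0/16)."""
    return ipaddress.IPv4Address(
        f"169.254.{random.randint(1, 254)}.{random.randint(0, 255)}")

def is_valid_lla(addr: ipaddress.IPv4Address) -> bool:
    """Check that an address lies inside the usable RFC 3927 range."""
    return (ipaddress.IPv4Address("169.254.1.0")
            <= addr <= ipaddress.IPv4Address("169.254.254.255"))

candidate = random_lla_candidate()
print(candidate, is_valid_lla(candidate))  # candidate is always in range
```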
A NIC is a "network interface card". This does not really mean the card itself but a single network connector.
So, for example, if you have a motherboard with 2 onboard GigE interfaces (2x RJ45), you have two NICs in your system.
If you have a multi-port network card, each RJ45 (that is the name of a well-known Ethernet connector) on that multi-port card (the largest known is a 4-port card from Intel) is referred to as a separate NIC.
A filter driver is a kernel-mode driver which normally resides between the upper-level protocol and the lower-level miniport drivers (on a Windows based system).
For GigE Vision technology it is used to bypass the standard network stack (used by the operating system) for all data stream packets.
It filters GigE Vision related data packets and transfers them directly to an application-provided data buffer.
This greatly reduces the CPU load of the system.
All packets not related to GigE Vision data streams are unaffected.
In principle there is no problem running a GEV camera over a wireless connection.
The problem here is one of reliability. Wireless networks dynamically adjust their speed depending on the connection quality.
This means that if the connection quality suddenly drops for some reason, a lot of packets will be dropped on the network.
This leads to an avalanche of resend requests from the host (which did not receive the packets) and in the worst case scenario, this will cause the whole connection to the camera to fail.
So, in principle, yes it will work, but it is neither guaranteed nor recommended.
The short answer is YES. The long answer is: Multicast is a Layer 3 IP protocol implementation.
GEV is on Layer 4 so in general GEV will simply use the different IPs associated with Multicast ranges.
The question is, whether the hardware (switches) and the software you are using supports it.
CVB did not support multicast in its first version; it was added in later releases.
Please refer to the release notes of the specific GenICam driver.
It is prepared but not yet implemented.
All points where IPs are stored in the device are prepared but are not part of the current standard.
There are big performance differences between different network cards from different vendors (driver performance, on-board memory etc.).
We prefer Intel for the network card given the choice.
For switches it is a bit more complicated and we need to evaluate certain features on a case by case basis.
Just because a product says ‘Gigabit Ethernet’ does not guarantee that it provides the best performance.
Firstly it is important to say that it can be very hard to get the required information on a specific switch.
There are no fixed expressions or specific numbers. So one vendor might call it xxx and the other yyy.
We only can give you descriptions that relate to technical behavior.
In general:
If you cannot find a specific value in the documentation, you can always contact the manufacturer to get the information directly. Alternatively, you might simply assume that the switch is not good enough in this aspect, because the vendor has attempted to hide it.
Switching bandwidth: The overall bandwidth a switch can handle across all ports.

Max packet rate: The maximum number of packets per second across all ports.

Jumbo packets: The maximum packet size supported. Larger packets mean less protocol overhead per payload byte and better transmission performance.

Layer 2 / Layer 3: You need Layer 3 switches if you need multicast. Otherwise it is not that important.

Memory for packet storage: If you have multiple cameras connected to a switch and one gigabit link to the host, you might end up with a peak bandwidth above one gigabit. In this case a switch with memory can buffer the packets and send them out with a small delay.

SFP slots: If you need to run your network across fibre instead of copper: most switches do not have direct fibre ports but small slots into which you can plug a module called an SFP (small form-factor pluggable), which holds the actual fibre connection. These SFPs add to the cost of the system because they are not included in the price of the switch.

Number of ports: The actual number of cables/fibres, and therefore devices, you can connect to your switch. Watch out: if your switch has SFP slots, they sometimes share a port with a copper RJ45. For example, we have a 12-port Netgear GigE switch in our portfolio with 12 RJ45 copper connectors and 12 SFP slots. Every RJ45 shares a port with an SFP slot, so you can only use one of the two at a time.

Managed switches: Most managed switches allow you to configure one of the ports as a monitoring port, meaning that all packet traffic is mirrored on this port, which can help when debugging potential problems. Many managed switches also maintain an error log, which again helps to detect network cable problems.
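The overhead reduction from jumbo packets can be quantified with a quick sketch (the 36 bytes of IP/UDP/GVSP headers and the 38 bytes of layer-2 cost per frame — Ethernet header, FCS, preamble, inter-frame gap — are typical assumed values, not figures from a specific camera):

```python
def payload_efficiency(mtu: int) -> float:
    """Fraction of wire bandwidth that carries image payload.

    Assumes 36 bytes of IP/UDP/GVSP headers inside each packet and
    38 additional bytes of layer-2 cost (Ethernet header, FCS,
    preamble, inter-frame gap) on the wire per packet.
    """
    payload = mtu - 36
    wire_bytes = mtu + 38
    return payload / wire_bytes

print(f"1500-byte packets: {payload_efficiency(1500):.1%}")  # ~95.2%
print(f"9000-byte jumbo:   {payload_efficiency(9000):.1%}")  # ~99.2%
```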
Vendors are:
•Netgear
•SMC
•HP
•Hirschmann
•Cisco
Yes, the number of cameras that can be connected to a single NIC is not limited in practice.
So it really depends on bandwidth and latency, and hence there is no general answer.
The more cameras you connect, the more you have to think about peak bandwidth and latency.
The other problem with QoS is that one cannot really predict the bandwidth that a triggered camera might use.
There might be a peak bandwidth of 1 Gbit/s one moment, while afterwards there might be no transfer at all for a minute!
Also, with GigE you are not limited to a specific number of cameras you can connect.
You could have 100 cameras all triggered at the same time, but only once per minute. How would you share their bandwidth?
What has been implemented however is a mechanism called inter-packet delay.
This puts a small delay between the sending of packets which enables you to limit the bandwidth of a single camera and leave it to the switch to buffer and serialize the data from multiple cameras.
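As a sketch of the arithmetic behind inter-packet delay (the function name, the 1500-byte packet size and the 450 Mbit/s target are illustrative assumptions, not a camera feature name or recommended values):

```python
def inter_packet_delay_ns(packet_size_bytes: int,
                          target_bandwidth_bps: float,
                          link_speed_bps: float = 1e9) -> float:
    """Delay to insert between packets so that the average data rate
    drops from the full link speed to the target bandwidth."""
    bits = packet_size_bytes * 8
    time_on_wire_s = bits / link_speed_bps   # one packet at link speed
    spacing_s = bits / target_bandwidth_bps  # required average spacing
    return (spacing_s - time_on_wire_s) * 1e9

# Two cameras sharing one gigabit link: limit each to about 450 Mbit/s.
delay = inter_packet_delay_ns(1500, 450e6)
print(f"inter-packet delay: {delay:.0f} ns")  # roughly 14667 ns
```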
This really depends on the system you use, the bandwidth of the camera, the packet size, the driver software and the performance of your network components.
The new multi-core CPUs are ideal for GEV because the operating system can distribute the load among them.
Yes, but bear in mind that a Laptop might not have the same performance as a desktop system.
It mainly depends on the CPU and on the connection between the NIC and the memory.
There are machines out there which use PCIe, which has better performance than PCI.
The worst scenario would be a connection via PCMCIA.
Depending on your system configuration you can stream above 100MB/s sustained through one NIC.
That does not mean that you can easily extrapolate this to two, three or four NICs.
If you have applications close to that limit, it is recommended that you contact the technical support of STEMMER IMAGING or your local distributor.
The short answer is: That depends on the system. The long answer is more difficult.
It really depends on the camera in use, the network setup, the packet size, the PC and the system load.
Assuming the camera is connected directly (no switch or router, no lost packets on the network) and with almost no CPU load on the receiving system, the latency is in the µs region.
Every switch would add to that (again depending on the switch) probably in the lower µs range.
But all this is a simple delay and is relatively easy to handle since it is the same with every image.
The worst situation is a jitter in the arrival of images depending on the transmission quality and on the system load.
If you have to perform a resend with a GigE camera, this adds to the data latency which is not predictable.
To measure a roundtrip time we used a camera which indicated the end of frame transfer in the camera by a digital output signal.
We triggered an image in the camera, and as soon as the application had received the complete image it set another digital output signal in the camera.
We measured the delay between the two signals with an oscilloscope.
Once again, the time depends on various factors.
We measured a roundtrip time of about 3.5 ms.
This gives a realistic estimation on what such a delay would be in a real application.
The biggest difference here is selecting and handling the fibre itself.
There are a number of different connectors available, different fibres and different wavelength etc.
But all this relates to the physical layer, not to GEV.
For GEV there is NO difference between using a fibre connection or copper.
If you need help setting up a fibre connection and/or choosing the right components, please contact the technical support of STEMMER IMAGING or your local distributor.
That is true, but this is the reason why we have GEV.
Since UDP is a connectionless protocol and does not have mechanisms to cope with lost packets, we put a protocol layer on top that takes care of these weaknesses while still maintaining optimal performance at the lowest possible integration costs.
We chose UDP instead of TCP because of performance and cost reasons.
See also FAQ Is a GigE Vision connection robust?
On an average system it takes between 0.5 ms and 2 ms (depending on the device and the network setup) to write a GEV register from the issue of the Write Reg command until the PC has the acknowledge.
This timing is for a direct link.
Apart from the limited bandwidth, GEV cameras should work just fine on Fast Ethernet (100MBit) or Ethernet (10MBit).
The only problem might occur if the link on the camera is 1GBit to a switch and then 10MBit or 100MBit from the switch to the host.
In that case the camera does not know about the limited connection and will send the packets at full Gigabit speed.
The switch cannot forward that bandwidth and will drop packets. The host will see the missing packets and request resending.
For such a setup you have to limit the bandwidth of the camera or use a switch with internal memory to temporarily buffer the packets.
No, not yet, due to the size of the components and their limited power. There might be products in the future.
That is dependent on your software vendor.
GigE specifies Cat5e cabling, but to be on the safe side we recommend Cat6 cables which have better shielding.
Cat7 cables are available for higher frequencies but as yet there are no connectors defined.
The short answer would have to be yes.
The long answer needs to explain why this is the case, since a UDP-based protocol like GigE Vision sits on top of a Gigabit Ethernet connection, which is not reliable as such.
With UDP we face two major problems. The first is that UDP is not connection oriented, so we need a mechanism that enables each end to check that the other end is still available.
The second problem is lost packets, which can occur for a number of different reasons.
To solve the connection question, GigE Vision implements a mechanism called heartbeat.
This enables the camera to determine whether the host is still up and running. The other problem is covered by the control protocol.
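The device-side heartbeat logic can be modelled in a few lines (an illustrative sketch, not CVB's implementation; the class name is made up, and the actual timeout on a GEV device is configurable via a heartbeat register):

```python
import time

class HeartbeatMonitor:
    """Device-side view: if the host has not touched a control
    register within the timeout, the host is considered gone and
    the device can release the connection."""
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_access = time.monotonic()

    def on_control_access(self) -> None:
        # Any control access from the host refreshes the timer.
        self.last_access = time.monotonic()

    def host_alive(self) -> bool:
        return time.monotonic() - self.last_access < self.timeout_s

mon = HeartbeatMonitor(timeout_s=0.05)
print(mon.host_alive())   # True: just created
time.sleep(0.06)
print(mon.host_alive())   # False: no access within the timeout
mon.on_control_access()
print(mon.host_alive())   # True again after a refresh
```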
Packet loss is handled in different ways. With the control protocol, a lost packet is detected by a ‘send and acknowledge’ mechanism.
The streaming, in order to achieve better performance, works with a different method.
Each data packet has a unique id so that the receiving host can identify missing packets and send a resend request to the device ‘on-demand’.
The device will store a certain number of packets and on receiving such a request it will resend the data.
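The host-side detection of missing packets can be sketched as follows (a simplified model: real GVSP packet IDs are per-block sequence numbers with wrap-around, which this sketch ignores):

```python
def find_missing_ids(received_ids, first_id, last_id):
    """Return the packet IDs that a resend request should cover,
    given the IDs actually received for one image block."""
    expected = set(range(first_id, last_id + 1))
    return sorted(expected - set(received_ids))

# Packets 3 and 7 were dropped on the way to the host:
print(find_missing_ids([1, 2, 4, 5, 6, 8], first_id=1, last_id=8))  # [3, 7]
```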
This solution, in combination with a CRC (checksum), makes GigE Vision the most robust transfer technology we have so far in machine vision.
With CameraLink there is no possibility of identifying corrupt data, e.g. bits flipped due to overly long cables.
With 1394, on the other hand, we can also identify corrupt packets but do not have such a resend mechanism.
So without enumerating all technologies in this document, GigE Vision is as reliable as it can be.
On the other hand, there will be a higher data latency added to the data transfer and camera control when something goes wrong.
It is even worse than that, because we cannot deterministically predict these latencies. Assume that a control protocol packet gets lost: the camera sends no acknowledgement.
The host will wait a certain time until a timeout occurs and then resend that command. This time the packet might get through, but the acknowledgement now gets lost.
Finally the third attempt works. This, of course, would be a very, very bad connection, but it could happen.
So the comfort of having a safe connection brings less deterministic behavior timewise.
How much this impacts the performance really depends on the connection you are using, the components involved and the load on the host.
If nothing goes wrong and the connection does not lose packets, we have NO performance loss.
Please also refer to the following FAQ's:
•What is the data latency compared to a frame grabber?
•What is the round trip time on a GEV setup?
•UDP is a "not reliable" protocol. Why is UDP used?
•How fast is the control protocol?
See FAQ Is a GigE Vision connection robust?.
That really depends on the network topology. In general, if the network fails, the application will know about it and will try to reestablish the connection.
If the network fails, the device and the host will lose the link and will disconnect. Common Vision Blox (CVB) fires a CVCError event announcing an Acquisition Error.
Then the GenICam driver has to be reloaded.
Devices which don't need to stream data can simply use the control protocol only and not expose a streaming channel.
In this way GigE Vision can be used as a simple control protocol.
Yes of course it can. Since GigE Vision uses standard network protocol mechanisms it can be implemented in software on the host.
You can use more than one filter driver for one NIC, e.g. if you have SDKs or GigE Vision devices from several vendors on a single system.
The maximum number of filter drivers for one NIC is limited. There is one major drawback to using multiple filter drivers on a single NIC.
Assume you have installed 2 filter drivers from 2 different vendors.
You have a GigE Vision device from each vendor which is processed by the filter driver of that vendor.
Your network stack will normally look something like this:
PROTOCOLDRIVER (upper-level)
FILTERDRIVER_B
FILTERDRIVER_A
MINIPORTDRIVER (lower-level)
NIC
Device_A + Device_B
All data stream packets from Device_A are filtered in FILTERDRIVER_A.
All data stream packets from Device_B are filtered in FILTERDRIVER_B.
But all data stream packets from Device_B have to pass FILTERDRIVER_A before they are processed by FILTERDRIVER_B.
This will cause additional CPU load. The more drivers you have in the stack the more CPU load you will get.
You can use different filter drivers on different NICs without causing additional CPU load if you disable (not uninstall) the unused drivers for each NIC.
No, you cannot. This is controlled by the operating system. You cannot select which driver is lower or higher in the stack.
You should not.
If you use a network analyzer software like Wireshark on your GigE Vision system you might run into problems.
Wireshark for example uses the WinPCap kernel mode protocol driver to analyze all incoming and outgoing packets.
Depending on the order in which the operating system has installed the filter drivers and WinPCap in the network stack, your filter driver might not receive any GigE Vision data stream packet.
In that case you will get no GigE Vision data!
You should not. We have seen problems e.g. using VPN software from Cisco.
There are similar reasons for this as mentioned under FAQ Can I use network analyzer software on my GigE Vision system?
VPN Software e.g.: Cisco VPN (Solution: disable network service called Deterministic Network Enhancer + Stop VPN service with "net stop cvpnd")
Network Analyzing Software like Wireshark (Solution: uninstall WinPCap)
See also FAQ's Can I use network analyzer software on my GigE Vision system? and Can I use software VPN on my GigE Vision system?
Yes you can. The optimization depends on the hardware you are using. So bear in mind that you should always use the proper hardware.
1.Install the latest drivers for your NIC.
2.Try to enable jumbo packets if your NIC has this feature.
3.Increase the size of the receive descriptor list entries on your network card to the maximum value.
4.Decrease the number of interrupts generated by the NIC.
Please have a look at our recommendations for the Performance Settings.
This sets the number of buffers used by the NIC driver that are used to copy data to the system memory (normally done with DMA).
Increasing this value can enhance receive performance, but also consumes system memory.
Receive descriptors are data segments that enable the adapter to allocate received packets to memory.
Each received packet requires one receive descriptor, and each descriptor uses a certain amount of memory (e.g. 2 kB for an Intel Pro/1000).
If the number of receive descriptors is too low, the NIC might run out of memory, which will cause packet loss.
For GigE Vision this might cause a lot of packet resend requests.
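The memory trade-off is easy to estimate (a sketch; the 2 kB per descriptor comes from the Intel Pro/1000 example above, and the 256 and 2048 ring sizes are typical values assumed for illustration):

```python
def descriptor_memory_kib(num_descriptors: int,
                          bytes_per_descriptor: int = 2048) -> int:
    """System memory consumed by the receive descriptor ring, in KiB."""
    return num_descriptors * bytes_per_descriptor // 1024

print(descriptor_memory_kib(256), "KiB")   # 512 KiB at a common default
print(descriptor_memory_kib(2048), "KiB")  # 4096 KiB at the maximum
```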
Depending on the NIC you are using, you can decrease the number of interrupts generated by the NIC.
This has some influence on the CPU load. On an Intel Pro/1000 card you can decrease the number of interrupts generated by setting the "Interrupt Moderation Rate" to "Extreme".
For other network cards there are similar ways of decreasing the interrupt rate.
GigE Vision consists of a control and a streaming part. Only the control connection claims a static port on the device end (UDP 3956), which is registered with IANA. The control connection's source port, as well as the streaming connection port(s), are dynamically allocated from the ephemeral port range 49152 through 65535.
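These port conventions can be demonstrated without any device attached (a sketch: the socket is only bound locally to show the OS picking an ephemeral source port, and nothing is sent on the network):

```python
import socket

GVCP_PORT = 3956  # fixed GigE Vision control port on the device side

# Binding to port 0 lets the OS assign an ephemeral source port
# (49152-65535 per IANA; some systems use a different range).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 0))
src_port = sock.getsockname()[1]
print(f"control traffic would run from local port {src_port} "
      f"to UDP {GVCP_PORT} on the device")
sock.close()
```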