Common Vision Blox 15.0
Calibration of Linescan Cameras

Introduction

For high-precision applications (e.g. pick and place), images acquired by linescan cameras have to be calibrated. To this end, both the sensor direction (including the lens distortion errors) and the direction of movement (including the encoder step) need to be considered.

The following sections describe the theoretical aspects of calibration and provide step-by-step guidance on calibrating using CVB.

Setup and Prerequisites

Before starting the software calibration, the camera hardware needs to be set up correctly and the device temperature should already be at its operating point. Once these prerequisites are met, images containing a calibration target can be acquired. The main part of the linescan calibration is the correction of errors induced by lens distortion. These errors can be approximated by a 3rd order polynomial as described in detail in section Lens Distortion Calibration. For the estimation of the correction parameters, a pattern with alternating black and white stripes can be used, where the width of the stripes is precisely known. Ensure that the stripes are oriented perpendicular to the scan direction. The provided reference width corresponds to the stripe width as seen in the image scan direction. If the stripes are tilted, the reference width has to be recalculated based on the tilt angle.

After applying the correction for the lens distortion, coordinates along the sensor line become metric. In order to get square pixels, where the x- and y-axis share the same units, the direction of movement also has to be calibrated by a scaling factor. For this, two circular markers with known positions can be used (see dots in the figures below). Note that two separate acquisitions can be used for the calibration: one with the markers and one with the stripe pattern. However, it is essential to ensure that both images have identical widths.

There are two distinct configurations, depending on whether the scan line of the sensor is stored as a row in the image, spanning all columns (scan direction along the x-axis), or as a column, spanning all rows (scan direction along the y-axis).

ATTENTION: Please note that on some linescan cameras the user can set a parameter called "Scan Direction", which refers to the direction of movement! The "Scan Direction" parameter of the algorithm in Common Vision Blox is not related to the "Scan Direction" of the linescan camera.

[Figure: Scan direction in X]
[Figure: Scan direction in Y]

The functionality of the linescan camera calibration is implemented in the CVCFoundation.dll. To execute the code snippets provided in this documentation, the following DLLs are additionally required:

To utilize this functionality, the following namespaces and include files are required:
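For example, the Python snippets below assume that the cvb and cvb.foundation modules are imported (a minimal sketch; the corresponding includes and namespaces for C++ and C# can be found in the respective API documentation):

import cvb             # core types such as cvb.Image, cvb.Rect and cvb.Point2D
import cvb.foundation  # linescan calibration functionality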

Calibration

The results of the calibration will be stored as a 3rd order polynomial in the CVB transformation object:

The coefficients stored in this object represent a 3rd order polynomial defined as follows:

x' = a1 * x^3 + a2 * x^2 * y + a3 * x * y^2 + a4 * y^3 + a5 * x^2 + a6 * x * y + a7 * y^2 + a8 * x + a9 * y + a10

y' = b1 * x^3 + b2 * x^2 * y + b3 * x * y^2 + b4 * y^3 + b5 * x^2 + b6 * x * y + b7 * y^2 + b8 * x + b9 * y + b10

where
x: pixel position in x (column),
y: pixel position in y (row),
a1-a10: coefficients in x,
b1-b10: coefficients in y,
x' and y': transformed x and y coordinates.

Note that most of the coefficients are zero for the linescan calibration. They are described in detail in the following two sections.
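To illustrate how the coefficients act, the following minimal Python sketch evaluates the full 3rd order polynomial for a single pixel position. The coefficient values are purely hypothetical; in a real calibration they come from the CVB transformation object:

# Evaluate x' = a1*x^3 + a2*x^2*y + ... + a10 for one pixel position.
def eval_poly3(c, x, y):
    # c is a list [c1, ..., c10] matching the coefficient order above
    return (c[0]*x**3 + c[1]*x**2*y + c[2]*x*y**2 + c[3]*y**3 +
            c[4]*x**2 + c[5]*x*y + c[6]*y**2 + c[7]*x + c[8]*y + c[9])

# Hypothetical coefficients for scan direction in X: only a1, a5, a8, a10 are non-zero.
a = [1.2e-9, 0.0, 0.0, 0.0, -3.4e-6, 0.0, 0.0, 0.05, 0.0, 0.0]
print(eval_poly3(a, 1000.0, 0.0))  # transformed x coordinate of column 1000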

Lens Distortion Calibration

The main part of the linescan calibration is the correction of errors induced by lens distortion. These errors can be approximated by a 3rd order polynomial, neglecting mixed coefficients. Depending on the scan direction depicted in the figures above, the coefficients correcting the lens distortion are stored in the x or y coefficients of the CVB transformation object. From the equation above, it follows for the corresponding scan direction:

Scan direction in X:     x' = a1 * x^3 + a5 * x^2 + a8 * x + a10
Scan direction in Y:     y' = b4 * y^3 + b7 * y^2 + b9 * y + b10

For the estimation of the correction parameters, a pattern with alternating black and white stripes of known width can be used. The pattern should be designed as described in section Setup and Prerequisites. The stripes of the target must first be detected in the image, as described in the following section.

Stripe Target Detection

For the detection of the stripes, the CVB Edge Tool is used internally. The edges are detected using the contrast method. The following code example detects the stripes in an image. The parameters to be set are described in detail below.

// load image with stripes
auto imgfile = ...
auto image = Cvb::Image::Load(imgfile);
// configure detection
auto scanDirection = Cvb::Foundation::CalibrationLineScan::ScanDirection::X;
int numStripes = ...
double threshold = ...
// define AOI including only areas with stripes
auto area = Cvb::Rect<int>(0, 0, image->Width()-1, image->Height()-1); // left, top, right, bottom
// detect edges
auto detectedEdges = Cvb::Foundation::CalibrationLineScan::DetectEdgesOfStripeTarget(*image, area, numStripes, scanDirection, threshold);

// load image with stripes
var image = Image.FromFile(...);
// configure detection
var scanDirection = ScanDirection.X;
int numStripes = ...
double threshold = ...
// define AOI including only areas with stripes
var area = new Rect(0, 0, image.Width - 1, image.Height - 1);
// detect edges
var detectedEdges = CalibrationLineScan.DetectEdgesOfStripeTarget(image, area, numStripes, scanDirection, threshold);

# load image with stripe target
imgfile = ...
image = cvb.Image.load(imgfile)
# configure detection
scan_direction = cvb.foundation.ScanDirection.X
num_stripes = ...
threshold = ...
# define AOI including only areas with stripes
area = cvb.Rect.create(0, 0, image.width-1, image.height-1) # left, top, right, bottom
# detect edges
detected_edges = cvb.foundation.detect_edges_of_stripe_target(image, area, num_stripes, scan_direction, threshold)

After loading the image with the stripe target, the following input parameters need to be set:

  • The area of interest must only cover areas with stripes. Since the reference width corresponds to the stripe width as seen in the image scan direction, the stripes have to be oriented perpendicular to the scan direction. If they are tilted, the reference width has to be recalculated considering the tilt angle φ: ref_width_tilted = ref_width / cos(φ), where ref_width is the actual width of the stripes (see the sketch after this list).

  • The number of stripes included by the target must be specified. Note, that this value is only used for memory allocation. It does not necessarily have to precisely match the number of stripes, but it must be equal to or greater than the actual number of stripes.
  • The scan direction must be set correctly. It can be along the x- or the y-axis as depicted in the figures above.
  • Threshold for the gray value change: detailed information can be found in the Edge Tool documentation. A good starting value is 20; experiment to determine the value that yields satisfactory results.
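The tilt correction mentioned in the first bullet can be checked numerically. A minimal Python sketch with hypothetical values:

# Recalculate the reference width for tilted stripes (values are made up for illustration)
import math
ref_width = 5.0                       # actual stripe width in [mm]
phi = math.radians(10.0)              # tilt angle of the stripes
ref_width_tilted = ref_width / math.cos(phi)
print(ref_width_tilted)               # approx. 5.077 mm, the width seen in scan direction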

The edge detection is done line by line. Only lines in which the same number of edges is found are stored in the result object.
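Conceptually, this filtering can be pictured as keeping only the lines whose edge count matches the most common count. The following Python sketch illustrates the idea; it is not the CVB implementation:

# Conceptual illustration of the line filtering described above (not the CVB implementation)
from collections import Counter

edge_counts = [12, 12, 11, 12, 12, 13, 12]            # hypothetical edges found per scan line
expected = Counter(edge_counts).most_common(1)[0][0]   # most common edge count (here: 12)
kept_lines = [i for i, n in enumerate(edge_counts) if n == expected]
print(kept_lines)  # indices of the lines that would be stored in the result object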

Direction of Movement Calibration

If square pixels with accurate metric values are needed, the direction of movement has to be calibrated, too. If the encoder step (velocity of the camera or object movement) is precisely known, you can use this value for the calibration as outlined in section Special Case: Known Encoder Step, which represents the simpler scenario.

In some cases, metric values along the y-axis might not be necessary. In such cases, users can also follow the steps outlined in Special Case: Known Encoder Step using a fictive encoder step.

In most cases, however, you will have to calibrate the direction of movement. For this, an image with two circular markers has to be acquired. Note that it is essential to ensure that the marker image and the stripe image have identical widths. The distance between the two markers has to be precisely known. Section Calibration Point Detection describes how to detect the calibration points in the image.

The calibration of the direction of movement represents only a scaling factor. From the equation that describes the calibration polynomial, we derive the following for the corresponding scan direction:

Scan direction in X:     y' = b9 * y
Scan direction in Y:     x' = a8 * x
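Intuitively, this scaling factor is the ratio of the known metric distance between the two markers to their distance in the image. A minimal Python sketch with hypothetical values (in practice the factor is estimated internally by the calibration, see section Calibration Coefficients Estimation):

# Sketch: the movement scaling factor as ratio of metric to pixel distance
# (illustration only; CVB estimates this during calibration)
ref_dist = 100.0           # known metric distance between the two markers in [mm]
p1_y, p2_y = 52.3, 1460.8  # hypothetical marker positions along the movement axis in [pixels]
b9 = ref_dist / abs(p2_y - p1_y)  # scale for scan direction in X: y' = b9 * y
print(b9)                  # [mm] per pixel along the direction of movement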

Calibration Point Detection

For the detection of the calibration points, a blob search is conducted. If more than two blobs are found, the two outermost blobs (closest to the edges of the image) are used. The following code snippet shows how to detect the calibration points with CVB:

// load image with two calibration points
auto imgfile = ...
auto image = Cvb::Image::Load(imgfile);
// define AOI
auto areaPoints = Cvb::Area2D(Cvb::Rect<double>(0, 0, image->Width() - 1, image->Height() - 1)); // left, top, right, bottom
// detect calibration points
auto scanDirection = Cvb::Foundation::CalibrationLineScan::ScanDirection::X;
auto pointSize = Cvb::ValueRange<double>(..., ...); // minimum and maximum point size in [number of pixels]
auto calPoints = Cvb::Foundation::CalibrationLineScan::CalculateTwoPointsForCalibrationOfMovement(
    image->Plane(0), areaPoints, Cvb::Foundation::CalibrationPatternContrast::WhiteOnBlack, 80,
    pointSize, scanDirection);
auto p1 = calPoints.first;
auto p2 = calPoints.second;
std::cout << "extracted top point (x,y) : " << p1.X() << "," << p1.Y() << "\n";
std::cout << "extracted bottom point (x,y): " << p2.X() << "," << p2.Y() << "\n";

// load image with two calibration points
var image = Image.FromFile(...);
// define AOI
var areaPoints = new Area2D(new Rect(0, 0, image.Width - 1, image.Height - 1));
// detect calibration points
var scanDirection = ScanDirection.X;
var pointSize = new ValueRange<double>(..., ...); // minimum and maximum point size in [number of pixels]
var calibrationPoints = CalibrationLineScan.CalculateTwoPointsForCalibrationOfMovement(
    image.Planes[0], areaPoints, CalibrationPatternContrast.WhiteOnBlack, 80,
    pointSize, scanDirection);
var p1 = calibrationPoints[0];
var p2 = calibrationPoints[1];
Console.Write("extracted top point (x,y) : " + p1.X + "," + p1.Y + "\n");
Console.Write("extracted bottom point (x,y): " + p2.X + "," + p2.Y + "\n");

# load image with two calibration points
imgfile = ...
image = cvb.Image.load(imgfile)
# define AOI
area_points = cvb.Area2D.create(cvb.Rect.create(0, 0, image.width-1, image.height-1)) # left, top, right, bottom
# detect calibration points
scan_direction = cvb.foundation.ScanDirection.X
point_size = cvb.NumberRange(..., ...) # minimum and maximum point size in [number of pixels]
cal_points = cvb.foundation.calculate_two_points_for_calibration_of_movement(
    image.planes[0],
    area_points,
    cvb.foundation.CalibrationPatternContrast.BlackOnWhite, 80,
    point_size,
    scan_direction
)
P1 = cal_points[0]
P2 = cal_points[1]
print(f"extracted top or left point (x,y) : {P1.x},{P1.y}")
print(f"extracted bottom or right point (x,y): {P2.x},{P2.y}")

After loading the image with the calibration points, the following input parameters have to be set:

  • Area of interest containing the calibration points: In this example the whole image is used. If the image includes disturbing background elements, consider narrowing the AOI to focus solely on the area containing the calibration points.
  • The type of the calibration pattern, which can be "black markers on white background" or vice versa.
  • The minimum gray value contrast between the object and the background of the calibration points. The optimal value depends on the quality of the image taken. A good value could be 80.
  • The minimum and maximum size of the markers in the image in [number of pixels].
  • The scan direction must be set correctly. It can be along the x- or the y-axis as depicted in the figures above.

Special Case: Known Encoder Step

If the encoder step of your setup is precisely known (e.g. in [mm/scanline]), you do not need to acquire an image with calibration points. You can manually define fictive calibration points and the corresponding reference distance. Be mindful to consistently use the same units (in this example [mm]).

double encoderStep = ... // mm/scanline
auto p1 = Cvb::Point2D<double>(0, 0); // first point
auto p2 = Cvb::Point2D<double>(0, 1); // second point
auto refDist = encoderStep; // reference distance between calibration points in [mm]

double encoderStep = ... // mm/scanline
var p1 = new Point2Dd(0, 0); // first point
var p2 = new Point2Dd(0, 1); // second point
var refDist = encoderStep; // reference distance between calibration points in [mm]

encoder_step = ... # in mm/scanline
P1 = cvb.Point2D(0,0) # first point
P2 = cvb.Point2D(0,1) # second point
ref_dist = encoder_step # reference distance between calibration points in [mm]

Calibration Coefficients Estimation

After you successfully created an object containing the result of the stripe detection and calculated the position of the calibration points in the image, you can start the calibration:

// distance between calibration points in [mm]
double refDist = ...
// width of stripes in [mm]
double refWidth = ...
// configure calibration
auto scanDirection = ...
Cvb::Foundation::CalibrationLineScan::LineScanCalibrationConfiguration configuration;
configuration.SetScanDirection(scanDirection);
// estimate calibration
auto linescanCalibrator = Cvb::Foundation::CalibrationLineScan::CreateLineScanCalibration(
    p1, p2, refDist, detectedEdges, refWidth, configuration);
std::cout << "--- Results linescan calibration: ---\n";
std::cout << "mean error and stdev: " << linescanCalibrator->MeanError() << " mm / " << linescanCalibrator->StandardDeviation() << " mm\n";
std::cout << "pixel_size after calibration: " << linescanCalibrator->PixelSize() << "\n";
// save calibration
linescanCalibrator->Transformation()->Save("linescan_calibrator.nlt");

// distance between calibration points in [mm]
double refDist = ...
// width of stripes in [mm]
double refWidth = ...
// configure calibration
var scanDirection = ...
var configuration = new LineScanCalibrationConfiguration();
configuration.ScanDirection = scanDirection;
// estimate calibration
var linescanCalibrator = CalibrationLineScan.CreateLineScanCalibration(
    p1, p2, refDist, detectedEdges, refWidth, configuration);
Console.Write("--- Results linescan calibration: ---\n");
Console.Write("mean error and stdev: " + linescanCalibrator.MeanError + " mm / " + linescanCalibrator.StandardDeviation + " mm\n");
Console.Write("pixel_size after calibration: " + linescanCalibrator.PixelSize + "\n");
// save calibration
linescanCalibrator.Transformation.Save("linescan_calibrator.nlt");

# reference distance between calibration points in [mm]
ref_dist = ...
# reference width of stripes in [mm]
ref_width = ...
# configure calibration
configuration = cvb.foundation.LineScanCalibrationConfiguration()
configuration.scan_direction = ...
# estimate calibration
linescan_calibrator = cvb.foundation.create_line_scan_calibration(
    P1, P2, ref_dist, detected_edges, ref_width, configuration)
print("--- Calibration results: ---")
print(f"mean error and stdev: {linescan_calibrator.mean_error} mm / {linescan_calibrator.standard_deviation} mm")
print(f"pixel_size after calibration: {linescan_calibrator.pixel_size}")
print(f"coeff X: {linescan_calibrator.transformation.coefficients_x}")
print(f"coeff Y: {linescan_calibrator.transformation.coefficients_y}")
# save calibration
linescan_calibrator.transformation.save("linescan_calibrator.nlt")

If you additionally want to fix the pixel size of the transformed image, you can configure that via the linescan calibration configuration object: set the "predefined pixel size mode" to "use" and specify the desired value for the "pixel size".
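A minimal Python sketch of this configuration, assuming the property names predefined_pixel_size_mode and pixel_size and the enum PredefinedPixelSizeMode (check the API documentation of your CVB version for the exact identifiers):

# Fix the pixel size of the transformed image (property names are assumptions)
configuration = cvb.foundation.LineScanCalibrationConfiguration()
configuration.scan_direction = cvb.foundation.ScanDirection.X
configuration.predefined_pixel_size_mode = cvb.foundation.PredefinedPixelSizeMode.Use
configuration.pixel_size = 0.05  # desired pixel size, same units as the reference values (e.g. [mm])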

The reference distance between the calibration points and the width of the stripes have to be precisely known in metric units. They must be provided in the same units, as must the pixel size (if specified). Again, the scan direction must be set as depicted in the figures above. The resulting linescan calibrator object includes some error statistics, the pixel size after the calibration and the coefficients in X and Y.

Calibrated Image Creation

As the calibrated image should only encompass areas with values, suitable target dimensions have to be calculated. To determine the new width, height and offset of the calibrated image, you should transform the original AOI (defined by the width and height of the original/uncalibrated image). With the following code the target dimensions and offset can be calculated:

auto image = ... // original image
// original image dimensions (AOI)
auto aoiOrg = Cvb::Rect<double>(0, 0, static_cast<double>(image->Width() - 1), static_cast<double>(image->Height() - 1));
// transform AOI (double)
auto aoiD = linescanCalibrator->Transformation()->Transform(aoiOrg);
// round AOI (int)
auto aoi = Cvb::Rect<int>(
Cvb::Point2D<int>(static_cast<int>(round(aoiD.Location().X())), static_cast<int>(round(aoiD.Location().Y()))),
Cvb::Size2D<int>(static_cast<int>(round(aoiD.Width())), static_cast<int>(round(aoiD.Height()))));
std::cout << "--- Dimensions of calibrated image:---\n";
std::cout << "pixel_size after calibration: " << linescanCalibrator.PixelSize() << "\n";
std::cout << "w x h of original image : " << image->Width() << " x " << image->Height() << "\n";
std::cout << "w x h of calibrated image : " << aoi.Width() << " x " << aoi.Height() << "\n";
std::cout << "offset x,y : " << aoi.Left() << "," << aoi.Top() << "\n\n";

var image = ... // original image
// original image dimensions (AOI)
var aoiOrg = new RectD(0, 0, image.Width - 1, image.Height - 1);
// transform AOI (double)
var aoiD = linescanCalibrator.Transformation.Transform(aoiOrg);
// round AOI (int)
var aoi = new Rect(
new Point2D((int) Math.Round(aoiD.Location.X), (int)Math.Round(aoiD.Location.Y)),
new Size2D((int)Math.Round(aoiD.Width), (int) Math.Round(aoiD.Height)));
Console.Write("--- Dimensions of calibrated image:---\n");
Console.Write("pixel_size after calibration: " + linescanCalibrator.PixelSize + "\n");
Console.Write("w x h of original image : " + image.Width + " x " + image.Height + "\n");
Console.Write("w x h of calibrated image : " + aoi.Width + " x " + aoi.Height + "\n");
Console.Write("offset x,y : " + aoi.Location.X + "," + aoi.Location.Y + "\n");

image = ... # original image
# original image dimensions (AOI)
aoi_org = cvb.Rect.create(0, 0, image.width-1, image.height-1) # left, top, right, bottom
# transform AOI
aoi = linescan_calibrator.transformation.transform_rect(aoi_org)
# target dimensions (rounded to integer values)
target_size = cvb.Size2D(round(aoi.width), round(aoi.height))
target_offset = cvb.Point2D(round(aoi.location.x),round(aoi.location.y))
print("--- Dimensions of calibrated image:---")
print(f"pixel_size after calibration: {linescan_calibrator.pixel_size}")
print(f" w x h: {width} x {height}")
print(f"offset (x,y): {target_offset.x},{target_offset.y}")

Finally, the image can be transformed as follows:

auto calibratedImage = linescanCalibrator->Transformation()->Transform(*image, aoi.Size(), aoi.Location());
calibratedImage->Save("calibrated_image.bmp");

var calibratedImage = linescanCalibrator.Transformation.Transform(image, aoi.Size, aoi.Location);
calibratedImage.Save("calibrated_image.bmp");

image = ... # original image
calibrated_image = linescan_calibrator.transformation.transform_image(image, target_size, target_offset)
calibrated_image.save("calibrated_image.bmp")