| C++ | .NET | Python |
| --- | --- | --- |
| Cvb::Match3D | Stemmer.Cvb.Match3D | cvb.match_3d |
In many manufacturing industries, each product must be checked for completeness and flaws. A common approach is to compare a reference image (the model) to an image of the test object (the part). For this comparison to be meaningful, the two images have to be perfectly aligned, which requires a transformation prior to the comparison step that moves the part to the same position and orientation as the model. For objects in 2D, a translation in x and y and one rotation have to be found. Aligning objects in 3D (point clouds) is more complex: here, a translation in x, y and z and a rotation about the x-, y- and z-axis have to be estimated. This results in 6 degrees of freedom (DoF) for the 3D case, as opposed to 3 DoF in the 2D case.
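To make the 6 DoF concrete, the following NumPy sketch builds a rotation matrix from the three angles and applies the resulting rigid transform to a point cloud. This is an illustration of the underlying math only, not the CVB API; the function names and the Rz·Ry·Rx composition order are assumptions for this example.

```python
import numpy as np

def rotation_from_euler(roll, pitch, yaw):
    """Rotation matrix from angles about x (roll), y (pitch), z (yaw),
    composed as Rz @ Ry @ Rx (an assumed convention for this sketch)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def transform(points, R, t):
    """Apply the 6-DoF rigid transform p -> R @ p + t to an (N, 3) array."""
    return points @ R.T + t
```

The three translation components of t plus the three rotation angles are exactly the 6 DoF the 3D alignment has to estimate.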
The core part of the Match3D tool is the following function, which aligns two point clouds in all 6 DoF: Cvb::Match3D::IcpMatch
In addition, a function is provided that calculates the RMS (root mean square) distance between two point clouds: Cvb::Match3D::DistanceRMS
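One plausible definition of such an RMS measure, sketched here in NumPy for illustration (the exact definition used by Cvb::Match3D::DistanceRMS may differ), is the root mean square of nearest-neighbor distances from one cloud to the other:

```python
import numpy as np

def distance_rms(P, Q):
    """RMS of nearest-neighbor distances from each point in P (N, 3)
    to the cloud Q (M, 3). Brute-force for clarity; a spatial index
    such as a k-d tree would be used in practice."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)  # pairwise squared distances
    return np.sqrt(d2.min(axis=1).mean())
```

A perfectly aligned pair of identical clouds yields 0; residual misalignment shows up as a larger RMS value.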
Note that general functions for handling 3D data (e.g. computing the Euler angles from a rotation matrix and vice versa, transforming point clouds, loading a point cloud from a file, etc.) are implemented in the Core3D.dll.
The details of the alignment algorithm can be found in the section Alignment. To compare the point clouds, they have to be reprojected along the z-axis and their differences have to be computed (see section Comparison of two point clouds).
Before two surfaces can be compared to each other, they have to be properly aligned in 3D. This can be done with the function Cvb::Match3D::IcpMatch, the core part of Match3D. It implements the iterative closest point (ICP) algorithm following Arun K., Huang T., Blostein S. (1987), "Least-squares fitting of two 3-D point sets", IEEE Transactions on Pattern Analysis and Machine Intelligence 9:698–700: a least-squares fit iteratively estimates a translation in x, y and z and a rotation matrix with the angles about the x-, y- and z-axis (roll, pitch and yaw).
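The least-squares step at the heart of each ICP iteration can be sketched as follows. This is the closed-form solution from Arun et al. (1987) for two point sets with known correspondences, written in NumPy purely as an illustration of the math; it is not the CVB implementation.

```python
import numpy as np

def least_squares_fit(P, Q):
    """Arun et al. (1987): find the rigid transform (R, t) minimizing
    sum_i || R @ P[i] + t - Q[i] ||^2 for corresponding (N, 3) point sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in degenerate configurations.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

ICP alternates this fit with re-estimating the point correspondences (each point paired with its current nearest neighbor), which is why a coarse pre-alignment matters: it determines whether the initial correspondences are meaningful.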
Although ICP is a stable and fast algorithm, a coarse pre-alignment of the point clouds is required. For a successful match, both input surfaces must meet the following conditions:
The function Cvb::Match3D::IcpMatch takes as input two point clouds, organized or unorganized, stored as Cvb::PointCloud objects. In addition, several input parameters can be set:
With these parameters, the accuracy of the alignment can be balanced against the computation speed. The output of Cvb::Match3D::IcpMatch is a translation vector (x, y, z) and a rotation matrix, stored in a Cvb::AffineMatrix3D object.
With the function Cvb::Matrix3D::RollPitchYaw, the Euler angles (roll, pitch, yaw) can be computed from the rotation matrix. A rough accuracy assessment of the matching result can be obtained by calculating the RMS distance with the function Cvb::Match3D::DistanceRMS. For the final quality inspection, however, the differences between the model and the transformed point cloud have to be computed. This can be done by reprojecting the point clouds along the z-axis and computing their differences (see section Comparison of two point clouds). The following figures show two point clouds before (top) and after (bottom) the alignment. The blue points represent the model (golden template or reference object) and the red points represent the part to be inspected and aligned.
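Extracting roll, pitch and yaw from a rotation matrix can be sketched as below. This NumPy illustration assumes the common Rz(yaw) @ Ry(pitch) @ Rx(roll) composition; whether Cvb::Matrix3D::RollPitchYaw uses exactly this convention is an assumption here, so consult the API reference before relying on the sign and order of the angles.

```python
import numpy as np

def roll_pitch_yaw(R):
    """Euler angles (roll about x, pitch about y, yaw about z) from a
    rotation matrix assumed composed as R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    Valid away from the gimbal-lock case |pitch| = pi/2."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw
```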
For the final quality inspection, the aligned point cloud has to be compared to the model. Several functions are available for this: in a first step, both point clouds (model and target) must be reprojected along the z-axis with the function Cvb::PointCloud::RangeMap. The output is a 2D image, called a rectified range map. Afterwards, the point differences between both surfaces can be computed using the function Cvb::DifferenceMap, which simply subtracts the previously generated rectified range maps from each other.
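The idea behind these two steps can be sketched in NumPy as follows. This is a simplified illustration of the principle, not the CVB implementation: each (x, y) point is binned onto a regular grid with its z value as the pixel value (real range-map generation additionally interpolates and handles multiple points per cell), and two such maps are then subtracted pixel by pixel.

```python
import numpy as np

def range_map(points, x_edges, y_edges):
    """Reproject an (N, 3) point cloud along z: bin (x, y) onto a regular
    grid defined by the bin edges, storing z per cell (NaN where empty)."""
    nx, ny = len(x_edges) - 1, len(y_edges) - 1
    grid = np.full((ny, nx), np.nan)
    ix = np.clip(np.searchsorted(x_edges, points[:, 0], side='right') - 1, 0, nx - 1)
    iy = np.clip(np.searchsorted(y_edges, points[:, 1], side='right') - 1, 0, ny - 1)
    grid[iy, ix] = points[:, 2]   # last point per cell wins in this simple sketch
    return grid

def difference_map(a, b):
    """Pixel-wise height difference of two rectified range maps
    sampled on the same grid."""
    return a - b
```

Because both maps are sampled on the same grid, deviations between part and model show up directly as non-zero pixels in the difference map.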
The whole process of matching and creating a disparity map is shown in the flow chart below.
A brief example is provided below. Two point clouds are loaded and matched. Afterwards they are converted into rectified range maps and a difference map is created from these images.
An example application can also be found in %cvb%Tutorial\Match3D.