Manto OCX Reference 14.1
Enumerations

enum SearchMode { MC_SearchScene = 0, MC_SearchAll = 1, MC_SearchAll_Refined = 2, MC_ReadToken = 3 }
 The #SearchTask property can be used to define the search mode for the next call to the #Execute method.
 

Enumeration Type Documentation

◆ SearchMode

enum SearchMode

The #SearchTask property can be used to define the search mode for the next call to the #Execute method.

Supported platforms:
Win32
Related Topics:
#SearchTask Property
Enumerator
MC_SearchScene 

In MC_SearchScene mode the Manto OCX will search the whole rectangle defined by #X0, #Y0, #X1 and #Y1 (or the whole image if #Entire is set to TRUE) and return the result with the highest confidence value.

In other words, only one result (the optimum) is reported in this search mode.

Attention
If the classifier knows only two classes (#NumClasses = 2), Manto will use an interpolation method to report a sub-pixel position in this mode. This is the most precise position measurement that Manto is capable of.
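The single-result semantics of MC_SearchScene can be sketched as follows. This is a minimal illustrative model, not the Manto OCX API: the function name and the candidate tuples are assumptions made for the example.

```python
def search_scene(candidates):
    """Sketch of MC_SearchScene semantics: among all candidate hits
    evaluated inside the search rectangle, report only the one with
    the highest confidence value, or None if nothing was found."""
    if not candidates:
        return None
    # Each candidate is an illustrative (x, y, confidence) tuple.
    return max(candidates, key=lambda hit: hit[2])

hits = [(10, 12, 0.61), (40, 18, 0.97), (70, 25, 0.83)]
print(search_scene(hits))  # the (40, 18, 0.97) hit wins
```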
MC_SearchAll 

In MC_SearchAll mode the Manto OCX will search the whole rectangle defined by #X0, #Y0, #X1 and #Y1 (or the whole image if #Entire is set to TRUE) and return all the results that exceed the previously set #Threshold, while ensuring that the minimum spacing between any two results is >= #Locality.

Attention
In this search mode, positional accuracy is typically comparatively poor, as the result positions are restricted to a grid in the image whose spacing is determined by the length of the preprocessing code used while learning the classifier. However, this is the quickest (and usually sufficiently accurate) way to find all the results available in one image with just a single function call.
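The combination of threshold filtering and minimum spacing described above can be sketched as a greedy suppression pass. This is an illustrative model only, not the Manto OCX API; the function and parameter names are assumptions made for the example.

```python
import math

def search_all(candidates, threshold, locality):
    """Sketch of MC_SearchAll semantics: keep every hit whose
    confidence exceeds `threshold`, while enforcing a minimum
    spacing of `locality` pixels between any two reported results
    (stronger hits suppress weaker nearby ones)."""
    accepted = []
    # Consider stronger hits first so weaker neighbours are suppressed.
    for x, y, conf in sorted(candidates, key=lambda h: -h[2]):
        if conf <= threshold:
            continue
        if all(math.hypot(x - ax, y - ay) >= locality
               for ax, ay, _ in accepted):
            accepted.append((x, y, conf))
    return accepted

hits = [(10, 10, 0.9), (12, 11, 0.8), (60, 10, 0.7), (90, 10, 0.4)]
print(search_all(hits, threshold=0.5, locality=10))
```

The second hit is dropped because it lies closer than `locality` to a stronger one, and the last is dropped because it falls below the threshold.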
MC_SearchAll_Refined 

The MC_SearchAll_Refined mode combines #MC_SearchAll and #MC_SearchScene.

First, a #MC_SearchAll search is carried out. The OCX then post-processes each result found during the #MC_SearchAll run by applying #MC_SearchScene to a small area of interest around it. The size of those areas of interest is determined by the length of the preprocessing code of the classifier: each area is 2^(n+1), where n is the number of characters in the preprocessing code.

That second step will usually lead to a much better positioning of the found objects. However, this comes at the cost of significantly increased processing time.
Attention
It is important to set #Locality to a value bigger than 2^(n+1); otherwise, results from different areas of interest may inadvertently merge into a single result.
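The relationship between the preprocessing-code length, the refinement area size, and the #Locality constraint can be checked with a short calculation. The function names here are illustrative, not part of the Manto OCX API.

```python
def refinement_aoi_size(preprocessing_code):
    """Size of the area of interest used in the MC_SearchScene
    refinement step: 2^(n+1), where n is the number of characters
    in the classifier's preprocessing code."""
    n = len(preprocessing_code)
    return 2 ** (n + 1)

def locality_is_safe(locality, preprocessing_code):
    """Locality must be strictly bigger than 2^(n+1) so that
    neighbouring areas of interest cannot merge into one result."""
    return locality > refinement_aoi_size(preprocessing_code)

print(refinement_aoi_size("abcd"))   # n = 4 -> 32
print(locality_is_safe(40, "abcd"))  # True
print(locality_is_safe(32, "abcd"))  # False: must be strictly bigger
```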
MC_ReadToken 

The MC_ReadToken mode significantly differs from the other search modes.

At the beginning, the MC_ReadToken search will look for the first recognizable object in the rectangle defined by #X0, #Y0, #X1 and #Y1 (or the whole image if #Entire is set to TRUE). Starting at the location of this first hit, it then iterates: a small area of interest, whose size is defined by the #Radius property, is advanced by #AdvanceX and #AdvanceY, and a new search is performed at each shifted area of interest until no more results are found. The results encountered during this iteration are added to the result list, and their ClassIDs (see #ClassID) are concatenated into a result token that is accessible through the #PToken property.
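The iteration described above can be sketched as follows. This is a simplified, illustrative model of the MC_ReadToken strategy, not the Manto OCX API; the function names, the `search_at` callback, and the toy data are all assumptions made for the example.

```python
def read_token(first_hit, advance, search_at, max_steps=1000):
    """Sketch of the MC_ReadToken iteration: starting at the first
    recognized object, repeatedly shift the area of interest by
    (AdvanceX, AdvanceY) and search again until nothing more is
    found, concatenating the ClassIDs into a result token.

    `search_at(x, y)` stands in for a search inside the Radius-sized
    area of interest centred at (x, y); it returns (class_id, x, y)
    for a hit, or None."""
    results = []
    token = ""
    hit = first_hit
    for _ in range(max_steps):
        if hit is None:
            break
        class_id, x, y = hit
        results.append(hit)
        token += str(class_id)
        hit = search_at(x + advance[0], y + advance[1])
    return results, token

# Toy "image": characters at x = 0, 10, 20 on the same row.
chars = {(0, 0): "A", (10, 0): "B", (20, 0): "C"}

def search_at(x, y):
    cid = chars.get((x, y))
    return (cid, x, y) if cid else None

results, token = read_token(("A", 0, 0), advance=(10, 0), search_at=search_at)
print(token)  # "ABC"
```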

Therefore, MC_ReadToken provides a search strategy tailored to reading strings of characters (or, more generally, objects).