Class Cv2

Namespace
OpenCvSharp
Assembly
OpenCvSharp.dll

OpenCV functions of the C++ interface (cv::xxx)

public static class Cv2
Inheritance
Cv2

Fields

FILLED

public const int FILLED = -1

Field Value

int

LOG2

public const double LOG2 = 0.6931471805599453

Field Value

double

PI

The ratio of a circle's circumference to its diameter

public const double PI = 3.141592653589793

Field Value

double

Methods

AGAST(InputArray, int, bool, DetectorType)

Detects corners using the AGAST algorithm

public static KeyPoint[] AGAST(InputArray image, int threshold, bool nonmaxSuppression, AgastFeatureDetector.DetectorType type)

Parameters

image InputArray

grayscale image where keypoints (corners) are detected.

threshold int

threshold on difference between intensity of the central pixel and pixels of a circle around this pixel.

nonmaxSuppression bool

if true, non-maximum suppression is applied to detected corners (keypoints).

type AgastFeatureDetector.DetectorType

one of the four neighborhoods as defined in the paper

Returns

KeyPoint[]

keypoints detected on the image.
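
A minimal usage sketch (the file name "input.png" and the threshold value are illustrative assumptions, not part of the API):

using OpenCvSharp;

// Detect AGAST corners on an 8-bit grayscale image.
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
KeyPoint[] keypoints = Cv2.AGAST(gray, 30, true, AgastFeatureDetector.DetectorType.OAST_9_16);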

Abs(Mat)

Computes absolute value of each matrix element

public static MatExpr Abs(Mat src)

Parameters

src Mat

matrix

Returns

MatExpr

Abs(MatExpr)

Computes absolute value of each matrix element

public static MatExpr Abs(MatExpr src)

Parameters

src MatExpr

matrix expression

Returns

MatExpr

Absdiff(InputArray, InputArray, OutputArray)

Calculates the per-element absolute difference between two arrays or between an array and a scalar.

public static void Absdiff(InputArray src1, InputArray src2, OutputArray dst)

Parameters

src1 InputArray

first input array or a scalar.

src2 InputArray

second input array or a scalar.

dst OutputArray

output array that has the same size and type as input arrays.
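
A minimal usage sketch (the file names are illustrative; both inputs must have the same size and type):

using OpenCvSharp;

// Per-element absolute difference of two frames, e.g. for simple motion detection.
using var frame1 = Cv2.ImRead("frame1.png");
using var frame2 = Cv2.ImRead("frame2.png");
using var diff = new Mat();
Cv2.Absdiff(frame1, frame2, diff);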

Accumulate(InputArray, InputOutputArray, InputArray)

Adds an image to the accumulator.

public static void Accumulate(InputArray src, InputOutputArray dst, InputArray mask)

Parameters

src InputArray

Input image as 1- or 3-channel, 8-bit or 32-bit floating point.

dst InputOutputArray

Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.

mask InputArray

Optional operation mask.

AccumulateProduct(InputArray, InputArray, InputOutputArray, InputArray)

Adds the per-element product of two input images to the accumulator.

public static void AccumulateProduct(InputArray src1, InputArray src2, InputOutputArray dst, InputArray mask)

Parameters

src1 InputArray

First input image, 1- or 3-channel, 8-bit or 32-bit floating point.

src2 InputArray

Second input image of the same type and the same size as src1

dst InputOutputArray

Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point.

mask InputArray

Optional operation mask.

AccumulateSquare(InputArray, InputOutputArray, InputArray)

Adds the square of a source image to the accumulator.

public static void AccumulateSquare(InputArray src, InputOutputArray dst, InputArray mask)

Parameters

src InputArray

Input image as 1- or 3-channel, 8-bit or 32-bit floating point.

dst InputOutputArray

Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.

mask InputArray

Optional operation mask.

AccumulateWeighted(InputArray, InputOutputArray, double, InputArray)

Updates a running average.

public static void AccumulateWeighted(InputArray src, InputOutputArray dst, double alpha, InputArray mask)

Parameters

src InputArray

Input image as 1- or 3-channel, 8-bit or 32-bit floating point.

dst InputOutputArray

Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.

alpha double

Weight of the input image.

mask InputArray

Optional operation mask.
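
A sketch of a running-average background model (the video file name and the alpha value are illustrative assumptions):

using OpenCvSharp;

using var capture = new VideoCapture("video.avi");
using var frame = new Mat();
using var background = new Mat();
while (capture.Read(frame) && !frame.Empty())
{
    using var frame32F = new Mat();
    frame.ConvertTo(frame32F, MatType.CV_32FC3);   // the accumulator must be floating-point
    if (background.Empty())
        frame32F.CopyTo(background);               // initialize the accumulator with the first frame
    else
        Cv2.AccumulateWeighted(frame32F, background, 0.05, null);
}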

AdaptiveThreshold(InputArray, OutputArray, double, AdaptiveThresholdTypes, ThresholdTypes, int, double)

Applies an adaptive threshold to an array.

public static void AdaptiveThreshold(InputArray src, OutputArray dst, double maxValue, AdaptiveThresholdTypes adaptiveMethod, ThresholdTypes thresholdType, int blockSize, double c)

Parameters

src InputArray

Source 8-bit single-channel image.

dst OutputArray

Destination image of the same size and the same type as src .

maxValue double

Non-zero value assigned to the pixels for which the condition is satisfied. See the details below.

adaptiveMethod AdaptiveThresholdTypes

Adaptive thresholding algorithm to use, ADAPTIVE_THRESH_MEAN_C or ADAPTIVE_THRESH_GAUSSIAN_C .

thresholdType ThresholdTypes

Thresholding type that must be either THRESH_BINARY or THRESH_BINARY_INV .

blockSize int

Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.

c double

Constant subtracted from the mean or weighted mean (see the details below). Normally, it is positive but may be zero or negative as well.
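
A minimal usage sketch (the file name, block size, and constant are illustrative assumptions):

using OpenCvSharp;

// Binarize an unevenly lit grayscale scan with a Gaussian-weighted local threshold.
using var src = Cv2.ImRead("scan.png", ImreadModes.Grayscale);
using var dst = new Mat();
Cv2.AdaptiveThreshold(src, dst, 255,
    AdaptiveThresholdTypes.GaussianC, ThresholdTypes.Binary, 11, 2);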

Add(InputArray, InputArray, OutputArray, InputArray?, int)

Computes the per-element sum of two arrays or an array and a scalar.

public static void Add(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null, int dtype = -1)

Parameters

src1 InputArray

The first source array

src2 InputArray

The second source array. It must have the same size and same type as src1

dst OutputArray

The destination array; it will have the same size and same type as src1

mask InputArray

The optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null]

dtype int

AddWeighted(InputArray, double, InputArray, double, double, OutputArray, int)

computes weighted sum of two arrays (dst = alpha*src1 + beta*src2 + gamma)

public static void AddWeighted(InputArray src1, double alpha, InputArray src2, double beta, double gamma, OutputArray dst, int dtype = -1)

Parameters

src1 InputArray
alpha double
src2 InputArray
beta double
gamma double
dst OutputArray
dtype int
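
A minimal usage sketch (the file names and weights are illustrative; both inputs must have the same size and type):

using OpenCvSharp;

// 70/30 blend of two images: dst = 0.7*src1 + 0.3*src2 + 0.
using var src1 = Cv2.ImRead("a.png");
using var src2 = Cv2.ImRead("b.png");
using var dst = new Mat();
Cv2.AddWeighted(src1, 0.7, src2, 0.3, 0, dst);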

AlignSize(int, int)

Aligns a buffer size to the specified number of bytes. This small inline function aligns the buffer size by enlarging it to the next multiple of the specified alignment.

public static int AlignSize(int sz, int n)

Parameters

sz int
n int

Returns

int

ApplyColorMap(InputArray, OutputArray, ColormapTypes)

Applies a GNU Octave/MATLAB equivalent colormap on a given image.

public static void ApplyColorMap(InputArray src, OutputArray dst, ColormapTypes colormap)

Parameters

src InputArray

The source image, grayscale or colored of type CV_8UC1 or CV_8UC3.

dst OutputArray

The result is the colormapped source image. Note: Mat::create is called on dst.

colormap ColormapTypes

colormap The colormap to apply
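
A minimal usage sketch (the file name and the chosen colormap are illustrative assumptions):

using OpenCvSharp;

// Render a single-channel 8-bit image (e.g. a depth map) with the Jet colormap.
using var gray = Cv2.ImRead("depth.png", ImreadModes.Grayscale);
using var colored = new Mat();
Cv2.ApplyColorMap(gray, colored, ColormapTypes.Jet);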

ApplyColorMap(InputArray, OutputArray, InputArray)

Applies a user colormap on a given image.

public static void ApplyColorMap(InputArray src, OutputArray dst, InputArray userColor)

Parameters

src InputArray

The source image, grayscale or colored of type CV_8UC1 or CV_8UC3.

dst OutputArray

The result is the colormapped source image. Note: Mat::create is called on dst.

userColor InputArray

The colormap to apply of type CV_8UC1 or CV_8UC3 and size 256

ApproxPolyDP(InputArray, OutputArray, double, bool)

Approximates contour or a curve using Douglas-Peucker algorithm

public static void ApproxPolyDP(InputArray curve, OutputArray approxCurve, double epsilon, bool closed)

Parameters

curve InputArray

The polygon or curve to approximate. Must be 1 x N or N x 1 matrix of type CV_32SC2 or CV_32FC2.

approxCurve OutputArray

The result of the approximation; The type should match the type of the input curve

epsilon double

Specifies the approximation accuracy. This is the maximum distance between the original curve and its approximation.

closed bool

If true, the approximated curve is closed (its first and last vertices are connected); otherwise, it is not closed.

ApproxPolyDP(IEnumerable<Point2f>, double, bool)

Approximates contour or a curve using Douglas-Peucker algorithm

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Point2f[] ApproxPolyDP(IEnumerable<Point2f> curve, double epsilon, bool closed)

Parameters

curve IEnumerable<Point2f>

The polygon or curve to approximate.

epsilon double

Specifies the approximation accuracy. This is the maximum distance between the original curve and its approximation.

closed bool

If true, the approximated curve is closed (i.e. its first and last vertices are connected), otherwise it’s not

Returns

Point2f[]

The result of the approximation; The type should match the type of the input curve

ApproxPolyDP(IEnumerable<Point>, double, bool)

Approximates contour or a curve using Douglas-Peucker algorithm

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Point[] ApproxPolyDP(IEnumerable<Point> curve, double epsilon, bool closed)

Parameters

curve IEnumerable<Point>

The polygon or curve to approximate.

epsilon double

Specifies the approximation accuracy. This is the maximum distance between the original curve and its approximation.

closed bool

If true, the approximated curve is closed (its first and last vertices are connected); otherwise, it is not closed.

Returns

Point[]

The result of the approximation; The type should match the type of the input curve
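
A sketch that simplifies the first contour found in a binary image; it also uses FindContours and ArcLength, and the file name and the 1% tolerance are illustrative assumptions:

using OpenCvSharp;

using var binary = Cv2.ImRead("mask.png", ImreadModes.Grayscale);
Cv2.FindContours(binary, out Point[][] contours, out _,
    RetrievalModes.External, ContourApproximationModes.ApproxSimple);
// Assumes at least one contour was found.
double epsilon = 0.01 * Cv2.ArcLength(contours[0], true);
Point[] approx = Cv2.ApproxPolyDP(contours[0], epsilon, true);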

ArcLength(InputArray, bool)

Calculates a contour perimeter or a curve length.

public static double ArcLength(InputArray curve, bool closed)

Parameters

curve InputArray

The input vector of 2D points, represented by CV_32SC2 or CV_32FC2 matrix.

closed bool

Indicates, whether the curve is closed or not.

Returns

double

ArcLength(IEnumerable<Point2f>, bool)

Calculates a contour perimeter or a curve length.

public static double ArcLength(IEnumerable<Point2f> curve, bool closed)

Parameters

curve IEnumerable<Point2f>

The input vector of 2D points.

closed bool

Indicates, whether the curve is closed or not.

Returns

double

ArcLength(IEnumerable<Point>, bool)

Calculates a contour perimeter or a curve length.

public static double ArcLength(IEnumerable<Point> curve, bool closed)

Parameters

curve IEnumerable<Point>

The input vector of 2D points.

closed bool

Indicates, whether the curve is closed or not.

Returns

double

ArrowedLine(InputOutputArray, Point, Point, Scalar, int, LineTypes, int, double)

Draws an arrow segment pointing from the first point to the second one. The function ArrowedLine draws an arrow between the pt1 and pt2 points in the image. See also cv::line.

public static void ArrowedLine(InputOutputArray img, Point pt1, Point pt2, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0, double tipLength = 0.1)

Parameters

img InputOutputArray

Image.

pt1 Point

The point the arrow starts from.

pt2 Point

The point the arrow points to.

color Scalar

Line color.

thickness int

Line thickness.

lineType LineTypes

Type of the line, see cv::LineTypes

shift int

Number of fractional bits in the point coordinates.

tipLength double

The length of the arrow tip in relation to the arrow length
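
A minimal usage sketch (the canvas size, points, and colors are illustrative assumptions):

using OpenCvSharp;

// Draw an anti-aliased arrow on a white canvas.
using var canvas = new Mat(400, 400, MatType.CV_8UC3, Scalar.All(255));
Cv2.ArrowedLine(canvas, new Point(50, 200), new Point(350, 200), Scalar.Red,
    thickness: 2, lineType: LineTypes.AntiAlias, tipLength: 0.05);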

BatchDistance(InputArray, InputArray, OutputArray, int, OutputArray, NormTypes, int, InputArray?, int, bool)

naive nearest neighbor finder

public static void BatchDistance(InputArray src1, InputArray src2, OutputArray dist, int dtype, OutputArray nidx, NormTypes normType = NormTypes.L2, int k = 0, InputArray? mask = null, int update = 0, bool crosscheck = false)

Parameters

src1 InputArray
src2 InputArray
dist OutputArray
dtype int
nidx OutputArray
normType NormTypes
k int
mask InputArray
update int
crosscheck bool

BilateralFilter(InputArray, OutputArray, int, double, double, BorderTypes)

Applies bilateral filter to the image

public static void BilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

The source 8-bit or floating-point, 1-channel or 3-channel image

dst OutputArray

The destination image; will have the same size and the same type as src

d int

The diameter of each pixel neighborhood, that is used during filtering. If it is non-positive, it's computed from sigmaSpace

sigmaColor double

Filter sigma in the color space. Larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together, resulting in larger areas of semi-equal color

sigmaSpace double

Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other (as long as their colors are close enough; see sigmaColor). When d>0, it specifies the neighborhood size regardless of sigmaSpace; otherwise d is proportional to sigmaSpace.

borderType BorderTypes
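
A minimal usage sketch (the file name and sigma values are illustrative assumptions):

using OpenCvSharp;

// Edge-preserving smoothing with a 9-pixel neighborhood.
using var src = Cv2.ImRead("noisy.jpg");
using var dst = new Mat();
Cv2.BilateralFilter(src, dst, 9, 75, 75);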

BitwiseAnd(InputArray, InputArray, OutputArray, InputArray?)

computes bitwise conjunction of the two arrays (dst = src1 & src2)

public static void BitwiseAnd(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null)

Parameters

src1 InputArray

first input array or a scalar.

src2 InputArray

second input array or a scalar.

dst OutputArray

output array that has the same size and type as the input

mask InputArray

optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed.
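
A sketch that keeps only a circular region of an image by AND-ing it with itself under a mask (the file name and circle geometry are illustrative assumptions):

using OpenCvSharp;

using var src = Cv2.ImRead("photo.jpg");
using var mask = new Mat(src.Size(), MatType.CV_8UC1, Scalar.All(0));
Cv2.Circle(mask, new Point(src.Width / 2, src.Height / 2), 100, Scalar.All(255), -1);
using var masked = new Mat();
Cv2.BitwiseAnd(src, src, masked, mask);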

BitwiseNot(InputArray, OutputArray, InputArray?)

inverts each bit of array (dst = ~src)

public static void BitwiseNot(InputArray src, OutputArray dst, InputArray? mask = null)

Parameters

src InputArray

input array.

dst OutputArray

output array that has the same size and type as the input

mask InputArray

optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed.

BitwiseOr(InputArray, InputArray, OutputArray, InputArray?)

computes bitwise disjunction of the two arrays (dst = src1 | src2)

public static void BitwiseOr(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null)

Parameters

src1 InputArray

first input array or a scalar.

src2 InputArray

second input array or a scalar.

dst OutputArray

output array that has the same size and type as the input

mask InputArray

optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed.

BitwiseXor(InputArray, InputArray, OutputArray, InputArray?)

computes bitwise exclusive-or of the two arrays (dst = src1 ^ src2)

public static void BitwiseXor(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null)

Parameters

src1 InputArray

first input array or a scalar.

src2 InputArray

second input array or a scalar.

dst OutputArray

output array that has the same size and type as the input

mask InputArray

optional operation mask, 8-bit single channel array, that specifies elements of the output array to be changed.

BlendLinear(InputArray, InputArray, InputArray, InputArray, OutputArray)

Performs linear blending of two images: dst(i,j) = weights1(i,j)*src1(i,j) + weights2(i,j)*src2(i,j)

public static void BlendLinear(InputArray src1, InputArray src2, InputArray weights1, InputArray weights2, OutputArray dst)

Parameters

src1 InputArray

It has a type of CV_8UC(n) or CV_32FC(n), where n is a positive integer.

src2 InputArray

It has the same type and size as src1.

weights1 InputArray

It has a type of CV_32FC1 and the same size as src1.

weights2 InputArray

It has a type of CV_32FC1 and the same size as src1.

dst OutputArray

It is created if it does not have the same size and type as src1.

Blur(InputArray, OutputArray, Size, Point?, BorderTypes)

Smoothes image using normalized box filter

public static void Blur(InputArray src, OutputArray dst, Size ksize, Point? anchor = null, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

The source image

dst OutputArray

The destination image; will have the same size and the same type as src

ksize Size

The smoothing kernel size

anchor Point?

The anchor point. The default value Point(-1,-1) means that the anchor is at the kernel center

borderType BorderTypes

The border mode used to extrapolate pixels outside of the image

BorderInterpolate(int, int, BorderTypes)

Computes the source location of an extrapolated pixel.

public static int BorderInterpolate(int p, int len, BorderTypes borderType)

Parameters

p int

0-based coordinate of the extrapolated pixel along one of the axes, likely <0 or >= len

len int

Length of the array along the corresponding axis.

borderType BorderTypes

Border type, one of the BorderTypes, except for BORDER_TRANSPARENT and BORDER_ISOLATED. When borderType==BORDER_CONSTANT, the function always returns -1, regardless of p and len.

Returns

int

BoundingRect(InputArray)

Calculates the up-right bounding rectangle of a point set.

public static Rect BoundingRect(InputArray curve)

Parameters

curve InputArray

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

Returns

Rect

Minimal up-right bounding rectangle for the specified point set.
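
A sketch that computes the bounding box of all non-zero pixels of a binary mask; it also uses FindNonZero, and the file name is an illustrative assumption:

using OpenCvSharp;

using var mask = Cv2.ImRead("mask.png", ImreadModes.Grayscale);
using var nonZero = new Mat();
Cv2.FindNonZero(mask, nonZero);        // N x 1 point set of type CV_32SC2
Rect box = Cv2.BoundingRect(nonZero);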

BoundingRect(IEnumerable<Point2f>)

Calculates the up-right bounding rectangle of a point set.

public static Rect BoundingRect(IEnumerable<Point2f> curve)

Parameters

curve IEnumerable<Point2f>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

Returns

Rect

Minimal up-right bounding rectangle for the specified point set.

BoundingRect(IEnumerable<Point>)

Calculates the up-right bounding rectangle of a point set.

public static Rect BoundingRect(IEnumerable<Point> curve)

Parameters

curve IEnumerable<Point>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

Returns

Rect

Minimal up-right bounding rectangle for the specified point set.

BoxFilter(InputArray, OutputArray, MatType, Size, Point?, bool, BorderTypes)

Smoothes image using box filter

public static void BoxFilter(InputArray src, OutputArray dst, MatType ddepth, Size ksize, Point? anchor = null, bool normalize = true, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

The source image

dst OutputArray

The destination image; will have the same size and the same type as src

ddepth MatType
ksize Size

The smoothing kernel size

anchor Point?

The anchor point. The default value Point(-1,-1) means that the anchor is at the kernel center

normalize bool

Indicates, whether the kernel is normalized by its area or not

borderType BorderTypes

The border mode used to extrapolate pixels outside of the image

BoxPoints(RotatedRect)

Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.

The function finds the four vertices of a rotated rectangle. This function is useful to draw the rectangle. In C++, instead of using this function, you can directly use the RotatedRect::points method. See the OpenCV tutorial "Creating bounding rotated boxes and ellipses for contours" for more information.

public static Point2f[] BoxPoints(RotatedRect box)

Parameters

box RotatedRect

The input rotated rectangle. It may be the output of MinAreaRect().

Returns

Point2f[]

The output array of four vertices of rectangles.
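
A minimal usage sketch (the point coordinates are illustrative; MinAreaRect is used to obtain the rotated rectangle):

using OpenCvSharp;

Point2f[] pts =
{
    new Point2f(10, 10), new Point2f(80, 30), new Point2f(60, 90), new Point2f(15, 70),
};
RotatedRect rect = Cv2.MinAreaRect(pts);
Point2f[] corners = Cv2.BoxPoints(rect);   // four vertices, e.g. for drawing with Polylines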

BoxPoints(RotatedRect, OutputArray)

Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.

The function finds the four vertices of a rotated rectangle. This function is useful to draw the rectangle. In C++, instead of using this function, you can directly use the RotatedRect::points method. See the OpenCV tutorial "Creating bounding rotated boxes and ellipses for contours" for more information.

public static void BoxPoints(RotatedRect box, OutputArray points)

Parameters

box RotatedRect

The input rotated rectangle. It may be the output of MinAreaRect().

points OutputArray

The output array of four vertices of rectangles.

BuildOpticalFlowPyramid(InputArray, out Mat[], Size, int, bool, BorderTypes, BorderTypes, bool)

Constructs a pyramid which can be used as input for calcOpticalFlowPyrLK

public static int BuildOpticalFlowPyramid(InputArray img, out Mat[] pyramid, Size winSize, int maxLevel, bool withDerivatives = true, BorderTypes pyrBorder = BorderTypes.Default, BorderTypes derivBorder = BorderTypes.Constant, bool tryReuseInputImage = true)

Parameters

img InputArray

8-bit input image.

pyramid Mat[]

output pyramid.

winSize Size

window size of the optical flow algorithm. It must not be less than the winSize argument of calcOpticalFlowPyrLK() and is needed to calculate the required padding for the pyramid levels.

maxLevel int

0-based maximal pyramid level number.

withDerivatives bool

set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, calcOpticalFlowPyrLK() will calculate them internally.

pyrBorder BorderTypes

the border mode for pyramid layers.

derivBorder BorderTypes

the border mode for gradients.

tryReuseInputImage bool

put ROI of input image into the pyramid if possible. You can pass false to force data copying.

Returns

int

number of levels in constructed pyramid. Can be less than maxLevel.

BuildOpticalFlowPyramid(InputArray, OutputArray, Size, int, bool, BorderTypes, BorderTypes, bool)

Constructs a pyramid which can be used as input for calcOpticalFlowPyrLK

public static int BuildOpticalFlowPyramid(InputArray img, OutputArray pyramid, Size winSize, int maxLevel, bool withDerivatives = true, BorderTypes pyrBorder = BorderTypes.Default, BorderTypes derivBorder = BorderTypes.Constant, bool tryReuseInputImage = true)

Parameters

img InputArray

8-bit input image.

pyramid OutputArray

output pyramid.

winSize Size

window size of the optical flow algorithm. It must not be less than the winSize argument of calcOpticalFlowPyrLK() and is needed to calculate the required padding for the pyramid levels.

maxLevel int

0-based maximal pyramid level number.

withDerivatives bool

set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, calcOpticalFlowPyrLK() will calculate them internally.

pyrBorder BorderTypes

the border mode for pyramid layers.

derivBorder BorderTypes

the border mode for gradients.

tryReuseInputImage bool

put ROI of input image into the pyramid if possible. You can pass false to force data copying.

Returns

int

number of levels in constructed pyramid. Can be less than maxLevel.

BuildPyramid(InputArray, VectorOfMat, int, BorderTypes)

public static void BuildPyramid(InputArray src, VectorOfMat dst, int maxlevel, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray
dst VectorOfMat
maxlevel int
borderType BorderTypes

Exceptions

ArgumentNullException

CalcBackProject(Mat[], int[], InputArray, OutputArray, Rangef[], bool)

computes the joint dense histogram for a set of images.

public static void CalcBackProject(Mat[] images, int[] channels, InputArray hist, OutputArray backProject, Rangef[] ranges, bool uniform = true)

Parameters

images Mat[]
channels int[]
hist InputArray
backProject OutputArray
ranges Rangef[]
uniform bool

CalcCovarMatrix(InputArray, OutputArray, InputOutputArray, CovarFlags, MatType?)

computes covariation matrix of a set of samples

public static void CalcCovarMatrix(InputArray samples, OutputArray covar, InputOutputArray mean, CovarFlags flags, MatType? ctype = null)

Parameters

samples InputArray

samples stored as rows/columns of a single matrix.

covar OutputArray

output covariance matrix of the type ctype and square size.

mean InputOutputArray

input or output (depending on the flags) array as the average value of the input vectors.

flags CovarFlags

operation flags as a combination of CovarFlags

ctype MatType?

type of the matrix; it equals CV_64F by default.

CalcCovarMatrix(Mat[], Mat, Mat, CovarFlags, MatType?)

computes covariation matrix of a set of samples

public static void CalcCovarMatrix(Mat[] samples, Mat covar, Mat mean, CovarFlags flags, MatType? ctype = null)

Parameters

samples Mat[]

samples stored as separate matrices

covar Mat

output covariance matrix of the type ctype and square size.

mean Mat

input or output (depending on the flags) array as the average value of the input vectors.

flags CovarFlags

operation flags as a combination of CovarFlags

ctype MatType?

type of the matrix; it equals CV_64F by default.

CalcHist(Mat[], int[], InputArray?, OutputArray, int, int[], Rangef[], bool, bool)

computes the joint dense histogram for a set of images.

public static void CalcHist(Mat[] images, int[] channels, InputArray? mask, OutputArray hist, int dims, int[] histSize, Rangef[] ranges, bool uniform = true, bool accumulate = false)

Parameters

images Mat[]
channels int[]
mask InputArray
hist OutputArray
dims int
histSize int[]
ranges Rangef[]
uniform bool
accumulate bool
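
A minimal usage sketch computing a 256-bin grayscale histogram (the file name is an illustrative assumption):

using OpenCvSharp;

using var gray = Cv2.ImRead("photo.jpg", ImreadModes.Grayscale);
using var hist = new Mat();
Cv2.CalcHist(new[] { gray }, new[] { 0 }, null, hist,
    1, new[] { 256 }, new[] { new Rangef(0, 256) });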

CalcHist(Mat[], int[], InputArray?, OutputArray, int, int[], float[][], bool, bool)

computes the joint dense histogram for a set of images.

public static void CalcHist(Mat[] images, int[] channels, InputArray? mask, OutputArray hist, int dims, int[] histSize, float[][] ranges, bool uniform = true, bool accumulate = false)

Parameters

images Mat[]
channels int[]
mask InputArray
hist OutputArray
dims int
histSize int[]
ranges float[][]
uniform bool
accumulate bool

CalcOpticalFlowFarneback(InputArray, InputArray, InputOutputArray, double, int, int, int, int, double, OpticalFlowFlags)

Computes a dense optical flow using the Gunnar Farneback's algorithm.

public static void CalcOpticalFlowFarneback(InputArray prev, InputArray next, InputOutputArray flow, double pyrScale, int levels, int winsize, int iterations, int polyN, double polySigma, OpticalFlowFlags flags)

Parameters

prev InputArray

first 8-bit single-channel input image.

next InputArray

second input image of the same size and the same type as prev.

flow InputOutputArray

computed flow image that has the same size as prev and type CV_32FC2.

pyrScale double

parameter, specifying the image scale (<1) to build pyramids for each image; pyrScale=0.5 means a classical pyramid, where each next layer is twice smaller than the previous one.

levels int

number of pyramid layers including the initial image; levels=1 means that no extra layers are created and only the original images are used.

winsize int

averaging window size; larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield more blurred motion field.

iterations int

number of iterations the algorithm does at each pyramid level.

polyN int

size of the pixel neighborhood used to find polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding more robust algorithm and more blurred motion field, typically poly_n =5 or 7.

polySigma double

standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion; for polyN=5, you can set polySigma=1.1, for polyN=7, a good value would be polySigma=1.5.

flags OpticalFlowFlags

operation flags that can be a combination of OPTFLOW_USE_INITIAL_FLOW and/or OPTFLOW_FARNEBACK_GAUSSIAN
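
A minimal usage sketch (the file names and parameter values are illustrative assumptions):

using OpenCvSharp;

// Dense flow between two consecutive grayscale frames.
using var prev = Cv2.ImRead("frame0.png", ImreadModes.Grayscale);
using var next = Cv2.ImRead("frame1.png", ImreadModes.Grayscale);
using var flow = new Mat();   // receives a CV_32FC2 (dx, dy) vector per pixel
Cv2.CalcOpticalFlowFarneback(prev, next, flow,
    pyrScale: 0.5, levels: 3, winsize: 15, iterations: 3,
    polyN: 5, polySigma: 1.2, flags: OpticalFlowFlags.None);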

CalcOpticalFlowPyrLK(InputArray, InputArray, InputArray, InputOutputArray, OutputArray, OutputArray, Size?, int, TermCriteria?, OpticalFlowFlags, double)

computes sparse optical flow using multi-scale Lucas-Kanade algorithm

public static void CalcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg, InputArray prevPts, InputOutputArray nextPts, OutputArray status, OutputArray err, Size? winSize = null, int maxLevel = 3, TermCriteria? criteria = null, OpticalFlowFlags flags = OpticalFlowFlags.None, double minEigThreshold = 0.0001)

Parameters

prevImg InputArray
nextImg InputArray
prevPts InputArray
nextPts InputOutputArray
status OutputArray
err OutputArray
winSize Size?
maxLevel int
criteria TermCriteria?
flags OpticalFlowFlags
minEigThreshold double

CalcOpticalFlowPyrLK(InputArray, InputArray, Point2f[], ref Point2f[], out byte[], out float[], Size?, int, TermCriteria?, OpticalFlowFlags, double)

computes sparse optical flow using multi-scale Lucas-Kanade algorithm

public static void CalcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg, Point2f[] prevPts, ref Point2f[] nextPts, out byte[] status, out float[] err, Size? winSize = null, int maxLevel = 3, TermCriteria? criteria = null, OpticalFlowFlags flags = OpticalFlowFlags.None, double minEigThreshold = 0.0001)

Parameters

prevImg InputArray
nextImg InputArray
prevPts Point2f[]
nextPts Point2f[]
status byte[]
err float[]
winSize Size?
maxLevel int
criteria TermCriteria?
flags OpticalFlowFlags
minEigThreshold double

CalibrateCamera(IEnumerable<Mat>, IEnumerable<Mat>, Size, InputOutputArray, InputOutputArray, out Mat[], out Mat[], CalibrationFlags, TermCriteria?)

finds intrinsic and extrinsic camera parameters from several views of a known calibration pattern.

public static double CalibrateCamera(IEnumerable<Mat> objectPoints, IEnumerable<Mat> imagePoints, Size imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs, out Mat[] rvecs, out Mat[] tvecs, CalibrationFlags flags = CalibrationFlags.None, TermCriteria? criteria = null)

Parameters

objectPoints IEnumerable<Mat>

In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns, or even different patterns in different views; then the vectors will be different. The points are 3D, but since they are in a pattern coordinate system, if the rig is planar it may make sense to put the model on the XY coordinate plane so that the Z-coordinate of each input object point is 0. In the old interface all the vectors of object points from different views are concatenated together.

imagePoints IEnumerable<Mat>

In the new interface it is a vector of vectors of the projections of calibration pattern points. imagePoints.Count() must be equal to objectPoints.Count(), and imagePoints[i].Count() must be equal to objectPoints[i].Count() for each i.

imageSize Size

Size of the image used only to initialize the intrinsic camera matrix.

cameraMatrix InputOutputArray

Output 3x3 floating-point camera matrix. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.

distCoeffs InputOutputArray

Output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements.

rvecs Mat[]

Output vector of rotation vectors (see Rodrigues() ) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k=0.. M -1)

tvecs Mat[]

Output vector of translation vectors estimated for each pattern view.

flags CalibrationFlags

Different flags that may be zero or a combination of the CalibrationFlag values

criteria TermCriteria?

Termination criteria for the iterative optimization algorithm.

Returns

double

Root mean square (RMS) re-projection error. A value below 1.0 is generally considered acceptable for a good calibration.

CalibrateCamera(IEnumerable<IEnumerable<Point3f>>, IEnumerable<IEnumerable<Point2f>>, Size, double[,], double[], out Vec3d[], out Vec3d[], CalibrationFlags, TermCriteria?)

finds intrinsic and extrinsic camera parameters from several views of a known calibration pattern.

public static double CalibrateCamera(IEnumerable<IEnumerable<Point3f>> objectPoints, IEnumerable<IEnumerable<Point2f>> imagePoints, Size imageSize, double[,] cameraMatrix, double[] distCoeffs, out Vec3d[] rvecs, out Vec3d[] tvecs, CalibrationFlags flags = CalibrationFlags.None, TermCriteria? criteria = null)

Parameters

objectPoints IEnumerable<IEnumerable<Point3f>>

In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns, or even different patterns in different views; then the vectors will be different. The points are 3D, but since they are in a pattern coordinate system, if the rig is planar it may make sense to put the model on the XY coordinate plane so that the Z-coordinate of each input object point is 0. In the old interface all the vectors of object points from different views are concatenated together.

imagePoints IEnumerable<IEnumerable<Point2f>>

In the new interface it is a vector of vectors of the projections of calibration pattern points. imagePoints.Count() must be equal to objectPoints.Count(), and imagePoints[i].Count() must be equal to objectPoints[i].Count() for each i.

imageSize Size

Size of the image used only to initialize the intrinsic camera matrix.

cameraMatrix double[,]

Output 3x3 floating-point camera matrix. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.

distCoeffs double[]

Output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements.

rvecs Vec3d[]

Output vector of rotation vectors (see Rodrigues() ) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k=0.. M -1)

tvecs Vec3d[]

Output vector of translation vectors estimated for each pattern view.

flags CalibrationFlags

Different flags that may be zero or a combination of the CalibrationFlag values

criteria TermCriteria?

Termination criteria for the iterative optimization algorithm.

Returns

double

Root mean square (RMS) re-projection error. A value below 1.0 is generally considered acceptable for a good calibration.
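
A sketch of the typical chessboard workflow for this overload (the image file names, board size, and unit square size are illustrative assumptions; FindChessboardCorners is used to collect the image points):

using OpenCvSharp;
using System.Collections.Generic;

var boardSize = new Size(9, 6);                    // inner corners per row and column
var objectTemplate = new List<Point3f>();
for (int y = 0; y < boardSize.Height; y++)
    for (int x = 0; x < boardSize.Width; x++)
        objectTemplate.Add(new Point3f(x, y, 0));  // planar target, square size = 1 unit

var objectPoints = new List<IEnumerable<Point3f>>();
var imagePoints = new List<IEnumerable<Point2f>>();
Size imageSize = default;
foreach (var file in new[] { "view0.png", "view1.png", "view2.png" })
{
    using var gray = Cv2.ImRead(file, ImreadModes.Grayscale);
    imageSize = gray.Size();
    if (Cv2.FindChessboardCorners(gray, boardSize, out Point2f[] corners))
    {
        objectPoints.Add(objectTemplate);
        imagePoints.Add(corners);
    }
}

var cameraMatrix = new double[3, 3];
var distCoeffs = new double[5];
double rms = Cv2.CalibrateCamera(objectPoints, imagePoints, imageSize,
    cameraMatrix, distCoeffs, out Vec3d[] rvecs, out Vec3d[] tvecs);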

CalibrateHandEye(IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, OutputArray, OutputArray, HandEyeCalibrationMethod)

Computes Hand-Eye calibration.

The function performs the Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions); the following methods are implemented:

  • R.Tsai, R.Lenz A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration \cite Tsai89
  • F.Park, B.Martin Robot Sensor Calibration: Solving AX = XB on the Euclidean Group \cite Park94
  • R.Horaud, F.Dornaika Hand-Eye Calibration \cite Horaud95

Another approach consists in estimating the rotation and the translation simultaneously (simultaneous solutions), with the following implemented methods:

  • N.Andreff, R.Horaud, B.Espiau On-line Hand-Eye Calibration \cite Andreff99
  • K.Daniilidis Hand-Eye Calibration Using Dual Quaternions \cite Daniilidis98
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void CalibrateHandEye(IEnumerable<Mat> R_gripper2base, IEnumerable<Mat> t_gripper2base, IEnumerable<Mat> R_target2cam, IEnumerable<Mat> t_target2cam, OutputArray R_cam2gripper, OutputArray t_cam2gripper, HandEyeCalibrationMethod method = HandEyeCalibrationMethod.TSAI)

Parameters

R_gripper2base IEnumerable<Mat>

Rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame. This is a vector (vector<Mat>) that contains the rotation matrices for all the transformations from gripper frame to robot base frame.

t_gripper2base IEnumerable<Mat>

Translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame. This is a vector(vector<Mat>) that contains the translation vectors for all the transformations from gripper frame to robot base frame.

R_target2cam IEnumerable<Mat>

Rotation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame. This is a vector(vector<Mat>) that contains the rotation matrices for all the transformations from calibration target frame to camera frame.

t_target2cam IEnumerable<Mat>

Translation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame. This is a vector(vector<Mat>) that contains the translation vectors for all the transformations from calibration target frame to camera frame.

R_cam2gripper OutputArray

Estimated rotation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame.

t_cam2gripper OutputArray

Estimated translation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame.

method HandEyeCalibrationMethod

One of the implemented Hand-Eye calibration method

CalibrateRobotWorldHandEye(IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, OutputArray, OutputArray, OutputArray, OutputArray, RobotWorldHandEyeCalibrationMethod)

Computes Robot-World/Hand-Eye calibration. The function performs the Robot-World/Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions):

  • M.Shah, Solving the robot-world/hand-eye calibration problem using the kronecker product \cite Shah2013SolvingTR
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void CalibrateRobotWorldHandEye(IEnumerable<Mat> R_world2cam, IEnumerable<Mat> t_world2cam, IEnumerable<Mat> R_base2gripper, IEnumerable<Mat> t_base2gripper, OutputArray R_base2world, OutputArray t_base2world, OutputArray R_gripper2cam, OutputArray t_gripper2cam, RobotWorldHandEyeCalibrationMethod method = RobotWorldHandEyeCalibrationMethod.SHAH)

Parameters

R_world2cam IEnumerable<Mat>

[in] R_world2cam Rotation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame. This is a vector of Mat that contains the rotation, as (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from the world frame to the camera frame.

t_world2cam IEnumerable<Mat>

[in] Translation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame. This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from world frame to the camera frame.

R_base2gripper IEnumerable<Mat>

[in] Rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame. This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from robot base frame to the gripper frame.

t_base2gripper IEnumerable<Mat>

[in] Translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame. This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from robot base frame to the gripper frame.

R_base2world OutputArray

[out] R_base2world Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame.

t_base2world OutputArray

[out] t_base2world Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame.

R_gripper2cam OutputArray

[out] R_gripper2cam Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame.

t_gripper2cam OutputArray

[out] Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame.

method RobotWorldHandEyeCalibrationMethod

One of the implemented Robot-World/Hand-Eye calibration method

CalibrateRobotWorldHandEye(IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, out double[,], out double[], out double[,], out double[], RobotWorldHandEyeCalibrationMethod)

Computes Robot-World/Hand-Eye calibration. The function performs the Robot-World/Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions):

  • M.Shah, Solving the robot-world/hand-eye calibration problem using the kronecker product \cite Shah2013SolvingTR
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void CalibrateRobotWorldHandEye(IEnumerable<Mat> R_world2cam, IEnumerable<Mat> t_world2cam, IEnumerable<Mat> R_base2gripper, IEnumerable<Mat> t_base2gripper, out double[,] R_base2world, out double[] t_base2world, out double[,] R_gripper2cam, out double[] t_gripper2cam, RobotWorldHandEyeCalibrationMethod method = RobotWorldHandEyeCalibrationMethod.SHAH)

Parameters

R_world2cam IEnumerable<Mat>

[in] R_world2cam Rotation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame. This is a vector of Mat that contains the rotation, as (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from the world frame to the camera frame.

t_world2cam IEnumerable<Mat>

[in] Translation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame. This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from world frame to the camera frame.

R_base2gripper IEnumerable<Mat>

[in] Rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame. This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from robot base frame to the gripper frame.

t_base2gripper IEnumerable<Mat>

[in] Translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame. This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from robot base frame to the gripper frame.

R_base2world double[,]

[out] R_base2world Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame.

t_base2world double[]

[out] t_base2world Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame.

R_gripper2cam double[,]

[out] R_gripper2cam Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame.

t_gripper2cam double[]

[out] Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame.

method RobotWorldHandEyeCalibrationMethod

One of the implemented Robot-World/Hand-Eye calibration method

CalibrationMatrixValues(InputArray, Size, double, double, out double, out double, out double, out Point2d, out double)

computes several useful camera characteristics from the camera matrix, camera frame resolution and the physical sensor size.

public static void CalibrationMatrixValues(InputArray cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, out double fovx, out double fovy, out double focalLength, out Point2d principalPoint, out double aspectRatio)

Parameters

cameraMatrix InputArray

Input camera matrix that can be estimated by calibrateCamera() or stereoCalibrate() .

imageSize Size

Input image size in pixels.

apertureWidth double

Physical width of the sensor.

apertureHeight double

Physical height of the sensor.

fovx double

Output field of view in degrees along the horizontal sensor axis.

fovy double

Output field of view in degrees along the vertical sensor axis.

focalLength double

Focal length of the lens in mm.

principalPoint Point2d

Principal point in pixels.

aspectRatio double

fy / fx

CalibrationMatrixValues(double[,], Size, double, double, out double, out double, out double, out Point2d, out double)

computes several useful camera characteristics from the camera matrix, camera frame resolution and the physical sensor size.

public static void CalibrationMatrixValues(double[,] cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, out double fovx, out double fovy, out double focalLength, out Point2d principalPoint, out double aspectRatio)

Parameters

cameraMatrix double[,]

Input camera matrix that can be estimated by calibrateCamera() or stereoCalibrate() .

imageSize Size

Input image size in pixels.

apertureWidth double

Physical width of the sensor.

apertureHeight double

Physical height of the sensor.

fovx double

Output field of view in degrees along the horizontal sensor axis.

fovy double

Output field of view in degrees along the vertical sensor axis.

focalLength double

Focal length of the lens in mm.

principalPoint Point2d

Principal point in pixels.

aspectRatio double

fy / fx

CamShift(InputArray, ref Rect, TermCriteria)

Finds an object center, size, and orientation.

public static RotatedRect CamShift(InputArray probImage, ref Rect window, TermCriteria criteria)

Parameters

probImage InputArray

Back projection of the object histogram.

window Rect

Initial search window.

criteria TermCriteria

Stop criteria for the underlying MeanShift() .

Returns

RotatedRect

Canny(InputArray, InputArray, OutputArray, double, double, bool)

Finds edges in an image using the Canny algorithm with custom image gradient.

public static void Canny(InputArray dx, InputArray dy, OutputArray edges, double threshold1, double threshold2, bool L2gradient = false)

Parameters

dx InputArray

16-bit x derivative of input image (CV_16SC1 or CV_16SC3).

dy InputArray

16-bit y derivative of input image (same type as dx).

edges OutputArray

output edge map; a single-channel 8-bit image, which has the same size as the input image.

threshold1 double

first threshold for the hysteresis procedure.

threshold2 double

second threshold for the hysteresis procedure.

L2gradient bool

Indicates, whether the more accurate L2 norm should be used to compute the image gradient magnitude (true), or a faster default L1 norm is enough (false). [By default this is false]

Canny(InputArray, OutputArray, double, double, int, bool)

Finds edges in an image using Canny algorithm.

public static void Canny(InputArray src, OutputArray edges, double threshold1, double threshold2, int apertureSize = 3, bool L2gradient = false)

Parameters

src InputArray

Single-channel 8-bit input image

edges OutputArray

The output edge map. It will have the same size and the same type as image

threshold1 double

The first threshold for the hysteresis procedure

threshold2 double

The second threshold for the hysteresis procedure

apertureSize int

Aperture size for the Sobel operator [By default this is ApertureSize.Size3]

L2gradient bool

Indicates, whether the more accurate L2 norm should be used to compute the image gradient magnitude (true), or a faster default L1 norm is enough (false). [By default this is false]
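
A minimal usage sketch (the file name and hysteresis thresholds are illustrative assumptions):

using OpenCvSharp;

using var src = Cv2.ImRead("photo.jpg", ImreadModes.Grayscale);
using var edges = new Mat();
Cv2.Canny(src, edges, 50, 150);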

CartToPolar(InputArray, InputArray, OutputArray, OutputArray, bool)

Calculates the magnitude and angle of 2D vectors.

public static void CartToPolar(InputArray x, InputArray y, OutputArray magnitude, OutputArray angle, bool angleInDegrees = false)

Parameters

x InputArray

array of x-coordinates; this must be a single-precision or double-precision floating-point array.

y InputArray

array of y-coordinates, that must have the same size and same type as x.

magnitude OutputArray

output array of magnitudes of the same size and type as x.

angle OutputArray

output array of angles that has the same size and type as x; the angles are measured in radians (from 0 to 2*Pi) or in degrees (from 0 to 360).

angleInDegrees bool

a flag indicating whether the angles are measured in radians (the default) or in degrees.
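
A sketch that converts Sobel derivatives into gradient magnitude and orientation; Sobel is used to produce the x/y derivatives, and the file name is an illustrative assumption:

using OpenCvSharp;

using var gray = Cv2.ImRead("photo.jpg", ImreadModes.Grayscale);
using var dx = new Mat();
using var dy = new Mat();
Cv2.Sobel(gray, dx, MatType.CV_32F, 1, 0);
Cv2.Sobel(gray, dy, MatType.CV_32F, 0, 1);
using var magnitude = new Mat();
using var angle = new Mat();
Cv2.CartToPolar(dx, dy, magnitude, angle, angleInDegrees: true);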

CheckChessboard(InputArray, Size)

Checks whether the image contains a chessboard of the specified size.

public static bool CheckChessboard(InputArray img, Size size)

Parameters

img InputArray
size Size

Returns

bool

CheckHardwareSupport(CpuFeatures)

Returns true if the specified feature is supported by the host hardware. When the user calls setUseOptimized(false), subsequent calls to checkHardwareSupport() will return false until setUseOptimized(true) is called. This way the user can dynamically switch the optimized code in OpenCV on and off.

public static bool CheckHardwareSupport(CpuFeatures feature)

Parameters

feature CpuFeatures

The feature of interest, one of cv::CpuFeatures

Returns

bool

CheckRange(InputArray, bool)

checks that each matrix element is within the specified range.

public static bool CheckRange(InputArray src, bool quiet = true)

Parameters

src InputArray

The array to check

quiet bool

The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception.

Returns

bool

CheckRange(InputArray, bool, out Point, double, double)

checks that each matrix element is within the specified range.

public static bool CheckRange(InputArray src, bool quiet, out Point pos, double minVal = -1.7976931348623157E+308, double maxVal = 1.7976931348623157E+308)

Parameters

src InputArray

The array to check

quiet bool

The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception.

pos Point

The optional output parameter, where the position of the first outlier is stored.

minVal double

The inclusive lower boundary of valid values range

maxVal double

The exclusive upper boundary of valid values range

Returns

bool

Circle(InputOutputArray, Point, int, Scalar, int, LineTypes, int)

Draws a circle

public static void Circle(InputOutputArray img, Point center, int radius, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray

Image where the circle is drawn.

center Point

Center of the circle.

radius int

Radius of the circle.

color Scalar

Circle color.

thickness int

Thickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn. [By default this is 1]

lineType LineTypes

Type of the circle boundary. [By default this is LineType.Link8]

shift int

Number of fractional bits in the center coordinates and radius value. [By default this is 0]

Circle(InputOutputArray, int, int, int, Scalar, int, LineTypes, int)

Draws a circle

public static void Circle(InputOutputArray img, int centerX, int centerY, int radius, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray

Image where the circle is drawn.

centerX int

X-coordinate of the center of the circle.

centerY int

Y-coordinate of the center of the circle.

radius int

Radius of the circle.

color Scalar

Circle color.

thickness int

Thickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn. [By default this is 1]

lineType LineTypes

Type of the circle boundary. [By default this is LineType.Link8]

shift int

Number of fractional bits in the center coordinates and radius value. [By default this is 0]

ClipLine(Rect, ref Point, ref Point)

Clips the line against the image rectangle

public static bool ClipLine(Rect imgRect, ref Point pt1, ref Point pt2)

Parameters

imgRect Rect

The image rectangle

pt1 Point

The first line point

pt2 Point

The second line point

Returns

bool

ClipLine(Size, ref Point, ref Point)

Clips the line against the image rectangle

public static bool ClipLine(Size imgSize, ref Point pt1, ref Point pt2)

Parameters

imgSize Size

The image size

pt1 Point

The first line point

pt2 Point

The second line point

Returns

bool

ColorChange(InputArray, InputArray?, OutputArray, float, float, float)

Given an original color image, two differently colored versions of this image can be mixed seamlessly. The multiplication factors should be in the range 0.5 to 2.5.

public static void ColorChange(InputArray src, InputArray? mask, OutputArray dst, float redMul = 1, float greenMul = 1, float blueMul = 1)

Parameters

src InputArray

Input 8-bit 3-channel image.

mask InputArray

Input 8-bit 1 or 3-channel image.

dst OutputArray

Output image with the same size and type as src.

redMul float

R-channel multiply factor.

greenMul float

G-channel multiply factor.

blueMul float

B-channel multiply factor.

Compare(InputArray, InputArray, OutputArray, CmpType)

Performs the per-element comparison of two arrays or an array and scalar value.

public static void Compare(InputArray src1, InputArray src2, OutputArray dst, CmpType cmpop)

Parameters

src1 InputArray

first input array or a scalar; when it is an array, it must have a single channel.

src2 InputArray

second input array or a scalar; when it is an array, it must have a single channel.

dst OutputArray

output array of type CV_8U that has the same size and the same number of channels as the input arrays.

cmpop CmpType

a flag, that specifies correspondence between the arrays (cv::CmpTypes)
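
A minimal usage sketch building a mask that is 255 where the gray value exceeds 128 and 0 elsewhere (the file name and threshold are illustrative assumptions):

using OpenCvSharp;

using var gray = Cv2.ImRead("photo.jpg", ImreadModes.Grayscale);
using var threshold = new Mat(gray.Size(), gray.Type(), Scalar.All(128));
using var brightMask = new Mat();
Cv2.Compare(gray, threshold, brightMask, CmpType.GT);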

CompareHist(InputArray, InputArray, HistCompMethods)

compares two histograms stored in dense arrays

public static double CompareHist(InputArray h1, InputArray h2, HistCompMethods method)

Parameters

h1 InputArray

The first compared histogram

h2 InputArray

The second compared histogram of the same size as h1

method HistCompMethods

The comparison method

Returns

double
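
A sketch that compares the grayscale histograms of two images by correlation; CalcHist is used to build the histograms, and the file names are illustrative assumptions:

using OpenCvSharp;

using var img1 = Cv2.ImRead("a.jpg", ImreadModes.Grayscale);
using var img2 = Cv2.ImRead("b.jpg", ImreadModes.Grayscale);
using var hist1 = new Mat();
using var hist2 = new Mat();
var ranges = new[] { new Rangef(0, 256) };
Cv2.CalcHist(new[] { img1 }, new[] { 0 }, null, hist1, 1, new[] { 256 }, ranges);
Cv2.CalcHist(new[] { img2 }, new[] { 0 }, null, hist2, 1, new[] { 256 }, ranges);
double similarity = Cv2.CompareHist(hist1, hist2, HistCompMethods.Correl);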

CompleteSymm(InputOutputArray, bool)

extends the symmetrical matrix from the lower half or from the upper half

public static void CompleteSymm(InputOutputArray mtx, bool lowerToUpper = false)

Parameters

mtx InputOutputArray

Input-output floating-point square matrix

lowerToUpper bool

If true, the lower half is copied to the upper half, otherwise the upper half is copied to the lower half

ComposeRT(InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, OutputArray?, OutputArray?, OutputArray?, OutputArray?, OutputArray?, OutputArray?, OutputArray?, OutputArray?)

composes 2 [R|t] transformations together. Also computes the derivatives of the result w.r.t the arguments

public static void ComposeRT(InputArray rvec1, InputArray tvec1, InputArray rvec2, InputArray tvec2, OutputArray rvec3, OutputArray tvec3, OutputArray? dr3dr1 = null, OutputArray? dr3dt1 = null, OutputArray? dr3dr2 = null, OutputArray? dr3dt2 = null, OutputArray? dt3dr1 = null, OutputArray? dt3dt1 = null, OutputArray? dt3dr2 = null, OutputArray? dt3dt2 = null)

Parameters

rvec1 InputArray

First rotation vector.

tvec1 InputArray

First translation vector.

rvec2 InputArray

Second rotation vector.

tvec2 InputArray

Second translation vector.

rvec3 OutputArray

Output rotation vector of the superposition.

tvec3 OutputArray

Output translation vector of the superposition.

dr3dr1 OutputArray

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dr3dt1 OutputArray

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dr3dr2 OutputArray

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dr3dt2 OutputArray

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dt3dr1 OutputArray

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dt3dt1 OutputArray

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dt3dr2 OutputArray

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dt3dt2 OutputArray

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

ComposeRT(double[], double[], double[], double[], out double[], out double[])

composes 2 [R|t] transformations together. Also computes the derivatives of the result w.r.t the arguments

public static void ComposeRT(double[] rvec1, double[] tvec1, double[] rvec2, double[] tvec2, out double[] rvec3, out double[] tvec3)

Parameters

rvec1 double[]

First rotation vector.

tvec1 double[]

First translation vector.

rvec2 double[]

Second rotation vector.

tvec2 double[]

Second translation vector.

rvec3 double[]

Output rotation vector of the superposition.

tvec3 double[]

Output translation vector of the superposition.
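
A minimal sketch of the array-based overload (values are illustrative): composing two 90-degree rotations about the Z axis.

// requires: using System; using OpenCvSharp;
double[] rvec1 = { 0, 0, Math.PI / 2 };
double[] tvec1 = { 1, 0, 0 };
double[] rvec2 = { 0, 0, Math.PI / 2 };
double[] tvec2 = { 0, 1, 0 };
Cv2.ComposeRT(rvec1, tvec1, rvec2, tvec2, out double[] rvec3, out double[] tvec3);
// rvec3 is approximately (0, 0, PI); tvec3 = R(rvec2) * tvec1 + tvec2, i.e. roughly (0, 2, 0)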

ComposeRT(double[], double[], double[], double[], out double[], out double[], out double[,], out double[,], out double[,], out double[,], out double[,], out double[,], out double[,], out double[,])

composes 2 [R|t] transformations together. Also computes the derivatives of the result w.r.t the arguments

public static void ComposeRT(double[] rvec1, double[] tvec1, double[] rvec2, double[] tvec2, out double[] rvec3, out double[] tvec3, out double[,] dr3dr1, out double[,] dr3dt1, out double[,] dr3dr2, out double[,] dr3dt2, out double[,] dt3dr1, out double[,] dt3dt1, out double[,] dt3dr2, out double[,] dt3dt2)

Parameters

rvec1 double[]

First rotation vector.

tvec1 double[]

First translation vector.

rvec2 double[]

Second rotation vector.

tvec2 double[]

Second translation vector.

rvec3 double[]

Output rotation vector of the superposition.

tvec3 double[]

Output translation vector of the superposition.

dr3dr1 double[,]

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dr3dt1 double[,]

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dr3dr2 double[,]

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dr3dt2 double[,]

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dt3dr1 double[,]

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dt3dt1 double[,]

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dt3dr2 double[,]

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

dt3dt2 double[,]

Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.

ComputeCorrespondEpilines(InputArray, int, InputArray, OutputArray)

For points in an image of a stereo pair, computes the corresponding epilines in the other image.

public static void ComputeCorrespondEpilines(InputArray points, int whichImage, InputArray F, OutputArray lines)

Parameters

points InputArray

Input points. N x 1 or 1 x N matrix of type CV_32FC2 or CV_64FC2.

whichImage int

Index of the image (1 or 2) that contains the points .

F InputArray

Fundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() .

lines OutputArray

Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) .

ComputeCorrespondEpilines(IEnumerable<Point2d>, int, double[,])

For points in an image of a stereo pair, computes the corresponding epilines in the other image.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Point3f[] ComputeCorrespondEpilines(IEnumerable<Point2d> points, int whichImage, double[,] F)

Parameters

points IEnumerable<Point2d>

Input points. N x 1 or 1 x N matrix of type CV_32FC2 or CV_64FC2.

whichImage int

Index of the image (1 or 2) that contains the points .

F double[,]

Fundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() .

Returns

Point3f[]

Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) .
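
A minimal sketch (the fundamental matrix values below are placeholders; in practice F would come from Cv2.FindFundamentalMat on matched point pairs):

// requires: using OpenCvSharp;
var f = new double[3, 3]
{
    {  0,      -0.001,  0.1 },
    {  0.001,   0,     -0.2 },
    { -0.1,     0.2,    1.0 },
};
var pointsInImage1 = new[] { new Point2d(320, 240), new Point2d(100, 50) };
// whichImage = 1: the points belong to the first image, the lines are for the second.
Point3f[] lines = Cv2.ComputeCorrespondEpilines(pointsInImage1, 1, f);
// Each line (a, b, c) satisfies a*x + b*y + c = 0 in the other image.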

ComputeCorrespondEpilines(IEnumerable<Point3d>, int, double[,])

For points in an image of a stereo pair, computes the corresponding epilines in the other image.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Point3f[] ComputeCorrespondEpilines(IEnumerable<Point3d> points, int whichImage, double[,] F)

Parameters

points IEnumerable<Point3d>

Input points. N x 1 or 1 x N matrix of type CV_32FC2 or CV_64FC2.

whichImage int

Index of the image (1 or 2) that contains the points .

F double[,]

Fundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() .

Returns

Point3f[]

Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) .

ComputeECC(InputArray, InputArray, InputArray?)

Computes the Enhanced Correlation Coefficient value between two images [EP08].

public static double ComputeECC(InputArray templateImage, InputArray inputImage, InputArray? inputMask = null)

Parameters

templateImage InputArray

single-channel template image; CV_8U or CV_32F array.

inputImage InputArray

single-channel input image to be warped to provide an image similar to templateImage, same type as templateImage.

inputMask InputArray

An optional mask to indicate valid values of inputImage.

Returns

double
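
A minimal sketch (file names are illustrative): measuring how well a frame already matches a template; a value near 1.0 indicates near-identical images.

// requires: using OpenCvSharp;
using var template = Cv2.ImRead("template.png", ImreadModes.Grayscale);
using var frame = Cv2.ImRead("frame.png", ImreadModes.Grayscale);
double ecc = Cv2.ComputeECC(template, frame);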

ComputeRecallPrecisionCurve(DMatch[][], byte[][])

public static Point2f[] ComputeRecallPrecisionCurve(DMatch[][] matches1to2, byte[][] correctMatches1to2Mask)

Parameters

matches1to2 DMatch[][]
correctMatches1to2Mask byte[][]

Returns

Point2f[]

recallPrecisionCurve

ConnectedComponents(InputArray, OutputArray, PixelConnectivity)

Computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.

public static int ConnectedComponents(InputArray image, OutputArray labels, PixelConnectivity connectivity = PixelConnectivity.Connectivity8)

Parameters

image InputArray

the image to be labeled

labels OutputArray

destination labeled image

connectivity PixelConnectivity

8 or 4 for 8-way or 4-way connectivity respectively

Returns

int

The number of labels

ConnectedComponents(InputArray, OutputArray, PixelConnectivity, MatType)

Computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.

public static int ConnectedComponents(InputArray image, OutputArray labels, PixelConnectivity connectivity, MatType ltype)

Parameters

image InputArray

the image to be labeled

labels OutputArray

destination labeled image

connectivity PixelConnectivity

8 or 4 for 8-way or 4-way connectivity respectively

ltype MatType

output image label type. Currently CV_32S and CV_16U are supported.

Returns

int

The number of labels

ConnectedComponents(InputArray, out int[,], PixelConnectivity)

Computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.

public static int ConnectedComponents(InputArray image, out int[,] labels, PixelConnectivity connectivity)

Parameters

image InputArray

the image to be labeled

labels int[,]

destination labeled rectangular array

connectivity PixelConnectivity

8 or 4 for 8-way or 4-way connectivity respectively

Returns

int

The number of labels

ConnectedComponentsEx(InputArray, PixelConnectivity, ConnectedComponentsAlgorithmsTypes)

Computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.

public static ConnectedComponents ConnectedComponentsEx(InputArray image, PixelConnectivity connectivity = PixelConnectivity.Connectivity8, ConnectedComponentsAlgorithmsTypes ccltype = ConnectedComponentsAlgorithmsTypes.Default)

Parameters

image InputArray

the image to be labeled

connectivity PixelConnectivity

8 or 4 for 8-way or 4-way connectivity respectively

ccltype ConnectedComponentsAlgorithmsTypes

Returns

ConnectedComponents

ConnectedComponentsWithAlgorithm(InputArray, OutputArray, PixelConnectivity, MatType, ConnectedComponentsAlgorithmsTypes)

Computes the connected components labeled image of a boolean image.

For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently Grana's (BBDT) and Wu's (SAUF) algorithms are supported, see #ConnectedComponentsAlgorithmsTypes for details. Note that the SAUF algorithm forces a row-major ordering of labels while BBDT does not. This function uses the parallel version of both Grana's and Wu's algorithms if at least one allowed parallel framework is enabled and if the rows of the image are at least twice the number returned by #getNumberOfCPUs.

public static int ConnectedComponentsWithAlgorithm(InputArray image, OutputArray labels, PixelConnectivity connectivity, MatType ltype, ConnectedComponentsAlgorithmsTypes ccltype)

Parameters

image InputArray

the 8-bit single-channel image to be labeled

labels OutputArray

destination labeled image

connectivity PixelConnectivity

8 or 4 for 8-way or 4-way connectivity respectively

ltype MatType

output image label type. Currently CV_32S and CV_16U are supported.

ccltype ConnectedComponentsAlgorithmsTypes

connected components algorithm type.

Returns

int

ConnectedComponentsWithStats(InputArray, OutputArray, OutputArray, OutputArray, PixelConnectivity)

Computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.

public static int ConnectedComponentsWithStats(InputArray image, OutputArray labels, OutputArray stats, OutputArray centroids, PixelConnectivity connectivity = PixelConnectivity.Connectivity8)

Parameters

image InputArray

the image to be labeled

labels OutputArray

destination labeled image

stats OutputArray

statistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes

centroids OutputArray

floating point centroid (x,y) output for each label, including the background label

connectivity PixelConnectivity

8 or 4 for 8-way or 4-way connectivity respectively

Returns

int
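
A minimal sketch (file name is illustrative; assumes the stats columns follow the ConnectedComponentsTypes order, with CV_32S stats and CV_64F centroids):

// requires: using System; using OpenCvSharp;
using var src = Cv2.ImRead("blobs.png", ImreadModes.Grayscale);
using var binary = new Mat();
Cv2.Threshold(src, binary, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);
using var labels = new Mat();
using var stats = new Mat();
using var centroids = new Mat();
int n = Cv2.ConnectedComponentsWithStats(binary, labels, stats, centroids);
for (int i = 1; i < n; i++)   // label 0 is the background
{
    int area = stats.Get<int>(i, (int)ConnectedComponentsTypes.Area);
    double cx = centroids.Get<double>(i, 0);
    double cy = centroids.Get<double>(i, 1);
    Console.WriteLine($"blob {i}: area={area}, centroid=({cx:F1}, {cy:F1})");
}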

ConnectedComponentsWithStats(InputArray, OutputArray, OutputArray, OutputArray, PixelConnectivity, MatType)

Computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.

public static int ConnectedComponentsWithStats(InputArray image, OutputArray labels, OutputArray stats, OutputArray centroids, PixelConnectivity connectivity, MatType ltype)

Parameters

image InputArray

the image to be labeled

labels OutputArray

destination labeled image

stats OutputArray

statistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes

centroids OutputArray

floating point centroid (x,y) output for each label, including the background label

connectivity PixelConnectivity

8 or 4 for 8-way or 4-way connectivity respectively

ltype MatType

output image label type. Currently CV_32S and CV_16U are supported.

Returns

int

ConnectedComponentsWithStatsWithAlgorithm(InputArray, OutputArray, OutputArray, OutputArray, PixelConnectivity, MatType, ConnectedComponentsAlgorithmsTypes)

Computes the connected components labeled image of a boolean image and also produces a statistics output for each label.

For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently Grana's (BBDT) and Wu's (SAUF) algorithms are supported, see #ConnectedComponentsAlgorithmsTypes for details. Note that the SAUF algorithm forces a row-major ordering of labels while BBDT does not. This function uses the parallel version of both Grana's and Wu's algorithms (statistics included) if at least one allowed parallel framework is enabled and if the rows of the image are at least twice the number returned by #getNumberOfCPUs.

public static int ConnectedComponentsWithStatsWithAlgorithm(InputArray image, OutputArray labels, OutputArray stats, OutputArray centroids, PixelConnectivity connectivity, MatType ltype, ConnectedComponentsAlgorithmsTypes ccltype)

Parameters

image InputArray

the 8-bit single-channel image to be labeled

labels OutputArray

destination labeled image

stats OutputArray

statistics output for each label, including the background label; see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes. The data type is CV_32S.

centroids OutputArray

centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.

connectivity PixelConnectivity

8 or 4 for 8-way or 4-way connectivity respectively

ltype MatType

output image label type. Currently CV_32S and CV_16U are supported.

ccltype ConnectedComponentsAlgorithmsTypes

connected components algorithm type.

Returns

int

ContourArea(InputArray, bool)

Calculates the contour area

public static double ContourArea(InputArray contour, bool oriented = false)

Parameters

contour InputArray

The contour vertices, represented by CV_32SC2 or CV_32FC2 matrix

oriented bool

Returns

double

ContourArea(IEnumerable<Point2f>, bool)

Calculates the contour area

public static double ContourArea(IEnumerable<Point2f> contour, bool oriented = false)

Parameters

contour IEnumerable<Point2f>

The contour vertices, represented by CV_32SC2 or CV_32FC2 matrix

oriented bool

Returns

double

ContourArea(IEnumerable<Point>, bool)

Calculates the contour area

public static double ContourArea(IEnumerable<Point> contour, bool oriented = false)

Parameters

contour IEnumerable<Point>

The contour vertices, represented by CV_32SC2 or CV_32FC2 matrix

oriented bool

Returns

double
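
A minimal sketch: the area of an axis-aligned 10x5 rectangle.

// requires: using OpenCvSharp;
var rect = new[] { new Point(0, 0), new Point(10, 0), new Point(10, 5), new Point(0, 5) };
double area = Cv2.ContourArea(rect);                       // 50
double signedArea = Cv2.ContourArea(rect, oriented: true); // sign reflects the vertex order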

ConvertFp16(InputArray, OutputArray)

Converts an array to half-precision floating point numbers.

This function converts FP32 (single precision floating point) from/to FP16 (half precision floating point). The CV_16S format is used to represent FP16 data. There are two use modes (src -> dst): CV_32F -> CV_16S and CV_16S -> CV_32F. The input array has to have type CV_32F or CV_16S to represent the bit depth. If the input array is neither of them, the function will raise an error. The format of half precision floating point is defined in IEEE 754-2008.

public static void ConvertFp16(InputArray src, OutputArray dst)

Parameters

src InputArray

input array.

dst OutputArray

output array.

ConvertMaps(InputArray, InputArray, OutputArray, OutputArray, MatType, bool)

Converts image transformation maps from one representation to another.

public static void ConvertMaps(InputArray map1, InputArray map2, OutputArray dstmap1, OutputArray dstmap2, MatType dstmap1Type, bool nnInterpolation = false)

Parameters

map1 InputArray

The first input map of type CV_16SC2 , CV_32FC1 , or CV_32FC2 .

map2 InputArray

The second input map of type CV_16UC1 , CV_32FC1 , or none (empty matrix), respectively.

dstmap1 OutputArray

The first output map that has the type dstmap1type and the same size as src.

dstmap2 OutputArray

The second output map.

dstmap1Type MatType

Type of the first output map that should be CV_16SC2 , CV_32FC1 , or CV_32FC2 .

nnInterpolation bool

Flag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation.

ConvertPointsFromHomogeneous(InputArray, OutputArray)

converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z))

public static void ConvertPointsFromHomogeneous(InputArray src, OutputArray dst)

Parameters

src InputArray

Input vector of N-dimensional points.

dst OutputArray

Output vector of N-1-dimensional points.

ConvertPointsFromHomogeneous(IEnumerable<Vec3f>)

converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z))

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec2f[] ConvertPointsFromHomogeneous(IEnumerable<Vec3f> src)

Parameters

src IEnumerable<Vec3f>

Input vector of N-dimensional points.

Returns

Vec2f[]

Output vector of N-1-dimensional points.
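
A minimal sketch: each homogeneous point is divided by its last component.

// requires: using OpenCvSharp;
var homogeneous = new[] { new Vec3f(4, 2, 2), new Vec3f(9, 3, 3) };
Vec2f[] cartesian = Cv2.ConvertPointsFromHomogeneous(homogeneous);
// cartesian[0] == (2, 1), cartesian[1] == (3, 1)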

ConvertPointsFromHomogeneous(IEnumerable<Vec4f>)

converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z))

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec3f[] ConvertPointsFromHomogeneous(IEnumerable<Vec4f> src)

Parameters

src IEnumerable<Vec4f>

Input vector of N-dimensional points.

Returns

Vec3f[]

Output vector of N-1-dimensional points.

ConvertPointsHomogeneous(InputArray, OutputArray)

Converts points to/from homogeneous coordinates.

public static void ConvertPointsHomogeneous(InputArray src, OutputArray dst)

Parameters

src InputArray

Input array or vector of 2D, 3D, or 4D points.

dst OutputArray

Output vector of 2D, 3D, or 4D points.

ConvertPointsToHomogeneous(InputArray, OutputArray)

converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1))

public static void ConvertPointsToHomogeneous(InputArray src, OutputArray dst)

Parameters

src InputArray

Input vector of N-dimensional points.

dst OutputArray

Output vector of N+1-dimensional points.

ConvertPointsToHomogeneous(IEnumerable<Vec2f>)

converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1))

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec3f[] ConvertPointsToHomogeneous(IEnumerable<Vec2f> src)

Parameters

src IEnumerable<Vec2f>

Input vector of N-dimensional points.

Returns

Vec3f[]

Output vector of N+1-dimensional points.

ConvertPointsToHomogeneous(IEnumerable<Vec3f>)

converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1))

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec4f[] ConvertPointsToHomogeneous(IEnumerable<Vec3f> src)

Parameters

src IEnumerable<Vec3f>

Input vector of N-dimensional points.

Returns

Vec4f[]

Output vector of N+1-dimensional points.

ConvertScaleAbs(InputArray, OutputArray, double, double)

Scales, computes absolute values and converts the result to 8-bit.

public static void ConvertScaleAbs(InputArray src, OutputArray dst, double alpha = 1, double beta = 0)

Parameters

src InputArray

The source array

dst OutputArray

The destination array

alpha double

The optional scale factor. [By default this is 1]

beta double

The optional delta added to the scaled values. [By default this is 0]
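
A minimal sketch (file name is illustrative): turning a signed 16-bit Sobel gradient into a displayable 8-bit image.

// requires: using OpenCvSharp;
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var gradX = new Mat();
Cv2.Sobel(gray, gradX, MatType.CV_16S, 1, 0);
using var view = new Mat();
Cv2.ConvertScaleAbs(gradX, view);   // |value|, saturated to the 0..255 range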

ConvexHull(InputArray, OutputArray, bool, bool)

Computes convex hull for a set of 2D points.

public static void ConvexHull(InputArray points, OutputArray hull, bool clockwise = false, bool returnPoints = true)

Parameters

points InputArray

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix

hull OutputArray

The output convex hull. It is either a vector of points that form the hull (must have the same type as the input points), or a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set).

clockwise bool

If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.

returnPoints bool

ConvexHull(IEnumerable<Point2f>, bool)

Computes convex hull for a set of 2D points.

public static Point2f[] ConvexHull(IEnumerable<Point2f> points, bool clockwise = false)

Parameters

points IEnumerable<Point2f>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix

clockwise bool

If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.

Returns

Point2f[]

The output convex hull. It is a vector of points that form the hull (must have the same type as the input points).

ConvexHull(IEnumerable<Point>, bool)

Computes convex hull for a set of 2D points.

public static Point[] ConvexHull(IEnumerable<Point> points, bool clockwise = false)

Parameters

points IEnumerable<Point>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix

clockwise bool

If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.

Returns

Point[]

The output convex hull. It is a vector of points that form the hull (must have the same type as the input points).
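
A minimal sketch: interior points are dropped from the hull.

// requires: using OpenCvSharp;
var points = new[]
{
    new Point(0, 0), new Point(10, 0), new Point(10, 10),
    new Point(0, 10), new Point(5, 5),   // interior point
};
Point[] hull = Cv2.ConvexHull(points);   // contains only the four corner points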

ConvexHullIndices(IEnumerable<Point2f>, bool)

Computes convex hull for a set of 2D points.

public static int[] ConvexHullIndices(IEnumerable<Point2f> points, bool clockwise = false)

Parameters

points IEnumerable<Point2f>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix

clockwise bool

If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.

Returns

int[]

The output convex hull. It is a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set).

ConvexHullIndices(IEnumerable<Point>, bool)

Computes convex hull for a set of 2D points.

public static int[] ConvexHullIndices(IEnumerable<Point> points, bool clockwise = false)

Parameters

points IEnumerable<Point>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix

clockwise bool

If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.

Returns

int[]

The output convex hull. It is a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set).

ConvexityDefects(InputArray, InputArray, OutputArray)

Computes the contour convexity defects

public static void ConvexityDefects(InputArray contour, InputArray convexHull, OutputArray convexityDefects)

Parameters

contour InputArray

Input contour.

convexHull InputArray

Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull.

convexityDefects OutputArray

The output vector of convexity defects. Each convexity defect is represented as a 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is a fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is fixpt_depth/256.0.

ConvexityDefects(IEnumerable<Point2f>, IEnumerable<int>)

Computes the contour convexity defects

public static Vec4i[] ConvexityDefects(IEnumerable<Point2f> contour, IEnumerable<int> convexHull)

Parameters

contour IEnumerable<Point2f>

Input contour.

convexHull IEnumerable<int>

Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull.

Returns

Vec4i[]

The output vector of convexity defects. Each convexity defect is represented as a 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is a fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is fixpt_depth/256.0.

ConvexityDefects(IEnumerable<Point>, IEnumerable<int>)

Computes the contour convexity defects

public static Vec4i[] ConvexityDefects(IEnumerable<Point> contour, IEnumerable<int> convexHull)

Parameters

contour IEnumerable<Point>

Input contour.

convexHull IEnumerable<int>

Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull.

Returns

Vec4i[]

The output vector of convexity defects. Each convexity defect is represented as a 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is a fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is fixpt_depth/256.0.
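
A minimal sketch (the contour is a small hand-made concave polygon; in practice it would come from Cv2.FindContours):

// requires: using System; using OpenCvSharp;
var contour = new[]
{
    new Point(0, 0), new Point(10, 0), new Point(10, 10),
    new Point(5, 5),                     // the concavity
    new Point(0, 10),
};
int[] hullIndices = Cv2.ConvexHullIndices(contour);
Vec4i[] defects = Cv2.ConvexityDefects(contour, hullIndices);
foreach (var d in defects)
    Console.WriteLine($"start={d.Item0} end={d.Item1} farthest={d.Item2} depth={d.Item3 / 256.0}");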

CopyMakeBorder(InputArray, OutputArray, int, int, int, int, BorderTypes, Scalar?)

Forms a border around the image

public static void CopyMakeBorder(InputArray src, OutputArray dst, int top, int bottom, int left, int right, BorderTypes borderType, Scalar? value = null)

Parameters

src InputArray

The source image

dst OutputArray

The destination image; will have the same type as src and the size Size(src.cols+left+right, src.rows+top+bottom)

top int

Specifies how many pixels in each direction from the source image rectangle to extrapolate

bottom int

Specifies how many pixels in each direction from the source image rectangle to extrapolate

left int

Specifies how many pixels in each direction from the source image rectangle to extrapolate

right int

Specifies how many pixels in each direction from the source image rectangle to extrapolate

borderType BorderTypes

The border type

value Scalar?

The border value if borderType == Constant
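
A minimal sketch (file name is illustrative): adding a 16-pixel black frame on every side.

// requires: using OpenCvSharp;
using var src = Cv2.ImRead("input.png");
using var padded = new Mat();
Cv2.CopyMakeBorder(src, padded, 16, 16, 16, 16, BorderTypes.Constant, Scalar.All(0));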

CopyTo(InputArray, OutputArray, InputArray?)

Copies the matrix to another one. When the operation mask is specified, if the internal Mat::create call reallocates the matrix, the newly allocated matrix is initialized with all zeros before copying the data.

public static void CopyTo(InputArray src, OutputArray dst, InputArray? mask = null)

Parameters

src InputArray

Source matrix.

dst OutputArray

Destination matrix. If it does not have a proper size or type before the operation, it is reallocated.

mask InputArray

Operation mask of the same size as src. Its non-zero elements indicate which matrix elements need to be copied. The mask has to be of type CV_8U and can have 1 or multiple channels.
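
A minimal sketch (file name and color bounds are illustrative): copying only the pixels selected by a mask.

// requires: using OpenCvSharp;
using var src = Cv2.ImRead("input.png");
using var mask = new Mat();
Cv2.InRange(src, new Scalar(0, 0, 128), new Scalar(100, 100, 255), mask); // reddish pixels (BGR)
using var dst = new Mat();
Cv2.CopyTo(src, dst, mask);   // only pixels where mask != 0 are copied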

CornerEigenValsAndVecs(InputArray, OutputArray, int, int, BorderTypes)

computes both eigenvalues and the eigenvectors of 2x2 derivative covariation matrix at each pixel. The output is stored as 6-channel matrix.

public static void CornerEigenValsAndVecs(InputArray src, OutputArray dst, int blockSize, int ksize, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray
dst OutputArray
blockSize int
ksize int
borderType BorderTypes

CornerHarris(InputArray, OutputArray, int, int, double, BorderTypes)

Harris corner detector.

public static void CornerHarris(InputArray src, OutputArray dst, int blockSize, int ksize, double k, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

Input single-channel 8-bit or floating-point image.

dst OutputArray

Image to store the Harris detector responses. It has the type CV_32FC1 and the same size as src.

blockSize int

Neighborhood size (see the details on #cornerEigenValsAndVecs ).

ksize int

Aperture parameter for the Sobel operator.

k double

Harris detector free parameter. See the formula above.

borderType BorderTypes

Pixel extrapolation method. See #BorderTypes. #BORDER_WRAP is not supported.
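
A minimal sketch (file name and the 0.01 relative threshold are illustrative): computing the response map and keeping only strong corners.

// requires: using OpenCvSharp;
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var response = new Mat();
Cv2.CornerHarris(gray, response, blockSize: 2, ksize: 3, k: 0.04);
Cv2.MinMaxLoc(response, out _, out double maxVal);
using var corners = new Mat();
Cv2.Threshold(response, corners, 0.01 * maxVal, 255, ThresholdTypes.Binary);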

CornerMinEigenVal(InputArray, OutputArray, int, int, BorderTypes)

Calculates the minimal eigenvalue of gradient matrices for corner detection.

public static void CornerMinEigenVal(InputArray src, OutputArray dst, int blockSize, int ksize = 3, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

Input single-channel 8-bit or floating-point image.

dst OutputArray

Image to store the minimal eigenvalues. It has the type CV_32FC1 and the same size as src .

blockSize int

Neighborhood size (see the details on #cornerEigenValsAndVecs ).

ksize int

Aperture parameter for the Sobel operator.

borderType BorderTypes

Pixel extrapolation method. See #BorderTypes. #BORDER_WRAP is not supported.

CornerSubPix(InputArray, IEnumerable<Point2f>, Size, Size, TermCriteria)

adjusts the corner locations with sub-pixel accuracy to maximize the certain cornerness criteria

public static Point2f[] CornerSubPix(InputArray image, IEnumerable<Point2f> inputCorners, Size winSize, Size zeroZone, TermCriteria criteria)

Parameters

image InputArray

Input image.

inputCorners IEnumerable<Point2f>

Initial coordinates of the input corners and refined coordinates provided for output.

winSize Size

Half of the side length of the search window.

zeroZone Size

Half of the size of the dead region in the middle of the search zone over which the summation in the formula is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size.

criteria TermCriteria

Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon on some iteration.

Returns

Point2f[]
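
A minimal sketch (file name and detector parameters are illustrative): refining coarse corners from GoodFeaturesToTrack to sub-pixel accuracy.

// requires: using OpenCvSharp;
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var noMask = new Mat();   // empty mask = search the whole image
Point2f[] coarse = Cv2.GoodFeaturesToTrack(gray, 50, 0.01, 10, noMask, 3, false, 0.04);
Point2f[] refined = Cv2.CornerSubPix(
    gray, coarse, new Size(5, 5), new Size(-1, -1),
    new TermCriteria(CriteriaTypes.Eps | CriteriaTypes.Count, 30, 0.01));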

CorrectMatches(InputArray, InputArray, InputArray, OutputArray, OutputArray)

Refines coordinates of corresponding points.

public static void CorrectMatches(InputArray F, InputArray points1, InputArray points2, OutputArray newPoints1, OutputArray newPoints2)

Parameters

F InputArray

3x3 fundamental matrix.

points1 InputArray

1xN array containing the first set of points.

points2 InputArray

1xN array containing the second set of points.

newPoints1 OutputArray

The optimized points1.

newPoints2 OutputArray

The optimized points2.

CorrectMatches(double[,], IEnumerable<Point2d>, IEnumerable<Point2d>, out Point2d[], out Point2d[])

Refines coordinates of corresponding points.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void CorrectMatches(double[,] F, IEnumerable<Point2d> points1, IEnumerable<Point2d> points2, out Point2d[] newPoints1, out Point2d[] newPoints2)

Parameters

F double[,]

3x3 fundamental matrix.

points1 IEnumerable<Point2d>

1xN array containing the first set of points.

points2 IEnumerable<Point2d>

1xN array containing the second set of points.

newPoints1 Point2d[]

The optimized points1.

newPoints2 Point2d[]

The optimized points2.

CountNonZero(InputArray)

computes the number of nonzero array elements

public static int CountNonZero(InputArray mtx)

Parameters

mtx InputArray

Single-channel array

Returns

int

number of non-zero elements in mtx

CreateCLAHE(double, Size?)

Creates a predefined CLAHE object

public static CLAHE CreateCLAHE(double clipLimit = 40, Size? tileGridSize = null)

Parameters

clipLimit double
tileGridSize Size?

Returns

CLAHE
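
A minimal sketch (file name and parameters are illustrative): contrast-limited adaptive histogram equalization of a grayscale image.

// requires: using OpenCvSharp;
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var clahe = Cv2.CreateCLAHE(clipLimit: 2.0, tileGridSize: new Size(8, 8));
using var equalized = new Mat();
clahe.Apply(gray, equalized);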

CreateFrameSource_Camera(int)

public static FrameSource CreateFrameSource_Camera(int deviceId)

Parameters

deviceId int

Returns

FrameSource

CreateFrameSource_Empty()

public static FrameSource CreateFrameSource_Empty()

Returns

FrameSource

CreateFrameSource_Video(string)

public static FrameSource CreateFrameSource_Video(string fileName)

Parameters

fileName string

Returns

FrameSource

CreateFrameSource_Video_CUDA(string)

public static FrameSource CreateFrameSource_Video_CUDA(string fileName)

Parameters

fileName string

Returns

FrameSource

CreateHanningWindow(InputOutputArray, Size, MatType)

Computes Hanning window coefficients in two dimensions.

public static void CreateHanningWindow(InputOutputArray dst, Size winSize, MatType type)

Parameters

dst InputOutputArray

Destination array to place Hann coefficients in

winSize Size

The window size specifications

type MatType

Created array type

CreateOptFlow_Brox_GPU()

public static DenseOpticalFlowExt CreateOptFlow_Brox_GPU()

Returns

DenseOpticalFlowExt

CreateOptFlow_DualTVL1_GPU()

public static DenseOpticalFlowExt CreateOptFlow_DualTVL1_GPU()

Returns

DenseOpticalFlowExt

CreateOptFlow_Farneback()

public static DenseOpticalFlowExt CreateOptFlow_Farneback()

Returns

DenseOpticalFlowExt

CreateOptFlow_Farneback_GPU()

public static DenseOpticalFlowExt CreateOptFlow_Farneback_GPU()

Returns

DenseOpticalFlowExt

CreateOptFlow_PyrLK_GPU()

public static DenseOpticalFlowExt CreateOptFlow_PyrLK_GPU()

Returns

DenseOpticalFlowExt

CreateSuperResolution_BTVL1()

Create Bilateral TV-L1 Super Resolution.

public static SuperResolution CreateSuperResolution_BTVL1()

Returns

SuperResolution

CreateSuperResolution_BTVL1_CUDA()

Create Bilateral TV-L1 Super Resolution.

public static SuperResolution CreateSuperResolution_BTVL1_CUDA()

Returns

SuperResolution

CreateTrackbar(string, string, int, TrackbarCallbackNative?, nint)

Creates a trackbar and attaches it to the specified window. The function createTrackbar creates a trackbar (a slider or range control) with the specified name and range, assigns a variable value to be a position synchronized with the trackbar, and specifies the callback function onChange to be called on the trackbar position change. The created trackbar is displayed in the specified window winName.

public static int CreateTrackbar(string trackbarName, string winName, int count, TrackbarCallbackNative? onChange = null, nint userData = 0)

Parameters

trackbarName string

Name of the created trackbar.

winName string

Name of the window that will be used as a parent of the created trackbar.

count int

Maximal position of the slider. The minimal position is always 0.

onChange TrackbarCallbackNative

Pointer to the function to be called every time the slider changes position. This function should be prototyped as void Foo(int, void*), where the first parameter is the trackbar position and the second parameter is the user data (see the next parameter). If the callback is the NULL pointer, no callbacks are called, but only the value is updated.

userData nint

User data that is passed as is to the callback. It can be used to handle trackbar events without using global variables.

Returns

int

CreateTrackbar(string, string, ref int, int, TrackbarCallbackNative?, nint)

Creates a trackbar and attaches it to the specified window. The function createTrackbar creates a trackbar (a slider or range control) with the specified name and range, assigns a variable value to be a position synchronized with the trackbar, and specifies the callback function onChange to be called on the trackbar position change. The created trackbar is displayed in the specified window winName.

public static int CreateTrackbar(string trackbarName, string winName, ref int value, int count, TrackbarCallbackNative? onChange = null, nint userData = 0)

Parameters

trackbarName string

Name of the created trackbar.

winName string

Name of the window that will be used as a parent of the created trackbar.

value int

Optional pointer to an integer variable whose value reflects the position of the slider. Upon creation, the slider position is defined by this variable.

count int

Maximal position of the slider. The minimal position is always 0.

onChange TrackbarCallbackNative

Pointer to the function to be called every time the slider changes position. This function should be prototyped as void Foo(int, void*), where the first parameter is the trackbar position and the second parameter is the user data (see the next parameter). If the callback is the NULL pointer, no callbacks are called, but only the value is updated.

userData nint

User data that is passed as is to the callback. It can be used to handle trackbar events without using global variables.

Returns

int
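
A minimal sketch (window name, file name and range are illustrative): a slider that re-thresholds the image whenever it moves.

// requires: using OpenCvSharp;
using var src = Cv2.ImRead("input.png", ImreadModes.Grayscale);
int threshold = 128;
Cv2.NamedWindow("demo");
Cv2.CreateTrackbar("threshold", "demo", ref threshold, 255, (pos, _) =>
{
    using var dst = new Mat();
    Cv2.Threshold(src, dst, pos, 255, ThresholdTypes.Binary);
    Cv2.ImShow("demo", dst);
});
Cv2.WaitKey();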

CubeRoot(float)

computes cube root of the argument

public static float CubeRoot(float val)

Parameters

val float

Returns

float

CvtColor(InputArray, OutputArray, ColorConversionCodes, int)

Converts image from one color space to another

public static void CvtColor(InputArray src, OutputArray dst, ColorConversionCodes code, int dstCn = 0)

Parameters

src InputArray

The source image, 8-bit unsigned, 16-bit unsigned or single-precision floating-point

dst OutputArray

The destination image; will have the same size and the same depth as src

code ColorConversionCodes

The color space conversion code

dstCn int

The number of channels in the destination image; if the parameter is 0, the number of the channels will be derived automatically from src and the code
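
A minimal sketch (file name is illustrative): converting a BGR image to grayscale and to HSV.

// requires: using OpenCvSharp;
using var bgr = Cv2.ImRead("input.png");   // ImRead loads color images as BGR
using var gray = new Mat();
Cv2.CvtColor(bgr, gray, ColorConversionCodes.BGR2GRAY);
using var hsv = new Mat();
Cv2.CvtColor(bgr, hsv, ColorConversionCodes.BGR2HSV);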

CvtColorTwoPlane(InputArray, InputArray, OutputArray, ColorConversionCodes)

Converts an image from one color space to another where the source image is stored in two planes. This function only supports YUV420 to RGB conversion as of now.

public static void CvtColorTwoPlane(InputArray src1, InputArray src2, OutputArray dst, ColorConversionCodes code)

Parameters

src1 InputArray

8-bit image (#CV_8U) of the Y plane.

src2 InputArray

image containing interleaved U/V plane.

dst OutputArray

output image.

code ColorConversionCodes

Specifies the type of conversion. It can take any of the following values: - #COLOR_YUV2BGR_NV12 - #COLOR_YUV2RGB_NV12 - #COLOR_YUV2BGRA_NV12 - #COLOR_YUV2RGBA_NV12 - #COLOR_YUV2BGR_NV21 - #COLOR_YUV2RGB_NV21 - #COLOR_YUV2BGRA_NV21 - #COLOR_YUV2RGBA_NV21

Dct(InputArray, OutputArray, DctFlags)

Performs forward or inverse 1D or 2D Discrete Cosine Transformation

public static void Dct(InputArray src, OutputArray dst, DctFlags flags = DctFlags.None)

Parameters

src InputArray

The source floating-point array

dst OutputArray

The destination array; will have the same size and same type as src

flags DctFlags

Transformation flags, a combination of DctFlag2 values

Decolor(InputArray, OutputArray, OutputArray)

Transforms a color image to a grayscale image. It is a basic tool in digital printing, stylized black-and-white photograph rendering, and in many single channel image processing applications [CL12].

public static void Decolor(InputArray src, OutputArray grayscale, OutputArray colorBoost)

Parameters

src InputArray

Input 8-bit 3-channel image.

grayscale OutputArray

Output 8-bit 1-channel image.

colorBoost OutputArray

Output 8-bit 3-channel image.

DecomposeHomographyMat(InputArray, InputArray, out Mat[], out Mat[], out Mat[])

Decompose a homography matrix to rotation(s), translation(s) and plane normal(s).

public static int DecomposeHomographyMat(InputArray h, InputArray k, out Mat[] rotations, out Mat[] translations, out Mat[] normals)

Parameters

h InputArray

The input homography matrix between two images.

k InputArray

The input intrinsic camera calibration matrix.

rotations Mat[]

Array of rotation matrices.

translations Mat[]

Array of translation matrices.

normals Mat[]

Array of plane normal matrices.

Returns

int

DecomposeProjectionMatrix(InputArray, OutputArray, OutputArray, OutputArray, OutputArray?, OutputArray?, OutputArray?, OutputArray?)

Decomposes the projection matrix into a camera matrix, a rotation matrix, and a translation vector.

public static void DecomposeProjectionMatrix(InputArray projMatrix, OutputArray cameraMatrix, OutputArray rotMatrix, OutputArray transVect, OutputArray? rotMatrixX = null, OutputArray? rotMatrixY = null, OutputArray? rotMatrixZ = null, OutputArray? eulerAngles = null)

Parameters

projMatrix InputArray

3x4 input projection matrix P.

cameraMatrix OutputArray

Output 3x3 camera matrix K.

rotMatrix OutputArray

Output 3x3 external rotation matrix R.

transVect OutputArray

Output 4x1 translation vector T.

rotMatrixX OutputArray

Optional 3x3 rotation matrix around x-axis.

rotMatrixY OutputArray

Optional 3x3 rotation matrix around y-axis.

rotMatrixZ OutputArray

Optional 3x3 rotation matrix around z-axis.

eulerAngles OutputArray

Optional three-element vector containing three Euler angles of rotation in degrees.

DecomposeProjectionMatrix(double[,], out double[,], out double[,], out double[])

Decomposes the projection matrix into a camera matrix, a rotation matrix, and a translation vector.

public static void DecomposeProjectionMatrix(double[,] projMatrix, out double[,] cameraMatrix, out double[,] rotMatrix, out double[] transVect)

Parameters

projMatrix double[,]

3x4 input projection matrix P.

cameraMatrix double[,]

Output 3x3 camera matrix K.

rotMatrix double[,]

Output 3x3 external rotation matrix R.

transVect double[]

Output 4x1 translation vector T.

DecomposeProjectionMatrix(double[,], out double[,], out double[,], out double[], out double[,], out double[,], out double[,], out double[])

Decomposes the projection matrix into a camera matrix, a rotation matrix, and a translation vector.

public static void DecomposeProjectionMatrix(double[,] projMatrix, out double[,] cameraMatrix, out double[,] rotMatrix, out double[] transVect, out double[,] rotMatrixX, out double[,] rotMatrixY, out double[,] rotMatrixZ, out double[] eulerAngles)

Parameters

projMatrix double[,]

3x4 input projection matrix P.

cameraMatrix double[,]

Output 3x3 camera matrix K.

rotMatrix double[,]

Output 3x3 external rotation matrix R.

transVect double[]

Output 4x1 translation vector T.

rotMatrixX double[,]

Optional 3x3 rotation matrix around x-axis.

rotMatrixY double[,]

Optional 3x3 rotation matrix around y-axis.

rotMatrixZ double[,]

Optional 3x3 rotation matrix around z-axis.

eulerAngles double[]

Optional three-element vector containing three Euler angles of rotation in degrees.

Demosaicing(InputArray, OutputArray, ColorConversionCodes, int)

main function for all demosaicing processes

public static void Demosaicing(InputArray src, OutputArray dst, ColorConversionCodes code, int dstCn = 0)

Parameters

src InputArray

input image: 8-bit unsigned or 16-bit unsigned.

dst OutputArray

output image of the same size and depth as src.

code ColorConversionCodes

Color space conversion code (see the description below).

dstCn int

number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code.

Remarks

The function can do the following transformations:

  • Demosaicing using bilinear interpolation

    #COLOR_BayerBG2BGR , #COLOR_BayerGB2BGR , #COLOR_BayerRG2BGR , #COLOR_BayerGR2BGR #COLOR_BayerBG2GRAY , #COLOR_BayerGB2GRAY , #COLOR_BayerRG2GRAY , #COLOR_BayerGR2GRAY

  • Demosaicing using Variable Number of Gradients.

    #COLOR_BayerBG2BGR_VNG , #COLOR_BayerGB2BGR_VNG , #COLOR_BayerRG2BGR_VNG , #COLOR_BayerGR2BGR_VNG

  • Edge-Aware Demosaicing.

    #COLOR_BayerBG2BGR_EA , #COLOR_BayerGB2BGR_EA , #COLOR_BayerRG2BGR_EA , #COLOR_BayerGR2BGR_EA

  • Demosaicing with alpha channel

    COLOR_BayerBG2BGRA , #COLOR_BayerGB2BGRA , #COLOR_BayerRG2BGRA , #COLOR_BayerGR2BGRA

DenoiseTVL1(IEnumerable<Mat>, Mat, double, int)

The primal-dual algorithm is an algorithm for solving special types of variational problems (that is, finding a function to minimize some functional). Since image denoising, in particular, may be seen as a variational problem, the primal-dual algorithm can then be used to perform denoising, and this is exactly what is implemented.

public static void DenoiseTVL1(IEnumerable<Mat> observations, Mat result, double lambda = 1, int niters = 30)

Parameters

observations IEnumerable<Mat>

This array should contain one or more noised versions of the image that is to be restored.

result Mat

Here the denoised image will be stored. There is no need to do pre-allocation of storage space, as it will be automatically allocated, if necessary.

lambda double

Corresponds to lambda in the formulas above. As it is enlarged, the smooth (blurred) images are treated more favorably than detailed (but maybe more noised) ones. Roughly speaking, as it becomes smaller, the result will be more blurred, but more severe outliers will be removed.

niters int

Number of iterations that the algorithm will run. Of course, the more iterations the better, but it is hard to quantify this precisely, so just use the default and increase it if the results are poor.

DestroyAllWindows()

Destroys all of the HighGUI windows.

public static void DestroyAllWindows()

DestroyWindow(string)

Destroys the specified window.

public static void DestroyWindow(string winName)

Parameters

winName string

DetailEnhance(InputArray, OutputArray, float, float)

This filter enhances the details of a particular image.

public static void DetailEnhance(InputArray src, OutputArray dst, float sigmaS = 10, float sigmaR = 0.15)

Parameters

src InputArray

Input 8-bit 3-channel image.

dst OutputArray

Output image with the same size and type as src.

sigmaS float

Range between 0 and 200.

sigmaR float

Range between 0 and 1.

Determinant(InputArray)

computes determinant of a square matrix

public static double Determinant(InputArray mtx)

Parameters

mtx InputArray

The input matrix; must have CV_32FC1 or CV_64FC1 type and square size

Returns

double

determinant of the specified matrix.

Dft(InputArray, OutputArray, DftFlags, int)

Performs a forward Discrete Fourier transform of 1D or 2D floating-point array.

public static void Dft(InputArray src, OutputArray dst, DftFlags flags = DftFlags.None, int nonzeroRows = 0)

Parameters

src InputArray

The source array, real or complex

dst OutputArray

The destination array, whose size and type depend on the flags

flags DftFlags

Transformation flags, a combination of the DftFlag2 values

nonzeroRows int

When the parameter != 0, the function assumes that only the first nonzeroRows rows of the input array ( DFT_INVERSE is not set) or only the first nonzeroRows of the output array ( DFT_INVERSE is set) contain non-zeros, thus the function can handle the rest of the rows more efficiently and thus save some time. This technique is very useful for computing array cross-correlation or convolution using DFT
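
A minimal sketch (file name is illustrative; padding to an optimal DFT size is omitted): a forward transform followed by the inverse that restores the image.

// requires: using OpenCvSharp;
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var floatImg = new Mat();
gray.ConvertTo(floatImg, MatType.CV_32F);
using var spectrum = new Mat();
Cv2.Dft(floatImg, spectrum, DftFlags.ComplexOutput);
using var restored = new Mat();
Cv2.Dft(spectrum, restored, DftFlags.Inverse | DftFlags.RealOutput | DftFlags.Scale);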

Dilate(InputArray, OutputArray, InputArray?, Point?, int, BorderTypes, Scalar?)

Dilates an image by using a specific structuring element.

public static void Dilate(InputArray src, OutputArray dst, InputArray? element, Point? anchor = null, int iterations = 1, BorderTypes borderType = BorderTypes.Constant, Scalar? borderValue = null)

Parameters

src InputArray

The source image

dst OutputArray

The destination image. It will have the same size and the same type as src

element InputArray

The structuring element used for dilation. If element=new Mat() , a 3x3 rectangular structuring element is used

anchor Point?

Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center

iterations int

The number of times dilation is applied. [By default this is 1]

borderType BorderTypes

The pixel extrapolation method. [By default this is BorderType.Constant]

borderValue Scalar?

The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()]
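
A minimal sketch (file name and kernel size are illustrative): dilating a binary mask with a 5x5 elliptical structuring element.

// requires: using OpenCvSharp;
using var mask = Cv2.ImRead("mask.png", ImreadModes.Grayscale);
using var kernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new Size(5, 5));
using var dilated = new Mat();
Cv2.Dilate(mask, dilated, kernel, iterations: 2);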

DistanceTransform(InputArray, OutputArray, DistanceTypes, DistanceTransformMasks, int)

computes the distance transform map

public static void DistanceTransform(InputArray src, OutputArray dst, DistanceTypes distanceType, DistanceTransformMasks maskSize, int dstType = 4)

Parameters

src InputArray

8-bit, single-channel (binary) source image.

dst OutputArray

Output image with calculated distances. It is an 8-bit or 32-bit floating-point, single-channel image of the same size as src.

distanceType DistanceTypes

Type of distance

maskSize DistanceTransformMasks

Size of the distance transform mask, see #DistanceTransformMasks. In case of the #DIST_L1 or #DIST_C distance type, the parameter is forced to 3 because a 3x3 mask gives the same result as 5x5 or any larger aperture.

dstType int

Type of output image. It can be MatType.CV_8U or MatType.CV_32F. Type CV_8U can be used only for the first variant of the function and distanceType == #DIST_L1.
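
A minimal sketch (file name is illustrative): computing the L2 distance map of a binary image and scaling it for display.

// requires: using OpenCvSharp;
using var binary = Cv2.ImRead("mask.png", ImreadModes.Grayscale);
using var dist = new Mat();
Cv2.DistanceTransform(binary, dist, DistanceTypes.L2, DistanceTransformMasks.Mask5);
Cv2.Normalize(dist, dist, 0, 1.0, NormTypes.MinMax);   // map distances into [0, 1]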

DistanceTransformWithLabels(InputArray, OutputArray, OutputArray, DistanceTypes, DistanceTransformMasks, DistanceTransformLabelTypes)

Calculates the distance to the closest zero pixel for each pixel of the source image.

public static void DistanceTransformWithLabels(InputArray src, OutputArray dst, OutputArray labels, DistanceTypes distanceType, DistanceTransformMasks maskSize, DistanceTransformLabelTypes labelType = DistanceTransformLabelTypes.CComp)

Parameters

src InputArray

8-bit, single-channel (binary) source image.

dst OutputArray

Output image with calculated distances. It is an 8-bit or 32-bit floating-point, single-channel image of the same size as src.

labels OutputArray

Output 2D array of labels (the discrete Voronoi diagram). It has the type CV_32SC1 and the same size as src.

distanceType DistanceTypes

Type of distance

maskSize DistanceTransformMasks

Size of the distance transform mask, see #DistanceTransformMasks. #DIST_MASK_PRECISE is not supported by this variant. In case of the #DIST_L1 or #DIST_C distance type, the parameter is forced to 3 because a 3x3 mask gives the same result as 5x5 or any larger aperture.

labelType DistanceTransformLabelTypes

Type of the label array to build

Divide(InputArray, InputArray, OutputArray, double, MatType?)

Performs per-element division of two arrays or a scalar by an array.

public static void Divide(InputArray src1, InputArray src2, OutputArray dst, double scale = 1, MatType? dtype = null)

Parameters

src1 InputArray

The first source array

src2 InputArray

The second source array; should have the same size and same type as src1

dst OutputArray

The destination array; will have the same size and same type as src2

scale double

Scale factor [By default this is 1]

dtype MatType?

Divide(double, InputArray, OutputArray, int)

Performs per-element division of two arrays or a scalar by an array.

public static void Divide(double scale, InputArray src2, OutputArray dst, int dtype = -1)

Parameters

scale double

Scale factor

src2 InputArray

The first source array

dst OutputArray

The destination array; will have the same size and same type as src2

dtype int

DrawChessboardCorners(InputOutputArray, Size, InputArray, bool)

Renders the detected chessboard corners.

public static void DrawChessboardCorners(InputOutputArray image, Size patternSize, InputArray corners, bool patternWasFound)

Parameters

image InputOutputArray

Destination image. It must be an 8-bit color image.

patternSize Size

Number of inner corners per chessboard row and column (patternSize = cv::Size(points_per_row, points_per_column)).

corners InputArray

Array of detected corners, the output of findChessboardCorners.

patternWasFound bool

Parameter indicating whether the complete board was found or not. The return value of findChessboardCorners() should be passed here.

DrawChessboardCorners(InputOutputArray, Size, IEnumerable<Point2f>, bool)

Renders the detected chessboard corners.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void DrawChessboardCorners(InputOutputArray image, Size patternSize, IEnumerable<Point2f> corners, bool patternWasFound)

Parameters

image InputOutputArray

Destination image. It must be an 8-bit color image.

patternSize Size

Number of inner corners per chessboard row and column (patternSize = cv::Size(points_per_row, points_per_column)).

corners IEnumerable<Point2f>

Array of detected corners, the output of findChessboardCorners.

patternWasFound bool

Parameter indicating whether the complete board was found or not. The return value of findChessboardCorners() should be passed here.
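
A minimal sketch (file name and pattern size are illustrative): detecting and drawing the corners of a 9x6 chessboard.

// requires: using OpenCvSharp;
using var image = Cv2.ImRead("board.png");
using var gray = new Mat();
Cv2.CvtColor(image, gray, ColorConversionCodes.BGR2GRAY);
var patternSize = new Size(9, 6);   // inner corners per row and column
bool found = Cv2.FindChessboardCorners(gray, patternSize, out Point2f[] corners);
Cv2.DrawChessboardCorners(image, patternSize, corners, found);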

DrawContours(InputOutputArray, IEnumerable<Mat>, int, Scalar, int, LineTypes, Mat?, int, Point?)

draws contours in the image

public static void DrawContours(InputOutputArray image, IEnumerable<Mat> contours, int contourIdx, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, Mat? hierarchy = null, int maxLevel = 2147483647, Point? offset = null)

Parameters

image InputOutputArray

Destination image.

contours IEnumerable<Mat>

All the input contours. Each contour is stored as a point vector.

contourIdx int

Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

color Scalar

Color of the contours.

thickness int

Thickness of lines the contours are drawn with. If it is negative (for example, thickness=CV_FILLED ), the contour interiors are drawn.

lineType LineTypes

Line connectivity.

hierarchy Mat

Optional information about hierarchy. It is only needed if you want to draw only some of the contours

maxLevel int

Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available.

offset Point?

Optional contour shift parameter. Shift all the drawn contours by the specified offset = (dx, dy)

DrawContours(InputOutputArray, IEnumerable<IEnumerable<Point>>, int, Scalar, int, LineTypes, IEnumerable<HierarchyIndex>?, int, Point?)

draws contours in the image

public static void DrawContours(InputOutputArray image, IEnumerable<IEnumerable<Point>> contours, int contourIdx, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, IEnumerable<HierarchyIndex>? hierarchy = null, int maxLevel = 2147483647, Point? offset = null)

Parameters

image InputOutputArray

Destination image.

contours IEnumerable<IEnumerable<Point>>

All the input contours. Each contour is stored as a point vector.

contourIdx int

Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

color Scalar

Color of the contours.

thickness int

Thickness of lines the contours are drawn with. If it is negative (for example, thickness=CV_FILLED ), the contour interiors are drawn.

lineType LineTypes

Line connectivity.

hierarchy IEnumerable<HierarchyIndex>

Optional information about hierarchy. It is only needed if you want to draw only some of the contours

maxLevel int

Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available.

offset Point?

Optional contour shift parameter. Shift all the drawn contours by the specified offset = (dx, dy)
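
Example

A minimal sketch combining FindContours and DrawContours; the input path and threshold value are assumptions.

using OpenCvSharp;

using var src = Cv2.ImRead("shapes.png", ImreadModes.Grayscale);
using var bin = new Mat();
Cv2.Threshold(src, bin, 128, 255, ThresholdTypes.Binary); // binarize first (threshold value is an assumption)

Cv2.FindContours(bin, out Point[][] contours, out HierarchyIndex[] hierarchy,
    RetrievalModes.External, ContourApproximationModes.ApproxSimple);

using var vis = new Mat();
Cv2.CvtColor(src, vis, ColorConversionCodes.GRAY2BGR);
Cv2.DrawContours(vis, contours, -1, Scalar.Red, 2, LineTypes.Link8, hierarchy); // contourIdx = -1 draws all contours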

DrawFrameAxes(InputOutputArray, InputArray, InputArray, InputArray, InputArray, float, int)

Draw axes of the world/object coordinate system from pose estimation.

public static void DrawFrameAxes(InputOutputArray image, InputArray cameraMatrix, InputArray distCoeffs, InputArray rvec, InputArray tvec, float length, int thickness = 3)

Parameters

image InputOutputArray

Input/output image. It must have 1 or 3 channels. The number of channels is not altered.

cameraMatrix InputArray

Input 3x3 floating-point matrix of camera intrinsic parameters.

distCoeffs InputArray

Input vector of distortion coefficients (k1, k2, p1, p2[, k3[, k4, k5, k6[, s1, s2, s3, s4[, taux, tauy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is empty, zero distortion coefficients are assumed.


rvec InputArray

Rotation vector (see @ref Rodrigues ) that, together with tvec , brings points from the model coordinate system to the camera coordinate system.

tvec InputArray

Translation vector.

length float

Length of the painted axes in the same unit as tvec (usually in meters).

thickness int

Line thickness of the painted axes.

Remarks

This function draws the axes of the world/object coordinate system w.r.t. to the camera frame. OX is drawn in red, OY in green and OZ in blue.

DrawKeypoints(InputArray, IEnumerable<KeyPoint>, InputOutputArray, Scalar?, DrawMatchesFlags)

Draw keypoints.

public static void DrawKeypoints(InputArray image, IEnumerable<KeyPoint> keypoints, InputOutputArray outImage, Scalar? color = null, DrawMatchesFlags flags = DrawMatchesFlags.Default)

Parameters

image InputArray

Source image.

keypoints IEnumerable<KeyPoint>

Keypoints from the source image.

outImage InputOutputArray

Output image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.

color Scalar?

Color of keypoints.

flags DrawMatchesFlags

Flags setting drawing features. Possible flags bit values are defined by DrawMatchesFlags.
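
Example

A minimal sketch; the image path and FAST threshold are assumptions. It detects FAST keypoints and visualizes them with DrawKeypoints.

using OpenCvSharp;

using var gray = Cv2.ImRead("scene.png", ImreadModes.Grayscale);

// Detect FAST corners (the threshold value is an assumption).
KeyPoint[] keypoints = Cv2.FAST(gray, threshold: 40, nonmaxSupression: true);

// Render the keypoints into a separate output image.
using var vis = new Mat();
Cv2.DrawKeypoints(gray, keypoints, vis, Scalar.Red);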

DrawMarker(InputOutputArray, Point, Scalar, MarkerTypes, int, int, LineTypes)

Draws a marker on a predefined position in an image.

The function cv::drawMarker draws a marker on a given position in the image. For the moment several marker types are supported; see MarkerTypes for more information.

public static void DrawMarker(InputOutputArray img, Point position, Scalar color, MarkerTypes markerType = MarkerTypes.Cross, int markerSize = 20, int thickness = 1, LineTypes lineType = LineTypes.Link8)

Parameters

img InputOutputArray

Image.

position Point

The point where the crosshair is positioned.

color Scalar

Line color.

markerType MarkerTypes

The specific type of marker you want to use.

markerSize int

The length of the marker axis [default = 20 pixels]

thickness int

Line thickness.

lineType LineTypes

Type of the line.

DrawMatches(Mat, IEnumerable<KeyPoint>, Mat, IEnumerable<KeyPoint>, IEnumerable<DMatch>, Mat, Scalar?, Scalar?, IEnumerable<byte>?, DrawMatchesFlags)

Draws the found matches of keypoints from two images.

public static void DrawMatches(Mat img1, IEnumerable<KeyPoint> keypoints1, Mat img2, IEnumerable<KeyPoint> keypoints2, IEnumerable<DMatch> matches1To2, Mat outImg, Scalar? matchColor = null, Scalar? singlePointColor = null, IEnumerable<byte>? matchesMask = null, DrawMatchesFlags flags = DrawMatchesFlags.Default)

Parameters

img1 Mat

First source image.

keypoints1 IEnumerable<KeyPoint>

Keypoints from the first source image.

img2 Mat

Second source image.

keypoints2 IEnumerable<KeyPoint>

Keypoints from the second source image.

matches1To2 IEnumerable<DMatch>

Matches from the first image to the second one, which means that keypoints1[i] has a corresponding point in keypoints2[matches[i]] .

outImg Mat

Output image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.

matchColor Scalar?

Color of matches (lines and connected keypoints). If matchColor==Scalar::all(-1), the color is generated randomly.

singlePointColor Scalar?

Color of single keypoints (circles), which means that keypoints do not have the matches. If singlePointColor==Scalar::all(-1) , the color is generated randomly.

matchesMask IEnumerable<byte>

Mask determining which matches are drawn. If the mask is empty, all matches are drawn.

flags DrawMatchesFlags

Flags setting drawing features. Possible flags bit values are defined by DrawMatchesFlags.
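
Example

A minimal sketch of a typical matching pipeline; the file paths and detector settings are assumptions. It detects ORB features, matches them with a brute-force matcher, and renders the result with DrawMatches.

using OpenCvSharp;

using var img1 = Cv2.ImRead("left.png", ImreadModes.Grayscale);
using var img2 = Cv2.ImRead("right.png", ImreadModes.Grayscale);

// Detect and describe ORB features in both images.
using var orb = ORB.Create(500);
using var desc1 = new Mat();
using var desc2 = new Mat();
orb.DetectAndCompute(img1, null, out KeyPoint[] kps1, desc1);
orb.DetectAndCompute(img2, null, out KeyPoint[] kps2, desc2);

// Brute-force matching with cross-check on binary descriptors.
using var matcher = new BFMatcher(NormTypes.Hamming, crossCheck: true);
DMatch[] matches = matcher.Match(desc1, desc2);

using var output = new Mat();
Cv2.DrawMatches(img1, kps1, img2, kps2, matches, output);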

DrawMatchesKnn(Mat, IEnumerable<KeyPoint>, Mat, IEnumerable<KeyPoint>, IEnumerable<IEnumerable<DMatch>>, Mat, Scalar?, Scalar?, IEnumerable<IEnumerable<byte>>?, DrawMatchesFlags)

Draws the found matches of keypoints from two images.

public static void DrawMatchesKnn(Mat img1, IEnumerable<KeyPoint> keypoints1, Mat img2, IEnumerable<KeyPoint> keypoints2, IEnumerable<IEnumerable<DMatch>> matches1To2, Mat outImg, Scalar? matchColor = null, Scalar? singlePointColor = null, IEnumerable<IEnumerable<byte>>? matchesMask = null, DrawMatchesFlags flags = DrawMatchesFlags.Default)

Parameters

img1 Mat

First source image.

keypoints1 IEnumerable<KeyPoint>

Keypoints from the first source image.

img2 Mat

Second source image.

keypoints2 IEnumerable<KeyPoint>

Keypoints from the second source image.

matches1To2 IEnumerable<IEnumerable<DMatch>>

Matches from the first image to the second one, which means that keypoints1[i] has a corresponding point in keypoints2[matches[i]] .

outImg Mat

Output image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.

matchColor Scalar?

Color of matches (lines and connected keypoints). If matchColor==Scalar::all(-1), the color is generated randomly.

singlePointColor Scalar?

Color of single keypoints (circles), which means that keypoints do not have the matches. If singlePointColor==Scalar::all(-1) , the color is generated randomly.

matchesMask IEnumerable<IEnumerable<byte>>

Mask determining which matches are drawn. If the mask is empty, all matches are drawn.

flags DrawMatchesFlags

Flags setting drawing features. Possible flags bit values are defined by DrawMatchesFlags.

EMD(InputArray, InputArray, DistanceTypes)

Computes the "minimal work" distance between two weighted point configurations.

The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in @cite RubnerSept98, @cite Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, thus the complexity is exponential in the worst case, though, on average it is much faster. In the case of a real metric the lower boundary can be calculated even faster (using a linear-time algorithm) and it can be used to determine roughly whether the two signatures are far enough apart that they cannot relate to the same object.

public static float EMD(InputArray signature1, InputArray signature2, DistanceTypes distType)

Parameters

signature1 InputArray

First signature, a size1 x (dims + 1) floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.

signature2 InputArray

Second signature of the same format as signature1, though the number of rows may be different. The total weights may be different; in this case an extra "dummy" point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.

distType DistanceTypes

Used metric.

Returns

float

EMD(InputArray, InputArray, DistanceTypes, InputArray?)

Computes the "minimal work" distance between two weighted point configurations.

The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in @cite RubnerSept98, @cite Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, thus the complexity is exponential in the worst case, though, on average it is much faster. In the case of a real metric the lower boundary can be calculated even faster (using a linear-time algorithm) and it can be used to determine roughly whether the two signatures are far enough apart that they cannot relate to the same object.

public static float EMD(InputArray signature1, InputArray signature2, DistanceTypes distType, InputArray? cost)

Parameters

signature1 InputArray

First signature, a size1 x (dims + 1) floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.

signature2 InputArray

Second signature of the same format as signature1, though the number of rows may be different. The total weights may be different; in this case an extra "dummy" point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.

distType DistanceTypes

Used metric.

cost InputArray

User-defined size1 x size2 cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.

Returns

float

EMD(InputArray, InputArray, DistanceTypes, InputArray?, out float, OutputArray?)

Computes the "minimal work" distance between two weighted point configurations.

The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in @cite RubnerSept98, @cite Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, thus the complexity is exponential in the worst case, though, on average it is much faster. In the case of a real metric the lower boundary can be calculated even faster (using a linear-time algorithm) and it can be used to determine roughly whether the two signatures are far enough apart that they cannot relate to the same object.

public static float EMD(InputArray signature1, InputArray signature2, DistanceTypes distType, InputArray? cost, out float lowerBound, OutputArray? flow = null)

Parameters

signature1 InputArray

First signature, a size1 x (dims + 1) floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.

signature2 InputArray

Second signature of the same format as signature1, though the number of rows may be different. The total weights may be different; in this case an extra "dummy" point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.

distType DistanceTypes

Used metric.

cost InputArray

User-defined size1 x size2 cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.

lowerBound float

Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of the point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). You must initialize lowerBound. If the calculated distance between mass centers is greater than or equal to lowerBound (meaning the signatures are far enough apart), the function does not calculate EMD. In any case lowerBound is set to the calculated distance between mass centers on return. Thus, if you want to calculate both the distance between mass centers and the EMD, lowerBound should be set to 0.

flow OutputArray

Resultant size1 x size2 flow matrix: flow[i,j] is a flow from i-th point of signature1 to j-th point of signature2.

Returns

float
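
Example

A minimal sketch; each signature is a Mat whose rows are (weight, x, y), and EMD returns the transport cost under the chosen metric. The specific weights and coordinates below are assumptions.

using OpenCvSharp;

// Two tiny signatures; each row is (weight, x, y). The values are assumptions.
float[] data1 = { 0.5f, 0f, 0f, 0.5f, 1f, 0f };
float[] data2 = { 0.5f, 0f, 1f, 0.5f, 1f, 1f };
using var signature1 = new Mat(2, 3, MatType.CV_32FC1, data1);
using var signature2 = new Mat(2, 3, MatType.CV_32FC1, data2);

// Earth mover's distance under the L2 (Euclidean) metric.
float distance = Cv2.EMD(signature1, signature2, DistanceTypes.L2);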

EdgePreservingFilter(InputArray, OutputArray, EdgePreservingMethods, float, float)

Filtering is the fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications @cite EM11 .

public static void EdgePreservingFilter(InputArray src, OutputArray dst, EdgePreservingMethods flags = EdgePreservingMethods.RecursFilter, float sigmaS = 60, float sigmaR = 0.4)

Parameters

src InputArray

Input 8-bit 3-channel image.

dst OutputArray

Output 8-bit 3-channel image.

flags EdgePreservingMethods

Edge preserving filters

sigmaS float

Range between 0 and 200.

sigmaR float

Range between 0 and 1.
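
Example

A minimal sketch; the input path and parameter values are assumptions.

using OpenCvSharp;

using var src = Cv2.ImRead("photo.jpg", ImreadModes.Color);
using var dst = new Mat();
// Recursive filtering variant; sigmaS controls the spatial extent, sigmaR the color range.
Cv2.EdgePreservingFilter(src, dst, EdgePreservingMethods.RecursFilter, sigmaS: 60, sigmaR: 0.4f);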

Eigen(InputArray, OutputArray, OutputArray)

Computes eigenvalues and eigenvectors of a symmetric matrix.

public static bool Eigen(InputArray src, OutputArray eigenvalues, OutputArray eigenvectors)

Parameters

src InputArray

The input matrix; must have CV_32FC1 or CV_64FC1 type, square size and be symmetric: src^T == src

eigenvalues OutputArray

The output vector of eigenvalues of the same type as src; The eigenvalues are stored in the descending order.

eigenvectors OutputArray

The output matrix of eigenvectors; It will have the same size and the same type as src; The eigenvectors are stored as subsequent matrix rows, in the same order as the corresponding eigenvalues

Returns

bool
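
Example

A minimal sketch on a small symmetric matrix; the values are assumptions.

using OpenCvSharp;

// A small symmetric matrix (the values are assumptions).
double[] data = { 2, 1, 1, 2 };
using var src = new Mat(2, 2, MatType.CV_64FC1, data);
using var eigenvalues = new Mat();
using var eigenvectors = new Mat();

// Eigenvalues come back in descending order; eigenvectors are stored as rows.
bool ok = Cv2.Eigen(src, eigenvalues, eigenvectors);
// For this matrix the eigenvalues are 3 and 1.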

EigenNonSymmetric(InputArray, OutputArray, OutputArray)

Calculates eigenvalues and eigenvectors of a non-symmetric matrix (real eigenvalues only).

public static void EigenNonSymmetric(InputArray src, OutputArray eigenvalues, OutputArray eigenvectors)

Parameters

src InputArray

input matrix (CV_32FC1 or CV_64FC1 type).

eigenvalues OutputArray

output vector of eigenvalues (type is the same type as src).

eigenvectors OutputArray

output matrix of eigenvectors (type is the same type as src). The eigenvectors are stored as subsequent matrix rows, in the same order as the corresponding eigenvalues.

Ellipse(InputOutputArray, Point, Size, double, double, double, Scalar, int, LineTypes, int)

Draws simple or thick elliptic arc or fills ellipse sector

public static void Ellipse(InputOutputArray img, Point center, Size axes, double angle, double startAngle, double endAngle, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray

Image.

center Point

Center of the ellipse.

axes Size

Length of the ellipse axes.

angle double

Rotation angle.

startAngle double

Starting angle of the elliptic arc.

endAngle double

Ending angle of the elliptic arc.

color Scalar

Ellipse color.

thickness int

Thickness of the ellipse arc. [By default this is 1]

lineType LineTypes

Type of the ellipse boundary. [By default this is LineType.Link8]

shift int

Number of fractional bits in the center coordinates and axes' values. [By default this is 0]

Ellipse(InputOutputArray, RotatedRect, Scalar, int, LineTypes)

Draws simple or thick elliptic arc or fills ellipse sector

public static void Ellipse(InputOutputArray img, RotatedRect box, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8)

Parameters

img InputOutputArray

Image.

box RotatedRect

The enclosing box of the ellipse drawn

color Scalar

Ellipse color.

thickness int

Thickness of the ellipse boundary. [By default this is 1]

lineType LineTypes

Type of the ellipse boundary. [By default this is LineType.Link8]

Ellipse2Poly(Point, Size, int, int, int, int)

Approximates an elliptic arc with a polyline. The function ellipse2Poly computes the vertices of a polyline that approximates the specified elliptic arc. It is used by cv::ellipse.

public static Point[] Ellipse2Poly(Point center, Size axes, int angle, int arcStart, int arcEnd, int delta)

Parameters

center Point

Center of the arc.

axes Size

Half of the size of the ellipse main axes. See the ellipse for details.

angle int

Rotation angle of the ellipse in degrees. See the ellipse for details.

arcStart int

Starting angle of the elliptic arc in degrees.

arcEnd int

Ending angle of the elliptic arc in degrees.

delta int

Angle between the subsequent polyline vertices. It defines the approximation accuracy.

Returns

Point[]

Output vector of polyline vertices.

Ellipse2Poly(Point2d, Size2d, int, int, int, int)

Approximates an elliptic arc with a polyline. The function ellipse2Poly computes the vertices of a polyline that approximates the specified elliptic arc. It is used by cv::ellipse.

public static Point2d[] Ellipse2Poly(Point2d center, Size2d axes, int angle, int arcStart, int arcEnd, int delta)

Parameters

center Point2d

Center of the arc.

axes Size2d

Half of the size of the ellipse main axes. See the ellipse for details.

angle int

Rotation angle of the ellipse in degrees. See the ellipse for details.

arcStart int

Starting angle of the elliptic arc in degrees.

arcEnd int

Ending angle of the elliptic arc in degrees.

delta int

Angle between the subsequent polyline vertices. It defines the approximation accuracy.

Returns

Point2d[]

Output vector of polyline vertices.

EqualizeHist(InputArray, OutputArray)

normalizes the grayscale image brightness and contrast by normalizing its histogram

public static void EqualizeHist(InputArray src, OutputArray dst)

Parameters

src InputArray

The source 8-bit single channel image

dst OutputArray

The destination image; will have the same size and the same type as src
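
Example

A minimal sketch; the input path is an assumption.

using OpenCvSharp;

using var gray = Cv2.ImRead("lowcontrast.png", ImreadModes.Grayscale);
using var equalized = new Mat();
Cv2.EqualizeHist(gray, equalized);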

Erode(InputArray, OutputArray, InputArray?, Point?, int, BorderTypes, Scalar?)

Erodes an image by using a specific structuring element.

public static void Erode(InputArray src, OutputArray dst, InputArray? element, Point? anchor = null, int iterations = 1, BorderTypes borderType = BorderTypes.Constant, Scalar? borderValue = null)

Parameters

src InputArray

The source image

dst OutputArray

The destination image. It will have the same size and the same type as src

element InputArray

The structuring element used for erosion. If element=new Mat(), a 3x3 rectangular structuring element is used

anchor Point?

Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center

iterations int

The number of times erosion is applied

borderType BorderTypes

The pixel extrapolation method

borderValue Scalar?

The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()]
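
Example

A minimal sketch; the input path, kernel size and iteration count are assumptions. It erodes a binary mask with a 3x3 rectangular structuring element.

using OpenCvSharp;

using var mask = Cv2.ImRead("mask.png", ImreadModes.Grayscale);
using var eroded = new Mat();

// 3x3 rectangular structuring element (size and iteration count are assumptions).
using var element = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(3, 3));
Cv2.Erode(mask, eroded, element, iterations: 2);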

EstimateAffine2D(InputArray, InputArray, OutputArray?, RobustEstimationAlgorithms, double, ulong, double, ulong)

Computes an optimal affine transformation between two 2D point sets.

public static Mat? EstimateAffine2D(InputArray from, InputArray to, OutputArray? inliers = null, RobustEstimationAlgorithms method = RobustEstimationAlgorithms.RANSAC, double ransacReprojThreshold = 3, ulong maxIters = 2000, double confidence = 0.99, ulong refineIters = 10)

Parameters

from InputArray

First input 2D point set containing (X,Y).

to InputArray

Second input 2D point set containing (x,y).

inliers OutputArray

Output vector indicating which points are inliers (1-inlier, 0-outlier).

method RobustEstimationAlgorithms

Robust method used to compute transformation.

ransacReprojThreshold double

Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.

maxIters ulong

The maximum number of robust method iterations.

confidence double

Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

refineIters ulong

Maximum number of iterations of the refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method.

Returns

Mat

Output 2D affine transformation matrix 2 x 3, or an empty matrix if the transformation could not be estimated.
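
Example

A minimal sketch; the point correspondences are assumptions. It estimates a 2 x 3 affine transform from matched 2D points.

using OpenCvSharp;

// Four matched 2D points (the coordinates are assumptions).
Point2f[] srcPts = { new Point2f(0, 0), new Point2f(1, 0), new Point2f(0, 1), new Point2f(1, 1) };
Point2f[] dstPts = { new Point2f(10, 20), new Point2f(12, 20), new Point2f(10, 22), new Point2f(12, 22) };

using var inliers = new Mat();
using var affine = Cv2.EstimateAffine2D(InputArray.Create(srcPts), InputArray.Create(dstPts), inliers);
// affine is a 2 x 3 matrix, or null/empty if the transformation could not be estimated.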

EstimateAffine3D(InputArray, InputArray, OutputArray, OutputArray, double, double)

Computes an optimal affine transformation between two 3D point sets.

public static int EstimateAffine3D(InputArray src, InputArray dst, OutputArray outVal, OutputArray inliers, double ransacThreshold = 3, double confidence = 0.99)

Parameters

src InputArray

First input 3D point set.

dst InputArray

Second input 3D point set.

outVal OutputArray

Output 3D affine transformation matrix 3 x 4 .

inliers OutputArray

Output vector indicating which points are inliers.

ransacThreshold double

Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier.

confidence double

Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

Returns

int

EstimateAffinePartial2D(InputArray, InputArray, OutputArray?, RobustEstimationAlgorithms, double, ulong, double, ulong)

Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.

public static Mat? EstimateAffinePartial2D(InputArray from, InputArray to, OutputArray? inliers = null, RobustEstimationAlgorithms method = RobustEstimationAlgorithms.RANSAC, double ransacReprojThreshold = 3, ulong maxIters = 2000, double confidence = 0.99, ulong refineIters = 10)

Parameters

from InputArray

First input 2D point set.

to InputArray

Second input 2D point set.

inliers OutputArray

Output vector indicating which points are inliers.

method RobustEstimationAlgorithms

Robust method used to compute transformation.

ransacReprojThreshold double

Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.

maxIters ulong

The maximum number of robust method iterations.

confidence double

Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

refineIters ulong

Maximum number of iterations of the refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method.

Returns

Mat

Output 2D affine transformation (4 degrees of freedom) matrix 2x3 or empty matrix if transformation could not be estimated.

EvaluateFeatureDetector(Mat, Mat, Mat, ref KeyPoint[], ref KeyPoint[], out float, out int)

public static void EvaluateFeatureDetector(Mat img1, Mat img2, Mat H1to2, ref KeyPoint[] keypoints1, ref KeyPoint[] keypoints2, out float repeatability, out int correspCount)

Parameters

img1 Mat
img2 Mat
H1to2 Mat
keypoints1 KeyPoint[]
keypoints2 KeyPoint[]
repeatability float
correspCount int

Exp(InputArray, OutputArray)

computes exponent of each matrix element (dst = e**src)

public static void Exp(InputArray src, OutputArray dst)

Parameters

src InputArray

The source array

dst OutputArray

The destination array; will have the same size and same type as src

ExtractChannel(InputArray, OutputArray, int)

extracts a single channel from src (coi is 0-based index)

public static void ExtractChannel(InputArray src, OutputArray dst, int coi)

Parameters

src InputArray
dst OutputArray
coi int

FAST(InputArray, int, bool)

Detects corners using the FAST algorithm

public static KeyPoint[] FAST(InputArray image, int threshold, bool nonmaxSupression = true)

Parameters

image InputArray

grayscale image where keypoints (corners) are detected.

threshold int

threshold on difference between intensity of the central pixel and pixels of a circle around this pixel.

nonmaxSupression bool

if true, non-maximum suppression is applied to detected corners (keypoints).

Returns

KeyPoint[]

keypoints detected on the image.

FAST(InputArray, int, bool, FASTType)

Detects corners using the FAST algorithm

public static KeyPoint[] FAST(InputArray image, int threshold, bool nonmaxSupression, FASTType type)

Parameters

image InputArray

grayscale image where keypoints (corners) are detected.

threshold int

threshold on difference between intensity of the central pixel and pixels of a circle around this pixel.

nonmaxSupression bool

if true, non-maximum suppression is applied to detected corners (keypoints).

type FASTType

one of the three neighborhoods as defined in the paper

Returns

KeyPoint[]

keypoints detected on the image.

FastAtan2(float, float)

computes the angle in degrees (0..360) of the vector (x,y)

public static float FastAtan2(float y, float x)

Parameters

y float
x float

Returns

float

FastNlMeansDenoising(InputArray, OutputArray, float, int, int)

Performs image denoising using the Non-local Means Denoising algorithm with several computational optimizations. Noise is expected to be Gaussian white noise.

public static void FastNlMeansDenoising(InputArray src, OutputArray dst, float h = 3, int templateWindowSize = 7, int searchWindowSize = 21)

Parameters

src InputArray

Input 8-bit 1-channel, 2-channel or 3-channel image.

dst OutputArray

Output image with the same size and type as src .

h float

Parameter regulating filter strength. A bigger h value removes noise more aggressively but also removes image details; a smaller h value preserves details but also preserves some noise

templateWindowSize int

Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels

searchWindowSize int

Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: the greater the searchWindowSize, the greater the denoising time. Recommended value 21 pixels
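
Example

A minimal sketch; the input path and filter strength are assumptions.

using OpenCvSharp;

using var noisy = Cv2.ImRead("noisy.png", ImreadModes.Grayscale);
using var denoised = new Mat();
// h = 10 denoises more strongly than the default of 3 (the value is an assumption).
Cv2.FastNlMeansDenoising(noisy, denoised, h: 10);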

FastNlMeansDenoisingColored(InputArray, OutputArray, float, float, int, int)

Modification of fastNlMeansDenoising function for colored images

public static void FastNlMeansDenoisingColored(InputArray src, OutputArray dst, float h = 3, float hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21)

Parameters

src InputArray

Input 8-bit 3-channel image.

dst OutputArray

Output image with the same size and type as src.

h float

Parameter regulating filter strength for the luminance component. A bigger h value removes noise more aggressively but also removes image details; a smaller h value preserves details but also preserves some noise

hColor float

The same as h but for color components. For most images a value of 10 will be enough to remove colored noise without distorting colors

templateWindowSize int

Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels

searchWindowSize int

Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: the greater the searchWindowSize, the greater the denoising time. Recommended value 21 pixels

FastNlMeansDenoisingColoredMulti(IEnumerable<Mat>, OutputArray, int, int, float, float, int, int)

Modification of fastNlMeansDenoisingMulti function for colored images sequences

public static void FastNlMeansDenoisingColoredMulti(IEnumerable<Mat> srcImgs, OutputArray dst, int imgToDenoiseIndex, int temporalWindowSize, float h = 3, float hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21)

Parameters

srcImgs IEnumerable<Mat>

Input 8-bit 3-channel images sequence. All images should have the same type and size.

dst OutputArray

Output image with the same size and type as srcImgs images.

imgToDenoiseIndex int

Target image to denoise index in srcImgs sequence

temporalWindowSize int

Number of surrounding images to use for target image denoising. Should be odd. Images from imgToDenoiseIndex - temporalWindowSize / 2 to imgToDenoiseIndex + temporalWindowSize / 2 from srcImgs will be used to denoise srcImgs[imgToDenoiseIndex] image.

h float

Parameter regulating filter strength for the luminance component. A bigger h value removes noise more aggressively but also removes image details; a smaller h value preserves details but also preserves some noise.

hColor float

The same as h but for color components.

templateWindowSize int

Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels

searchWindowSize int

Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: the greater the searchWindowSize, the greater the denoising time. Recommended value 21 pixels

FastNlMeansDenoisingMulti(IEnumerable<Mat>, OutputArray, int, int, float, int, int)

Modification of the fastNlMeansDenoising function for image sequences where consecutive images have been captured in a small period of time, for example video. This version of the function is for grayscale images or for manual manipulation with colorspaces.

public static void FastNlMeansDenoisingMulti(IEnumerable<Mat> srcImgs, OutputArray dst, int imgToDenoiseIndex, int temporalWindowSize, float h = 3, int templateWindowSize = 7, int searchWindowSize = 21)

Parameters

srcImgs IEnumerable<Mat>

Input 8-bit 1-channel, 2-channel or 3-channel images sequence. All images should have the same type and size.

dst OutputArray

Output image with the same size and type as srcImgs images.

imgToDenoiseIndex int

Target image to denoise index in srcImgs sequence

temporalWindowSize int

Number of surrounding images to use for target image denoising. Should be odd. Images from imgToDenoiseIndex - temporalWindowSize / 2 to imgToDenoiseIndex + temporalWindowSize / 2 from srcImgs will be used to denoise srcImgs[imgToDenoiseIndex] image.

h float

Parameter regulating filter strength for the luminance component. A bigger h value removes noise more aggressively but also removes image details; a smaller h value preserves details but also preserves some noise

templateWindowSize int

Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels

searchWindowSize int

Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: the greater the searchWindowSize, the greater the denoising time. Recommended value 21 pixels

FillConvexPoly(InputOutputArray, InputArray, Scalar, LineTypes, int)

Fills a convex polygon.

public static void FillConvexPoly(InputOutputArray img, InputArray pts, Scalar color, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray

Image

pts InputArray

The polygon vertices

color Scalar

Polygon color

lineType LineTypes

Type of the polygon boundaries

shift int

The number of fractional bits in the vertex coordinates

FillConvexPoly(Mat, IEnumerable<Point>, Scalar, LineTypes, int)

Fills a convex polygon.

public static void FillConvexPoly(Mat img, IEnumerable<Point> pts, Scalar color, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img Mat

Image

pts IEnumerable<Point>

The polygon vertices

color Scalar

Polygon color

lineType LineTypes

Type of the polygon boundaries

shift int

The number of fractional bits in the vertex coordinates

FillPoly(InputOutputArray, InputArray, Scalar, LineTypes, int, Point?)

Fills the area bounded by one or more polygons

public static void FillPoly(InputOutputArray img, InputArray pts, Scalar color, LineTypes lineType = LineTypes.Link8, int shift = 0, Point? offset = null)

Parameters

img InputOutputArray

Image

pts InputArray

Array of polygons, each represented as an array of points

color Scalar

Polygon color

lineType LineTypes

Type of the polygon boundaries

shift int

The number of fractional bits in the vertex coordinates

offset Point?

Optional offset of all points of the polygons.

FillPoly(Mat, IEnumerable<IEnumerable<Point>>, Scalar, LineTypes, int, Point?)

Fills the area bounded by one or more polygons

public static void FillPoly(Mat img, IEnumerable<IEnumerable<Point>> pts, Scalar color, LineTypes lineType = LineTypes.Link8, int shift = 0, Point? offset = null)

Parameters

img Mat

Image

pts IEnumerable<IEnumerable<Point>>

Array of polygons, each represented as an array of points

color Scalar

Polygon color

lineType LineTypes

Type of the polygon boundaries

shift int

The number of fractional bits in the vertex coordinates

offset Point?

Optional offset of all points of the polygons.

Filter2D(InputArray, OutputArray, MatType, InputArray, Point?, double, BorderTypes)

Convolves an image with the kernel

public static void Filter2D(InputArray src, OutputArray dst, MatType ddepth, InputArray kernel, Point? anchor = null, double delta = 0, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

The source image

dst OutputArray

The destination image. It will have the same size and the same number of channels as src

ddepth MatType

The desired depth of the destination image. If it is negative, it will be the same as src.depth()

kernel InputArray

Convolution kernel (or rather a correlation kernel), a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually

anchor Point?

The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that the anchor is at the kernel center

delta double

The optional value added to the filtered pixels before storing them in dst

borderType BorderTypes

The pixel extrapolation method
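
Example

A minimal sketch; the kernel is an assumed 3x3 sharpening kernel and the input path is an assumption. It convolves an image with a custom kernel.

using OpenCvSharp;

using var src = Cv2.ImRead("input.png", ImreadModes.Color);
using var dst = new Mat();

// A 3x3 sharpening kernel (the values are assumptions).
float[] kernelData =
{
     0, -1,  0,
    -1,  5, -1,
     0, -1,  0,
};
using var kernel = new Mat(3, 3, MatType.CV_32FC1, kernelData);

// ddepth = -1 keeps the same depth as the source image.
Cv2.Filter2D(src, dst, -1, kernel);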

FilterHomographyDecompByVisibleRefpoints(IEnumerable<Mat>, IEnumerable<Mat>, InputArray, InputArray, OutputArray, InputArray?)

Filters homography decompositions based on additional information.

public static void FilterHomographyDecompByVisibleRefpoints(IEnumerable<Mat> rotations, IEnumerable<Mat> normals, InputArray beforePoints, InputArray afterPoints, OutputArray possibleSolutions, InputArray? pointsMask = null)

Parameters

rotations IEnumerable<Mat>

Vector of rotation matrices.

normals IEnumerable<Mat>

Vector of plane normal matrices.

beforePoints InputArray

Vector of (rectified) visible reference points before the homography is applied

afterPoints InputArray

Vector of (rectified) visible reference points after the homography is applied

possibleSolutions OutputArray

Vector of int indices representing the viable solution set after filtering

pointsMask InputArray

optional Mat/Vector of 8u type representing the mask for the inliers as given by the findHomography function

FilterSpeckles(InputOutputArray, double, int, double, InputOutputArray?)

filters off speckles (small regions of incorrectly computed disparity)

public static void FilterSpeckles(InputOutputArray img, double newVal, int maxSpeckleSize, double maxDiff, InputOutputArray? buf = null)

Parameters

img InputOutputArray

The input 16-bit signed disparity image

newVal double

The disparity value used to paint-off the speckles

maxSpeckleSize int

The maximum speckle size to consider it a speckle. Larger blobs are not affected by the algorithm

maxDiff double

Maximum difference between neighbor disparity pixels to put them into the same blob. Note that since StereoBM, StereoSGBM, and possibly other algorithms return a fixed-point disparity map, where disparity values are multiplied by 16, this scale factor should be taken into account when specifying this parameter value.

buf InputOutputArray

The optional temporary buffer to avoid memory allocation within the function.

Find4QuadCornerSubpix(InputArray, InputOutputArray, Size)

finds subpixel-accurate positions of the chessboard corners

public static bool Find4QuadCornerSubpix(InputArray img, InputOutputArray corners, Size regionSize)

Parameters

img InputArray
corners InputOutputArray
regionSize Size

Returns

bool

Find4QuadCornerSubpix(InputArray, Point2f[], Size)

finds subpixel-accurate positions of the chessboard corners

public static bool Find4QuadCornerSubpix(InputArray img, Point2f[] corners, Size regionSize)

Parameters

img InputArray
corners Point2f[]
regionSize Size

Returns

bool

FindChessboardCorners(InputArray, Size, OutputArray, ChessboardFlags)

Finds the positions of internal corners of the chessboard.

public static bool FindChessboardCorners(InputArray image, Size patternSize, OutputArray corners, ChessboardFlags flags = ChessboardFlags.AdaptiveThresh | ChessboardFlags.NormalizeImage)

Parameters

image InputArray

Source chessboard view. It must be an 8-bit grayscale or color image.

patternSize Size

Number of inner corners per a chessboard row and column ( patternSize = Size(points_per_row, points_per_column) = Size(columns, rows) ).

corners OutputArray

Output array of detected corners.

flags ChessboardFlags

Various operation flags that can be zero or a combination of the ChessboardFlag values

Returns

bool

The function returns true if all of the corners are found and they are placed in a certain order (row by row, left to right in every row). Otherwise, if the function fails to find all the corners or reorder them, it returns false.

FindChessboardCorners(InputArray, Size, out Point2f[], ChessboardFlags)

Finds the positions of internal corners of the chessboard.

public static bool FindChessboardCorners(InputArray image, Size patternSize, out Point2f[] corners, ChessboardFlags flags = ChessboardFlags.AdaptiveThresh | ChessboardFlags.NormalizeImage)

Parameters

image InputArray

Source chessboard view. It must be an 8-bit grayscale or color image.

patternSize Size

Number of inner corners per a chessboard row and column ( patternSize = Size(points_per_row, points_per_column) = Size(columns, rows) ).

corners Point2f[]

Output array of detected corners.

flags ChessboardFlags

Various operation flags that can be zero or a combination of the ChessboardFlag values

Returns

bool

The function returns true if all of the corners are found and they are placed in a certain order (row by row, left to right in every row). Otherwise, if the function fails to find all the corners or reorder them, it returns false.

FindChessboardCornersSB(InputArray, Size, OutputArray, ChessboardFlags)

Finds the positions of internal corners of the chessboard using a sector based approach.

public static bool FindChessboardCornersSB(InputArray image, Size patternSize, OutputArray corners, ChessboardFlags flags = ChessboardFlags.None)

Parameters

image InputArray

Source chessboard view. It must be an 8-bit grayscale or color image.

patternSize Size

Number of inner corners per a chessboard row and column (patternSize = Size(points_per_row, points_per_column) = Size(columns, rows) ).

corners OutputArray

Output array of detected corners.

flags ChessboardFlags

Various operation flags that can be zero or a combination of the ChessboardFlags values.

Returns

bool

FindChessboardCornersSB(InputArray, Size, out Point2f[], ChessboardFlags)

Finds the positions of internal corners of the chessboard using a sector based approach.

public static bool FindChessboardCornersSB(InputArray image, Size patternSize, out Point2f[] corners, ChessboardFlags flags = ChessboardFlags.None)

Parameters

image InputArray

Source chessboard view. It must be an 8-bit grayscale or color image.

patternSize Size

Number of inner corners per a chessboard row and column (patternSize = Size(points_per_row, points_per_column) = Size(columns, rows) ).

corners Point2f[]

Output array of detected corners.

flags ChessboardFlags

Various operation flags that can be zero or a combination of the ChessboardFlags values.

Returns

bool

FindCirclesGrid(InputArray, Size, OutputArray, FindCirclesGridFlags, Feature2D?)

Finds centers in the grid of circles.

public static bool FindCirclesGrid(InputArray image, Size patternSize, OutputArray centers, FindCirclesGridFlags flags = FindCirclesGridFlags.SymmetricGrid, Feature2D? blobDetector = null)

Parameters

image InputArray

grid view of input circles; it must be an 8-bit grayscale or color image.

patternSize Size

number of circles per row and column ( patternSize = Size(points_per_row, points_per_column) ).

centers OutputArray

output array of detected centers.

flags FindCirclesGridFlags

various operation flags that can be one of the FindCirclesGridFlag values

blobDetector Feature2D

feature detector that finds blobs like dark circles on light background.

Returns

bool

FindCirclesGrid(InputArray, Size, out Point2f[], FindCirclesGridFlags, Feature2D?)

Finds centers in the grid of circles.

public static bool FindCirclesGrid(InputArray image, Size patternSize, out Point2f[] centers, FindCirclesGridFlags flags = FindCirclesGridFlags.SymmetricGrid, Feature2D? blobDetector = null)

Parameters

image InputArray

grid view of input circles; it must be an 8-bit grayscale or color image.

patternSize Size

number of circles per row and column ( patternSize = Size(points_per_row, points_per_column) ).

centers Point2f[]

output array of detected centers.

flags FindCirclesGridFlags

various operation flags that can be one of the FindCirclesGridFlag values

blobDetector Feature2D

feature detector that finds blobs like dark circles on light background.

Returns

bool

FindContours(InputArray, out Mat[], OutputArray, RetrievalModes, ContourApproximationModes, Point?)

Finds contours in a binary image.

public static void FindContours(InputArray image, out Mat[] contours, OutputArray hierarchy, RetrievalModes mode, ContourApproximationModes method, Point? offset = null)

Parameters

image InputArray

Source, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours.

contours Mat[]

Detected contours. Each contour is stored as a vector of points.

hierarchy OutputArray

Optional output vector, containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the members of the elements hierarchy[i] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.

mode RetrievalModes

Contour retrieval mode

method ContourApproximationModes

Contour approximation method

offset Point?

Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.

FindContours(InputArray, out Point[][], out HierarchyIndex[], RetrievalModes, ContourApproximationModes, Point?)

Finds contours in a binary image.

public static void FindContours(InputArray image, out Point[][] contours, out HierarchyIndex[] hierarchy, RetrievalModes mode, ContourApproximationModes method, Point? offset = null)

Parameters

image InputArray

Source, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours.

contours Point[][]

Detected contours. Each contour is stored as a vector of points.

hierarchy HierarchyIndex[]

Optional output vector, containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the members of the elements hierarchy[i] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.

mode RetrievalModes

Contour retrieval mode

method ContourApproximationModes

Contour approximation method

offset Point?

Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.

FindContoursAsArray(InputArray, RetrievalModes, ContourApproximationModes, Point?)

Finds contours in a binary image.

public static Point[][] FindContoursAsArray(InputArray image, RetrievalModes mode, ContourApproximationModes method, Point? offset = null)

Parameters

image InputArray

Source, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours.

mode RetrievalModes

Contour retrieval mode

method ContourApproximationModes

Contour approximation method

offset Point?

Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.

Returns

Point[][]

Detected contours. Each contour is stored as a vector of points.

FindContoursAsMat(InputArray, RetrievalModes, ContourApproximationModes, Point?)

Finds contours in a binary image.

public static Mat<Point>[] FindContoursAsMat(InputArray image, RetrievalModes mode, ContourApproximationModes method, Point? offset = null)

Parameters

image InputArray

Source, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours.

mode RetrievalModes

Contour retrieval mode

method ContourApproximationModes

Contour approximation method

offset Point?

Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.

Returns

Mat<Point>[]

Detected contours. Each contour is stored as a vector of points.

FindEssentialMat(InputArray, InputArray, InputArray, EssentialMatMethod, double, double, OutputArray?)

Calculates an essential matrix from the corresponding points in two images.

public static Mat FindEssentialMat(InputArray points1, InputArray points2, InputArray cameraMatrix, EssentialMatMethod method = EssentialMatMethod.Ransac, double prob = 0.999, double threshold = 1, OutputArray? mask = null)

Parameters

points1 InputArray

Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).

points2 InputArray

Array of the second image points of the same size and format as points1 .

cameraMatrix InputArray

Camera matrix K = [fx 0 cx; 0 fy cy; 0 0 1]. Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix.

method EssentialMatMethod

Method for computing an essential matrix. RANSAC for the RANSAC algorithm. LMEDS for the LMedS algorithm.

prob double

Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.

threshold double

Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.

mask OutputArray

Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.

Returns

Mat

essential matrix

FindEssentialMat(InputArray, InputArray, double, Point2d, EssentialMatMethod, double, double, OutputArray?)

Calculates an essential matrix from the corresponding points in two images.

public static Mat FindEssentialMat(InputArray points1, InputArray points2, double focal, Point2d pp, EssentialMatMethod method = EssentialMatMethod.Ransac, double prob = 0.999, double threshold = 1, OutputArray? mask = null)

Parameters

points1 InputArray

Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).

points2 InputArray

Array of the second image points of the same size and format as points1.

focal double

Focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.

pp Point2d

principal point of the camera.

method EssentialMatMethod

Method for computing an essential matrix. RANSAC for the RANSAC algorithm. LMEDS for the LMedS algorithm.

prob double

Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.

threshold double

Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.

mask OutputArray

Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.

Returns

Mat

essential matrix

FindFundamentalMat(InputArray, InputArray, FundamentalMatMethods, double, double, OutputArray?)

Calculates a fundamental matrix from the corresponding points in two images.

public static Mat FindFundamentalMat(InputArray points1, InputArray points2, FundamentalMatMethods method = FundamentalMatMethods.Ransac, double param1 = 3, double param2 = 0.99, OutputArray? mask = null)

Parameters

points1 InputArray

Array of N points from the first image. The point coordinates should be floating-point (single or double precision).

points2 InputArray

Array of the second image points of the same size and format as points1 .

method FundamentalMatMethods

Method for computing a fundamental matrix.

param1 double

Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.

param2 double

Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.

mask OutputArray

Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s.

Returns

Mat

fundamental matrix

FindFundamentalMat(IEnumerable<Point2d>, IEnumerable<Point2d>, FundamentalMatMethods, double, double, OutputArray?)

Calculates a fundamental matrix from the corresponding points in two images.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Mat FindFundamentalMat(IEnumerable<Point2d> points1, IEnumerable<Point2d> points2, FundamentalMatMethods method = FundamentalMatMethods.Ransac, double param1 = 3, double param2 = 0.99, OutputArray? mask = null)

Parameters

points1 IEnumerable<Point2d>

Array of N points from the first image. The point coordinates should be floating-point (single or double precision).

points2 IEnumerable<Point2d>

Array of the second image points of the same size and format as points1 .

method FundamentalMatMethods

Method for computing a fundamental matrix.

param1 double

Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.

param2 double

Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.

mask OutputArray

Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s.

Returns

Mat

fundamental matrix

FindFundamentalMat(IEnumerable<Point2f>, IEnumerable<Point2f>, FundamentalMatMethods, double, double, OutputArray?)

Calculates a fundamental matrix from the corresponding points in two images.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Mat FindFundamentalMat(IEnumerable<Point2f> points1, IEnumerable<Point2f> points2, FundamentalMatMethods method = FundamentalMatMethods.Ransac, double param1 = 3, double param2 = 0.99, OutputArray? mask = null)

Parameters

points1 IEnumerable<Point2f>

Array of N points from the first image. The point coordinates should be floating-point (single or double precision).

points2 IEnumerable<Point2f>

Array of the second image points of the same size and format as points1 .

method FundamentalMatMethods

Method for computing a fundamental matrix.

param1 double

Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.

param2 double

Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.

mask OutputArray

Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s.

Returns

Mat

fundamental matrix

FindHomography(InputArray, InputArray, HomographyMethods, double, OutputArray?, int, double)

computes the best-fit perspective transformation mapping srcPoints to dstPoints.

public static Mat FindHomography(InputArray srcPoints, InputArray dstPoints, HomographyMethods method = HomographyMethods.None, double ransacReprojThreshold = 3, OutputArray? mask = null, int maxIters = 2000, double confidence = 0.995)

Parameters

srcPoints InputArray

Coordinates of the points in the original plane, a matrix of the type CV_32FC2

dstPoints InputArray

Coordinates of the points in the target plane, a matrix of the type CV_32FC2

method HomographyMethods

Method used to computed a homography matrix.

ransacReprojThreshold double

Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only)

mask OutputArray

Optional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored.

maxIters int

The maximum number of RANSAC iterations.

confidence double

Confidence level, between 0 and 1.

Returns

Mat
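
Example

A minimal sketch; the point correspondences, image path and output size are assumptions. It estimates a homography with RANSAC and warps the source image with it.

using OpenCvSharp;

// Four point correspondences (the coordinates are assumptions).
Point2d[] srcPoints = { new Point2d(56, 65), new Point2d(368, 52), new Point2d(28, 387), new Point2d(389, 390) };
Point2d[] dstPoints = { new Point2d(0, 0), new Point2d(300, 0), new Point2d(0, 300), new Point2d(300, 300) };

using var h = Cv2.FindHomography(srcPoints, dstPoints, HomographyMethods.Ransac, ransacReprojThreshold: 3);

// Warp the source image with the estimated homography.
using var src = Cv2.ImRead("document.jpg");
using var warped = new Mat();
Cv2.WarpPerspective(src, warped, h, new Size(300, 300));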

FindHomography(InputArray, InputArray, OutputArray, UsacParams?)

computes the best-fit perspective transformation mapping srcPoints to dstPoints.

public static Mat FindHomography(InputArray srcPoints, InputArray dstPoints, OutputArray mask, UsacParams? @params)

Parameters

srcPoints InputArray

Coordinates of the points in the original plane, a matrix of the type CV_32FC2

dstPoints InputArray

Coordinates of the points in the target plane, a matrix of the type CV_32FC2

mask OutputArray

Optional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored.

params UsacParams

Returns

Mat

Exceptions

ArgumentNullException

FindHomography(IEnumerable<Point2d>, IEnumerable<Point2d>, HomographyMethods, double, OutputArray?, int, double)

computes the best-fit perspective transformation mapping srcPoints to dstPoints.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Mat FindHomography(IEnumerable<Point2d> srcPoints, IEnumerable<Point2d> dstPoints, HomographyMethods method = HomographyMethods.None, double ransacReprojThreshold = 3, OutputArray? mask = null, int maxIters = 2000, double confidence = 0.995)

Parameters

srcPoints IEnumerable<Point2d>

Coordinates of the points in the original plane

dstPoints IEnumerable<Point2d>

Coordinates of the points in the target plane

method HomographyMethods

Method used to compute a homography matrix.

ransacReprojThreshold double

Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only)

mask OutputArray

Optional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored.

maxIters int

The maximum number of RANSAC iterations.

confidence double

Confidence level, between 0 and 1.

Returns

Mat
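
Examples

A minimal usage sketch of this overload with the RANSAC method. The correspondences are illustrative placeholders; at least four point pairs are required.

using System;
using System.Collections.Generic;
using OpenCvSharp;

// Corresponding points in the two planes (placeholder values)
var srcPoints = new List<Point2d> { new(0, 0), new(320, 0), new(320, 240), new(0, 240), new(160, 120) };
var dstPoints = new List<Point2d> { new(12, 8), new(310, 15), new(300, 250), new(5, 235), new(158, 125) };

using var inlierMask = new Mat();
using var h = Cv2.FindHomography(srcPoints, dstPoints, HomographyMethods.Ransac, 3, inlierMask);
Console.WriteLine(Cv2.Format(h));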

FindNonZero(InputArray, OutputArray)

returns the list of locations of non-zero pixels

public static void FindNonZero(InputArray src, OutputArray idx)

Parameters

src InputArray
idx OutputArray

FindTransformECC(InputArray, InputArray, InputOutputArray, MotionTypes, TermCriteria, InputArray?, int)

Finds the geometric transform (warp) between two images in terms of the ECC criterion @cite EP08 .

public static double FindTransformECC(InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, MotionTypes motionType, TermCriteria criteria, InputArray? inputMask = null, int gaussFiltSize = 5)

Parameters

templateImage InputArray

single-channel template image; CV_8U or CV_32F array.

inputImage InputArray

single-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage.

warpMatrix InputOutputArray

floating-point 2x3 or 3x3 mapping matrix (warp).

motionType MotionTypes

parameter, specifying the type of motion

criteria TermCriteria

parameter, specifying the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations(a negative criteria.epsilon makes criteria.maxcount the only termination criterion). Default values are shown in the declaration above.

inputMask InputArray

An optional mask to indicate valid values of inputImage.

gaussFiltSize int

An optional value indicating size of gaussian blur filter; (DEFAULT: 5)

Returns

double

FindTransformECC(InputArray, InputArray, InputOutputArray, MotionTypes, TermCriteria?, InputArray?)

Finds the geometric transform (warp) between two images in terms of the ECC criterion @cite EP08 .

public static double FindTransformECC(InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, MotionTypes motionType = MotionTypes.Affine, TermCriteria? criteria = null, InputArray? inputMask = null)

Parameters

templateImage InputArray

single-channel template image; CV_8U or CV_32F array.

inputImage InputArray

single-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage.

warpMatrix InputOutputArray

floating-point 2x3 or 3x3 mapping matrix (warp).

motionType MotionTypes

parameter, specifying the type of motion

criteria TermCriteria?

parameter, specifying the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations(a negative criteria.epsilon makes criteria.maxcount the only termination criterion). Default values are shown in the declaration above.

inputMask InputArray

An optional mask to indicate valid values of inputImage.

Returns

double
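
Examples

A minimal sketch of aligning one grayscale image against another with the affine motion model. The file paths and termination-criteria values are illustrative placeholders.

using System;
using OpenCvSharp;

using var template = Cv2.ImRead("template.png", ImreadModes.Grayscale);
using var input = Cv2.ImRead("input.png", ImreadModes.Grayscale);

// 2x3 identity matrix as the initial warp estimate for the affine motion model
using var warp = new Mat(2, 3, MatType.CV_32FC1);
Cv2.SetIdentity(warp);

var criteria = new TermCriteria(CriteriaTypes.Count | CriteriaTypes.Eps, 50, 1e-6);
double cc = Cv2.FindTransformECC(template, input, warp, MotionTypes.Affine, criteria);
Console.WriteLine($"Correlation coefficient: {cc}");

// Warp the input towards the template with the estimated transform
using var aligned = new Mat();
Cv2.WarpAffine(input, aligned, warp, template.Size(), InterpolationFlags.Linear | InterpolationFlags.WarpInverseMap);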

FitEllipse(InputArray)

Fits ellipse to the set of 2D points.

public static RotatedRect FitEllipse(InputArray points)

Parameters

points InputArray

Input 2D point set

Returns

RotatedRect

FitEllipse(IEnumerable<Point2f>)

Fits ellipse to the set of 2D points.

public static RotatedRect FitEllipse(IEnumerable<Point2f> points)

Parameters

points IEnumerable<Point2f>

Input 2D point set

Returns

RotatedRect

FitEllipse(IEnumerable<Point>)

Fits ellipse to the set of 2D points.

public static RotatedRect FitEllipse(IEnumerable<Point> points)

Parameters

points IEnumerable<Point>

Input 2D point set

Returns

RotatedRect
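
Examples

A minimal sketch using the IEnumerable&lt;Point2f&gt; overload: the points below are synthetic samples on an ellipse, and the result is drawn on a blank canvas. At least five points are required.

using System;
using System.Linq;
using OpenCvSharp;

// Points roughly on an ellipse centered at (100, 100) with axes 160 x 80
var points = Enumerable.Range(0, 36)
    .Select(i => i * 10 * Math.PI / 180.0)
    .Select(a => new Point2f(100 + 80 * (float)Math.Cos(a), 100 + 40 * (float)Math.Sin(a)))
    .ToList();

RotatedRect box = Cv2.FitEllipse(points);

using var canvas = new Mat(200, 200, MatType.CV_8UC3, Scalar.All(0));
Cv2.Ellipse(canvas, box, new Scalar(0, 255, 0), 2);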

FitEllipseAMS(InputArray)

Fits an ellipse around a set of 2D points.

The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Approximate Mean Square(AMS) proposed by @cite Taubin1991 is used.

public static RotatedRect FitEllipseAMS(InputArray points)

Parameters

points InputArray

Input 2D point set

Returns

RotatedRect

FitEllipseAMS(IEnumerable<Point2f>)

Fits an ellipse around a set of 2D points.

The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Approximate Mean Square(AMS) proposed by @cite Taubin1991 is used.

public static RotatedRect FitEllipseAMS(IEnumerable<Point2f> points)

Parameters

points IEnumerable<Point2f>

Input 2D point set

Returns

RotatedRect

FitEllipseAMS(IEnumerable<Point>)

Fits an ellipse around a set of 2D points.

The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Approximate Mean Square(AMS) proposed by @cite Taubin1991 is used.

public static RotatedRect FitEllipseAMS(IEnumerable<Point> points)

Parameters

points IEnumerable<Point>

Input 2D point set

Returns

RotatedRect

FitEllipseDirect(InputArray)

Fits an ellipse around a set of 2D points.

The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Direct least square(Direct) method by @cite Fitzgibbon1999 is used.

public static RotatedRect FitEllipseDirect(InputArray points)

Parameters

points InputArray

Input 2D point set

Returns

RotatedRect

FitEllipseDirect(IEnumerable<Point2f>)

Fits an ellipse around a set of 2D points.

The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Direct least square(Direct) method by @cite Fitzgibbon1999 is used.

public static RotatedRect FitEllipseDirect(IEnumerable<Point2f> points)

Parameters

points IEnumerable<Point2f>

Input 2D point set

Returns

RotatedRect

FitEllipseDirect(IEnumerable<Point>)

Fits an ellipse around a set of 2D points.

The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Direct least square(Direct) method by @cite Fitzgibbon1999 is used.

public static RotatedRect FitEllipseDirect(IEnumerable<Point> points)

Parameters

points IEnumerable<Point>

Input 2D point set

Returns

RotatedRect

FitLine(InputArray, OutputArray, DistanceTypes, double, double, double)

Fits line to the set of 2D points using M-estimator algorithm

public static void FitLine(InputArray points, OutputArray line, DistanceTypes distType, double param, double reps, double aeps)

Parameters

points InputArray

Input vector of 2D or 3D points

line OutputArray

Output line parameters. In case of 2D fitting, it should be a vector of 4 elements (like Vec4f) - (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. In case of 3D fitting, it should be a vector of 6 elements (like Vec6f) - (vx, vy, vz, x0, y0, z0), where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is a point on the line.

distType DistanceTypes

Distance used by the M-estimator

param double

Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.

reps double

Sufficient accuracy for the radius (distance between the coordinate origin and the line).

aeps double

Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.

FitLine(IEnumerable<Point2f>, DistanceTypes, double, double, double)

Fits line to the set of 2D points using M-estimator algorithm

public static Line2D FitLine(IEnumerable<Point2f> points, DistanceTypes distType, double param, double reps, double aeps)

Parameters

points IEnumerable<Point2f>

Input vector of 2D or 3D points

distType DistanceTypes

Distance used by the M-estimator

param double

Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.

reps double

Sufficient accuracy for the radius (distance between the coordinate origin and the line).

aeps double

Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.

Returns

Line2D

Output line parameters.

FitLine(IEnumerable<Point3f>, DistanceTypes, double, double, double)

Fits line to the set of 3D points using M-estimator algorithm

public static Line3D FitLine(IEnumerable<Point3f> points, DistanceTypes distType, double param, double reps, double aeps)

Parameters

points IEnumerable<Point3f>

Input vector of 2D or 3D points

distType DistanceTypes

Distance used by the M-estimator

param double

Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.

reps double

Sufficient accuracy for the radius (distance between the coordinate origin and the line).

aeps double

Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.

Returns

Line3D

Output line parameters.

FitLine(IEnumerable<Point3i>, DistanceTypes, double, double, double)

Fits line to the set of 3D points using M-estimator algorithm

public static Line3D FitLine(IEnumerable<Point3i> points, DistanceTypes distType, double param, double reps, double aeps)

Parameters

points IEnumerable<Point3i>

Input vector of 2D or 3D points

distType DistanceTypes

Distance used by the M-estimator

param double

Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.

reps double

Sufficient accuracy for the radius (distance between the coordinate origin and the line).

aeps double

Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.

Returns

Line3D

Output line parameters.

FitLine(IEnumerable<Point>, DistanceTypes, double, double, double)

Fits line to the set of 2D points using M-estimator algorithm

public static Line2D FitLine(IEnumerable<Point> points, DistanceTypes distType, double param, double reps, double aeps)

Parameters

points IEnumerable<Point>

Input vector of 2D or 3D points

distType DistanceTypes

Distance used by the M-estimator

param double

Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.

reps double

Sufficient accuracy for the radius (distance between the coordinate origin and the line).

aeps double

Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.

Returns

Line2D

Output line parameters.
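
Examples

A minimal sketch using the IEnumerable&lt;Point2f&gt; overload; the noisy sample points and parameter values are illustrative placeholders.

using System;
using System.Linq;
using OpenCvSharp;

// Points scattered around the line y = 0.5 * x + 10
var rnd = new Random(42);
var points = Enumerable.Range(0, 50)
    .Select(x => new Point2f(x, 0.5f * x + 10 + (float)(rnd.NextDouble() - 0.5)))
    .ToList();

Line2D line = Cv2.FitLine(points, DistanceTypes.L2, 0, 0.01, 0.01);
Console.WriteLine(line);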

Flip(InputArray, OutputArray, FlipMode)

reverses the order of the rows, columns or both in a matrix

public static void Flip(InputArray src, OutputArray dst, FlipMode flipCode)

Parameters

src InputArray

The source array

dst OutputArray

The destination array; will have the same size and same type as src

flipCode FlipMode

Specifies how to flip the array: 0 means flipping around the x-axis, a positive value (e.g., 1) means flipping around the y-axis, and a negative value (e.g., -1) means flipping around both axes.

FloodFill(InputOutputArray, InputOutputArray, Point, Scalar)

Fills a connected component with the given color.

public static int FloodFill(InputOutputArray image, InputOutputArray mask, Point seedPoint, Scalar newVal)

Parameters

image InputOutputArray

Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

mask InputOutputArray

(For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. The function uses and updates the mask, so you are responsible for initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.

seedPoint Point

Starting point.

newVal Scalar

New value of the repainted domain pixels.

Returns

int

FloodFill(InputOutputArray, InputOutputArray, Point, Scalar, out Rect, Scalar?, Scalar?, FloodFillFlags)

Fills a connected component with the given color.

public static int FloodFill(InputOutputArray image, InputOutputArray mask, Point seedPoint, Scalar newVal, out Rect rect, Scalar? loDiff = null, Scalar? upDiff = null, FloodFillFlags flags = FloodFillFlags.Link4)

Parameters

image InputOutputArray

Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

mask InputOutputArray

(For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. The function uses and updates the mask, so you are responsible for initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap.

seedPoint Point

Starting point.

newVal Scalar

New value of the repainted domain pixels.

rect Rect

Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.

loDiff Scalar?

Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

upDiff Scalar?

Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

flags FloodFillFlags

Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Using FloodFillFlags.MaskOnly will fill in the mask using the grey value 255 (white).

Returns

int

FloodFill(InputOutputArray, Point, Scalar)

Fills a connected component with the given color.

public static int FloodFill(InputOutputArray image, Point seedPoint, Scalar newVal)

Parameters

image InputOutputArray

Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

seedPoint Point

Starting point.

newVal Scalar

New value of the repainted domain pixels.

Returns

int

FloodFill(InputOutputArray, Point, Scalar, out Rect, Scalar?, Scalar?, FloodFillFlags)

Fills a connected component with the given color.

public static int FloodFill(InputOutputArray image, Point seedPoint, Scalar newVal, out Rect rect, Scalar? loDiff = null, Scalar? upDiff = null, FloodFillFlags flags = FloodFillFlags.Link4)

Parameters

image InputOutputArray

Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.

seedPoint Point

Starting point.

newVal Scalar

New value of the repainted domain pixels.

rect Rect

Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain.

loDiff Scalar?

Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

upDiff Scalar?

Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.

flags FloodFillFlags

Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Using FloodFillFlags.MaskOnly will fill in the mask using the grey value 255 (white).

Returns

int
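
Examples

A minimal sketch of the overload that also reports the bounding rectangle of the repainted region. The image path, seed point and tolerances are illustrative placeholders.

using System;
using OpenCvSharp;

using var image = Cv2.ImRead("input.png", ImreadModes.Color);

// Repaint the connected component around (10, 10) in red, with a tolerance of 20 per channel
int filledArea = Cv2.FloodFill(image, new Point(10, 10), new Scalar(0, 0, 255),
    out Rect boundingRect, Scalar.All(20), Scalar.All(20), FloodFillFlags.Link4);
Console.WriteLine($"{filledArea} pixels filled inside {boundingRect}");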

Format(InputArray, FormatType)

public static string Format(InputArray mtx, FormatType format = FormatType.Default)

Parameters

mtx InputArray
format FormatType

Returns

string

GaussianBlur(InputArray, OutputArray, Size, double, double, BorderTypes)

Blurs an image using a Gaussian filter.

public static void GaussianBlur(InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY = 0, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

dst OutputArray

output image of the same size and type as src.

ksize Size

Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or they can be zeros, in which case they are computed from sigmaX and sigmaY.

sigmaX double

Gaussian kernel standard deviation in X direction.

sigmaY double

Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see getGaussianKernel() for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.

borderType BorderTypes

pixel extrapolation method
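
Examples

A minimal usage sketch; the image path is a placeholder.

using OpenCvSharp;

using var src = Cv2.ImRead("input.png", ImreadModes.Color);
using var dst = new Mat();

// 5x5 kernel, sigmaX = 1.5; sigmaY defaults to sigmaX
Cv2.GaussianBlur(src, dst, new Size(5, 5), 1.5);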

Gemm(InputArray, InputArray, double, InputArray, double, OutputArray, GemmFlags)

implements generalized matrix product algorithm GEMM from BLAS

public static void Gemm(InputArray src1, InputArray src2, double alpha, InputArray src3, double gamma, OutputArray dst, GemmFlags flags = GemmFlags.None)

Parameters

src1 InputArray
src2 InputArray
alpha double
src3 InputArray
gamma double
dst OutputArray
flags GemmFlags

GetAffineTransform(InputArray, InputArray)

Calculates an affine transform from three pairs of the corresponding points. The function calculates the 2×3 matrix of an affine transform.

public static Mat GetAffineTransform(InputArray src, InputArray dst)

Parameters

src InputArray

Coordinates of triangle vertices in the source image.

dst InputArray

Coordinates of the corresponding triangle vertices in the destination image.

Returns

Mat

GetAffineTransform(IEnumerable<Point2f>, IEnumerable<Point2f>)

Calculates an affine transform from three pairs of the corresponding points. The function calculates the 2×3 matrix of an affine transform.

public static Mat GetAffineTransform(IEnumerable<Point2f> src, IEnumerable<Point2f> dst)

Parameters

src IEnumerable<Point2f>

Coordinates of triangle vertices in the source image.

dst IEnumerable<Point2f>

Coordinates of the corresponding triangle vertices in the destination image.

Returns

Mat

GetBuildInformation()

Returns full configuration time cmake output.

Returned value is raw cmake output including version control system revision, compiler version, compiler flags, enabled modules and third party libraries, etc. Output format depends on the target architecture.

public static string GetBuildInformation()

Returns

string

GetCpuFeaturesLine()

Returns list of CPU features enabled during compilation. Returned value is a string containing space separated list of CPU features with following markers:

  • no markers - baseline features
  • prefix * - features enabled in dispatcher
  • suffix ? - features enabled but not available in HW
public static string GetCpuFeaturesLine()

Returns

string

Examples

SSE SSE2 SSE3* SSE4.1 *SSE4.2 *FP16* AVX *AVX2* AVX512-SKX?

GetCpuTickCount()

Returns the number of CPU ticks.

The function returns the current number of CPU ticks on some architectures (such as x86, x64, and PowerPC). On other platforms the function is equivalent to getTickCount. It can also be used for very accurate time measurements, as well as for RNG initialization. Note that in the case of multi-CPU systems, a thread from which getCPUTickCount is called can be suspended and resumed on another CPU with its own counter, so, theoretically (and practically), subsequent calls to the function do not necessarily return monotonically increasing values. Also, since a modern CPU varies the CPU frequency depending on the load, the number of CPU clocks spent in some code cannot be directly converted to time units. Therefore, getTickCount is generally a preferable solution for measuring execution time.

public static long GetCpuTickCount()

Returns

long

GetDefaultNewCameraMatrix(InputArray, Size?, bool)

returns the default new camera matrix (by default it is the same as cameraMatrix unless centerPrincipalPoint=true)

public static Mat GetDefaultNewCameraMatrix(InputArray cameraMatrix, Size? imgSize = null, bool centerPrincipalPoint = false)

Parameters

cameraMatrix InputArray

Input camera matrix.

imgSize Size?

Camera view image size in pixels.

centerPrincipalPoint bool

Location of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not.

Returns

Mat

the camera matrix that is either an exact copy of the input cameraMatrix (when centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true).

GetDerivKernels(OutputArray, OutputArray, int, int, int, bool, MatType?)

Returns filter coefficients for computing spatial image derivatives.

public static void GetDerivKernels(OutputArray kx, OutputArray ky, int dx, int dy, int ksize, bool normalize = false, MatType? ktype = null)

Parameters

kx OutputArray

Output matrix of row filter coefficients. It has the type ktype.

ky OutputArray

Output matrix of column filter coefficients. It has the type ktype.

dx int

Derivative order with respect to x.

dy int

Derivative order with respect to y.

ksize int

Aperture size. It can be CV_SCHARR, 1, 3, 5, or 7.

normalize bool

Flag indicating whether to normalize (scale down) the filter coefficients or not. Theoretically, the coefficients should have the denominator = 2^(ksize*2 - dx - dy - 2). If you are going to filter floating-point images, you are likely to use the normalized kernels. But if you compute derivatives of an 8-bit image, store the results in a 16-bit image, and wish to preserve all the fractional bits, you may want to set normalize = false.

ktype MatType?

Type of filter coefficients. It can be CV_32F or CV_64F.

GetFontScaleFromHeight(HersheyFonts, int, int)

Calculates the font-specific size to use to achieve a given height in pixels.

public static double GetFontScaleFromHeight(HersheyFonts fontFace, int pixelHeight, int thickness = 1)

Parameters

fontFace HersheyFonts

Font to use, see cv::HersheyFonts.

pixelHeight int

Pixel height to compute the fontScale for

thickness int

Thickness of lines used to render the text. See putText for details.

Returns

double

The fontSize to use for cv::putText

GetGaborKernel(Size, double, double, double, double, double, int)

Returns Gabor filter coefficients.

public static Mat GetGaborKernel(Size ksize, double sigma, double theta, double lambd, double gamma, double psi, int ktype)

Parameters

ksize Size

Size of the filter returned.

sigma double

Standard deviation of the gaussian envelope.

theta double

Orientation of the normal to the parallel stripes of a Gabor function.

lambd double

Wavelength of the sinusoidal factor.

gamma double

Spatial aspect ratio.

psi double

Phase offset.

ktype int

Type of filter coefficients. It can be CV_32F or CV_64F.

Returns

Mat

Remarks

For more details about gabor filter equations and parameters, see: https://en.wikipedia.org/wiki/Gabor_filter

GetGaussianKernel(int, double, MatType?)

Returns Gaussian filter coefficients.

public static Mat? GetGaussianKernel(int ksize, double sigma, MatType? ktype = null)

Parameters

ksize int

Aperture size. It should be odd and positive.

sigma double

Gaussian standard deviation. If it is non-positive, it is computed from ksize as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8.

ktype MatType?

Type of filter coefficients. It can be CV_32F or CV_64F.

Returns

Mat

GetHardwareFeatureName(CpuFeatures)

Returns feature name by ID. Returns empty string if feature is not defined

public static string GetHardwareFeatureName(CpuFeatures feature)

Parameters

feature CpuFeatures

Returns

string

GetLogLevel()

Get global logging level

public static LogLevel GetLogLevel()

Returns

LogLevel

logging level

GetMouseWheelDelta(MouseEventFlags)

Gets the mouse-wheel motion delta, when handling mouse-wheel events cv::EVENT_MOUSEWHEEL and cv::EVENT_MOUSEHWHEEL.

For regular mice with a scroll-wheel, delta will be a multiple of 120. The value 120 corresponds to a one-notch rotation of the wheel or the threshold for an action to be taken, and one such action should occur for each delta. Some high-precision mice with higher-resolution freely-rotating wheels may generate smaller values.

For cv::EVENT_MOUSEWHEEL, positive and negative values mean forward and backward scrolling, respectively. For cv::EVENT_MOUSEHWHEEL, where available, positive and negative values mean right and left scrolling, respectively.

public static int GetMouseWheelDelta(MouseEventFlags flags)

Parameters

flags MouseEventFlags

The mouse callback flags parameter.

Returns

int

GetNearestPoint(IEnumerable<Point2f>, float)

public static int GetNearestPoint(IEnumerable<Point2f> recallPrecisionCurve, float lPrecision)

Parameters

recallPrecisionCurve IEnumerable<Point2f>
lPrecision float

Returns

int

GetNumThreads()

Returns the number of threads used by OpenCV for parallel regions.

Always returns 1 if OpenCV is built without threading support. The exact meaning of the return value depends on the threading framework used by the OpenCV library:

  • TBB - The number of threads that OpenCV will try to use for parallel regions. If there is any tbb::thread_scheduler_init in user code conflicting with OpenCV, then the function returns the default number of threads used by the TBB library.
  • OpenMP - An upper bound on the number of threads that could be used to form a new team.
  • Concurrency - The number of threads that OpenCV will try to use for parallel regions.
  • GCD - Unsupported; returns the GCD thread pool limit (512) for compatibility.
  • C= - The number of threads that OpenCV will try to use for parallel regions, if setNumThreads was previously called with threads > 0; otherwise returns the number of logical CPUs available to the process.
public static int GetNumThreads()

Returns

int

GetNumberOfCpus()

Returns the number of logical CPUs available for the process.

public static int GetNumberOfCpus()

Returns

int

GetOptimalDFTSize(int)

Returns the optimal DFT size for a given vector size.

public static int GetOptimalDFTSize(int vecSize)

Parameters

vecSize int

vector size.

Returns

int

GetOptimalNewCameraMatrix(InputArray, InputArray?, Size, double, Size, out Rect, bool)

Returns the new camera matrix based on the free scaling parameter.

public static Mat GetOptimalNewCameraMatrix(InputArray cameraMatrix, InputArray? distCoeffs, Size imageSize, double alpha, Size newImgSize, out Rect validPixROI, bool centerPrincipalPoint = false)

Parameters

cameraMatrix InputArray

Input camera matrix.

distCoeffs InputArray

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the array is null, the zero distortion coefficients are assumed.

imageSize Size

Original image size.

alpha double

Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image).

newImgSize Size

Image size after rectification. By default, it is set to imageSize.

validPixROI Rect

Optional output rectangle that outlines all-good-pixels region in the undistorted image. See roi1, roi2 description in stereoRectify() .

centerPrincipalPoint bool

Optional flag that indicates whether in the new camera matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image.

Returns

Mat

optimal new camera matrix

GetOptimalNewCameraMatrix(double[,], double[], Size, double, Size, out Rect, bool)

Returns the new camera matrix based on the free scaling parameter.

public static double[,]? GetOptimalNewCameraMatrix(double[,] cameraMatrix, double[] distCoeffs, Size imageSize, double alpha, Size newImgSize, out Rect validPixROI, bool centerPrincipalPoint = false)

Parameters

cameraMatrix double[,]

Input camera matrix.

distCoeffs double[]

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the array is null, the zero distortion coefficients are assumed.

imageSize Size

Original image size.

alpha double

Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image).

newImgSize Size

Image size after rectification. By default, it is set to imageSize.

validPixROI Rect

Optional output rectangle that outlines all-good-pixels region in the undistorted image. See roi1, roi2 description in stereoRectify() .

centerPrincipalPoint bool

Optional flag that indicates whether in the new camera matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image.

Returns

double[,]

optimal new camera matrix

GetPerspectiveTransform(InputArray, InputArray)

Calculates a perspective transform from four pairs of the corresponding points. The function calculates the 3×3 matrix of a perspective transform.

public static Mat GetPerspectiveTransform(InputArray src, InputArray dst)

Parameters

src InputArray

Coordinates of quadrangle vertices in the source image.

dst InputArray

Coordinates of the corresponding quadrangle vertices in the destination image.

Returns

Mat

GetPerspectiveTransform(IEnumerable<Point2f>, IEnumerable<Point2f>)

Calculates a perspective transform from four pairs of the corresponding points. The function calculates the 3×3 matrix of a perspective transform.

public static Mat GetPerspectiveTransform(IEnumerable<Point2f> src, IEnumerable<Point2f> dst)

Parameters

src IEnumerable<Point2f>

Coordinates of quadrangle vertices in the source image.

dst IEnumerable<Point2f>

Coordinates of the corresponding quadrangle vertices in the destination image.

Returns

Mat
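
Examples

A minimal sketch of computing the transform from four corner correspondences and applying it with WarpPerspective. The corner coordinates, output size and file path are illustrative placeholders.

using OpenCvSharp;

using var src = Cv2.ImRead("document.png", ImreadModes.Color);

// Four corners of a quadrangle in the source image and where they should map to
var srcQuad = new[] { new Point2f(56, 65), new Point2f(368, 52), new Point2f(389, 390), new Point2f(28, 387) };
var dstQuad = new[] { new Point2f(0, 0), new Point2f(300, 0), new Point2f(300, 400), new Point2f(0, 400) };

using var m = Cv2.GetPerspectiveTransform(srcQuad, dstQuad);
using var warped = new Mat();
Cv2.WarpPerspective(src, warped, m, new Size(300, 400));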

GetRecall(IEnumerable<Point2f>, float)

public static float GetRecall(IEnumerable<Point2f> recallPrecisionCurve, float lPrecision)

Parameters

recallPrecisionCurve IEnumerable<Point2f>
lPrecision float

Returns

float

GetRectSubPix(InputArray, Size, Point2f, OutputArray, int)

Retrieves a pixel rectangle from an image with sub-pixel accuracy.

public static void GetRectSubPix(InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType = -1)

Parameters

image InputArray

Source image.

patchSize Size

Size of the extracted patch.

center Point2f

Floating point coordinates of the center of the extracted rectangle within the source image. The center must be inside the image.

patch OutputArray

Extracted patch that has the size patchSize and the same number of channels as src .

patchType int

Depth of the extracted pixels. By default, they have the same depth as src.

GetRotationMatrix2D(Point2f, double, double)

Calculates an affine matrix of 2D rotation.

public static Mat GetRotationMatrix2D(Point2f center, double angle, double scale)

Parameters

center Point2f

Center of the rotation in the source image.

angle double

Rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner).

scale double

Isotropic scale factor.

Returns

Mat
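
Examples

A minimal sketch of rotating an image 30 degrees counter-clockwise around its center and applying the matrix with WarpAffine; the image path is a placeholder.

using OpenCvSharp;

using var src = Cv2.ImRead("input.png", ImreadModes.Color);

// Rotate 30 degrees counter-clockwise around the image center, no scaling
var center = new Point2f(src.Cols / 2f, src.Rows / 2f);
using var rotation = Cv2.GetRotationMatrix2D(center, 30, 1.0);

using var dst = new Mat();
Cv2.WarpAffine(src, dst, rotation, src.Size());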

GetStructuringElement(MorphShapes, Size)

Returns a structuring element of the specified size and shape for morphological operations. The function constructs and returns the structuring element that can be further passed to erode, dilate or morphologyEx. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.

public static Mat GetStructuringElement(MorphShapes shape, Size ksize)

Parameters

shape MorphShapes

Element shape that could be one of MorphShapes

ksize Size

Size of the structuring element.

Returns

Mat

GetStructuringElement(MorphShapes, Size, Point)

Returns a structuring element of the specified size and shape for morphological operations. The function constructs and returns the structuring element that can be further passed to erode, dilate or morphologyEx. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.

public static Mat GetStructuringElement(MorphShapes shape, Size ksize, Point anchor)

Parameters

shape MorphShapes

Element shape that could be one of MorphShapes

ksize Size

Size of the structuring element.

anchor Point

Anchor position within the element. The default value (−1,−1) means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted.

Returns

Mat
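
Examples

A minimal sketch of building an elliptical structuring element and using it for morphological opening; the image path and kernel size are illustrative placeholders.

using OpenCvSharp;

using var src = Cv2.ImRead("mask.png", ImreadModes.Grayscale);

// 5x5 elliptical kernel used for morphological opening (erosion followed by dilation)
using var kernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new Size(5, 5));
using var opened = new Mat();
Cv2.MorphologyEx(src, opened, MorphTypes.Open, kernel);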

GetTextSize(string, HersheyFonts, double, int, out int)

returns bounding box of the text string

public static Size GetTextSize(string text, HersheyFonts fontFace, double fontScale, int thickness, out int baseLine)

Parameters

text string

Input text string.

fontFace HersheyFonts

Font to use, see HersheyFonts.

fontScale double

Font scale factor that is multiplied by the font-specific base size.

thickness int

Thickness of lines used to render the text. See putText for details.

baseLine int

y-coordinate of the baseline relative to the bottom-most text point.

Returns

Size

The size of a box that contains the specified text.

GetTheRNG()

Returns the thread-local Random number generator

public static RNG GetTheRNG()

Returns

RNG

GetThreadNum()

Returns the index of the currently executed thread within the current parallel region. Always returns 0 if called outside of a parallel region. @deprecated The current implementation does not correspond to this documentation. The exact meaning of the return value depends on the threading framework used by the OpenCV library:

  • TBB - Unsupported with the current 4.1 TBB release. It may be supported in the future.
  • OpenMP - The thread number, within the current team, of the calling thread.
  • Concurrency - An ID for the virtual processor that the current context is executing on (0 for the master thread and a unique number for others, but not necessarily 1, 2, 3, ...).
  • GCD - System calling thread's ID. Never returns 0 inside parallel region.
  • C= - The index of the current parallel task.
public static int GetThreadNum()

Returns

int

GetTickCount()

Returns the number of ticks. The function returns the number of ticks after a certain event (for example, when the machine was turned on). It can be used to initialize RNG or to measure a function execution time by reading the tick count before and after the function call.

public static long GetTickCount()

Returns

long

GetTickFrequency()

Returns the number of ticks per second. The function returns the number of ticks per second. That is, the execution time in seconds can be computed as shown in the example below:

public static double GetTickFrequency()

Returns

double
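
Examples

The timing pattern referred to above, combining GetTickCount and GetTickFrequency:

using System;
using OpenCvSharp;

long start = Cv2.GetTickCount();
// ... code to be measured ...
double elapsedSeconds = (Cv2.GetTickCount() - start) / Cv2.GetTickFrequency();
Console.WriteLine($"Elapsed: {elapsedSeconds} s");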

GetTrackbarPos(string, string)

Returns the trackbar position.

public static int GetTrackbarPos(string trackbarName, string winName)

Parameters

trackbarName string

Name of the trackbar.

winName string

Name of the window that is the parent of the trackbar.

Returns

int

trackbar position

GetValidDisparityROI(Rect, Rect, int, int, int)

computes valid disparity ROI from the valid ROIs of the rectified images (that are returned by cv::stereoRectify())

public static Rect GetValidDisparityROI(Rect roi1, Rect roi2, int minDisparity, int numberOfDisparities, int SADWindowSize)

Parameters

roi1 Rect
roi2 Rect
minDisparity int
numberOfDisparities int
SADWindowSize int

Returns

Rect

GetVersionMajor()

Returns major library version

public static int GetVersionMajor()

Returns

int

GetVersionMinor()

Returns minor library version

public static int GetVersionMinor()

Returns

int

GetVersionRevision()

Returns revision field of the library version

public static int GetVersionRevision()

Returns

int

GetVersionString()

Returns library version string. For example "3.4.1-dev".

public static string? GetVersionString()

Returns

string

GetWindowHandle(string)

get native window handle (HWND in case of Win32 and Widget in case of X Window)

public static nint GetWindowHandle(string windowName)

Parameters

windowName string

Returns

nint

GetWindowImageRect(string)

Provides rectangle of image in the window. The function getWindowImageRect returns the client screen coordinates, width and height of the image rendering area.

public static Rect GetWindowImageRect(string winName)

Parameters

winName string

Name of the window.

Returns

Rect

GetWindowProperty(string, WindowPropertyFlags)

Provides parameters of a window.

public static double GetWindowProperty(string winName, WindowPropertyFlags propId)

Parameters

winName string

Name of the window.

propId WindowPropertyFlags

Window property to retrieve.

Returns

double

GoodFeaturesToTrack(InputArray, int, double, double, InputArray, int, bool, double)

finds the strong enough corners where the cornerMinEigenVal() or cornerHarris() report the local maxima

public static Point2f[] GoodFeaturesToTrack(InputArray src, int maxCorners, double qualityLevel, double minDistance, InputArray mask, int blockSize, bool useHarrisDetector, double k)

Parameters

src InputArray

Input 8-bit or floating-point 32-bit, single-channel image.

maxCorners int

Maximum number of corners to return. If more corners are found, the strongest maxCorners of them are returned.

qualityLevel double

Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue or the Harris function response (see cornerHarris() ). The corners with the quality measure less than the product are rejected. For example, if the best corner has the quality measure = 1500, and the qualityLevel=0.01, then all the corners with the quality measure less than 15 are rejected.

minDistance double

Minimum possible Euclidean distance between the returned corners.

mask InputArray

Optional region of interest. If the image is not empty (it needs to have the type CV_8UC1 and the same size as image ), it specifies the region in which the corners are detected.

blockSize int

Size of an average block for computing a derivative covariation matrix over each pixel neighborhood.

useHarrisDetector bool

Parameter indicating whether to use a Harris detector

k double

Free parameter of the Harris detector.

Returns

Point2f[]

Output vector of detected corners.
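
Examples

A minimal sketch of detecting up to 100 Shi-Tomasi corners. The image path and parameter values are illustrative placeholders; an empty Mat is passed as the mask so the whole image is searched.

using System;
using OpenCvSharp;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var mask = new Mat();   // empty mask: no region restriction

Point2f[] corners = Cv2.GoodFeaturesToTrack(gray, 100, 0.01, 10, mask, 3, false, 0.04);
Console.WriteLine($"{corners.Length} corners found");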

GrabCut(InputArray, InputOutputArray, Rect, InputOutputArray, InputOutputArray, int, GrabCutModes)

Segments the image using GrabCut algorithm

public static void GrabCut(InputArray img, InputOutputArray mask, Rect rect, InputOutputArray bgdModel, InputOutputArray fgdModel, int iterCount, GrabCutModes mode)

Parameters

img InputArray

Input 8-bit 3-channel image.

mask InputOutputArray

Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have Cv2.GC_BGD / Cv2.GC_FGD / Cv2.GC_PR_BGD / Cv2.GC_PR_FGD

rect Rect

ROI containing a segmented object. The pixels outside of the ROI are marked as "obvious background". The parameter is only used when mode==GC_INIT_WITH_RECT.

bgdModel InputOutputArray

Temporary array for the background model. Do not modify it while you are processing the same image.

fgdModel InputOutputArray

Temporary arrays for the foreground model. Do not modify it while you are processing the same image.

iterCount int

Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode==GC_INIT_WITH_MASK or mode==GC_EVAL .

mode GrabCutModes

Operation mode that could be one of GrabCutFlag value.

GroupRectangles(IList<Rect>, int, double)

Groups the object candidate rectangles.

public static void GroupRectangles(IList<Rect> rectList, int groupThreshold, double eps = 0.2)

Parameters

rectList IList<Rect>

Input/output vector of rectangles. Output vector includes retained and grouped rectangles.

groupThreshold int

Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.

eps double

GroupRectangles(IList<Rect>, int, double, out int[], out double[])

Groups the object candidate rectangles.

public static void GroupRectangles(IList<Rect> rectList, int groupThreshold, double eps, out int[] weights, out double[] levelWeights)

Parameters

rectList IList<Rect>
groupThreshold int
eps double
weights int[]
levelWeights double[]

GroupRectangles(IList<Rect>, out int[], out double[], int, double)

Groups the object candidate rectangles.

public static void GroupRectangles(IList<Rect> rectList, out int[] rejectLevels, out double[] levelWeights, int groupThreshold, double eps = 0.2)

Parameters

rectList IList<Rect>
rejectLevels int[]
levelWeights double[]
groupThreshold int
eps double

GroupRectangles(IList<Rect>, out int[], int, double)

Groups the object candidate rectangles.

public static void GroupRectangles(IList<Rect> rectList, out int[] weights, int groupThreshold, double eps = 0.2)

Parameters

rectList IList<Rect>

Input/output vector of rectangles. Output vector includes retained and grouped rectangles.

weights int[]
groupThreshold int

Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.

eps double

Relative difference between sides of the rectangles to merge them into a group.

GroupRectanglesMeanshift(IList<Rect>, out double[], out double[], double, Size?)

public static void GroupRectanglesMeanshift(IList<Rect> rectList, out double[] foundWeights, out double[] foundScales, double detectThreshold = 0, Size? winDetSize = null)

Parameters

rectList IList<Rect>
foundWeights double[]
foundScales double[]
detectThreshold double
winDetSize Size?

HConcat(InputArray, InputArray, OutputArray)

Applies horizontal concatenation to given matrices.

public static void HConcat(InputArray src1, InputArray src2, OutputArray dst)

Parameters

src1 InputArray

first input array to be considered for horizontal concatenation.

src2 InputArray

second input array to be considered for horizontal concatenation.

dst OutputArray

output array. It has the same number of rows and depth as the src1 and src2, and the sum of cols of the src1 and src2.

HConcat(IEnumerable<Mat>, OutputArray)

Applies horizontal concatenation to given matrices.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void HConcat(IEnumerable<Mat> src, OutputArray dst)

Parameters

src IEnumerable<Mat>

input array or vector of matrices. All of the matrices must have the same number of rows and the same depth.

dst OutputArray

output array. It has the same number of rows and depth as the src, and the sum of cols of the src.

HaveImageReader(string)

public static bool HaveImageReader(string fileName)

Parameters

fileName string

Returns

bool

HaveImageWriter(string)

public static bool HaveImageWriter(string fileName)

Parameters

fileName string

Returns

bool

HoughCircles(InputArray, HoughModes, double, double, double, double, int, int)

Finds circles in a grayscale image using a Hough transform.

public static CircleSegment[] HoughCircles(InputArray image, HoughModes method, double dp, double minDist, double param1 = 100, double param2 = 100, int minRadius = 0, int maxRadius = 0)

Parameters

image InputArray

The 8-bit, single-channel, grayscale input image

method HoughModes

The available methods are HoughModes.Gradient and HoughModes.GradientAlt

dp double

The inverse ratio of the accumulator resolution to the image resolution.

minDist double

Minimum distance between the centers of the detected circles.

param1 double

The first method-specific parameter. [By default this is 100]

param2 double

The second method-specific parameter. [By default this is 100]

minRadius int

Minimum circle radius. [By default this is 0]

maxRadius int

Maximum circle radius. [By default this is 0]

Returns

CircleSegment[]

The output vector of found circles. Each circle is encoded as a 3-element floating-point vector (x, y, radius)
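
Examples

A minimal sketch of detecting circles and drawing them onto a color copy of the input; the image path, method parameters and radius limits are illustrative placeholders.

using OpenCvSharp;

using var gray = Cv2.ImRead("coins.png", ImreadModes.Grayscale);
using var color = Cv2.ImRead("coins.png", ImreadModes.Color);

CircleSegment[] circles = Cv2.HoughCircles(gray, HoughModes.Gradient, 1, gray.Rows / 8.0,
    param1: 100, param2: 30, minRadius: 5, maxRadius: 80);

foreach (var c in circles)
    Cv2.Circle(color, new Point((int)c.Center.X, (int)c.Center.Y), (int)c.Radius, new Scalar(0, 0, 255), 2);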

HoughLines(InputArray, double, double, int, double, double)

Finds lines in a binary image using standard Hough transform.

public static LineSegmentPolar[] HoughLines(InputArray image, double rho, double theta, int threshold, double srn = 0, double stn = 0)

Parameters

image InputArray

The 8-bit, single-channel, binary source image. The image may be modified by the function

rho double

Distance resolution of the accumulator in pixels

theta double

Angle resolution of the accumulator in radians

threshold int

The accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold )

srn double

For the multi-scale Hough transform it is the divisor for the distance resolution rho. [By default this is 0]

stn double

For the multi-scale Hough transform it is the divisor for the distance resolution theta. [By default this is 0]

Returns

LineSegmentPolar[]

The output vector of lines. Each line is represented by a two-element vector (rho, theta) . rho is the distance from the coordinate origin (0,0) (top-left corner of the image) and theta is the line rotation angle in radians

HoughLinesP(InputArray, double, double, int, double, double)

Finds lines segments in a binary image using probabilistic Hough transform.

public static LineSegmentPoint[] HoughLinesP(InputArray image, double rho, double theta, int threshold, double minLineLength = 0, double maxLineGap = 0)

Parameters

image InputArray
rho double

Distance resolution of the accumulator in pixels

theta double

Angle resolution of the accumulator in radians

threshold int

The accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold )

minLineLength double

The minimum line length. Line segments shorter than that will be rejected. [By default this is 0]

maxLineGap double

The maximum allowed gap between points on the same line to link them. [By default this is 0]

Returns

LineSegmentPoint[]

The output lines. Each line is represented by a 4-element vector (x1, y1, x2, y2)
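
Examples

A minimal sketch of detecting line segments on a Canny edge map and drawing them; the image path and threshold values are illustrative placeholders.

using System;
using OpenCvSharp;

using var src = Cv2.ImRead("building.png", ImreadModes.Grayscale);
using var edges = new Mat();
Cv2.Canny(src, edges, 50, 150);

LineSegmentPoint[] segments = Cv2.HoughLinesP(edges, 1, Math.PI / 180, 80,
    minLineLength: 30, maxLineGap: 10);

using var canvas = new Mat();
Cv2.CvtColor(edges, canvas, ColorConversionCodes.GRAY2BGR);
foreach (var s in segments)
    Cv2.Line(canvas, s.P1, s.P2, new Scalar(0, 0, 255), 2);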

HoughLinesPointSet(InputArray, OutputArray, int, int, double, double, double, double, double, double)

Finds lines in a set of points using the standard Hough transform. The function finds lines in a set of points using a modification of the Hough transform.

public static void HoughLinesPointSet(InputArray point, OutputArray lines, int linesMax, int threshold, double minRho, double maxRho, double rhoStep, double minTheta, double maxTheta, double thetaStep)

Parameters

point InputArray

Input vector of points. Each vector must be encoded as a Point vector \f$(x,y)\f$. Type must be CV_32FC2 or CV_32SC2.

lines OutputArray

Output vector of found lines. Each vector is encoded as a vector<Vec3d>

linesMax int

Max count of hough lines.

threshold int

Accumulator threshold parameter. Only those lines are returned that get enough votes

minRho double

Minimum Distance value of the accumulator in pixels.

maxRho double

Maximum Distance value of the accumulator in pixels.

rhoStep double

Distance resolution of the accumulator in pixels.

minTheta double

Minimum angle value of the accumulator in radians.

maxTheta double

Maximum angle value of the accumulator in radians.

thetaStep double

Angle resolution of the accumulator in radians.

Idct(InputArray, OutputArray, DctFlags)

Performs inverse 1D or 2D Discrete Cosine Transformation

public static void Idct(InputArray src, OutputArray dst, DctFlags flags = DctFlags.None)

Parameters

src InputArray

The source floating-point array

dst OutputArray

The destination array; will have the same size and same type as src

flags DctFlags

Transformation flags, a combination of DctFlag2 values

Idft(InputArray, OutputArray, DftFlags, int)

Performs an inverse Discrete Fourier transform of 1D or 2D floating-point array.

public static void Idft(InputArray src, OutputArray dst, DftFlags flags = DftFlags.None, int nonzeroRows = 0)

Parameters

src InputArray

The source array, real or complex

dst OutputArray

The destination array, whose size and type depend on the flags

flags DftFlags

Transformation flags, a combination of the DftFlag2 values

nonzeroRows int

When the parameter != 0, the function assumes that only the first nonzeroRows rows of the input array ( DFT_INVERSE is not set) or only the first nonzeroRows of the output array ( DFT_INVERSE is set) contain non-zeros, thus the function can handle the rest of the rows more efficiently and thus save some time. This technique is very useful for computing array cross-correlation or convolution using DFT

IlluminationChange(InputArray, InputArray?, OutputArray, float, float)

Applying an appropriate non-linear transformation to the gradient field inside the selection and then integrating back with a Poisson solver, modifies locally the apparent illumination of an image.

public static void IlluminationChange(InputArray src, InputArray? mask, OutputArray dst, float alpha = 0.2, float beta = 0.4)

Parameters

src InputArray

Input 8-bit 3-channel image.

mask InputArray

Input 8-bit 1 or 3-channel image.

dst OutputArray

Output image with the same size and type as src.

alpha float

Value ranges between 0-2.

beta float

Value ranges between 0-2.

Remarks

This is useful to highlight under-exposed foreground objects or to reduce specular reflections.

ImDecode(InputArray, ImreadModes)

Reads image from the specified buffer in memory.

public static Mat ImDecode(InputArray buf, ImreadModes flags)

Parameters

buf InputArray

The input array or vector of bytes.

flags ImreadModes

The same flags as in imread

Returns

Mat

ImDecode(Mat, ImreadModes)

Reads image from the specified buffer in memory.

public static Mat ImDecode(Mat buf, ImreadModes flags)

Parameters

buf Mat

The input array or vector of bytes.

flags ImreadModes

The same flags as in imread

Returns

Mat

ImDecode(byte[], ImreadModes)

Reads image from the specified buffer in memory.

public static Mat ImDecode(byte[] buf, ImreadModes flags)

Parameters

buf byte[]

The input array or vector of bytes.

flags ImreadModes

The same flags as in imread

Returns

Mat

ImDecode(ReadOnlySpan<byte>, ImreadModes)

Reads image from the specified buffer in memory.

public static Mat ImDecode(ReadOnlySpan<byte> span, ImreadModes flags)

Parameters

span ReadOnlySpan<byte>

The input slice of bytes.

flags ImreadModes

The same flags as in imread

Returns

Mat

ImEncode(string, InputArray, out byte[], params ImageEncodingParam[])

Compresses the image and stores it in the memory buffer

public static void ImEncode(string ext, InputArray img, out byte[] buf, params ImageEncodingParam[] prms)

Parameters

ext string

The file extension that defines the output format

img InputArray

The image to be written

buf byte[]

Output buffer resized to fit the compressed image.

prms ImageEncodingParam[]

Format-specific parameters.

ImEncode(string, InputArray, out byte[], int[]?)

Compresses the image and stores it in the memory buffer

public static bool ImEncode(string ext, InputArray img, out byte[] buf, int[]? prms = null)

Parameters

ext string

The file extension that defines the output format

img InputArray

The image to be written

buf byte[]

Output buffer resized to fit the compressed image.

prms int[]

Format-specific parameters.

Returns

bool

ImRead(string, ImreadModes)

Loads an image from a file.

public static Mat ImRead(string fileName, ImreadModes flags = ImreadModes.Color)

Parameters

fileName string

Name of file to be loaded.

flags ImreadModes

Specifies color type of the loaded image

Returns

Mat

ImReadMulti(string, out Mat[], ImreadModes)

Loads a multi-page image from a file.

public static bool ImReadMulti(string filename, out Mat[] mats, ImreadModes flags = ImreadModes.AnyColor)

Parameters

filename string

Name of file to be loaded.

mats Mat[]

A vector of Mat objects holding each page, if more than one.

flags ImreadModes

Flag that can take values of cv::ImreadModes; default is IMREAD_ANYCOLOR.

Returns

bool

ImShow(string, Mat)

Displays the image in the specified window

public static void ImShow(string winName, Mat mat)

Parameters

winName string

Name of the window.

mat Mat

Image to be shown.

ImWrite(string, Mat, params ImageEncodingParam[])

Saves an image to a specified file.

public static bool ImWrite(string fileName, Mat img, params ImageEncodingParam[] prms)

Parameters

fileName string

Name of the file.

img Mat

Image to be saved.

prms ImageEncodingParam[]

Format-specific save parameters encoded as pairs

Returns

bool

ImWrite(string, Mat, int[]?)

Saves an image to a specified file.

public static bool ImWrite(string fileName, Mat img, int[]? prms = null)

Parameters

fileName string

Name of the file.

img Mat

Image to be saved.

prms int[]

Format-specific save parameters encoded as pairs

Returns

bool

ImWrite(string, IEnumerable<Mat>, params ImageEncodingParam[])

Saves an image to a specified file.

public static bool ImWrite(string fileName, IEnumerable<Mat> img, params ImageEncodingParam[] prms)

Parameters

fileName string

Name of the file.

img IEnumerable<Mat>

Image to be saved.

prms ImageEncodingParam[]

Format-specific save parameters encoded as pairs

Returns

bool

ImWrite(string, IEnumerable<Mat>, int[]?)

Saves an image to a specified file.

public static bool ImWrite(string fileName, IEnumerable<Mat> img, int[]? prms = null)

Parameters

fileName string

Name of the file.

img IEnumerable<Mat>

Image to be saved.

prms int[]

Format-specific save parameters encoded as pairs

Returns

bool

InRange(InputArray, InputArray, InputArray, OutputArray)

Checks if array elements lie between the elements of two other arrays.

public static void InRange(InputArray src, InputArray lowerb, InputArray upperb, OutputArray dst)

Parameters

src InputArray

first input array.

lowerb InputArray

inclusive lower boundary array or a scalar.

upperb InputArray

inclusive upper boundary array or a scalar.

dst OutputArray

output array of the same size as src and CV_8U type.

InRange(InputArray, Scalar, Scalar, OutputArray)

Checks if array elements lie between the elements of two other arrays.

public static void InRange(InputArray src, Scalar lowerb, Scalar upperb, OutputArray dst)

Parameters

src InputArray

first input array.

lowerb Scalar

inclusive lower boundary array or a scalar.

upperb Scalar

inclusive upper boundary array or a scalar.

dst OutputArray

output array of the same size as src and CV_8U type.
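
Examples

A minimal sketch of thresholding an image in HSV space to obtain a binary mask. The image path and color bounds (roughly a blue hue range) are illustrative placeholders.

using OpenCvSharp;

using var bgr = Cv2.ImRead("input.png", ImreadModes.Color);
using var hsv = new Mat();
Cv2.CvtColor(bgr, hsv, ColorConversionCodes.BGR2HSV);

// Keep pixels whose hue/saturation/value fall inside the given bounds
using var mask = new Mat();
Cv2.InRange(hsv, new Scalar(100, 100, 50), new Scalar(130, 255, 255), mask);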

InitCameraMatrix2D(IEnumerable<Mat>, IEnumerable<Mat>, Size, double)

initializes camera matrix from a few 3D points and the corresponding projections.

public static Mat InitCameraMatrix2D(IEnumerable<Mat> objectPoints, IEnumerable<Mat> imagePoints, Size imageSize, double aspectRatio = 1)

Parameters

objectPoints IEnumerable<Mat>

Vector of vectors (vector<vector<Point3d>>) of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated.

imagePoints IEnumerable<Mat>

Vector of vectors (vector<vector<Point2d>>) of the projections of the calibration pattern points. In the old interface all the per-view vectors are concatenated.

imageSize Size

Image size in pixels used to initialize the principal point.

aspectRatio double

If it is zero or negative, both f_x and f_y are estimated independently. Otherwise, f_x = f_y * aspectRatio .

Returns

Mat

InitCameraMatrix2D(IEnumerable<IEnumerable<Point3f>>, IEnumerable<IEnumerable<Point2f>>, Size, double)

initializes camera matrix from a few 3D points and the corresponding projections.

public static Mat InitCameraMatrix2D(IEnumerable<IEnumerable<Point3f>> objectPoints, IEnumerable<IEnumerable<Point2f>> imagePoints, Size imageSize, double aspectRatio = 1)

Parameters

objectPoints IEnumerable<IEnumerable<Point3f>>

Vector of vectors of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated.

imagePoints IEnumerable<IEnumerable<Point2f>>

Vector of vectors of the projections of the calibration pattern points. In the old interface all the per-view vectors are concatenated.

imageSize Size

Image size in pixels used to initialize the principal point.

aspectRatio double

If it is zero or negative, both f_x and f_y are estimated independently. Otherwise, f_x = f_y * aspectRatio .

Returns

Mat

InitUndistortRectifyMap(InputArray, InputArray, InputArray, InputArray, Size, MatType, OutputArray, OutputArray)

initializes maps for cv::remap() to correct lens distortion and optionally rectify the image

public static void InitUndistortRectifyMap(InputArray cameraMatrix, InputArray distCoeffs, InputArray r, InputArray newCameraMatrix, Size size, MatType m1Type, OutputArray map1, OutputArray map2)

Parameters

cameraMatrix InputArray
distCoeffs InputArray
r InputArray
newCameraMatrix InputArray
size Size
m1Type MatType
map1 OutputArray
map2 OutputArray

InitWideAngleProjMap(InputArray, InputArray, Size, int, MatType, OutputArray, OutputArray, ProjectionType, double)

initializes maps for cv::remap() for wide-angle

public static float InitWideAngleProjMap(InputArray cameraMatrix, InputArray distCoeffs, Size imageSize, int destImageWidth, MatType m1Type, OutputArray map1, OutputArray map2, ProjectionType projType, double alpha = 0)

Parameters

cameraMatrix InputArray
distCoeffs InputArray
imageSize Size
destImageWidth int
m1Type MatType
map1 OutputArray
map2 OutputArray
projType ProjectionType
alpha double

Returns

float

Inpaint(InputArray, InputArray, OutputArray, double, InpaintMethod)

Restores the selected region in an image using the region neighborhood.

public static void Inpaint(InputArray src, InputArray inpaintMask, OutputArray dst, double inpaintRadius, InpaintMethod flags)

Parameters

src InputArray

Input 8-bit, 16-bit unsigned or 32-bit float 1-channel or 8-bit 3-channel image.

inpaintMask InputArray

Inpainting mask, 8-bit 1-channel image. Non-zero pixels indicate the area that needs to be inpainted.

dst OutputArray

Output image with the same size and type as src.

inpaintRadius double

Radius of a circular neighborhood of each point inpainted that is considered by the algorithm.

flags InpaintMethod

Inpainting method that could be cv::INPAINT_NS or cv::INPAINT_TELEA
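
A sketch removing defects marked by a mask (file names are illustrative; non-zero mask pixels mark the region to be filled):

using OpenCvSharp;

using var src = Cv2.ImRead("photo.jpg", ImreadModes.Color);
using var mask = Cv2.ImRead("defects_mask.png", ImreadModes.Grayscale);
using var dst = new Mat();
// Fill the masked region from its neighborhood (radius 3) using the Telea method.
Cv2.Inpaint(src, mask, dst, 3, InpaintMethod.Telea);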

InsertChannel(InputArray, InputOutputArray, int)

inserts a single channel to dst (coi is 0-based index)

public static void InsertChannel(InputArray src, InputOutputArray dst, int coi)

Parameters

src InputArray
dst InputOutputArray
coi int

Integral(InputArray, OutputArray, OutputArray, OutputArray, MatType?, MatType?)

Calculates the integral of an image. The function calculates one or more integral images for the source image.

public static void Integral(InputArray src, OutputArray sum, OutputArray sqsum, OutputArray tilted, MatType? sdepth = null, MatType? sqdepth = null)

Parameters

src InputArray

input image as W×H, 8-bit or floating-point (32f or 64f).

sum OutputArray

integral image as (W+1)×(H+1) , 32-bit integer or floating-point (32f or 64f).

sqsum OutputArray

integral image for squared pixel values; it is (W+1)×(H+1), double-precision floating-point (64f) array.

tilted OutputArray

integral for the image rotated by 45 degrees; it is (W+1)×(H+1) array with the same data type as sum.

sdepth MatType?

desired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F.

sqdepth MatType?

desired depth of the integral image of squared pixel values, CV_32F or CV_64F.

Integral(InputArray, OutputArray, OutputArray, MatType?)

Calculates the integral of an image. The function calculates one or more integral images for the source image.

public static void Integral(InputArray src, OutputArray sum, OutputArray sqsum, MatType? sdepth = null)

Parameters

src InputArray
sum OutputArray
sqsum OutputArray
sdepth MatType?

Integral(InputArray, OutputArray, MatType?)

Calculates the integral of an image. The function calculates one or more integral images for the source image.

public static void Integral(InputArray src, OutputArray sum, MatType? sdepth = null)

Parameters

src InputArray
sum OutputArray
sdepth MatType?
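
A sketch showing the main use of the sum image: the sum over any rectangle is obtained with four lookups (the rectangle coordinates are illustrative):

using OpenCvSharp;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var sum = new Mat();
Cv2.Integral(gray, sum, MatType.CV_32S);
// Sum over rect (x, y, w, h) = S(y+h, x+w) - S(y, x+w) - S(y+h, x) + S(y, x)
int x = 10, y = 20, w = 50, h = 40;
int rectSum = sum.At<int>(y + h, x + w) - sum.At<int>(y, x + w)
            - sum.At<int>(y + h, x) + sum.At<int>(y, x);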

IntersectConvexConvex(InputArray, InputArray, OutputArray, bool)

finds intersection of two convex polygons

public static float IntersectConvexConvex(InputArray p1, InputArray p2, OutputArray p12, bool handleNested = true)

Parameters

p1 InputArray
p2 InputArray
p12 OutputArray
handleNested bool

Returns

float

IntersectConvexConvex(IEnumerable<Point2f>, IEnumerable<Point2f>, out Point2f[], bool)

finds intersection of two convex polygons

public static float IntersectConvexConvex(IEnumerable<Point2f> p1, IEnumerable<Point2f> p2, out Point2f[] p12, bool handleNested = true)

Parameters

p1 IEnumerable<Point2f>
p2 IEnumerable<Point2f>
p12 Point2f[]
handleNested bool

Returns

float

IntersectConvexConvex(IEnumerable<Point>, IEnumerable<Point>, out Point[], bool)

finds intersection of two convex polygons

public static float IntersectConvexConvex(IEnumerable<Point> p1, IEnumerable<Point> p2, out Point[] p12, bool handleNested = true)

Parameters

p1 IEnumerable<Point>
p2 IEnumerable<Point>
p12 Point[]
handleNested bool

Returns

float

Invert(InputArray, OutputArray, DecompTypes)

computes inverse or pseudo-inverse matrix

public static double Invert(InputArray src, OutputArray dst, DecompTypes flags = DecompTypes.LU)

Parameters

src InputArray

The source floating-point MxN matrix

dst OutputArray

The destination matrix; will have NxM size and the same type as src

flags DecompTypes

The inversion method

Returns

double

InvertAffineTransform(InputArray, OutputArray)

Inverts an affine transformation.

public static void InvertAffineTransform(InputArray m, OutputArray im)

Parameters

m InputArray

Original affine transformation.

im OutputArray

Output reverse affine transformation.

IsContourConvex(InputArray)

returns true if the contour is convex. Does not support contours with self-intersection

public static bool IsContourConvex(InputArray contour)

Parameters

contour InputArray

Input vector of 2D points

Returns

bool

IsContourConvex(IEnumerable<Point2f>)

returns true if the contour is convex. Does not support contours with self-intersection

public static bool IsContourConvex(IEnumerable<Point2f> contour)

Parameters

contour IEnumerable<Point2f>

Input vector of 2D points

Returns

bool

IsContourConvex(IEnumerable<Point>)

returns true if the contour is convex. Does not support contours with self-intersection

public static bool IsContourConvex(IEnumerable<Point> contour)

Parameters

contour IEnumerable<Point>

Input vector of 2D points

Returns

bool

Kmeans(InputArray, int, InputOutputArray, TermCriteria, int, KMeansFlags, OutputArray?)

Finds centers of clusters and groups input samples around the clusters.

public static double Kmeans(InputArray data, int k, InputOutputArray bestLabels, TermCriteria criteria, int attempts, KMeansFlags flags, OutputArray? centers = null)

Parameters

data InputArray

Data for clustering. An array of N-Dimensional points with float coordinates is needed.

k int

Number of clusters to split the set by.

bestLabels InputOutputArray

Input/output integer array that stores the cluster indices for every sample.

criteria TermCriteria

The algorithm termination criteria, that is, the maximum number of iterations and/or the desired accuracy. The accuracy is specified as criteria.epsilon. As soon as each of the cluster centers moves by less than criteria.epsilon on some iteration, the algorithm stops.

attempts int

Flag to specify the number of times the algorithm is executed using different initial labellings. The algorithm returns the labels that yield the best compactness (see the last function parameter).

flags KMeansFlags

Flag that can take values of cv::KmeansFlags

centers OutputArray

Output matrix of the cluster centers, one row per each cluster center.

Returns

double

The function returns the compactness measure, computed as sum_i || samples_i - centers_{labels_i} ||^2 after every attempt. The best (minimum) value is chosen and the corresponding labels and the compactness value are returned by the function. Basically, you can use only the core of the function: set the number of attempts to 1, initialize labels each time using a custom algorithm, pass them with the (flags = KMEANS_USE_INITIAL_LABELS) flag, and then choose the best (most-compact) clustering.
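
A sketch clustering random 2-D float samples into three groups (one row per sample; the sample count and value range are illustrative):

using OpenCvSharp;

using var data = new Mat(100, 2, MatType.CV_32FC1);   // 100 samples, 2 features each
Cv2.Randu(data, new Scalar(0), new Scalar(255));
using var labels = new Mat();
using var centers = new Mat();
var criteria = new TermCriteria(CriteriaTypes.Eps | CriteriaTypes.MaxIter, 10, 1.0);
// 3 clusters, 3 attempts, k-means++ initialization.
double compactness = Cv2.Kmeans(data, 3, labels, criteria, 3, KMeansFlags.PpCenters, centers);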

LUT(InputArray, InputArray, OutputArray)

transforms array of numbers using a lookup table: dst(i)=lut(src(i))

public static void LUT(InputArray src, InputArray lut, OutputArray dst)

Parameters

src InputArray

Source array of 8-bit elements

lut InputArray

Look-up table of 256 elements. In the case of multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array

dst OutputArray

Destination array; will have the same size and the same number of channels as src, and the same depth as lut

LUT(InputArray, byte[], OutputArray)

transforms array of numbers using a lookup table: dst(i)=lut(src(i))

public static void LUT(InputArray src, byte[] lut, OutputArray dst)

Parameters

src InputArray

Source array of 8-bit elements

lut byte[]

Look-up table of 256 elements. In the case of multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array

dst OutputArray

Destination array; will have the same size and the same number of channels as src, and the same depth as lut
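
A sketch applying gamma correction through a 256-entry table (the gamma value 2.2 is illustrative):

using System;
using OpenCvSharp;

using var src = Cv2.ImRead("input.jpg", ImreadModes.Color);
using var dst = new Mat();
// Build a table implementing out = (in/255)^(1/2.2) * 255.
var lut = new byte[256];
for (int i = 0; i < 256; i++)
    lut[i] = (byte)Math.Round(Math.Pow(i / 255.0, 1.0 / 2.2) * 255.0);
Cv2.LUT(src, lut, dst);   // the same table is applied to every channel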

Laplacian(InputArray, OutputArray, MatType, int, double, double, BorderTypes)

Calculates the Laplacian of an image

public static void Laplacian(InputArray src, OutputArray dst, MatType ddepth, int ksize = 1, double scale = 1, double delta = 0, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

Source image

dst OutputArray

Destination image; will have the same size and the same number of channels as src

ddepth MatType

The desired depth of the destination image

ksize int

The aperture size used to compute the second-derivative filters

scale double

The optional scale factor for the computed Laplacian values (by default, no scaling is applied

delta double

The optional delta value, added to the results prior to storing them in dst

borderType BorderTypes

The pixel extrapolation method
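
A sketch of edge detection with a signed intermediate depth so that negative second-derivative responses are not clipped (ksize 3 is illustrative):

using OpenCvSharp;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var lap = new Mat();
using var edges = new Mat();
Cv2.Laplacian(gray, lap, MatType.CV_16S, ksize: 3);
Cv2.ConvertScaleAbs(lap, edges);   // back to 8-bit for display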

Line(InputOutputArray, Point, Point, Scalar, int, LineTypes, int)

Draws a line segment connecting two points

public static void Line(InputOutputArray img, Point pt1, Point pt2, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray

The image.

pt1 Point

First point of the line segment.

pt2 Point

Second point of the line segment.

color Scalar

Line color.

thickness int

Line thickness. [By default this is 1]

lineType LineTypes

Type of the line. [By default this is LineType.Link8]

shift int

Number of fractional bits in the point coordinates. [By default this is 0]

Line(InputOutputArray, int, int, int, int, Scalar, int, LineTypes, int)

Draws a line segment connecting two points

public static void Line(InputOutputArray img, int pt1X, int pt1Y, int pt2X, int pt2Y, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray

The image.

pt1X int

First point's x-coordinate of the line segment.

pt1Y int

First point's y-coordinate of the line segment.

pt2X int

Second point's x-coordinate of the line segment.

pt2Y int

Second point's y-coordinate of the line segment.

color Scalar

Line color.

thickness int

Line thickness. [By default this is 1]

lineType LineTypes

Type of the line. [By default this is LineType.Link8]

shift int

Number of fractional bits in the point coordinates. [By default this is 0]

LinearPolar(InputArray, OutputArray, Point2f, double, InterpolationFlags)

Remaps an image to polar space.

public static void LinearPolar(InputArray src, OutputArray dst, Point2f center, double maxRadius, InterpolationFlags flags)

Parameters

src InputArray

Source image

dst OutputArray

Destination image

center Point2f

The transformation center

maxRadius double

Inverse magnitude scale parameter

flags InterpolationFlags

A combination of interpolation methods, see cv::InterpolationFlags

Log(InputArray, OutputArray)

computes natural logarithm of absolute value of each matrix element: dst = log(abs(src))

public static void Log(InputArray src, OutputArray dst)

Parameters

src InputArray

The source array

dst OutputArray

The destination array; will have the same size and same type as src

LogPolar(InputArray, OutputArray, Point2f, double, InterpolationFlags)

Remaps an image to log-polar space.

public static void LogPolar(InputArray src, OutputArray dst, Point2f center, double m, InterpolationFlags flags)

Parameters

src InputArray

Source image

dst OutputArray

Destination image

center Point2f

The transformation center; where the output precision is maximal

m double

Magnitude scale parameter.

flags InterpolationFlags

A combination of interpolation methods, see cv::InterpolationFlags

Magnitude(InputArray, InputArray, OutputArray)

Calculates the magnitude of 2D vectors.

public static void Magnitude(InputArray x, InputArray y, OutputArray magnitude)

Parameters

x InputArray

floating-point array of x-coordinates of the vectors.

y InputArray

floating-point array of y-coordinates of the vectors; it must have the same size as x.

magnitude OutputArray

output array of the same size and type as x.

Mahalanobis(InputArray, InputArray, InputArray)

Calculates the Mahalanobis distance between two vectors.

public static double Mahalanobis(InputArray v1, InputArray v2, InputArray icovar)

Parameters

v1 InputArray

first 1D input vector.

v2 InputArray

second 1D input vector.

icovar InputArray

inverse covariance matrix.

Returns

double

MatMulDeriv(InputArray, InputArray, OutputArray, OutputArray)

computes derivatives of the matrix product w.r.t each of the multiplied matrix coefficients

public static void MatMulDeriv(InputArray a, InputArray b, OutputArray dABdA, OutputArray dABdB)

Parameters

a InputArray

First multiplied matrix.

b InputArray

Second multiplied matrix.

dABdA OutputArray

First output derivative matrix d(AB)/dA of size A.rows*B.cols x A.rows*A.cols.

dABdB OutputArray

Second output derivative matrix d(AB)/dB of size A.rows*B.cols x B.rows*B.cols.

MatchShapes(InputArray, InputArray, ShapeMatchModes, double)

Compares two shapes.

public static double MatchShapes(InputArray contour1, InputArray contour2, ShapeMatchModes method, double parameter = 0)

Parameters

contour1 InputArray

First contour or grayscale image.

contour2 InputArray

Second contour or grayscale image.

method ShapeMatchModes

Comparison method

parameter double

Method-specific parameter (not supported now)

Returns

double

MatchShapes(IEnumerable<Point>, IEnumerable<Point>, ShapeMatchModes, double)

Compares two shapes.

public static double MatchShapes(IEnumerable<Point> contour1, IEnumerable<Point> contour2, ShapeMatchModes method, double parameter = 0)

Parameters

contour1 IEnumerable<Point>

First contour or grayscale image.

contour2 IEnumerable<Point>

Second contour or grayscale image.

method ShapeMatchModes

Comparison method

parameter double

Method-specific parameter (not supported now)

Returns

double

MatchTemplate(InputArray, InputArray, OutputArray, TemplateMatchModes, InputArray?)

Computes the proximity map for the raster template and the image where the template is searched for

public static void MatchTemplate(InputArray image, InputArray templ, OutputArray result, TemplateMatchModes method, InputArray? mask = null)

Parameters

image InputArray

Image where the search is running; should be 8-bit or 32-bit floating-point

templ InputArray

Searched template; it must be no larger than the source image and have the same data type

result OutputArray

A map of comparison results; will be single-channel 32-bit floating-point. If image is WxH and templ is wxh then result will be (W-w+1) x (H-h+1).

method TemplateMatchModes

Specifies the comparison method

mask InputArray

Mask of the searched template. It must have the same data type and size as templ. It is not set by default.
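
A sketch locating the best match and marking it (for TemplateMatchModes.CCoeffNormed the best position is the global maximum of the result map; file names are illustrative):

using OpenCvSharp;

using var image = Cv2.ImRead("scene.png", ImreadModes.Grayscale);
using var templ = Cv2.ImRead("template.png", ImreadModes.Grayscale);
using var result = new Mat();
Cv2.MatchTemplate(image, templ, result, TemplateMatchModes.CCoeffNormed);
Cv2.MinMaxLoc(result, out _, out double maxVal, out _, out Point maxLoc);
// The (W-w+1) x (H-h+1) result map places the top-left corner of the match at maxLoc.
Cv2.Rectangle(image, new Rect(maxLoc, templ.Size()), new Scalar(255), 2);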

Max(InputArray, InputArray, OutputArray)

computes per-element maximum of two arrays (dst = max(src1, src2))

public static void Max(InputArray src1, InputArray src2, OutputArray dst)

Parameters

src1 InputArray
src2 InputArray
dst OutputArray

Max(Mat, Mat, Mat)

computes per-element maximum of two arrays (dst = max(src1, src2))

public static void Max(Mat src1, Mat src2, Mat dst)

Parameters

src1 Mat
src2 Mat
dst Mat

Max(Mat, double, Mat)

computes per-element maximum of array and scalar (dst = max(src1, src2))

public static void Max(Mat src1, double src2, Mat dst)

Parameters

src1 Mat
src2 double
dst Mat

Mean(InputArray, InputArray?)

computes mean value of selected array elements

public static Scalar Mean(InputArray src, InputArray? mask = null)

Parameters

src InputArray

The source array; it should have 1 to 4 channels (so that the result can be stored in Scalar)

mask InputArray

The optional operation mask

Returns

Scalar

MeanShift(InputArray, ref Rect, TermCriteria)

Finds an object on a back projection image.

public static int MeanShift(InputArray probImage, ref Rect window, TermCriteria criteria)

Parameters

probImage InputArray

Back projection of the object histogram.

window Rect

Initial search window.

criteria TermCriteria

Stop criteria for the iterative search algorithm.

Returns

int

Number of iterations the mean-shift procedure took to converge.

MeanStdDev(InputArray, OutputArray, OutputArray, InputArray?)

computes mean value and standard deviation of all or selected array elements

public static void MeanStdDev(InputArray src, OutputArray mean, OutputArray stddev, InputArray? mask = null)

Parameters

src InputArray

The source array; it should have 1 to 4 channels (so that the results can be stored in Scalar's)

mean OutputArray

The output parameter: computed mean value

stddev OutputArray

The output parameter: computed standard deviation

mask InputArray

The optional operation mask

MeanStdDev(InputArray, out Scalar, out Scalar, InputArray?)

computes mean value and standard deviation of all or selected array elements

public static void MeanStdDev(InputArray src, out Scalar mean, out Scalar stddev, InputArray? mask = null)

Parameters

src InputArray

The source array; it should have 1 to 4 channels (so that the results can be stored in Scalar's)

mean Scalar

The output parameter: computed mean value

stddev Scalar

The output parameter: computed standard deviation

mask InputArray

The optional operation mask
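
A sketch using the Scalar overload, e.g. as a quick exposure/contrast check on a grayscale image:

using OpenCvSharp;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
Cv2.MeanStdDev(gray, out Scalar mean, out Scalar stddev);
// For a single-channel image only element 0 of each Scalar is meaningful.
System.Console.WriteLine($"mean={mean.Val0:F1}, stddev={stddev.Val0:F1}");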

MedianBlur(InputArray, OutputArray, int)

Smoothes image using median filter

public static void MedianBlur(InputArray src, OutputArray dst, int ksize)

Parameters

src InputArray

The source 1-, 3- or 4-channel image. When ksize is 3 or 5, the image depth should be CV_8U , CV_16U or CV_32F. For larger aperture sizes it can only be CV_8U

dst OutputArray

The destination array; will have the same size and the same type as src

ksize int

The aperture linear size. It must be odd and greater than 1, e.g. 3, 5, 7, ...

Merge(Mat[], Mat)

makes multi-channel array out of several single-channel arrays

public static void Merge(Mat[] mv, Mat dst)

Parameters

mv Mat[]
dst Mat

Min(InputArray, InputArray, OutputArray)

computes per-element minimum of two arrays (dst = min(src1, src2))

public static void Min(InputArray src1, InputArray src2, OutputArray dst)

Parameters

src1 InputArray
src2 InputArray
dst OutputArray

Min(Mat, Mat, Mat)

computes per-element minimum of two arrays (dst = min(src1, src2))

public static void Min(Mat src1, Mat src2, Mat dst)

Parameters

src1 Mat
src2 Mat
dst Mat

Min(Mat, double, Mat)

computes per-element minimum of array and scalar (dst = min(src1, src2))

public static void Min(Mat src1, double src2, Mat dst)

Parameters

src1 Mat
src2 double
dst Mat

MinAreaRect(InputArray)

Finds the minimum area rotated rectangle enclosing a 2D point set.

public static RotatedRect MinAreaRect(InputArray points)

Parameters

points InputArray

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

Returns

RotatedRect

MinAreaRect(IEnumerable<Point2f>)

Finds the minimum area rotated rectangle enclosing a 2D point set.

public static RotatedRect MinAreaRect(IEnumerable<Point2f> points)

Parameters

points IEnumerable<Point2f>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

Returns

RotatedRect

MinAreaRect(IEnumerable<Point>)

Finds the minimum area rotated rectangle enclosing a 2D point set.

public static RotatedRect MinAreaRect(IEnumerable<Point> points)

Parameters

points IEnumerable<Point>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

Returns

RotatedRect

MinEnclosingCircle(InputArray, out Point2f, out float)

Finds the minimum area circle enclosing a 2D point set.

public static void MinEnclosingCircle(InputArray points, out Point2f center, out float radius)

Parameters

points InputArray

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

center Point2f

The output center of the circle

radius float

The output radius of the circle

MinEnclosingCircle(IEnumerable<Point2f>, out Point2f, out float)

Finds the minimum area circle enclosing a 2D point set.

public static void MinEnclosingCircle(IEnumerable<Point2f> points, out Point2f center, out float radius)

Parameters

points IEnumerable<Point2f>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

center Point2f

The output center of the circle

radius float

The output radius of the circle

MinEnclosingCircle(IEnumerable<Point>, out Point2f, out float)

Finds the minimum area circle enclosing a 2D point set.

public static void MinEnclosingCircle(IEnumerable<Point> points, out Point2f center, out float radius)

Parameters

points IEnumerable<Point>

The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.

center Point2f

The output center of the circle

radius float

The output radius of the circle

MinEnclosingTriangle(InputArray, OutputArray)

Finds a triangle of minimum area enclosing a 2D point set and returns its area.

public static double MinEnclosingTriangle(InputArray points, OutputArray triangle)

Parameters

points InputArray

Input vector of 2D points with depth CV_32S or CV_32F, stored in std::vector or Mat

triangle OutputArray

Output vector of three 2D points defining the vertices of the triangle. The depth of the OutputArray must be CV_32F.

Returns

double

Triangle area

MinEnclosingTriangle(IEnumerable<Point2f>, out Point2f[])

Finds a triangle of minimum area enclosing a 2D point set and returns its area.

public static double MinEnclosingTriangle(IEnumerable<Point2f> points, out Point2f[] triangle)

Parameters

points IEnumerable<Point2f>

Input vector of 2D points with depth CV_32S or CV_32F, stored in std::vector or Mat

triangle Point2f[]

Output vector of three 2D points (Point2f) defining the vertices of the triangle.

Returns

double

Triangle area

MinEnclosingTriangle(IEnumerable<Point>, out Point2f[])

Finds a triangle of minimum area enclosing a 2D point set and returns its area.

public static double MinEnclosingTriangle(IEnumerable<Point> points, out Point2f[] triangle)

Parameters

points IEnumerable<Point>

Input vector of 2D points with depth CV_32S or CV_32F, stored in std::vector or Mat

triangle Point2f[]

Output vector of three 2D points (Point2f) defining the vertices of the triangle.

Returns

double

Triangle area

MinMaxIdx(InputArray, out double, out double)

finds global minimum and maximum array elements and returns their values and their locations

public static void MinMaxIdx(InputArray src, out double minVal, out double maxVal)

Parameters

src InputArray

The source single-channel array

minVal double

Pointer to returned minimum value

maxVal double

Pointer to returned maximum value

MinMaxIdx(InputArray, out double, out double, int[], int[], InputArray?)

finds global minimum and maximum array elements and returns their values and their locations

public static void MinMaxIdx(InputArray src, out double minVal, out double maxVal, int[] minIdx, int[] maxIdx, InputArray? mask = null)

Parameters

src InputArray

The source single-channel array

minVal double

Pointer to returned minimum value

maxVal double

Pointer to returned maximum value

minIdx int[]
maxIdx int[]
mask InputArray

MinMaxIdx(InputArray, int[], int[])

finds global minimum and maximum array elements and returns their values and their locations

public static void MinMaxIdx(InputArray src, int[] minIdx, int[] maxIdx)

Parameters

src InputArray

The source single-channel array

minIdx int[]
maxIdx int[]

MinMaxLoc(InputArray, out Point, out Point)

finds global minimum and maximum array elements and returns their values and their locations

public static void MinMaxLoc(InputArray src, out Point minLoc, out Point maxLoc)

Parameters

src InputArray

The source single-channel array

minLoc Point

Pointer to returned minimum location

maxLoc Point

Pointer to returned maximum location

MinMaxLoc(InputArray, out double, out double)

finds global minimum and maximum array elements and returns their values and their locations

public static void MinMaxLoc(InputArray src, out double minVal, out double maxVal)

Parameters

src InputArray

The source single-channel array

minVal double

Pointer to returned minimum value

maxVal double

Pointer to returned maximum value

MinMaxLoc(InputArray, out double, out double, out Point, out Point, InputArray?)

finds global minimum and maximum array elements and returns their values and their locations

public static void MinMaxLoc(InputArray src, out double minVal, out double maxVal, out Point minLoc, out Point maxLoc, InputArray? mask = null)

Parameters

src InputArray

The source single-channel array

minVal double

Pointer to returned minimum value

maxVal double

Pointer to returned maximum value

minLoc Point

Pointer to returned minimum location

maxLoc Point

Pointer to returned maximum location

mask InputArray

The optional mask used to select a sub-array

MixChannels(Mat[], Mat[], int[])

copies selected channels from the input arrays to the selected channels of the output arrays

public static void MixChannels(Mat[] src, Mat[] dst, int[] fromTo)

Parameters

src Mat[]
dst Mat[]
fromTo int[]

Moments(InputArray, bool)

Calculates all of the moments up to the third order of a polygon or rasterized shape.

public static Moments Moments(InputArray array, bool binaryImage = false)

Parameters

array InputArray

A raster image (single-channel, 8-bit or floating-point 2D array) or an array ( 1xN or Nx1 ) of 2D points ( Point or Point2f )

binaryImage bool

If it is true, then all the non-zero image pixels are treated as 1’s

Returns

Moments

Moments(byte[,], bool)

Calculates all of the moments up to the third order of a polygon or rasterized shape.

public static Moments Moments(byte[,] array, bool binaryImage = false)

Parameters

array byte[,]

A raster image (8-bit) 2D array

binaryImage bool

If it is true, then all the non-zero image pixels are treated as 1’s

Returns

Moments

Moments(IEnumerable<Point2f>, bool)

Calculates all of the moments up to the third order of a polygon or rasterized shape.

public static Moments Moments(IEnumerable<Point2f> array, bool binaryImage = false)

Parameters

array IEnumerable<Point2f>

Array of 2D points

binaryImage bool

If it is true, then all the non-zero image pixels are treated as 1’s

Returns

Moments

Moments(IEnumerable<Point>, bool)

Calculates all of the moments up to the third order of a polygon or rasterized shape.

public static Moments Moments(IEnumerable<Point> array, bool binaryImage = false)

Parameters

array IEnumerable<Point>

Array of 2D points

binaryImage bool

If it is true, then all the non-zero image pixels are treated as 1’s

Returns

Moments

Moments(float[,], bool)

Calculates all of the moments up to the third order of a polygon or rasterized shape.

public static Moments Moments(float[,] array, bool binaryImage = false)

Parameters

array float[,]

A raster image (floating-point) 2D array

binaryImage bool

If it is true, then all the non-zero image pixels are treated as 1’s

Returns

Moments
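
A sketch computing the centroid of a binary blob from the spatial moments (M00 is the area; the property names follow the Moments type of this wrapper):

using OpenCvSharp;

using var binary = Cv2.ImRead("blob.png", ImreadModes.Grayscale);
Moments m = Cv2.Moments(binary, binaryImage: true);
if (m.M00 > 0)   // guard against an empty image
{
    double cx = m.M10 / m.M00;   // centroid x
    double cy = m.M01 / m.M00;   // centroid y
}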

MorphologyDefaultBorderValue()

Default borderValue for Dilate/Erode

public static Scalar MorphologyDefaultBorderValue()

Returns

Scalar

MorphologyEx(InputArray, OutputArray, MorphTypes, InputArray?, Point?, int, BorderTypes, Scalar?)

Performs advanced morphological transformations

public static void MorphologyEx(InputArray src, OutputArray dst, MorphTypes op, InputArray? element, Point? anchor = null, int iterations = 1, BorderTypes borderType = BorderTypes.Constant, Scalar? borderValue = null)

Parameters

src InputArray

Source image

dst OutputArray

Destination image. It will have the same size and the same type as src

op MorphTypes

Type of morphological operation

element InputArray

Structuring element

anchor Point?

Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center

iterations int

Number of times erosion and dilation are applied. [By default this is 1]

borderType BorderTypes

The pixel extrapolation method. [By default this is BorderType.Constant]

borderValue Scalar?

The border value in case of a constant border. The default value has a special meaning. [By default this is Cv2.MorphologyDefaultBorderValue()]
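
A sketch of morphological opening to remove small speckles from a binary mask (kernel shape and size are illustrative):

using OpenCvSharp;

using var src = Cv2.ImRead("mask.png", ImreadModes.Grayscale);
using var dst = new Mat();
// 5x5 elliptical structuring element; Open = erosion followed by dilation.
using var kernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new Size(5, 5));
Cv2.MorphologyEx(src, dst, MorphTypes.Open, kernel);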

MoveWindow(string, int, int)

Moves window to the specified position

public static void MoveWindow(string winName, int x, int y)

Parameters

winName string

Window name

x int

The new x-coordinate of the window

y int

The new y-coordinate of the window

MulSpectrums(InputArray, InputArray, OutputArray, DftFlags, bool)

Performs the per-element multiplication of two Fourier spectrums.

public static void MulSpectrums(InputArray a, InputArray b, OutputArray c, DftFlags flags, bool conjB = false)

Parameters

a InputArray

first input array.

b InputArray

second input array of the same size and type as src1.

c OutputArray

output array of the same size and type as src1.

flags DftFlags

operation flags; currently, the only supported flag is cv::DFT_ROWS, which indicates that each row of src1 and src2 is an independent 1D Fourier spectrum. If you do not want to use this flag, then simply add a 0 as value.

conjB bool

optional flag that conjugates the second input array before the multiplication (true) or not (false).

MulTransposed(InputArray, OutputArray, bool, InputArray?, double, int)

multiplies matrix by its transposition from the left or from the right

public static void MulTransposed(InputArray src, OutputArray dst, bool aTa, InputArray? delta = null, double scale = 1, int dtype = -1)

Parameters

src InputArray

The source matrix

dst OutputArray

The destination square matrix

aTa bool

Specifies the multiplication ordering; see the description below

delta InputArray

The optional delta matrix, subtracted from src before the multiplication. When the matrix is empty ( delta=Mat() ), it’s assumed to be zero, i.e. nothing is subtracted, otherwise if it has the same size as src, then it’s simply subtracted, otherwise it is "repeated" to cover the full src and then subtracted. Type of the delta matrix, when it's not empty, must be the same as the type of created destination matrix, see the rtype description

scale double

The optional scale factor for the matrix product

dtype int

When it’s negative, the destination matrix will have the same type as src . Otherwise, it will have type=CV_MAT_DEPTH(rtype), which should be either CV_32F or CV_64F

Multiply(InputArray, InputArray, OutputArray, double, int)

Calculates the per-element scaled product of two arrays

public static void Multiply(InputArray src1, InputArray src2, OutputArray dst, double scale = 1, int dtype = -1)

Parameters

src1 InputArray

The first source array

src2 InputArray

The second source array of the same size and the same type as src1

dst OutputArray

The destination array; will have the same size and the same type as src1

scale double

The optional scale factor. [By default this is 1]

dtype int

NamedWindow(string, WindowFlags)

Creates a window.

public static void NamedWindow(string winName, WindowFlags flags = WindowFlags.GuiExpanded)

Parameters

winName string

Name of the window in the window caption that may be used as a window identifier.

flags WindowFlags

Flags of the window. Currently the only supported flag is WindowFlags.AutoSize (CV_WINDOW_AUTOSIZE). If this is set, the window size is automatically adjusted to fit the displayed image (see ImShow), and the user cannot change the window size manually.

Norm(InputArray, InputArray, NormTypes, InputArray?)

computes norm of selected part of the difference between two arrays

public static double Norm(InputArray src1, InputArray src2, NormTypes normType = NormTypes.L2, InputArray? mask = null)

Parameters

src1 InputArray

The first source array

src2 InputArray

The second source array of the same size and the same type as src1

normType NormTypes

Type of the norm

mask InputArray

The optional operation mask

Returns

double

Norm(InputArray, NormTypes, InputArray?)

Calculates absolute array norm, absolute difference norm, or relative difference norm.

public static double Norm(InputArray src1, NormTypes normType = NormTypes.L2, InputArray? mask = null)

Parameters

src1 InputArray

The first source array

normType NormTypes

Type of the norm

mask InputArray

The optional operation mask

Returns

double

Normalize(InputArray, InputOutputArray, double, double, NormTypes, int, InputArray?)

scales and shifts array elements so that either the specified norm (alpha) or the minimum (alpha) and maximum (beta) array values get the specified values

public static void Normalize(InputArray src, InputOutputArray dst, double alpha = 1, double beta = 0, NormTypes normType = NormTypes.L2, int dtype = -1, InputArray? mask = null)

Parameters

src InputArray

The source array

dst InputOutputArray

The destination array; will have the same size as src

alpha double

The norm value to normalize to or the lower range boundary in the case of range normalization

beta double

The upper range boundary in the case of range normalization; not used for norm normalization

normType NormTypes

The normalization type

dtype int

When the parameter is negative, the destination array will have the same type as src, otherwise it will have the same number of channels as src and the depth =CV_MAT_DEPTH(rtype)

mask InputArray

The optional operation mask
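
A sketch of min-max normalization, e.g. stretching a float response map to the displayable 0-255 range (the input values are illustrative):

using OpenCvSharp;

using var response = new Mat(100, 100, MatType.CV_32FC1);
Cv2.Randu(response, new Scalar(-5), new Scalar(5));
using var display = new Mat();
// Map the minimum to alpha (0) and the maximum to beta (255), converting to 8-bit.
Cv2.Normalize(response, display, 0, 255, NormTypes.MinMax, MatType.CV_8U);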

PCABackProject(InputArray, InputArray, InputArray, OutputArray)

Reconstructs vectors from their PC projections.

public static void PCABackProject(InputArray data, InputArray mean, InputArray eigenvectors, OutputArray result)

Parameters

data InputArray

input samples stored as the matrix rows or as the matrix columns.

mean InputArray

optional mean value; if the matrix is empty (noArray()), the mean is computed from the data.

eigenvectors InputArray

eigenvectors of the covariation matrix

result OutputArray

output vectors

PCACompute(InputArray, InputOutputArray, OutputArray, OutputArray, int)

PCA of the supplied dataset.

public static void PCACompute(InputArray data, InputOutputArray mean, OutputArray eigenvectors, OutputArray eigenvalues, int maxComponents = 0)

Parameters

data InputArray

input samples stored as the matrix rows or as the matrix columns.

mean InputOutputArray

optional mean value; if the matrix is empty (noArray()), the mean is computed from the data.

eigenvectors OutputArray

eigenvectors of the covariation matrix

eigenvalues OutputArray

eigenvalues of the covariation matrix

maxComponents int

maximum number of components that PCA should retain; by default, all the components are retained.

PCACompute(InputArray, InputOutputArray, OutputArray, int)

PCA of the supplied dataset.

public static void PCACompute(InputArray data, InputOutputArray mean, OutputArray eigenvectors, int maxComponents = 0)

Parameters

data InputArray

input samples stored as the matrix rows or as the matrix columns.

mean InputOutputArray

optional mean value; if the matrix is empty (noArray()), the mean is computed from the data.

eigenvectors OutputArray

eigenvectors of the covariation matrix

maxComponents int

maximum number of components that PCA should retain; by default, all the components are retained.

PCAComputeVar(InputArray, InputOutputArray, OutputArray, OutputArray, double)

PCA of the supplied dataset.

public static void PCAComputeVar(InputArray data, InputOutputArray mean, OutputArray eigenvectors, OutputArray eigenvalues, double retainedVariance)

Parameters

data InputArray

input samples stored as the matrix rows or as the matrix columns.

mean InputOutputArray

optional mean value; if the matrix is empty (noArray()), the mean is computed from the data.

eigenvectors OutputArray

eigenvectors of the covariation matrix

eigenvalues OutputArray

eigenvalues of the covariation matrix

retainedVariance double

Percentage of variance that PCA should retain. Using this parameter will let the PCA decide how many components to retain, but it will always keep at least 2.

PCAComputeVar(InputArray, InputOutputArray, OutputArray, double)

PCA of the supplied dataset.

public static void PCAComputeVar(InputArray data, InputOutputArray mean, OutputArray eigenvectors, double retainedVariance)

Parameters

data InputArray

input samples stored as the matrix rows or as the matrix columns.

mean InputOutputArray

optional mean value; if the matrix is empty (noArray()), the mean is computed from the data.

eigenvectors OutputArray

eigenvectors of the covariation matrix

retainedVariance double

Percentage of variance that PCA should retain. Using this parameter will let the PCA decide how many components to retain, but it will always keep at least 2.

PCAProject(InputArray, InputArray, InputArray, OutputArray)

Projects vector(s) to the principal component subspace.

public static void PCAProject(InputArray data, InputArray mean, InputArray eigenvectors, OutputArray result)

Parameters

data InputArray

input samples stored as the matrix rows or as the matrix columns.

mean InputArray

optional mean value; if the matrix is empty (noArray()), the mean is computed from the data.

eigenvectors InputArray

eigenvectors of the covariation matrix

result OutputArray

output vectors
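
A sketch chaining PCACompute, PCAProject and PCABackProject (samples are stored as rows; reducing 10 features to 2 components is illustrative):

using OpenCvSharp;

using var data = new Mat(200, 10, MatType.CV_32FC1);   // 200 samples, 10 features each
Cv2.Randu(data, new Scalar(0), new Scalar(1));
using var mean = new Mat();            // empty: the mean is computed from the data
using var eigenvectors = new Mat();
using var projected = new Mat();
using var reconstructed = new Mat();
Cv2.PCACompute(data, mean, eigenvectors, maxComponents: 2);
Cv2.PCAProject(data, mean, eigenvectors, projected);               // 200 x 2
Cv2.PCABackProject(projected, mean, eigenvectors, reconstructed);  // back to 200 x 10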

PSNR(InputArray, InputArray, double)

Computes the Peak Signal-to-Noise Ratio (PSNR) image quality metric.

This function calculates the Peak Signal-to-Noise Ratio (PSNR) image quality metric in decibels (dB) between two input arrays src1 and src2. The arrays must have the same type.

public static double PSNR(InputArray src1, InputArray src2, double r = 255)

Parameters

src1 InputArray

first input array.

src2 InputArray

second input array of the same size as src1.

r double

the maximum pixel value (255 by default)

Returns

double

Partition<T>(IEnumerable<T>, out int[], PartitionPredicate<T>)

Splits an element set into equivalency classes. Consider using GroupBy of Linq instead.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static int Partition<T>(IEnumerable<T> vec, out int[] labels, Cv2.PartitionPredicate<T> predicate)

Parameters

vec IEnumerable<T>

Set of elements stored as a vector.

labels int[]

Output vector of labels. It contains as many elements as vec. Each label labels[i] is a 0-based cluster index of vec[i] .

predicate Cv2.PartitionPredicate<T>

Equivalence predicate (a boolean function of two arguments). The predicate returns true when the elements are certainly in the same class, and returns false if they may or may not be in the same class.

Returns

int

Type Parameters

T
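
A sketch grouping integers whose difference is at most 10; the predicate defines the equivalence relation (the values and threshold are illustrative):

using System;
using OpenCvSharp;

var values = new[] { 1, 3, 5, 40, 42, 90 };
// Elements a and b go into the same class whenever the predicate returns true.
int classCount = Cv2.Partition<int>(values, out int[] labels, (a, b) => Math.Abs(a - b) <= 10);
// Here classCount == 3: {1, 3, 5}, {40, 42}, {90}; labels[i] is the class index of values[i].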

PatchNaNs(InputOutputArray, double)

converts NaN's to the given number

public static void PatchNaNs(InputOutputArray a, double val = 0)

Parameters

a InputOutputArray
val double

PencilSketch(InputArray, OutputArray, OutputArray, float, float, float)

Pencil-like non-photorealistic line drawing

public static void PencilSketch(InputArray src, OutputArray dst1, OutputArray dst2, float sigmaS = 60, float sigmaR = 0.07, float shadeFactor = 0.02)

Parameters

src InputArray

Input 8-bit 3-channel image.

dst1 OutputArray

Output 8-bit 1-channel image.

dst2 OutputArray

Output image with the same size and type as src.

sigmaS float

Range between 0 to 200.

sigmaR float

Range between 0 to 1.

shadeFactor float

Range between 0 to 0.1.

PerspectiveTransform(InputArray, OutputArray, InputArray)

performs perspective transformation of each element of multi-channel input matrix

public static void PerspectiveTransform(InputArray src, OutputArray dst, InputArray m)

Parameters

src InputArray

The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed

dst OutputArray

The destination array; it will have the same size and same type as src

m InputArray

3x3 or 4x4 transformation matrix

PerspectiveTransform(IEnumerable<Point2d>, Mat)

performs perspective transformation of each element of multi-channel input matrix

public static Point2d[] PerspectiveTransform(IEnumerable<Point2d> src, Mat m)

Parameters

src IEnumerable<Point2d>

The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed

m Mat

3x3 or 4x4 transformation matrix

Returns

Point2d[]

The destination array; it will have the same size and same type as src

PerspectiveTransform(IEnumerable<Point2f>, Mat)

performs perspective transformation of each element of multi-channel input matrix

public static Point2f[] PerspectiveTransform(IEnumerable<Point2f> src, Mat m)

Parameters

src IEnumerable<Point2f>

The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed

m Mat

3x3 or 4x4 transformation matrix

Returns

Point2f[]

The destination array; it will have the same size and same type as src

PerspectiveTransform(IEnumerable<Point3d>, Mat)

performs perspective transformation of each element of multi-channel input matrix

public static Point3d[] PerspectiveTransform(IEnumerable<Point3d> src, Mat m)

Parameters

src IEnumerable<Point3d>

The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed

m Mat

3x3 or 4x4 transformation matrix

Returns

Point3d[]

The destination array; it will have the same size and same type as src

PerspectiveTransform(IEnumerable<Point3f>, Mat)

performs perspective transformation of each element of multi-channel input matrix

public static Point3f[] PerspectiveTransform(IEnumerable<Point3f> src, Mat m)

Parameters

src IEnumerable<Point3f>

The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed

m Mat

3x3 or 4x4 transformation matrix

Returns

Point3f[]

The destination array; it will have the same size and same type as src

Phase(InputArray, InputArray, OutputArray, bool)

Calculates the rotation angle of 2D vectors.

public static void Phase(InputArray x, InputArray y, OutputArray angle, bool angleInDegrees = false)

Parameters

x InputArray

input floating-point array of x-coordinates of 2D vectors.

y InputArray

input array of y-coordinates of 2D vectors; it must have the same size and the same type as x.

angle OutputArray

output array of vector angles; it has the same size and same type as x.

angleInDegrees bool

when true, the function calculates the angle in degrees, otherwise, they are measured in radians.

PhaseCorrelate(InputArray, InputArray, InputArray, out double)

The function is used to detect translational shifts that occur between two images.

The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation.

Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.

public static Point2d PhaseCorrelate(InputArray src1, InputArray src2, InputArray window, out double response)

Parameters

src1 InputArray

Source floating point array (CV_32FC1 or CV_64FC1)

src2 InputArray

Source floating point array (CV_32FC1 or CV_64FC1)

window InputArray

Floating point array with windowing coefficients to reduce edge effects (optional).

response double

Signal power within the 5x5 centroid around the peak, between 0 and 1 (optional).

Returns

Point2d

Detected phase shift (sub-pixel) between the two arrays.
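
A sketch estimating the translation between two grayscale frames; the Hanning window (created here with Cv2.CreateHanningWindow, assumed to be available in this wrapper) reduces edge effects:

using OpenCvSharp;

using var prev8U = Cv2.ImRead("frame0.png", ImreadModes.Grayscale);
using var curr8U = Cv2.ImRead("frame1.png", ImreadModes.Grayscale);
using var prev = new Mat();
using var curr = new Mat();
prev8U.ConvertTo(prev, MatType.CV_32FC1);   // PhaseCorrelate expects CV_32FC1/CV_64FC1
curr8U.ConvertTo(curr, MatType.CV_32FC1);
using var window = new Mat();
Cv2.CreateHanningWindow(window, prev.Size(), MatType.CV_32FC1);
Point2d shift = Cv2.PhaseCorrelate(prev, curr, window, out double response);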

PointPolygonTest(InputArray, Point2f, bool)

Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary

public static double PointPolygonTest(InputArray contour, Point2f pt, bool measureDist)

Parameters

contour InputArray
pt Point2f
measureDist bool

Returns

double

PointPolygonTest(IEnumerable<Point2f>, Point2f, bool)

Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary.

public static double PointPolygonTest(IEnumerable<Point2f> contour, Point2f pt, bool measureDist)

Parameters

contour IEnumerable<Point2f>

Input contour.

pt Point2f

Point tested against the contour.

measureDist bool

If true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not.

Returns

double

Positive (inside), negative (outside), or zero (on an edge) value.

PointPolygonTest(IEnumerable<Point>, Point2f, bool)

Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary

public static double PointPolygonTest(IEnumerable<Point> contour, Point2f pt, bool measureDist)

Parameters

contour IEnumerable<Point>
pt Point2f
measureDist bool

Returns

double

PolarToCart(InputArray, InputArray, OutputArray, OutputArray, bool)

Calculates x and y coordinates of 2D vectors from their magnitude and angle.

public static void PolarToCart(InputArray magnitude, InputArray angle, OutputArray x, OutputArray y, bool angleInDegrees = false)

Parameters

magnitude InputArray

input floating-point array of magnitudes of 2D vectors; it can be an empty matrix (=Mat()), in which case the function assumes that all the magnitudes are 1; if it is not empty, it must have the same size and type as angle.

angle InputArray

input floating-point array of angles of 2D vectors.

x OutputArray

output array of x-coordinates of 2D vectors; it has the same size and type as angle.

y OutputArray

output array of y-coordinates of 2D vectors; it has the same size and type as angle.

angleInDegrees bool

when true, the input angles are measured in degrees, otherwise, they are measured in radians.

Polylines(InputOutputArray, InputArray, bool, Scalar, int, LineTypes, int)

draws one or more polygonal curves

public static void Polylines(InputOutputArray img, InputArray pts, bool isClosed, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray
pts InputArray
isClosed bool
color Scalar
thickness int
lineType LineTypes
shift int

Polylines(Mat, IEnumerable<IEnumerable<Point>>, bool, Scalar, int, LineTypes, int)

draws one or more polygonal curves

public static void Polylines(Mat img, IEnumerable<IEnumerable<Point>> pts, bool isClosed, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img Mat
pts IEnumerable<IEnumerable<Point>>
isClosed bool
color Scalar
thickness int
lineType LineTypes
shift int

Pow(InputArray, double, OutputArray)

raises the input matrix elements to the specified power (b = a**power)

public static void Pow(InputArray src, double power, OutputArray dst)

Parameters

src InputArray

The source array

power double

The exponent of power

dst OutputArray

The destination array; will have the same size and the same type as src

PreCornerDetect(InputArray, OutputArray, int, BorderTypes)

computes another complex cornerness criteria at each pixel

public static void PreCornerDetect(InputArray src, OutputArray dst, int ksize, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray
dst OutputArray
ksize int
borderType BorderTypes

ProjectPoints(InputArray, InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray?, double)

projects points from the model coordinate space to the image coordinates. Also computes derivatives of the image coordinates w.r.t the intrinsic and extrinsic camera parameters

public static void ProjectPoints(InputArray objectPoints, InputArray rvec, InputArray tvec, InputArray cameraMatrix, InputArray distCoeffs, OutputArray imagePoints, OutputArray? jacobian = null, double aspectRatio = 0)

Parameters

objectPoints InputArray

Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points in the view.

rvec InputArray

Rotation vector (3x1).

tvec InputArray

Translation vector (3x1).

cameraMatrix InputArray

Camera matrix (3x3)

distCoeffs InputArray

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

imagePoints OutputArray

Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel

jacobian OutputArray

Optional output 2Nx(10 + numDistCoeffs) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.

aspectRatio double

Optional “fixed aspect ratio” parameter. If the parameter is not 0, the function assumes that the aspect ratio (fx/fy) is fixed and correspondingly adjusts the jacobian matrix.

ProjectPoints(IEnumerable<Point3f>, double[], double[], double[,], double[], out Point2f[], out double[,], double)

projects points from the model coordinate space to the image coordinates. Also computes derivatives of the image coordinates w.r.t the intrinsic and extrinsic camera parameters

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void ProjectPoints(IEnumerable<Point3f> objectPoints, double[] rvec, double[] tvec, double[,] cameraMatrix, double[] distCoeffs, out Point2f[] imagePoints, out double[,] jacobian, double aspectRatio = 0)

Parameters

objectPoints IEnumerable<Point3f>

Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points in the view.

rvec double[]

Rotation vector (3x1).

tvec double[]

Translation vector (3x1).

cameraMatrix double[,]

Camera matrix (3x3)

distCoeffs double[]

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

imagePoints Point2f[]

Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel

jacobian double[,]

Optional output 2Nx(10 + numDistCoeffs) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.

aspectRatio double

Optional “fixed aspect ratio” parameter. If the parameter is not 0, the function assumes that the aspect ratio (fx/fy) is fixed and correspondingly adjusts the jacobian matrix.

PutText(InputOutputArray, string, Point, HersheyFonts, double, Scalar, int, LineTypes, bool)

renders text string in the image

public static void PutText(InputOutputArray img, string text, Point org, HersheyFonts fontFace, double fontScale, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, bool bottomLeftOrigin = false)

Parameters

img InputOutputArray

Image.

text string

Text string to be drawn.

org Point

Bottom-left corner of the text string in the image.

fontFace HersheyFonts

Font type, see #HersheyFonts.

fontScale double

Font scale factor that is multiplied by the font-specific base size.

color Scalar

Text color.

thickness int

Thickness of the lines used to draw a text.

lineType LineTypes

Line type. See #LineTypes

bottomLeftOrigin bool

When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.
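
A sketch drawing anti-aliased text (font, position and color are illustrative):

using OpenCvSharp;

using var img = new Mat(200, 400, MatType.CV_8UC3, Scalar.All(0));
Cv2.PutText(img, "OpenCvSharp", new Point(20, 100),
    HersheyFonts.HersheySimplex, 1.0, new Scalar(255, 255, 255),
    thickness: 2, lineType: LineTypes.AntiAlias);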

PyrDown(InputArray, OutputArray, Size?, BorderTypes)

Blurs an image and downsamples it.

public static void PyrDown(InputArray src, OutputArray dst, Size? dstSize = null, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

input image.

dst OutputArray

output image; it has the specified size and the same type as src.

dstSize Size?

size of the output image; by default, it is computed as Size((src.cols+1)/2, (src.rows+1)/2).

borderType BorderTypes

PyrMeanShiftFiltering(InputArray, OutputArray, double, double, int, TermCriteria?)

Performs initial step of meanshift segmentation of an image.

public static void PyrMeanShiftFiltering(InputArray src, OutputArray dst, double sp, double sr, int maxLevel = 1, TermCriteria? termcrit = null)

Parameters

src InputArray

The source 8-bit, 3-channel image.

dst OutputArray

The destination image of the same format and the same size as the source.

sp double

The spatial window radius.

sr double

The color window radius.

maxLevel int

Maximum level of the pyramid for the segmentation.

termcrit TermCriteria?

Termination criteria: when to stop meanshift iterations.

PyrUp(InputArray, OutputArray, Size?, BorderTypes)

Upsamples an image and then blurs it.

public static void PyrUp(InputArray src, OutputArray dst, Size? dstSize = null, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

input image.

dst OutputArray

output image. It has the specified size and the same type as src.

dstSize Size?

size of the output image; by default, it is computed as Size(src.cols*2, src.rows*2).

borderType BorderTypes

RQDecomp3x3(InputArray, OutputArray, OutputArray, OutputArray?, OutputArray?, OutputArray?)

Computes RQ decomposition of 3x3 matrix

public static Vec3d RQDecomp3x3(InputArray src, OutputArray mtxR, OutputArray mtxQ, OutputArray? qx = null, OutputArray? qy = null, OutputArray? qz = null)

Parameters

src InputArray

3x3 input matrix.

mtxR OutputArray

Output 3x3 upper-triangular matrix.

mtxQ OutputArray

Output 3x3 orthogonal matrix.

qx OutputArray

Optional output 3x3 rotation matrix around x-axis.

qy OutputArray

Optional output 3x3 rotation matrix around y-axis.

qz OutputArray

Optional output 3x3 rotation matrix around z-axis.

Returns

Vec3d

RQDecomp3x3(double[,], out double[,], out double[,])

Computes RQ decomposition of 3x3 matrix

public static Vec3d RQDecomp3x3(double[,] src, out double[,] mtxR, out double[,] mtxQ)

Parameters

src double[,]

3x3 input matrix.

mtxR double[,]

Output 3x3 upper-triangular matrix.

mtxQ double[,]

Output 3x3 orthogonal matrix.

Returns

Vec3d

RQDecomp3x3(double[,], out double[,], out double[,], out double[,], out double[,], out double[,])

Computes RQ decomposition of 3x3 matrix

public static Vec3d RQDecomp3x3(double[,] src, out double[,] mtxR, out double[,] mtxQ, out double[,] qx, out double[,] qy, out double[,] qz)

Parameters

src double[,]

3x3 input matrix.

mtxR double[,]

Output 3x3 upper-triangular matrix.

mtxQ double[,]

Output 3x3 orthogonal matrix.

qx double[,]

Optional output 3x3 rotation matrix around x-axis.

qy double[,]

Optional output 3x3 rotation matrix around y-axis.

qz double[,]

Optional output 3x3 rotation matrix around z-axis.

Returns

Vec3d

RandShuffle(InputOutputArray, double)

shuffles the input array elements

public static void RandShuffle(InputOutputArray dst, double iterFactor)

Parameters

dst InputOutputArray

The input/output numerical 1D array

iterFactor double

The scale factor that determines the number of random swap operations.

RandShuffle(InputOutputArray, double, ref RNG)

shuffles the input array elements

public static void RandShuffle(InputOutputArray dst, double iterFactor, ref RNG rng)

Parameters

dst InputOutputArray

The input/output numerical 1D array

iterFactor double

The scale factor that determines the number of random swap operations.

rng RNG

The optional random number generator used for shuffling. If it is null, the default RNG (TheRNG()) is used instead.

Randn(InputOutputArray, InputArray, InputArray)

fills array with normally-distributed random numbers with the specified mean and the standard deviation

public static void Randn(InputOutputArray dst, InputArray mean, InputArray stddev)

Parameters

dst InputOutputArray

The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels

mean InputArray

The mean value (expectation) of the generated random numbers

stddev InputArray

The standard deviation of the generated random numbers

Randn(InputOutputArray, Scalar, Scalar)

fills array with normally-distributed random numbers with the specified mean and the standard deviation

public static void Randn(InputOutputArray dst, Scalar mean, Scalar stddev)

Parameters

dst InputOutputArray

The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels

mean Scalar

The mean value (expectation) of the generated random numbers

stddev Scalar

The standard deviation of the generated random numbers

Randu(InputOutputArray, InputArray, InputArray)

fills array with uniformly-distributed random numbers from the range [low, high)

public static void Randu(InputOutputArray dst, InputArray low, InputArray high)

Parameters

dst InputOutputArray

The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels

low InputArray

The inclusive lower boundary of the generated random numbers

high InputArray

The exclusive upper boundary of the generated random numbers

Randu(InputOutputArray, Scalar, Scalar)

fills array with uniformly-distributed random numbers from the range [low, high)

public static void Randu(InputOutputArray dst, Scalar low, Scalar high)

Parameters

dst InputOutputArray

The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels

low Scalar

The inclusive lower boundary of the generated random numbers

high Scalar

The exclusive upper boundary of the generated random numbers
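
A minimal sketch of the Scalar overloads of Randu and Randn; note that the destination Mat must be pre-allocated:

// Uniform noise in [0, 256) over a pre-allocated single-channel matrix.
using var noise = new Mat(256, 256, MatType.CV_8UC1);
Cv2.Randu(noise, new Scalar(0), new Scalar(256));

// Gaussian noise with mean 128 and standard deviation 30 (values are saturated to 8 bits).
using var gauss = new Mat(256, 256, MatType.CV_8UC1);
Cv2.Randn(gauss, new Scalar(128), new Scalar(30));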

RecoverPose(InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, InputOutputArray?)

Recover relative camera rotation and translation from an estimated essential matrix and the corresponding points in two images, using cheirality check. Returns the number of inliers which pass the check.

public static int RecoverPose(InputArray E, InputArray points1, InputArray points2, InputArray cameraMatrix, OutputArray R, OutputArray t, InputOutputArray? mask = null)

Parameters

E InputArray

The input essential matrix.

points1 InputArray

Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).

points2 InputArray

Array of the second image points of the same size and format as points1.

cameraMatrix InputArray

Camera matrix K = [fx 0 cx; 0 fy cy; 0 0 1]. Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix.

R OutputArray

Recovered relative rotation.

t OutputArray

Recovered relative translation.

mask InputOutputArray

Input/output mask for inliers in points1 and points2. If it is not empty, it marks inliers in points1 and points2 for the given essential matrix E; only these inliers are used to recover pose. In the output mask, only the inliers which pass the cheirality check remain set. This function decomposes an essential matrix using decomposeEssentialMat and then verifies possible pose hypotheses by doing a cheirality check. The cheirality check basically means that the triangulated 3D points should have positive depth.

Returns

int
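
A rough sketch of a typical call sequence, assuming points1 and points2 are already-matched correspondences (for example Nx2 CV_64FC1 Mats) and K is the shared 3x3 camera matrix; Cv2.FindEssentialMat is used here only to obtain E:

// points1, points2 and K are assumed to exist already (see the lead-in above).
using var E = Cv2.FindEssentialMat(points1, points2, K);   // RANSAC by default
using var R = new Mat();
using var t = new Mat();
int inlierCount = Cv2.RecoverPose(E, points1, points2, K, R, t);
Console.WriteLine($"{inlierCount} correspondences passed the cheirality check");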

RecoverPose(InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, double, InputOutputArray?, OutputArray?)

Recover relative camera rotation and translation from an estimated essential matrix and the corresponding points in two images, using cheirality check. Returns the number of inliers which pass the check.

public static int RecoverPose(InputArray E, InputArray points1, InputArray points2, InputArray cameraMatrix, OutputArray R, OutputArray t, double distanceTresh, InputOutputArray? mask = null, OutputArray? triangulatedPoints = null)

Parameters

E InputArray

The input essential matrix.

points1 InputArray

Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).

points2 InputArray

Array of the second image points of the same size and format as points1.

cameraMatrix InputArray

Camera matrix K = [fx 0 cx; 0 fy cy; 0 0 1]. Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix.

R OutputArray

Recovered relative rotation.

t OutputArray

Recovered relative translation.

distanceTresh double

threshold distance which is used to filter out far away points (i.e. infinite points).

mask InputOutputArray

Input/output mask for inliers in points1 and points2. If it is not empty, it marks inliers in points1 and points2 for the given essential matrix E; only these inliers are used to recover pose. In the output mask, only the inliers which pass the cheirality check remain set. This function decomposes an essential matrix using decomposeEssentialMat and then verifies possible pose hypotheses by doing a cheirality check. The cheirality check basically means that the triangulated 3D points should have positive depth.

triangulatedPoints OutputArray

3d points which were reconstructed by triangulation.

Returns

int

RecoverPose(InputArray, InputArray, InputArray, OutputArray, OutputArray, double, Point2d, InputOutputArray?)

Recover relative camera rotation and translation from an estimated essential matrix and the corresponding points in two images, using cheirality check. Returns the number of inliers which pass the check.

public static int RecoverPose(InputArray E, InputArray points1, InputArray points2, OutputArray R, OutputArray t, double focal, Point2d pp, InputOutputArray? mask = null)

Parameters

E InputArray

The input essential matrix.

points1 InputArray

Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).

points2 InputArray

Array of the second image points of the same size and format as points1.

R OutputArray

Recovered relative rotation.

t OutputArray

Recovered relative translation.

focal double

Focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.

pp Point2d

principal point of the camera.

mask InputOutputArray

Input/output mask for inliers in points1 and points2. If it is not empty, it marks inliers in points1 and points2 for the given essential matrix E; only these inliers are used to recover pose. In the output mask, only the inliers which pass the cheirality check remain set. This function decomposes an essential matrix using decomposeEssentialMat and then verifies possible pose hypotheses by doing a cheirality check. The cheirality check basically means that the triangulated 3D points should have positive depth.

Returns

int

Rectangle(InputOutputArray, Point, Point, Scalar, int, LineTypes, int)

Draws simple, thick or filled rectangle

public static void Rectangle(InputOutputArray img, Point pt1, Point pt2, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray

Image.

pt1 Point

One of the rectangle vertices.

pt2 Point

Opposite rectangle vertex.

color Scalar

Line color (RGB) or brightness (grayscale image).

thickness int

Thickness of lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. [By default this is 1]

lineType LineTypes

Type of the line, see cvLine description. [By default this is LineType.Link8]

shift int

Number of fractional bits in the point coordinates. [By default this is 0]

Rectangle(InputOutputArray, Rect, Scalar, int, LineTypes, int)

Draws simple, thick or filled rectangle

public static void Rectangle(InputOutputArray img, Rect rect, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img InputOutputArray

Image.

rect Rect

Rectangle.

color Scalar

Line color (RGB) or brightness (grayscale image).

thickness int

Thickness of lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. [By default this is 1]

lineType LineTypes

Type of the line, see cvLine description. [By default this is LineType.Link8]

shift int

Number of fractional bits in the point coordinates. [By default this is 0]
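
A minimal sketch of the Rect overload (assuming using OpenCvSharp; the sizes and colors are placeholders):

using var img = new Mat(300, 400, MatType.CV_8UC3, Scalar.All(0));
// Outlined rectangle, 2 px thick.
Cv2.Rectangle(img, new Rect(50, 50, 200, 120), new Scalar(0, 0, 255), thickness: 2);
// Filled rectangle (negative thickness, here via Cv2.FILLED).
Cv2.Rectangle(img, new Rect(280, 60, 80, 80), new Scalar(0, 255, 0), thickness: Cv2.FILLED);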

Rectangle(Mat, Point, Point, Scalar, int, LineTypes, int)

Draws simple, thick or filled rectangle

public static void Rectangle(Mat img, Point pt1, Point pt2, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img Mat

Image.

pt1 Point

One of the rectangle vertices.

pt2 Point

Opposite rectangle vertex.

color Scalar

Line color (RGB) or brightness (grayscale image).

thickness int

Thickness of lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. [By default this is 1]

lineType LineTypes

Type of the line, see cvLine description. [By default this is LineType.Link8]

shift int

Number of fractional bits in the point coordinates. [By default this is 0]

Rectangle(Mat, Rect, Scalar, int, LineTypes, int)

Draws simple, thick or filled rectangle

public static void Rectangle(Mat img, Rect rect, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)

Parameters

img Mat

Image.

rect Rect

Rectangle.

color Scalar

Line color (RGB) or brightness (grayscale image).

thickness int

Thickness of lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. [By default this is 1]

lineType LineTypes

Type of the line, see cvLine description. [By default this is LineType.Link8]

shift int

Number of fractional bits in the point coordinates. [By default this is 0]

Rectify3Collinear(InputArray, InputArray, InputArray, InputArray, InputArray, InputArray, IEnumerable<InputArray>, IEnumerable<InputArray>, Size, InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, double, Size, out Rect, out Rect, StereoRectificationFlags)

computes the rectification transformations for a 3-head camera, where all the heads are on the same line.

public static float Rectify3Collinear(InputArray cameraMatrix1, InputArray distCoeffs1, InputArray cameraMatrix2, InputArray distCoeffs2, InputArray cameraMatrix3, InputArray distCoeffs3, IEnumerable<InputArray> imgpt1, IEnumerable<InputArray> imgpt3, Size imageSize, InputArray R12, InputArray T12, InputArray R13, InputArray T13, OutputArray R1, OutputArray R2, OutputArray R3, OutputArray P1, OutputArray P2, OutputArray P3, OutputArray Q, double alpha, Size newImgSize, out Rect roi1, out Rect roi2, StereoRectificationFlags flags)

Parameters

cameraMatrix1 InputArray
distCoeffs1 InputArray
cameraMatrix2 InputArray
distCoeffs2 InputArray
cameraMatrix3 InputArray
distCoeffs3 InputArray
imgpt1 IEnumerable<InputArray>
imgpt3 IEnumerable<InputArray>
imageSize Size
R12 InputArray
T12 InputArray
R13 InputArray
T13 InputArray
R1 OutputArray
R2 OutputArray
R3 OutputArray
P1 OutputArray
P2 OutputArray
P3 OutputArray
Q OutputArray
alpha double
newImgSize Size
roi1 Rect
roi2 Rect
flags StereoRectificationFlags

Returns

float

Reduce(InputArray, OutputArray, ReduceDimension, ReduceTypes, int)

transforms 2D matrix to 1D row or column vector by taking sum, minimum, maximum or mean value over all the rows

public static void Reduce(InputArray src, OutputArray dst, ReduceDimension dim, ReduceTypes rtype, int dtype)

Parameters

src InputArray

The source 2D matrix

dst OutputArray

The destination vector. Its size and type is defined by dim and dtype parameters

dim ReduceDimension

The dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row and 1 means that the matrix is reduced to a single column

rtype ReduceTypes
dtype int

When it is negative, the destination vector will have the same type as the source matrix, otherwise, its type will be CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels())
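
A minimal sketch that sums an 8-bit matrix down to a single row of 32-bit column sums:

// Every element of src is 1, so each column sum equals the number of rows.
using var src = new Mat(4, 5, MatType.CV_8UC1, new Scalar(1));
using var colSums = new Mat();
Cv2.Reduce(src, colSums, ReduceDimension.Row, ReduceTypes.Sum, MatType.CV_32SC1);
// colSums is now a 1x5 CV_32SC1 matrix whose elements are all 4.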

ReduceArgMax(InputArray, OutputArray, int, bool)

Finds indices of max elements along provided axis

public static void ReduceArgMax(InputArray src, OutputArray dst, int axis, bool lastIndex = false)

Parameters

src InputArray

Input single-channel array

dst OutputArray

Output array of type CV_32SC1 with the same dimensionality as src, except for axis being reduced - it should be set to 1.

axis int

Axis to reduce along

lastIndex bool

Whether to get the index of first or last occurrence of max

ReduceArgMin(InputArray, OutputArray, int, bool)

Finds indices of min elements along provided axis

public static void ReduceArgMin(InputArray src, OutputArray dst, int axis, bool lastIndex = false)

Parameters

src InputArray

Input single-channel array

dst OutputArray

Output array of type CV_32SC1 with the same dimensionality as src, except for axis being reduced - it should be set to 1.

axis int

Axis to reduce along

lastIndex bool

Whether to get the index of first or last occurrence of min

Remap(InputArray, OutputArray, InputArray, InputArray, InterpolationFlags, BorderTypes, Scalar?)

Applies a generic geometrical transformation to an image.

public static void Remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, InterpolationFlags interpolation = InterpolationFlags.Linear, BorderTypes borderMode = BorderTypes.Constant, Scalar? borderValue = null)

Parameters

src InputArray

Source image.

dst OutputArray

Destination image. It has the same size as map1 and the same type as src

map1 InputArray

The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2.

map2 InputArray

The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.

interpolation InterpolationFlags

Interpolation method. The method INTER_AREA is not supported by this function.

borderMode BorderTypes

Pixel extrapolation method. When borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image that correspond to the "outliers" in the source image are not modified by the function.

borderValue Scalar?

Value used in case of a constant border. By default, it is 0.
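
A minimal sketch that expresses a horizontal flip as a generic remap; the per-pixel loop and the input.png path are only illustrative:

// dst(x, y) = src(width - 1 - x, y)
using var src = Cv2.ImRead("input.png", ImreadModes.Color);
using var mapX = new Mat(src.Rows, src.Cols, MatType.CV_32FC1);
using var mapY = new Mat(src.Rows, src.Cols, MatType.CV_32FC1);
for (int y = 0; y < src.Rows; y++)
{
    for (int x = 0; x < src.Cols; x++)
    {
        mapX.Set<float>(y, x, src.Cols - 1 - x);   // where to sample x from
        mapY.Set<float>(y, x, y);                  // where to sample y from
    }
}
using var dst = new Mat();
Cv2.Remap(src, dst, mapX, mapY, InterpolationFlags.Linear);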

Repeat(InputArray, int, int, OutputArray)

replicates the input matrix the specified number of times in the horizontal and/or vertical direction

public static void Repeat(InputArray src, int ny, int nx, OutputArray dst)

Parameters

src InputArray

The source array to replicate

ny int

How many times the src is repeated along the vertical axis

nx int

How many times the src is repeated along the horizontal axis

dst OutputArray

The destination array; will have the same type as src

Repeat(Mat, int, int)

replicates the input matrix the specified number of times in the horizontal and/or vertical direction

public static Mat Repeat(Mat src, int ny, int nx)

Parameters

src Mat

The source array to replicate

ny int

How many times the src is repeated along the vertical axis

nx int

How many times the src is repeated along the horizontal axis

Returns

Mat

ReprojectImageTo3D(InputArray, OutputArray, InputArray, bool, int)

reprojects disparity image to 3D: (x,y,d)->(X,Y,Z) using the matrix Q returned by cv::stereoRectify

public static void ReprojectImageTo3D(InputArray disparity, OutputArray _3dImage, InputArray Q, bool handleMissingValues = false, int ddepth = -1)

Parameters

disparity InputArray

Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image.

_3dImage OutputArray

Output 3-channel floating-point image of the same size as disparity. Each element of _3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map.

Q InputArray

4 x 4 perspective transformation matrix that can be obtained with stereoRectify().

handleMissingValues bool

Indicates whether the function should handle missing values (i.e. points where the disparity was not computed). If handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoBM::operator() ) are transformed to 3D points with a very large Z value (currently set to 10000).

ddepth int

The optional output array depth. If it is -1, the output image will have CV_32F depth. ddepth can also be set to CV_16S, CV_32S or CV_32F.

Resize(InputArray, OutputArray, Size, double, double, InterpolationFlags)

Resizes an image.

public static void Resize(InputArray src, OutputArray dst, Size dsize, double fx = 0, double fy = 0, InterpolationFlags interpolation = InterpolationFlags.Linear)

Parameters

src InputArray

input image.

dst OutputArray

output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src.

dsize Size

output image size; if it equals zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)). Either dsize or both fx and fy must be non-zero.

fx double

scale factor along the horizontal axis; when it equals 0, it is computed as: (double)dsize.width/src.cols

fy double

scale factor along the vertical axis; when it equals 0, it is computed as: (double)dsize.height/src.rows

interpolation InterpolationFlags

interpolation method
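
A minimal sketch showing both ways of specifying the output size (an explicit dsize, or fx/fy scale factors); the file path and sizes are placeholders:

using var src = Cv2.ImRead("input.png");
// Explicit target size:
using var small = new Mat();
Cv2.Resize(src, small, new Size(640, 480), interpolation: InterpolationFlags.Area);
// Or scale by factors, letting dsize be computed from fx and fy:
using var half = new Mat();
Cv2.Resize(src, half, new Size(), 0.5, 0.5, InterpolationFlags.Area);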

ResizeWindow(string, Size)

Resizes window to the specified size

public static void ResizeWindow(string winName, Size size)

Parameters

winName string

Window name

size Size

The new window size

ResizeWindow(string, int, int)

Resizes window to the specified size

public static void ResizeWindow(string winName, int width, int height)

Parameters

winName string

Window name

width int

The new window width

height int

The new window height

Rodrigues(InputArray, OutputArray, OutputArray?)

converts rotation vector to rotation matrix or vice versa using Rodrigues transformation

public static void Rodrigues(InputArray src, OutputArray dst, OutputArray? jacobian = null)

Parameters

src InputArray

Input rotation vector (3x1 or 1x3) or rotation matrix (3x3).

dst OutputArray

Output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively.

jacobian OutputArray

Optional output Jacobian matrix, 3x9 or 9x3, which is a matrix of partial derivatives of the output array components with respect to the input array components.
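
A minimal sketch converting a rotation vector (pi/2 about the Z axis) to a 3x3 matrix and back:

// 3x1 rotation vector: axis * angle.
using var rvec = new Mat(3, 1, MatType.CV_64FC1, new Scalar(0));
rvec.Set<double>(2, 0, Cv2.PI / 2);
using var rmat = new Mat();
Cv2.Rodrigues(rvec, rmat);    // rmat is now the 3x3 rotation matrix
using var back = new Mat();
Cv2.Rodrigues(rmat, back);    // back to a 3x1 rotation vector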

Rodrigues(double[,], out double[], out double[,])

converts rotation matrix to rotation vector using Rodrigues transformation

public static void Rodrigues(double[,] matrix, out double[] vector, out double[,] jacobian)

Parameters

matrix double[,]

Input rotation matrix (3x3).

vector double[]

Output rotation vector (3x1).

jacobian double[,]

Optional output Jacobian matrix, 3x9, which is a matrix of partial derivatives of the output array components with respect to the input array components.

Rodrigues(double[], out double[,], out double[,])

converts rotation vector to rotation matrix using Rodrigues transformation

public static void Rodrigues(double[] vector, out double[,] matrix, out double[,] jacobian)

Parameters

vector double[]

Input rotation vector (3x1).

matrix double[,]

Output rotation matrix (3x3).

jacobian double[,]

Optional output Jacobian matrix, 3x9, which is a matrix of partial derivatives of the output array components with respect to the input array components.

Rotate(InputArray, OutputArray, RotateFlags)

Rotates a 2D array in multiples of 90 degrees.

public static void Rotate(InputArray src, OutputArray dst, RotateFlags rotateCode)

Parameters

src InputArray

input array.

dst OutputArray

output array of the same type as src. The size is the same with ROTATE_180, and the rows and cols are switched for ROTATE_90_CLOCKWISE and ROTATE_90_COUNTERCLOCKWISE.

rotateCode RotateFlags

an enum to specify how to rotate the array.
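
A minimal sketch (the file path is a placeholder):

using var src = Cv2.ImRead("input.png");
using var dst = new Mat();
Cv2.Rotate(src, dst, RotateFlags.Rotate90Clockwise);   // rows and cols are swapped in dst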

RotatedRectangleIntersection(RotatedRect, RotatedRect, OutputArray)

Finds out if there is any intersection between two rotated rectangles. If there is, then the vertices of the intersecting region are returned as well. Below are some examples of intersection configurations. The hatched pattern indicates the intersecting region and the red vertices are returned by the function.

public static RectanglesIntersectTypes RotatedRectangleIntersection(RotatedRect rect1, RotatedRect rect2, OutputArray intersectingRegion)

Parameters

rect1 RotatedRect

First rectangle

rect2 RotatedRect

Second rectangle

intersectingRegion OutputArray

The output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as std::vector<cv::Point2f> or cv::Mat as Mx1 of type CV_32FC2.

Returns

RectanglesIntersectTypes

RotatedRectangleIntersection(RotatedRect, RotatedRect, out Point2f[])

Finds out if there is any intersection between two rotated rectangles. If there is, then the vertices of the intersecting region are returned as well. Below are some examples of intersection configurations. The hatched pattern indicates the intersecting region and the red vertices are returned by the function.

public static RectanglesIntersectTypes RotatedRectangleIntersection(RotatedRect rect1, RotatedRect rect2, out Point2f[] intersectingRegion)

Parameters

rect1 RotatedRect

First rectangle

rect2 RotatedRect

Second rectangle

intersectingRegion Point2f[]

The output array of the vertices of the intersecting region. It returns at most 8 vertices.

Returns

RectanglesIntersectTypes

SVBackSubst(InputArray, InputArray, InputArray, InputArray, OutputArray)

performs back substitution for the previously computed SVD

public static void SVBackSubst(InputArray w, InputArray u, InputArray vt, InputArray rhs, OutputArray dst)

Parameters

w InputArray

calculated singular values

u InputArray

calculated left singular vectors

vt InputArray

transposed matrix of right singular vectors

rhs InputArray

right-hand side of a linear system (u*w*v')*dst = rhs to be solved, where A = u*w*v' has been previously decomposed.

dst OutputArray

output

SVDecomp(InputArray, OutputArray, OutputArray, OutputArray, Flags)

decomposes matrix and stores the results to user-provided matrices

public static void SVDecomp(InputArray src, OutputArray w, OutputArray u, OutputArray vt, SVD.Flags flags = Flags.None)

Parameters

src InputArray

decomposed matrix. The depth has to be CV_32F or CV_64F.

w OutputArray

calculated singular values

u OutputArray

calculated left singular vectors

vt OutputArray

transposed matrix of right singular vectors

flags SVD.Flags

Operation flags - see SVD::Flags.
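
A minimal sketch that decomposes a small matrix and then solves A*x = b through SVBackSubst; the matrix values below are arbitrary:

// A = [[4, 1], [2, 3]], b = [1, 2]; solve A*x = b via the SVD.
using var a = new Mat(2, 2, MatType.CV_64FC1);
a.Set<double>(0, 0, 4); a.Set<double>(0, 1, 1);
a.Set<double>(1, 0, 2); a.Set<double>(1, 1, 3);
using var b = new Mat(2, 1, MatType.CV_64FC1);
b.Set<double>(0, 0, 1); b.Set<double>(1, 0, 2);

using var w = new Mat();
using var u = new Mat();
using var vt = new Mat();
Cv2.SVDecomp(a, w, u, vt);
using var x = new Mat();
Cv2.SVBackSubst(w, u, vt, b, x);   // x now holds the (least-squares) solution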

SampsonDistance(InputArray, InputArray, InputArray)

Calculates the Sampson Distance between two points.

public static double SampsonDistance(InputArray pt1, InputArray pt2, InputArray f)

Parameters

pt1 InputArray

first homogeneous 2d point

pt2 InputArray

second homogeneous 2d point

f InputArray

F fundamental matrix

Returns

double

The computed Sampson distance.

Remarks

SampsonDistance(Point3d, Point3d, double[,])

Calculates the Sampson Distance between two points.

public static double SampsonDistance(Point3d pt1, Point3d pt2, double[,] f)

Parameters

pt1 Point3d

first homogeneous 2d point

pt2 Point3d

second homogeneous 2d point

f double[,]

F fundamental matrix

Returns

double

The computed Sampson distance.

Remarks

ScaleAdd(InputArray, double, InputArray, OutputArray)

adds scaled array to another one (dst = alpha*src1 + src2)

public static void ScaleAdd(InputArray src1, double alpha, InputArray src2, OutputArray dst)

Parameters

src1 InputArray
alpha double
src2 InputArray
dst OutputArray

Scharr(InputArray, OutputArray, MatType, int, int, double, double, BorderTypes)

Calculates the first x- or y- image derivative using Scharr operator

public static void Scharr(InputArray src, OutputArray dst, MatType ddepth, int xorder, int yorder, double scale = 1, double delta = 0, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

The source image

dst OutputArray

The destination image; will have the same size and the same number of channels as src

ddepth MatType

The destination image depth

xorder int

Order of the derivative x

yorder int

Order of the derivative y

scale double

The optional scale factor for the computed derivative values (by default, no scaling is applied)

delta double

The optional delta value, added to the results prior to storing them in dst

borderType BorderTypes

The pixel extrapolation method

SeamlessClone(InputArray, InputArray, InputArray?, Point, OutputArray, SeamlessCloneMethods)

Image editing tasks concern either global changes (color/intensity corrections, filters, deformations) or local changes confined to a selection. Here we are interested in achieving local changes, ones that are restricted to a manually selected region (ROI), in a seamless and effortless manner. The extent of the changes ranges from slight distortions to complete replacement by novel content @cite PM03 .

public static void SeamlessClone(InputArray src, InputArray dst, InputArray? mask, Point p, OutputArray blend, SeamlessCloneMethods flags)

Parameters

src InputArray

Input 8-bit 3-channel image.

dst InputArray

Input 8-bit 3-channel image.

mask InputArray

Input 8-bit 1 or 3-channel image.

p Point

Point in dst image where object is placed.

blend OutputArray

Output image with the same size and type as dst.

flags SeamlessCloneMethods

Cloning method

SelectROI(InputArray, bool, bool)

Selects a ROI on the given image. The function creates a window and allows the user to select a ROI using the mouse. Controls: use space or enter to finish selection, use key c to cancel selection (the function will return the zero cv::Rect).

public static Rect SelectROI(InputArray img, bool showCrosshair = true, bool fromCenter = false)

Parameters

img InputArray

image to select a ROI.

showCrosshair bool

if true crosshair of selection rectangle will be shown.

fromCenter bool

if true center of selection will match initial mouse position. In opposite case a corner of selection rectangle will correspond to the initial mouse position.

Returns

Rect

selected ROI or empty rect if selection canceled.

SelectROI(string, InputArray, bool, bool)

Selects a ROI on the given image. The function creates a window and allows the user to select a ROI using the mouse. Controls: use space or enter to finish selection, use key c to cancel selection (the function will return the zero cv::Rect).

public static Rect SelectROI(string windowName, InputArray img, bool showCrosshair = true, bool fromCenter = false)

Parameters

windowName string

name of the window where selection process will be shown.

img InputArray

image to select a ROI.

showCrosshair bool

if true crosshair of selection rectangle will be shown.

fromCenter bool

if true center of selection will match initial mouse position. In opposite case a corner of selection rectangle will correspond to the initial mouse position.

Returns

Rect

selected ROI or empty rect if selection canceled.

SelectROIs(string, InputArray, bool, bool)

Selects ROIs on the given image. The function creates a window and allows the user to select ROIs using the mouse. Controls: use space or enter to finish the current selection and start a new one, use esc to terminate the multiple ROI selection process.

public static Rect[] SelectROIs(string windowName, InputArray img, bool showCrosshair = true, bool fromCenter = false)

Parameters

windowName string

name of the window where selection process will be shown.

img InputArray

image to select a ROI.

showCrosshair bool

if true crosshair of selection rectangle will be shown.

fromCenter bool

if true center of selection will match initial mouse position. In opposite case a corner of selection rectangle will correspond to the initial mouse position.

Returns

Rect[]

selected ROIs.

SepFilter2D(InputArray, OutputArray, MatType, InputArray, InputArray, Point?, double, BorderTypes)

Applies separable linear filter to an image

public static void SepFilter2D(InputArray src, OutputArray dst, MatType ddepth, InputArray kernelX, InputArray kernelY, Point? anchor = null, double delta = 0, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

The source image

dst OutputArray

The destination image; will have the same size and the same number of channels as src

ddepth MatType

The destination image depth

kernelX InputArray

The coefficients for filtering each row

kernelY InputArray

The coefficients for filtering each column

anchor Point?

The anchor position within the kernel; the default value (-1, -1) means that the anchor is at the kernel center

delta double

The value added to the filtered results before storing them

borderType BorderTypes

The pixel extrapolation method

SetBreakOnError(bool)

Sets/resets the break-on-error mode. When the break-on-error mode is set, the default error handler issues a hardware exception, which can make debugging more convenient.

public static bool SetBreakOnError(bool flag)

Parameters

flag bool

Returns

bool

the previous state

SetIdentity(InputOutputArray, Scalar?)

initializes scaled identity matrix

public static void SetIdentity(InputOutputArray mtx, Scalar? s = null)

Parameters

mtx InputOutputArray

The matrix to initialize (not necessarily square)

s Scalar?

The value to assign to the diagonal elements

SetLogLevel(LogLevel)

Set global logging level

public static LogLevel SetLogLevel(LogLevel logLevel)

Parameters

logLevel LogLevel

logging level

Returns

LogLevel

previous logging level

SetMouseCallback(string, MouseCallback, nint)

Sets the callback function for mouse events occurring within the specified window.

public static void SetMouseCallback(string windowName, MouseCallback onMouse, nint userData = 0)

Parameters

windowName string

Name of the window.

onMouse MouseCallback

Reference to the function to be called every time mouse event occurs in the specified window.

userData nint
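
A minimal sketch, assuming the MouseCallback delegate receives the event type, the x/y coordinates, the event flags and the user-data pointer, and that someImage is an existing Mat:

Cv2.NamedWindow("view");
Cv2.SetMouseCallback("view", (mouseEvent, x, y, flags, userData) =>
{
    // React only to left-button presses.
    if (mouseEvent == MouseEventTypes.LButtonDown)
        Console.WriteLine($"Left click at ({x}, {y})");
});
Cv2.ImShow("view", someImage);   // someImage: an existing Mat (assumed)
Cv2.WaitKey();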

SetNumThreads(int)

OpenCV will try to set the number of threads for the next parallel region. If threads == 0, OpenCV will disable threading optimizations and run all its functions sequentially. Passing threads < 0 will reset the number of threads to the system default. This function must be called outside of a parallel region. OpenCV will try to run its functions with the specified number of threads, but some behaviour differs per framework:

  • TBB - User-defined parallel constructions will run with the same number of threads, if another is not specified. If the user later creates their own scheduler, OpenCV will use it.
  • OpenMP - No special defined behaviour.
  • Concurrency - If threads == 1, OpenCV will disable threading optimizations and run its functions sequentially.
  • GCD - Supports only values <= 0.
  • C= - No special defined behaviour.
public static void SetNumThreads(int nThreads)

Parameters

nThreads int

Number of threads used by OpenCV.

SetTheRNG(ulong)

Sets the thread-local Random number generator

public static RNG SetTheRNG(ulong state)

Parameters

state ulong

Returns

RNG

SetTrackbarMax(string, string, int)

Sets the trackbar maximum position. The function sets the maximum position of the specified trackbar in the specified window.

public static void SetTrackbarMax(string trackbarName, string winName, int maxVal)

Parameters

trackbarName string

Name of the trackbar.

winName string

Name of the window that is the parent of trackbar.

maxVal int

New maximum position.

SetTrackbarMin(string, string, int)

Sets the trackbar minimum position. The function sets the minimum position of the specified trackbar in the specified window.

public static void SetTrackbarMin(string trackbarName, string winName, int minVal)

Parameters

trackbarName string

Name of the trackbar.

winName string

Name of the window that is the parent of trackbar.

minVal int

New minimum position.

SetTrackbarPos(string, string, int)

Sets the trackbar position.

public static void SetTrackbarPos(string trackbarName, string winName, int pos)

Parameters

trackbarName string

Name of the trackbar.

winName string

Name of the window that is the parent of trackbar.

pos int

New position.

SetUseOptimized(bool)

Turns on/off available optimization. The function turns on or off the optimized code in OpenCV. Some optimization can not be enabled or disabled, but, for example, most of SSE code in OpenCV can be temporarily turned on or off this way.

public static void SetUseOptimized(bool onoff)

Parameters

onoff bool

SetWindowProperty(string, WindowPropertyFlags, double)

Changes parameters of a window dynamically.

public static void SetWindowProperty(string winName, WindowPropertyFlags propId, double propValue)

Parameters

winName string

Name of the window.

propId WindowPropertyFlags

Window property to set.

propValue double

New value of the window property.

SetWindowTitle(string, string)

Updates window title

public static void SetWindowTitle(string winName, string title)

Parameters

winName string

Name of the window

title string

New title

Sobel(InputArray, OutputArray, MatType, int, int, int, double, double, BorderTypes)

Calculates the first, second, third or mixed image derivatives using an extended Sobel operator

public static void Sobel(InputArray src, OutputArray dst, MatType ddepth, int xorder, int yorder, int ksize = 3, double scale = 1, double delta = 0, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

The source image

dst OutputArray

The destination image; will have the same size and the same number of channels as src

ddepth MatType

The destination image depth

xorder int

Order of the derivative x

yorder int

Order of the derivative y

ksize int

Size of the extended Sobel kernel, must be 1, 3, 5 or 7

scale double

The optional scale factor for the computed derivative values (by default, no scaling is applied)

delta double

The optional delta value, added to the results prior to storing them in dst

borderType BorderTypes

The pixel extrapolation method
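
A minimal sketch of a common edge-detection pattern: x and y derivatives computed into CV_16S, then combined into an 8-bit result with ConvertScaleAbs and AddWeighted (both Cv2 methods); the file path is a placeholder:

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var gradX = new Mat();
using var gradY = new Mat();
// CV_16S keeps negative derivatives that an 8-bit destination would clip.
Cv2.Sobel(gray, gradX, MatType.CV_16S, 1, 0, ksize: 3);
Cv2.Sobel(gray, gradY, MatType.CV_16S, 0, 1, ksize: 3);
using var absX = new Mat();
using var absY = new Mat();
Cv2.ConvertScaleAbs(gradX, absX);
Cv2.ConvertScaleAbs(gradY, absY);
using var edges = new Mat();
Cv2.AddWeighted(absX, 0.5, absY, 0.5, 0, edges);   // rough gradient magnitude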

Solve(InputArray, InputArray, OutputArray, DecompTypes)

solves linear system or a least-square problem

public static bool Solve(InputArray src1, InputArray src2, OutputArray dst, DecompTypes flags = DecompTypes.LU)

Parameters

src1 InputArray
src2 InputArray
dst OutputArray
flags DecompTypes

Returns

bool

SolveCubic(InputArray, OutputArray)

finds real roots of a cubic polynomial

public static int SolveCubic(InputArray coeffs, OutputArray roots)

Parameters

coeffs InputArray

The equation coefficients, an array of 3 or 4 elements

roots OutputArray

The destination array of real roots which will have 1 or 3 elements

Returns

int

SolveLP(InputArray, InputArray, OutputArray)

Solve given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method).

public static SolveLPResult SolveLP(InputArray func, InputArray constr, OutputArray z)

Parameters

func InputArray

This row-vector corresponds to c in the LP problem formulation (see above). It should contain 32- or 64-bit floating point numbers. As a convenience, a column-vector may also be submitted, in the latter case it is understood to correspond to c^T.

constr InputArray

m-by-n+1 matrix, whose rightmost column corresponds to b in the formulation above and the remaining columns to A. It should contain 32- or 64-bit floating point numbers.

z OutputArray

The solution will be returned here as a column-vector; it corresponds to c in the formulation above. It will contain 64-bit floating point numbers.

Returns

SolveLPResult

SolvePnP(InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, bool, SolvePnPFlags)

Finds an object pose from 3D-2D point correspondences.

public static void SolvePnP(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess = false, SolvePnPFlags flags = SolvePnPFlags.Iterative)

Parameters

objectPoints InputArray

Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can be also passed here.

imagePoints InputArray

Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can be also passed here.

cameraMatrix InputArray

Input camera matrix

distCoeffs InputArray

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

rvec OutputArray

Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.

tvec OutputArray

Output translation vector.

useExtrinsicGuess bool

If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.

flags SolvePnPFlags

Method for solving a PnP problem:

SolvePnP(IEnumerable<Point3f>, IEnumerable<Point2f>, double[,], IEnumerable<double>?, ref double[], ref double[], bool, SolvePnPFlags)

Finds an object pose from 3D-2D point correspondences.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void SolvePnP(IEnumerable<Point3f> objectPoints, IEnumerable<Point2f> imagePoints, double[,] cameraMatrix, IEnumerable<double>? distCoeffs, ref double[] rvec, ref double[] tvec, bool useExtrinsicGuess = false, SolvePnPFlags flags = SolvePnPFlags.Iterative)

Parameters

objectPoints IEnumerable<Point3f>

Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can be also passed here.

imagePoints IEnumerable<Point2f>

Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can be also passed here.

cameraMatrix double[,]

Input camera matrix

distCoeffs IEnumerable<double>

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

rvec double[]

Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.

tvec double[]

Output translation vector.

useExtrinsicGuess bool

If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.

flags SolvePnPFlags

Method for solving a PnP problem
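
A rough sketch of this managed-array overload; the object points, image points and camera matrix below are made-up placeholder values, not a calibrated setup:

// 3D model points and their observed 2D projections (placeholders).
var objectPts = new[]
{
    new Point3f(0, 0, 0), new Point3f(1, 0, 0),
    new Point3f(1, 1, 0), new Point3f(0, 1, 0),
};
var imagePts = new[]
{
    new Point2f(320, 240), new Point2f(420, 245),
    new Point2f(415, 345), new Point2f(318, 340),
};
// Assumed intrinsic matrix [fx 0 cx; 0 fy cy; 0 0 1].
var K = new double[,]
{
    { 800,   0, 320 },
    {   0, 800, 240 },
    {   0,   0,   1 },
};
double[] rvec = new double[3];
double[] tvec = new double[3];
Cv2.SolvePnP(objectPts, imagePts, K, null, ref rvec, ref tvec);
// rvec/tvec now bring model-space points into the camera coordinate system.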

SolvePnPRansac(InputArray, InputArray, InputArray, InputArray, OutputArray?, OutputArray?, bool, int, float, double, OutputArray?, SolvePnPFlags)

computes the camera pose from a few 3D points and their corresponding projections; outliers are possible.

public static void SolvePnPRansac(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray? rvec, OutputArray? tvec, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8, double confidence = 0.99, OutputArray? inliers = null, SolvePnPFlags flags = SolvePnPFlags.Iterative)

Parameters

objectPoints InputArray

Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here.

imagePoints InputArray

Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here.

cameraMatrix InputArray

Input 3x3 camera matrix

distCoeffs InputArray

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

rvec OutputArray

Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.

tvec OutputArray

Output translation vector.

useExtrinsicGuess bool

If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.

iterationsCount int

Number of iterations.

reprojectionError float

Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.

confidence double

The probability that the algorithm produces a useful result.

inliers OutputArray

Output vector that contains indices of inliers in objectPoints and imagePoints .

flags SolvePnPFlags

Method for solving a PnP problem

SolvePnPRansac(IEnumerable<Point3f>, IEnumerable<Point2f>, double[,], IEnumerable<double>, out double[], out double[])

computes the camera pose from a few 3D points and their corresponding projections; outliers are possible.

public static void SolvePnPRansac(IEnumerable<Point3f> objectPoints, IEnumerable<Point2f> imagePoints, double[,] cameraMatrix, IEnumerable<double> distCoeffs, out double[] rvec, out double[] tvec)

Parameters

objectPoints IEnumerable<Point3f>

Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here.

imagePoints IEnumerable<Point2f>

Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here.

cameraMatrix double[,]

Input 3x3 camera matrix

distCoeffs IEnumerable<double>

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

rvec double[]

Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.

tvec double[]

Output translation vector.

SolvePnPRansac(IEnumerable<Point3f>, IEnumerable<Point2f>, double[,], IEnumerable<double>?, out double[], out double[], out int[], bool, int, float, double, SolvePnPFlags)

computes the camera pose from a few 3D points and their corresponding projections; outliers are possible.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void SolvePnPRansac(IEnumerable<Point3f> objectPoints, IEnumerable<Point2f> imagePoints, double[,] cameraMatrix, IEnumerable<double>? distCoeffs, out double[] rvec, out double[] tvec, out int[] inliers, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8, double confidence = 0.99, SolvePnPFlags flags = SolvePnPFlags.Iterative)

Parameters

objectPoints IEnumerable<Point3f>

Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here.

imagePoints IEnumerable<Point2f>

Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here.

cameraMatrix double[,]

Input 3x3 camera matrix

distCoeffs IEnumerable<double>

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

rvec double[]

Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.

tvec double[]

Output translation vector.

inliers int[]

Output vector that contains indices of inliers in objectPoints and imagePoints .

useExtrinsicGuess bool

If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.

iterationsCount int

Number of iterations.

reprojectionError float

Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.

confidence double

The probability that the algorithm produces a useful result.

flags SolvePnPFlags

Method for solving a PnP problem

SolvePoly(InputArray, OutputArray, int)

finds real and complex roots of a polynomial

public static double SolvePoly(InputArray coeffs, OutputArray roots, int maxIters = 300)

Parameters

coeffs InputArray

The array of polynomial coefficients

roots OutputArray

The destination (complex) array of roots

maxIters int

The maximum number of iterations the algorithm does

Returns

double

Sort(InputArray, OutputArray, SortFlags)

sorts independently each matrix row or each matrix column

public static void Sort(InputArray src, OutputArray dst, SortFlags flags)

Parameters

src InputArray

The source single-channel array

dst OutputArray

The destination array of the same size and the same type as src

flags SortFlags

The operation flags, a combination of the SortFlag values

SortIdx(InputArray, OutputArray, SortFlags)

sorts independently each matrix row or each matrix column

public static void SortIdx(InputArray src, OutputArray dst, SortFlags flags)

Parameters

src InputArray

The source single-channel array

dst OutputArray

The destination integer array of the same size as src

flags SortFlags

The operation flags, a combination of SortFlag values

SpatialGradient(InputArray, OutputArray, OutputArray, int, BorderTypes)

Calculates the first order image derivative in both x and y using a Sobel operator

public static void SpatialGradient(InputArray src, OutputArray dx, OutputArray dy, int ksize = 3, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

input image.

dx OutputArray

output image with first-order derivative in x.

dy OutputArray

output image with first-order derivative in y.

ksize int

size of Sobel kernel. It must be 3.

borderType BorderTypes

pixel extrapolation method

Split(Mat)

Copies each plane of a multi-channel array to a dedicated array

public static Mat[] Split(Mat src)

Parameters

src Mat

The source multi-channel array

Returns

Mat[]

The number of arrays must match mtx.channels() . The arrays themselves will be reallocated if needed

Split(Mat, out Mat[])

Copies each plane of a multi-channel array to a dedicated array

public static void Split(Mat src, out Mat[] mv)

Parameters

src Mat

The source multi-channel array

mv Mat[]

The destination array or vector of arrays; The number of arrays must match mtx.channels() . The arrays themselves will be reallocated if needed
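
A minimal sketch (the file path is a placeholder; for a BGR image the planes come out in B, G, R order):

using var src = Cv2.ImRead("input.png", ImreadModes.Color);
Mat[] channels = Cv2.Split(src);       // channels[0]=B, channels[1]=G, channels[2]=R
Cv2.ImShow("blue channel", channels[0]);
Cv2.WaitKey();
foreach (var ch in channels) ch.Dispose();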

SqrBoxFilter(InputArray, OutputArray, int, Size, Point?, bool, BorderTypes)

Calculates the normalized sum of squares of the pixel values overlapping the filter.

For every pixel f(x, y) in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel f(x, y).

The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.

public static void SqrBoxFilter(InputArray src, OutputArray dst, int ddepth, Size ksize, Point? anchor = null, bool normalize = true, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray
dst OutputArray
ddepth int
ksize Size
anchor Point?
normalize bool
borderType BorderTypes

Sqrt(InputArray, OutputArray)

computes square root of each matrix element (dst = src**0.5)

public static void Sqrt(InputArray src, OutputArray dst)

Parameters

src InputArray

The source floating-point array

dst OutputArray

The destination array; will have the same size and the same type as src

StartWindowThread()

public static int StartWindowThread()

Returns

int

StereoCalibrate(IEnumerable<InputArray>, IEnumerable<InputArray>, IEnumerable<InputArray>, InputOutputArray, InputOutputArray, InputOutputArray, InputOutputArray, Size, OutputArray, OutputArray, OutputArray, OutputArray, CalibrationFlags, TermCriteria?)

finds intrinsic and extrinsic parameters of a stereo camera

public static double StereoCalibrate(IEnumerable<InputArray> objectPoints, IEnumerable<InputArray> imagePoints1, IEnumerable<InputArray> imagePoints2, InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1, InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2, Size imageSize, OutputArray R, OutputArray T, OutputArray E, OutputArray F, CalibrationFlags flags = CalibrationFlags.FixIntrinsic, TermCriteria? criteria = null)

Parameters

objectPoints IEnumerable<InputArray>

Vector of vectors of the calibration pattern points.

imagePoints1 IEnumerable<InputArray>

Vector of vectors of the projections of the calibration pattern points, observed by the first camera.

imagePoints2 IEnumerable<InputArray>

Vector of vectors of the projections of the calibration pattern points, observed by the second camera.

cameraMatrix1 InputOutputArray

Input/output first camera matrix

distCoeffs1 InputOutputArray

Input/output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. The output vector length depends on the flags.

cameraMatrix2 InputOutputArray

Input/output second camera matrix. The parameter is similar to cameraMatrix1 .

distCoeffs2 InputOutputArray

Input/output lens distortion coefficients for the second camera. The parameter is similar to distCoeffs1 .

imageSize Size

Size of the image used only to initialize intrinsic camera matrix.

R OutputArray

Output rotation matrix between the 1st and the 2nd camera coordinate systems.

T OutputArray

Output translation vector between the coordinate systems of the cameras.

E OutputArray

Output essential matrix.

F OutputArray

Output fundamental matrix.

flags CalibrationFlags

Different flags that may be zero or a combination of the CalibrationFlag values

criteria TermCriteria?

Termination criteria for the iterative optimization algorithm.

Returns

double

StereoCalibrate(IEnumerable<IEnumerable<Point3f>>, IEnumerable<IEnumerable<Point2f>>, IEnumerable<IEnumerable<Point2f>>, double[,], double[], double[,], double[], Size, OutputArray, OutputArray, OutputArray, OutputArray, CalibrationFlags, TermCriteria?)

finds intrinsic and extrinsic parameters of a stereo camera

public static double StereoCalibrate(IEnumerable<IEnumerable<Point3f>> objectPoints, IEnumerable<IEnumerable<Point2f>> imagePoints1, IEnumerable<IEnumerable<Point2f>> imagePoints2, double[,] cameraMatrix1, double[] distCoeffs1, double[,] cameraMatrix2, double[] distCoeffs2, Size imageSize, OutputArray R, OutputArray T, OutputArray E, OutputArray F, CalibrationFlags flags = CalibrationFlags.FixIntrinsic, TermCriteria? criteria = null)

Parameters

objectPoints IEnumerable<IEnumerable<Point3f>>

Vector of vectors of the calibration pattern points.

imagePoints1 IEnumerable<IEnumerable<Point2f>>

Vector of vectors of the projections of the calibration pattern points, observed by the first camera.

imagePoints2 IEnumerable<IEnumerable<Point2f>>

Vector of vectors of the projections of the calibration pattern points, observed by the second camera.

cameraMatrix1 double[,]

Input/output first camera matrix

distCoeffs1 double[]

Input/output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. The output vector length depends on the flags.

cameraMatrix2 double[,]

Input/output second camera matrix. The parameter is similar to cameraMatrix1 .

distCoeffs2 double[]

Input/output lens distortion coefficients for the second camera. The parameter is similar to distCoeffs1 .

imageSize Size

Size of the image used only to initialize intrinsic camera matrix.

R OutputArray

Output rotation matrix between the 1st and the 2nd camera coordinate systems.

T OutputArray

Output translation vector between the coordinate systems of the cameras.

E OutputArray

Output essential matrix.

F OutputArray

Output fundamental matrix.

flags CalibrationFlags

Different flags that may be zero or a combination of the CalibrationFlag values

criteria TermCriteria?

Termination criteria for the iterative optimization algorithm.

Returns

double

StereoRectify(InputArray, InputArray, InputArray, InputArray, Size, InputArray, InputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, StereoRectificationFlags, double, Size, out Rect, out Rect)

computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters

public static void StereoRectify(InputArray cameraMatrix1, InputArray distCoeffs1, InputArray cameraMatrix2, InputArray distCoeffs2, Size imageSize, InputArray R, InputArray T, OutputArray R1, OutputArray R2, OutputArray P1, OutputArray P2, OutputArray Q, StereoRectificationFlags flags, double alpha, Size newImageSize, out Rect validPixROI1, out Rect validPixROI2)

Parameters

cameraMatrix1 InputArray

First camera matrix.

distCoeffs1 InputArray

First camera distortion parameters.

cameraMatrix2 InputArray

Second camera matrix.

distCoeffs2 InputArray

Second camera distortion parameters.

imageSize Size

Size of the image used for stereo calibration.

R InputArray

Rotation matrix between the coordinate systems of the first and the second cameras.

T InputArray

Translation vector between coordinate systems of the cameras.

R1 OutputArray

Output 3x3 rectification transform (rotation matrix) for the first camera.

R2 OutputArray

Output 3x3 rectification transform (rotation matrix) for the second camera.

P1 OutputArray

Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.

P2 OutputArray

Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.

Q OutputArray

Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ).

flags StereoRectificationFlags

Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.

alpha double

Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases.

newImageSize Size

New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.

validPixROI1 Rect

Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller.

validPixROI2 Rect

Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller.
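
Examples

A minimal sketch of rectifying a calibrated stereo pair. The intrinsics, distortion coefficients, R and T below are illustrative placeholders; in practice they come from a prior stereo calibration (for example StereoCalibrate):

using OpenCvSharp;

var imageSize = new Size(640, 480);
using var cameraMatrix1 = new Mat(3, 3, MatType.CV_64FC1,
    new double[] { 500, 0, 320, 0, 500, 240, 0, 0, 1 });
using var cameraMatrix2 = cameraMatrix1.Clone();
using var distCoeffs1 = new Mat(1, 5, MatType.CV_64FC1, Scalar.All(0));
using var distCoeffs2 = new Mat(1, 5, MatType.CV_64FC1, Scalar.All(0));
using var R = new Mat(3, 3, MatType.CV_64FC1, new double[] { 1, 0, 0, 0, 1, 0, 0, 0, 1 });
using var T = new Mat(3, 1, MatType.CV_64FC1, new double[] { -0.06, 0, 0 }); // ~6 cm baseline

using var R1 = new Mat(); using var R2 = new Mat();
using var P1 = new Mat(); using var P2 = new Mat();
using var Q = new Mat();

Cv2.StereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
    imageSize, R, T, R1, R2, P1, P2, Q,
    StereoRectificationFlags.ZeroDisparity, 0, imageSize,
    out Rect roi1, out Rect roi2);

// R1/P1 and R2/P2 can then be fed to initUndistortRectifyMap + remap to produce
// the rectified pair; roi1/roi2 bound the valid pixels (alpha = 0 here).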

StereoRectify(InputArray, InputArray, InputArray, InputArray, Size, InputArray, InputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, StereoRectificationFlags, double, Size?)

computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters

public static void StereoRectify(InputArray cameraMatrix1, InputArray distCoeffs1, InputArray cameraMatrix2, InputArray distCoeffs2, Size imageSize, InputArray R, InputArray T, OutputArray R1, OutputArray R2, OutputArray P1, OutputArray P2, OutputArray Q, StereoRectificationFlags flags = StereoRectificationFlags.ZeroDisparity, double alpha = -1, Size? newImageSize = null)

Parameters

cameraMatrix1 InputArray

First camera matrix.

distCoeffs1 InputArray

First camera distortion parameters.

cameraMatrix2 InputArray

Second camera matrix.

distCoeffs2 InputArray

Second camera distortion parameters.

imageSize Size

Size of the image used for stereo calibration.

R InputArray

Rotation matrix between the coordinate systems of the first and the second cameras.

T InputArray

Translation vector between coordinate systems of the cameras.

R1 OutputArray

Output 3x3 rectification transform (rotation matrix) for the first camera.

R2 OutputArray

Output 3x3 rectification transform (rotation matrix) for the second camera.

P1 OutputArray

Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.

P2 OutputArray

Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.

Q OutputArray

Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ).

flags StereoRectificationFlags

Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.

alpha double

Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases.

newImageSize Size?

New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.

StereoRectify(double[,], double[], double[,], double[], Size, double[,], double[], out double[,], out double[,], out double[,], out double[,], out double[,], StereoRectificationFlags, double, Size, out Rect, out Rect)

computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters

public static void StereoRectify(double[,] cameraMatrix1, double[] distCoeffs1, double[,] cameraMatrix2, double[] distCoeffs2, Size imageSize, double[,] R, double[] T, out double[,] R1, out double[,] R2, out double[,] P1, out double[,] P2, out double[,] Q, StereoRectificationFlags flags, double alpha, Size newImageSize, out Rect validPixROI1, out Rect validPixROI2)

Parameters

cameraMatrix1 double[,]

First camera matrix.

distCoeffs1 double[]

First camera distortion parameters.

cameraMatrix2 double[,]

Second camera matrix.

distCoeffs2 double[]

Second camera distortion parameters.

imageSize Size

Size of the image used for stereo calibration.

R double[,]

Rotation matrix between the coordinate systems of the first and the second cameras.

T double[]

Translation vector between coordinate systems of the cameras.

R1 double[,]

Output 3x3 rectification transform (rotation matrix) for the first camera.

R2 double[,]

Output 3x3 rectification transform (rotation matrix) for the second camera.

P1 double[,]

Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.

P2 double[,]

Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.

Q double[,]

Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ).

flags StereoRectificationFlags

Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.

alpha double

Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases.

newImageSize Size

New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.

validPixROI1 Rect

Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller.

validPixROI2 Rect

Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller.

StereoRectify(double[,], double[], double[,], double[], Size, double[,], double[], out double[,], out double[,], out double[,], out double[,], out double[,], StereoRectificationFlags, double, Size?)

computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters

public static void StereoRectify(double[,] cameraMatrix1, double[] distCoeffs1, double[,] cameraMatrix2, double[] distCoeffs2, Size imageSize, double[,] R, double[] T, out double[,] R1, out double[,] R2, out double[,] P1, out double[,] P2, out double[,] Q, StereoRectificationFlags flags = StereoRectificationFlags.ZeroDisparity, double alpha = -1, Size? newImageSize = null)

Parameters

cameraMatrix1 double[,]

First camera matrix.

distCoeffs1 double[]

First camera distortion parameters.

cameraMatrix2 double[,]

Second camera matrix.

distCoeffs2 double[]

Second camera distortion parameters.

imageSize Size

Size of the image used for stereo calibration.

R double[,]

Rotation matrix between the coordinate systems of the first and the second cameras.

T double[]

Translation vector between coordinate systems of the cameras.

R1 double[,]

Output 3x3 rectification transform (rotation matrix) for the first camera.

R2 double[,]

Output 3x3 rectification transform (rotation matrix) for the second camera.

P1 double[,]

Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.

P2 double[,]

Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.

Q double[,]

Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ).

flags StereoRectificationFlags

Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.

alpha double

Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases.

newImageSize Size?

New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.

StereoRectifyUncalibrated(InputArray, InputArray, InputArray, Size, OutputArray, OutputArray, double)

computes the rectification transformation for an uncalibrated stereo camera (zero distortion is assumed)

public static bool StereoRectifyUncalibrated(InputArray points1, InputArray points2, InputArray F, Size imgSize, OutputArray H1, OutputArray H2, double threshold = 5)

Parameters

points1 InputArray

Array of feature points in the first image.

points2 InputArray

The corresponding points in the second image. The same formats as in findFundamentalMat() are supported.

F InputArray

Input fundamental matrix. It can be computed from the same set of point pairs using findFundamentalMat() .

imgSize Size

Size of the image.

H1 OutputArray

Output rectification homography matrix for the first image.

H2 OutputArray

Output rectification homography matrix for the second image.

threshold double

Optional threshold used to filter out the outliers. If the parameter is greater than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points for which |points2[i]^T * F * points1[i]| > threshold ) are rejected prior to computing the homographies. Otherwise, all the points are considered inliers.

Returns

bool
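
Examples

A hedged sketch of uncalibrated rectification. The image files are illustrative, and the matched points are shown as empty placeholders; real code fills pts1/pts2 from a feature detector and matcher:

using OpenCvSharp;

using var img1 = Cv2.ImRead("left.png", ImreadModes.Grayscale);
using var img2 = Cv2.ImRead("right.png", ImreadModes.Grayscale);
Size imgSize = img1.Size();

// Matched feature points packed as Nx1 two-channel Mats (computation omitted).
using var pts1 = new Mat(8, 1, MatType.CV_64FC2);
using var pts2 = new Mat(8, 1, MatType.CV_64FC2);
// ... fill pts1/pts2 with corresponding points ...

using var F = Cv2.FindFundamentalMat(pts1, pts2);   // from the same point pairs
using var H1 = new Mat();
using var H2 = new Mat();
if (Cv2.StereoRectifyUncalibrated(pts1, pts2, F, imgSize, H1, H2, threshold: 5))
{
    using var rect1 = new Mat();
    using var rect2 = new Mat();
    Cv2.WarpPerspective(img1, rect1, H1, imgSize);  // warp each image with its homography
    Cv2.WarpPerspective(img2, rect2, H2, imgSize);
}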

StereoRectifyUncalibrated(IEnumerable<Point2d>, IEnumerable<Point2d>, double[,], Size, out double[,], out double[,], double)

computes the rectification transformation for an uncalibrated stereo camera (zero distortion is assumed)

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static bool StereoRectifyUncalibrated(IEnumerable<Point2d> points1, IEnumerable<Point2d> points2, double[,] F, Size imgSize, out double[,] H1, out double[,] H2, double threshold = 5)

Parameters

points1 IEnumerable<Point2d>

Array of feature points in the first image.

points2 IEnumerable<Point2d>

The corresponding points in the second image. The same formats as in findFundamentalMat() are supported.

F double[,]

Input fundamental matrix. It can be computed from the same set of point pairs using findFundamentalMat() .

imgSize Size

Size of the image.

H1 double[,]

Output rectification homography matrix for the first image.

H2 double[,]

Output rectification homography matrix for the second image.

threshold double

Optional threshold used to filter out the outliers. If the parameter is greater than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points for which |points2[i]^T * F * points1[i]| > threshold ) are rejected prior to computing the homographies. Otherwise, all the points are considered inliers.

Returns

bool

Stylization(InputArray, OutputArray, float, float)

Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features.

public static void Stylization(InputArray src, OutputArray dst, float sigmaS = 60, float sigmaR = 0.45f)

Parameters

src InputArray

Input 8-bit 3-channel image.

dst OutputArray

Output image with the same size and type as src.

sigmaS float

Range between 0 and 200.

sigmaR float

Range between 0 and 1.
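
Examples

A minimal usage sketch; the file names and filter strengths are illustrative:

using OpenCvSharp;

using var src = Cv2.ImRead("photo.jpg", ImreadModes.Color);   // 8-bit 3-channel input
using var dst = new Mat();
Cv2.Stylization(src, dst, sigmaS: 60, sigmaR: 0.45f);
Cv2.ImWrite("photo_stylized.jpg", dst);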

Subtract(InputArray, InputArray, OutputArray, InputArray?, int)

Calculates per-element difference between two arrays or array and a scalar

public static void Subtract(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null, int dtype = -1)

Parameters

src1 InputArray

The first source array

src2 InputArray

The second source array. It must have the same size and same type as src1

dst OutputArray

The destination array; it will have the same size and same type as src1

mask InputArray

The optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null]

dtype int

Optional depth of the output array
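
Examples

A short sketch with illustrative file names, showing both a saturating 8-bit difference and a signed 16-bit difference requested through dtype:

using OpenCvSharp;

using var a = Cv2.ImRead("frame_a.png", ImreadModes.Grayscale);
using var b = Cv2.ImRead("frame_b.png", ImreadModes.Grayscale);

// 8-bit difference: negative results saturate to 0.
using var diff8 = new Mat();
Cv2.Subtract(a, b, diff8);

// Signed difference: request a 16-bit signed output via dtype.
using var diff16 = new Mat();
Cv2.Subtract(a, b, diff16, null, MatType.CV_16S);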

Subtract(InputArray, Scalar, OutputArray, InputArray?, int)

Calculates per-element difference between two arrays or array and a scalar

public static void Subtract(InputArray src1, Scalar src2, OutputArray dst, InputArray? mask = null, int dtype = -1)

Parameters

src1 InputArray

The first source array

src2 Scalar

The second source scalar; it is subtracted from each element of src1

dst OutputArray

The destination array; it will have the same size and same type as src1

mask InputArray

The optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null]

dtype int

Optional depth of the output array

Subtract(Scalar, InputArray, OutputArray, InputArray?, int)

Calculates per-element difference between two arrays or array and a scalar

public static void Subtract(Scalar src1, InputArray src2, OutputArray dst, InputArray? mask = null, int dtype = -1)

Parameters

src1 Scalar

The first source scalar; each element of src2 is subtracted from it

src2 InputArray

The second source array

dst OutputArray

The destination array; it will have the same size and same type as src2

mask InputArray

The optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null]

dtype int

Optional depth of the output array

Sum(InputArray)

computes sum of array elements

public static Scalar Sum(InputArray src)

Parameters

src InputArray

The source array; must have 1 to 4 channels

Returns

Scalar
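
Examples

A minimal sketch (the file name is illustrative); Sum returns one value per channel:

using OpenCvSharp;

using var img = Cv2.ImRead("input.png", ImreadModes.Color);
Scalar s = Cv2.Sum(img);                      // per-channel sums (B, G, R)
double total = s.Val0 + s.Val1 + s.Val2;      // combined sum over all channels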

TextureFlattening(InputArray, InputArray?, OutputArray, float, float, int)

By retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here Canny Edge Detector is used.

public static void TextureFlattening(InputArray src, InputArray? mask, OutputArray dst, float lowThreshold = 30, float highThreshold = 45, int kernelSize = 3)

Parameters

src InputArray

Input 8-bit 3-channel image.

mask InputArray

Input 8-bit 1 or 3-channel image.

dst OutputArray

Output image with the same size and type as src.

lowThreshold float

Range from 0 to 100.

highThreshold float

Value > 100.

kernelSize int

The size of the Sobel kernel to be used.
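
Examples

A hedged sketch that flattens the whole image by passing a full white mask; the file name and thresholds are illustrative:

using OpenCvSharp;

using var src = Cv2.ImRead("photo.jpg", ImreadModes.Color);
using var mask = new Mat(src.Size(), MatType.CV_8UC1, Scalar.All(255)); // select every pixel
using var dst = new Mat();
Cv2.TextureFlattening(src, mask, dst, lowThreshold: 30, highThreshold: 45, kernelSize: 3);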

Threshold(InputArray, OutputArray, double, double, ThresholdTypes)

Applies a fixed-level threshold to each array element.

public static double Threshold(InputArray src, OutputArray dst, double thresh, double maxval, ThresholdTypes type)

Parameters

src InputArray

input array (single-channel, 8-bit or 32-bit floating point).

dst OutputArray

output array of the same size and type as src.

thresh double

threshold value.

maxval double

maximum value to use with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.

type ThresholdTypes

thresholding type (see the details below).

Returns

double

the computed threshold value when type == OTSU
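
Examples

A minimal Otsu binarization sketch (the file name is illustrative); with ThresholdTypes.Otsu the thresh argument is ignored and the computed threshold is returned:

using OpenCvSharp;

using var gray = Cv2.ImRead("page.png", ImreadModes.Grayscale);
using var binary = new Mat();
double otsu = Cv2.Threshold(gray, binary, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);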

Trace(InputArray)

computes trace of a matrix

public static Scalar Trace(InputArray mtx)

Parameters

mtx InputArray

The source matrix

Returns

Scalar

Transform(InputArray, OutputArray, InputArray)

performs affine transformation of each element of multi-channel input matrix

public static void Transform(InputArray src, OutputArray dst, InputArray m)

Parameters

src InputArray

The source array; must have as many channels (1 to 4) as m.cols or m.cols-1

dst OutputArray

The destination array; will have the same size and depth as src and as many channels as m.rows

m InputArray

The transformation matrix
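
Examples

A sketch of a per-pixel channel mix: a 1x3 matrix maps each 3-channel BGR pixel to a single weighted sum (the weights and file name are illustrative):

using OpenCvSharp;

using var bgr = Cv2.ImRead("photo.jpg", ImreadModes.Color);
// One output channel because m has one row; three input channels because m has three columns.
using var m = new Mat(1, 3, MatType.CV_32FC1, new float[] { 0.114f, 0.587f, 0.299f });
using var weighted = new Mat();
Cv2.Transform(bgr, weighted, m);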

Transpose(InputArray, OutputArray)

transposes the matrix

public static void Transpose(InputArray src, OutputArray dst)

Parameters

src InputArray

The source array

dst OutputArray

The destination array of the same type as src

TriangulatePoints(InputArray, InputArray, InputArray, InputArray, OutputArray)

Reconstructs points by triangulation.

public static void TriangulatePoints(InputArray projMatr1, InputArray projMatr2, InputArray projPoints1, InputArray projPoints2, OutputArray points4D)

Parameters

projMatr1 InputArray

3x4 projection matrix of the first camera.

projMatr2 InputArray

3x4 projection matrix of the second camera.

projPoints1 InputArray

2xN array of feature points in the first image. In case of c++ version it can be also a vector of feature points or two-channel matrix of size 1xN or Nx1.

projPoints2 InputArray

2xN array of corresponding points in the second image. In case of c++ version it can be also a vector of feature points or two-channel matrix of size 1xN or Nx1.

points4D OutputArray

4xN array of reconstructed points in homogeneous coordinates.

TriangulatePoints(double[,], double[,], IEnumerable<Point2d>, IEnumerable<Point2d>)

Reconstructs points by triangulation.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec4d[] TriangulatePoints(double[,] projMatr1, double[,] projMatr2, IEnumerable<Point2d> projPoints1, IEnumerable<Point2d> projPoints2)

Parameters

projMatr1 double[,]

3x4 projection matrix of the first camera.

projMatr2 double[,]

3x4 projection matrix of the second camera.

projPoints1 IEnumerable<Point2d>

2xN array of feature points in the first image. In case of c++ version it can be also a vector of feature points or two-channel matrix of size 1xN or Nx1.

projPoints2 IEnumerable<Point2d>

2xN array of corresponding points in the second image. In case of c++ version it can be also a vector of feature points or two-channel matrix of size 1xN or Nx1.

Returns

Vec4d[]

4xN array of reconstructed points in homogeneous coordinates.
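
Examples

A self-contained sketch with illustrative projection matrices (P = K [R | t]); the first camera sits at the origin, the second is shifted along X, and the homogeneous results are divided by their fourth coordinate:

using System;
using OpenCvSharp;

var projMatr1 = new double[,] { { 500, 0, 320, 0 },   { 0, 500, 240, 0 }, { 0, 0, 1, 0 } };
var projMatr2 = new double[,] { { 500, 0, 320, -30 }, { 0, 500, 240, 0 }, { 0, 0, 1, 0 } };

var projPoints1 = new[] { new Point2d(330, 250) };    // observation in image 1
var projPoints2 = new[] { new Point2d(300, 250) };    // matching observation in image 2

Vec4d[] homogeneous = Cv2.TriangulatePoints(projMatr1, projMatr2, projPoints1, projPoints2);
foreach (var h in homogeneous)
{
    var p = new Point3d(h.Item0 / h.Item3, h.Item1 / h.Item3, h.Item2 / h.Item3);
    Console.WriteLine(p);                             // roughly (0.02, 0.02, 1.0) here
}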

Undistort(InputArray, OutputArray, InputArray, InputArray, InputArray?)

corrects lens distortion for the given camera matrix and distortion coefficients

public static void Undistort(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray? newCameraMatrix = null)

Parameters

src InputArray

Input (distorted) image.

dst OutputArray

Output (corrected) image that has the same size and type as src .

cameraMatrix InputArray

Input camera matrix

distCoeffs InputArray

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

newCameraMatrix InputArray

Camera matrix of the distorted image. By default, it is the same as cameraMatrix but you may additionally scale and shift the result by using a different matrix.
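
Examples

A minimal sketch; the intrinsics and distortion coefficients below are illustrative placeholders for values obtained from camera calibration:

using OpenCvSharp;

using var distorted = Cv2.ImRead("distorted.jpg", ImreadModes.Color);
using var cameraMatrix = new Mat(3, 3, MatType.CV_64FC1,
    new double[] { 500, 0, 320, 0, 500, 240, 0, 0, 1 });
using var distCoeffs = new Mat(1, 5, MatType.CV_64FC1,
    new double[] { -0.3, 0.1, 0, 0, 0 });             // k1, k2, p1, p2, k3
using var undistorted = new Mat();
Cv2.Undistort(distorted, undistorted, cameraMatrix, distCoeffs);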

UndistortPoints(InputArray, OutputArray, InputArray, InputArray, InputArray?, InputArray?)

Computes the ideal point coordinates from the observed point coordinates.

public static void UndistortPoints(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray? r = null, InputArray? p = null)

Parameters

src InputArray

Observed point coordinates, 1xN or Nx1 2-channel (CV_32FC2 or CV_64FC2).

dst OutputArray

Output ideal point coordinates after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.

cameraMatrix InputArray

Camera matrix

distCoeffs InputArray

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

r InputArray

Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity transformation is used.

p InputArray

New camera matrix (3x3) or new projection matrix (3x4). P1 or P2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity new camera matrix is used.
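
Examples

A minimal sketch with one illustrative observed pixel and placeholder intrinsics; with r and p omitted, the result is in normalized camera coordinates:

using OpenCvSharp;

using var src = new Mat(1, 1, MatType.CV_64FC2);      // 1x1 two-channel point list
src.Set(0, 0, new Point2d(400, 300));

using var cameraMatrix = new Mat(3, 3, MatType.CV_64FC1,
    new double[] { 500, 0, 320, 0, 500, 240, 0, 0, 1 });
using var distCoeffs = new Mat(1, 5, MatType.CV_64FC1,
    new double[] { -0.3, 0.1, 0, 0, 0 });
using var dst = new Mat();

Cv2.UndistortPoints(src, dst, cameraMatrix, distCoeffs);
Point2d normalized = dst.Get<Point2d>(0, 0);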

UndistortPointsIter(InputArray, OutputArray, InputArray, InputArray, InputArray?, InputArray?, TermCriteria?)

Computes the ideal point coordinates from the observed point coordinates.

public static void UndistortPointsIter(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray? r = null, InputArray? p = null, TermCriteria? termCriteria = null)

Parameters

src InputArray

Observed point coordinates, 1xN or Nx1 2-channel (CV_32FC2 or CV_64FC2).

dst OutputArray

Output ideal point coordinates after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.

cameraMatrix InputArray

Camera matrix

distCoeffs InputArray

Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.

r InputArray

Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity transformation is used.

p InputArray

New camera matrix (3x3) or new projection matrix (3x4). P1 or P2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity new camera matrix is used.

termCriteria TermCriteria?

Termination criteria for the iterative point undistortion algorithm.

UseOptimized()

Returns the current optimization status. The function returns the current optimization status, which is controlled by cv::setUseOptimized().

public static bool UseOptimized()

Returns

bool

VConcat(InputArray, InputArray, OutputArray)

Applies vertical concatenation to given matrices.

public static void VConcat(InputArray src1, InputArray src2, OutputArray dst)

Parameters

src1 InputArray

first input array to be considered for vertical concatenation.

src2 InputArray

second input array to be considered for vertical concatenation.

dst OutputArray

output array. It has the same number of cols and depth as the src1 and src2, and the sum of rows of the src1 and src2.
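
Examples

A minimal sketch (illustrative file names); both inputs must have the same width and depth:

using OpenCvSharp;

using var top = Cv2.ImRead("top.png", ImreadModes.Color);
using var bottom = Cv2.ImRead("bottom.png", ImreadModes.Color);
using var stacked = new Mat();
Cv2.VConcat(top, bottom, stacked);    // stacked.Rows == top.Rows + bottom.Rows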

VConcat(IEnumerable<Mat>, OutputArray)

Applies vertical concatenation to given matrices.

[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void VConcat(IEnumerable<Mat> src, OutputArray dst)

Parameters

src IEnumerable<Mat>

input array or vector of matrices. All of the matrices must have the same number of cols and the same depth.

dst OutputArray

output array. It has the same number of cols and depth as the src, and the sum of rows of the src.

ValidateDisparity(InputOutputArray, InputArray, int, int, int)

validates disparity using the left-right check. The matrix "cost" should be computed by the stereo correspondence algorithm

public static void ValidateDisparity(InputOutputArray disparity, InputArray cost, int minDisparity, int numberOfDisparities, int disp12MaxDisp = 1)

Parameters

disparity InputOutputArray

Input/output disparity map to be validated.

cost InputArray

Cost matrix computed by the stereo correspondence algorithm.

minDisparity int

Minimum possible disparity value.

numberOfDisparities int

Number of disparity levels (maximum disparity minus minimum disparity).

disp12MaxDisp int

Maximum allowed difference between the disparities found by the left-right check.

WaitKey(int)

Waits for a pressed key.

public static int WaitKey(int delay = 0)

Parameters

delay int

Delay in milliseconds. 0 is the special value that means "forever".

Returns

int

Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
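
Examples

A minimal display loop sketch (illustrative file and window names):

using OpenCvSharp;

using var img = Cv2.ImRead("input.png");
Cv2.ImShow("preview", img);
int key = Cv2.WaitKey(0);             // block until a key is pressed
if (key == 27)                        // Esc
    Cv2.DestroyAllWindows();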

WaitKeyEx(int)

Waits for a pressed key. Similar to WaitKey, but returns the full key code. The key code is implementation-specific and depends on the backend used: Qt/GTK/Win32/etc.

public static int WaitKeyEx(int delay = 0)

Parameters

delay int

Delay in milliseconds. 0 is the special value that means "forever".

Returns

int

Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.

WarpAffine(InputArray, OutputArray, InputArray, Size, InterpolationFlags, BorderTypes, Scalar?)

Applies an affine transformation to an image.

public static void WarpAffine(InputArray src, OutputArray dst, InputArray m, Size dsize, InterpolationFlags flags = InterpolationFlags.Linear, BorderTypes borderMode = BorderTypes.Constant, Scalar? borderValue = null)

Parameters

src InputArray

input image.

dst OutputArray

output image that has the size dsize and the same type as src.

m InputArray

2x3 transformation matrix.

dsize Size

size of the output image.

flags InterpolationFlags

combination of interpolation methods and the optional flag WARP_INVERSE_MAP, which means that M is the inverse transformation (dst -> src).

borderMode BorderTypes

pixel extrapolation method; when borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function.

borderValue Scalar?

value used in case of a constant border; by default, it is 0.
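
Examples

A sketch that rotates an image 30 degrees about its center; the file name is illustrative:

using OpenCvSharp;

using var src = Cv2.ImRead("photo.jpg", ImreadModes.Color);
var center = new Point2f(src.Cols / 2f, src.Rows / 2f);
using var m = Cv2.GetRotationMatrix2D(center, 30, 1.0);   // 2x3 affine matrix
using var dst = new Mat();
Cv2.WarpAffine(src, dst, m, src.Size(), InterpolationFlags.Linear,
    BorderTypes.Constant, Scalar.All(0));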

WarpPerspective(InputArray, OutputArray, InputArray, Size, InterpolationFlags, BorderTypes, Scalar?)

Applies a perspective transformation to an image.

public static void WarpPerspective(InputArray src, OutputArray dst, InputArray m, Size dsize, InterpolationFlags flags = InterpolationFlags.Linear, BorderTypes borderMode = BorderTypes.Constant, Scalar? borderValue = null)

Parameters

src InputArray

input image.

dst OutputArray

output image that has the size dsize and the same type as src.

m InputArray

3x3 transformation matrix.

dsize Size

size of the output image.

flags InterpolationFlags

combination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, that sets M as the inverse transformation (dst -> src).

borderMode BorderTypes

pixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE).

borderValue Scalar?

value used in case of a constant border; by default, it equals 0.
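
Examples

A sketch that maps an illustrative skewed quadrilateral to an upright rectangle; the corner coordinates and file name are placeholders:

using OpenCvSharp;

using var src = Cv2.ImRead("document.jpg", ImreadModes.Color);
Point2f[] srcQuad = { new Point2f(50, 80), new Point2f(420, 60), new Point2f(440, 500), new Point2f(30, 520) };
Point2f[] dstQuad = { new Point2f(0, 0), new Point2f(400, 0), new Point2f(400, 500), new Point2f(0, 500) };
using var m = Cv2.GetPerspectiveTransform(srcQuad, dstQuad);   // 3x3 homography
using var dst = new Mat();
Cv2.WarpPerspective(src, dst, m, new Size(400, 500));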

WarpPerspective(InputArray, OutputArray, float[,], Size, InterpolationFlags, BorderTypes, Scalar?)

Applies a perspective transformation to an image.

public static void WarpPerspective(InputArray src, OutputArray dst, float[,] m, Size dsize, InterpolationFlags flags = InterpolationFlags.Linear, BorderTypes borderMode = BorderTypes.Constant, Scalar? borderValue = null)

Parameters

src InputArray

input image.

dst OutputArray

output image that has the size dsize and the same type as src.

m float[,]

3x3 transformation matrix.

dsize Size

size of the output image.

flags InterpolationFlags

combination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, that sets M as the inverse transformation (dst -> src).

borderMode BorderTypes

pixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE).

borderValue Scalar?

value used in case of a constant border; by default, it equals 0.

WarpPolar(InputArray, OutputArray, Size, Point2f, double, InterpolationFlags, WarpPolarMode)

Remaps an image to polar or semilog-polar coordinates space.

public static void WarpPolar(InputArray src, OutputArray dst, Size dsize, Point2f center, double maxRadius, InterpolationFlags interpolationFlags, WarpPolarMode warpPolarMode)

Parameters

src InputArray

Source image.

dst OutputArray

Destination image. It will have same type as src.

dsize Size

The destination image size (see description for valid options).

center Point2f

The transformation center.

maxRadius double

The radius of the bounding circle to transform. It determines the inverse magnitude scale parameter too.

interpolationFlags InterpolationFlags

interpolation methods.

warpPolarMode WarpPolarMode

Polar mapping mode: selects a linear or semilog polar mapping (see WarpPolarMode).

Remarks

  • The function cannot operate in-place.
  • To calculate magnitude and angle in degrees, cartToPolar is used internally; thus angles are measured from 0 to 360 with an accuracy of about 0.3 degrees.
  • This function uses remap. Due to current implementation limitations, the size of the input and output images should be less than 32767x32767.
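
Examples

A minimal sketch that unwraps an image around its center into linear polar coordinates; the file name is illustrative, and a (0,0) dsize lets the function choose the output size:

using System;
using OpenCvSharp;

using var src = Cv2.ImRead("dial.png", ImreadModes.Color);
var center = new Point2f(src.Cols / 2f, src.Rows / 2f);
double maxRadius = Math.Min(src.Cols, src.Rows) / 2.0;
using var polar = new Mat();
Cv2.WarpPolar(src, polar, new Size(), center, maxRadius,
    InterpolationFlags.Linear, WarpPolarMode.Linear);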

Watershed(InputArray, InputOutputArray)

Performs a marker-based image segmentation using the watershed algorithm.

public static void Watershed(InputArray image, InputOutputArray markers)

Parameters

image InputArray

Input 8-bit 3-channel image.

markers InputOutputArray

Input/output 32-bit single-channel image (map) of markers. It should have the same size as image.
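
Examples

A rough seeding sketch: connected components of an Otsu-thresholded image provide the 32-bit markers that Watershed expects. Real pipelines usually add explicit background/unknown markers as well; file names are illustrative:

using OpenCvSharp;

using var image = Cv2.ImRead("coins.jpg", ImreadModes.Color);  // 8-bit 3-channel
using var gray = new Mat();
Cv2.CvtColor(image, gray, ColorConversionCodes.BGR2GRAY);

using var seeds = new Mat();
Cv2.Threshold(gray, seeds, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);

using var markers = new Mat();
Cv2.ConnectedComponents(seeds, markers, PixelConnectivity.Connectivity8); // CV_32S labels

Cv2.Watershed(image, markers);    // region boundaries are written as -1 into markers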