Class Cv2
- Namespace
- OpenCvSharp
- Assembly
- OpenCvSharp.dll
OpenCV functions of the C++ interface (cv::xxx)
public static class Cv2
- Inheritance
- object
- Cv2
- Inherited Members
Fields
FILLED
Thickness value (-1) that causes drawing functions to render a filled shape instead of an outline.
public const int FILLED = -1
Field Value
- int
LOG2
The natural logarithm of 2
public const double LOG2 = 0.6931471805599453
Field Value
- double
PI
The ratio of a circle's circumference to its diameter
public const double PI = 3.141592653589793
Field Value
- double
Methods
AGAST(InputArray, int, bool, DetectorType)
Detects corners using the AGAST algorithm
public static KeyPoint[] AGAST(InputArray image, int threshold, bool nonmaxSuppression, AgastFeatureDetector.DetectorType type)
Parameters
image (InputArray): grayscale image where keypoints (corners) are detected.
threshold (int): threshold on the difference between the intensity of the central pixel and the pixels of a circle around this pixel.
nonmaxSuppression (bool): if true, non-maximum suppression is applied to detected corners (keypoints).
type (AgastFeatureDetector.DetectorType): one of the four neighborhoods as defined in the paper.
Returns
- KeyPoint[]
keypoints detected on the image.
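A minimal usage sketch (the image path is a hypothetical placeholder; any 8-bit grayscale image works):

```csharp
using System;
using OpenCvSharp;

// Hypothetical input path; AGAST expects a grayscale image.
using var gray = Cv2.ImRead("scene.png", ImreadModes.Grayscale);

// Higher thresholds yield fewer, stronger corners.
KeyPoint[] keypoints = Cv2.AGAST(
    gray, threshold: 30, nonmaxSuppression: true,
    AgastFeatureDetector.DetectorType.OAST_9_16);

Console.WriteLine($"Detected {keypoints.Length} keypoints");
```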
Abs(Mat)
Computes absolute value of each matrix element
public static MatExpr Abs(Mat src)
Parameters
src (Mat): the input matrix.
Returns
- MatExpr
The absolute value of each element of src, as a matrix expression.
Abs(MatExpr)
Computes absolute value of each matrix element
public static MatExpr Abs(MatExpr src)
Parameters
src (MatExpr): the input matrix expression.
Returns
- MatExpr
The absolute value of each element of src, as a matrix expression.
Absdiff(InputArray, InputArray, OutputArray)
Calculates the per-element absolute difference between two arrays or between an array and a scalar.
public static void Absdiff(InputArray src1, InputArray src2, OutputArray dst)
Parameters
src1 (InputArray): first input array or a scalar.
src2 (InputArray): second input array or a scalar.
dst (OutputArray): output array that has the same size and type as the input arrays.
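A short sketch with small synthetic matrices:

```csharp
using OpenCvSharp;

using var a = new Mat(2, 2, MatType.CV_8UC1, new Scalar(10));
using var b = new Mat(2, 2, MatType.CV_8UC1, new Scalar(25));
using var diff = new Mat();

// Each element becomes |10 - 25| = 15; results are saturated for 8-bit types.
Cv2.Absdiff(a, b, diff);
```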
Accumulate(InputArray, InputOutputArray, InputArray)
Adds an image to the accumulator.
public static void Accumulate(InputArray src, InputOutputArray dst, InputArray mask)
Parameters
src (InputArray): input image as 1- or 3-channel, 8-bit or 32-bit floating point.
dst (InputOutputArray): accumulator image with the same number of channels as the input image, 32-bit or 64-bit floating-point.
mask (InputArray): optional operation mask.
AccumulateProduct(InputArray, InputArray, InputOutputArray, InputArray)
Adds the per-element product of two input images to the accumulator.
public static void AccumulateProduct(InputArray src1, InputArray src2, InputOutputArray dst, InputArray mask)
Parameters
src1 (InputArray): first input image, 1- or 3-channel, 8-bit or 32-bit floating point.
src2 (InputArray): second input image of the same type and the same size as src1.
dst (InputOutputArray): accumulator with the same number of channels as the input images, 32-bit or 64-bit floating-point.
mask (InputArray): optional operation mask.
AccumulateSquare(InputArray, InputOutputArray, InputArray)
Adds the square of a source image to the accumulator.
public static void AccumulateSquare(InputArray src, InputOutputArray dst, InputArray mask)
Parameters
src (InputArray): input image as 1- or 3-channel, 8-bit or 32-bit floating point.
dst (InputOutputArray): accumulator image with the same number of channels as the input image, 32-bit or 64-bit floating-point.
mask (InputArray): optional operation mask.
AccumulateWeighted(InputArray, InputOutputArray, double, InputArray)
Updates a running average.
public static void AccumulateWeighted(InputArray src, InputOutputArray dst, double alpha, InputArray mask)
Parameters
src (InputArray): input image as 1- or 3-channel, 8-bit or 32-bit floating point.
dst (InputOutputArray): accumulator image with the same number of channels as the input image, 32-bit or 64-bit floating-point.
alpha (double): weight of the input image.
mask (InputArray): optional operation mask.
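A sketch of the typical background-averaging loop. The single synthetic frame below stands in for a real video stream, and the full mask updates every pixel:

```csharp
using OpenCvSharp;

// Placeholder for frames read from a camera or video file.
using var frame = new Mat(240, 320, MatType.CV_8UC3, Scalar.All(128));

// The accumulator must be floating-point with the same channel count.
using var acc = new Mat(frame.Size(), MatType.CV_32FC3, Scalar.All(0));

// alpha controls how quickly old frames are forgotten (dst = (1-alpha)*dst + alpha*src).
using var mask = new Mat(frame.Size(), MatType.CV_8UC1, Scalar.All(255));
for (int i = 0; i < 10; i++)
    Cv2.AccumulateWeighted(frame, acc, alpha: 0.05, mask);
```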
AdaptiveThreshold(InputArray, OutputArray, double, AdaptiveThresholdTypes, ThresholdTypes, int, double)
Applies an adaptive threshold to an array.
public static void AdaptiveThreshold(InputArray src, OutputArray dst, double maxValue, AdaptiveThresholdTypes adaptiveMethod, ThresholdTypes thresholdType, int blockSize, double c)
Parameters
src (InputArray): source 8-bit single-channel image.
dst (OutputArray): destination image of the same size and the same type as src.
maxValue (double): non-zero value assigned to the pixels for which the condition is satisfied. See the details below.
adaptiveMethod (AdaptiveThresholdTypes): adaptive thresholding algorithm to use, ADAPTIVE_THRESH_MEAN_C or ADAPTIVE_THRESH_GAUSSIAN_C.
thresholdType (ThresholdTypes): thresholding type that must be either THRESH_BINARY or THRESH_BINARY_INV.
blockSize (int): size of the pixel neighborhood used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
c (double): constant subtracted from the mean or weighted mean (see the details below). Normally it is positive, but it may be zero or negative as well.
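A typical document-binarization sketch (the input path is a hypothetical placeholder):

```csharp
using OpenCvSharp;

using var src = Cv2.ImRead("page.png", ImreadModes.Grayscale); // hypothetical path
using var dst = new Mat();

// blockSize must be odd and greater than 1; c shifts the local threshold.
Cv2.AdaptiveThreshold(src, dst, maxValue: 255,
    AdaptiveThresholdTypes.GaussianC, ThresholdTypes.Binary,
    blockSize: 11, c: 2);
```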
Add(InputArray, InputArray, OutputArray, InputArray?, int)
Computes the per-element sum of two arrays or an array and a scalar.
public static void Add(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null, int dtype = -1)
Parameters
src1 (InputArray): the first source array.
src2 (InputArray): the second source array; it must have the same size and type as src1.
dst (OutputArray): the destination array; it will have the same size and type as src1.
mask (InputArray): the optional operation mask, an 8-bit single-channel array that specifies elements of the destination array to be changed. [By default this is null]
dtype (int): optional depth of the output array.
AddWeighted(InputArray, double, InputArray, double, double, OutputArray, int)
computes the weighted sum of two arrays (dst = alpha*src1 + beta*src2 + gamma)
public static void AddWeighted(InputArray src1, double alpha, InputArray src2, double beta, double gamma, OutputArray dst, int dtype = -1)
Parameters
src1 (InputArray)
alpha (double)
src2 (InputArray)
beta (double)
gamma (double)
dst (OutputArray)
dtype (int)
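A cross-fade sketch with synthetic images:

```csharp
using OpenCvSharp;

using var a = new Mat(100, 100, MatType.CV_8UC3, Scalar.All(200));
using var b = new Mat(100, 100, MatType.CV_8UC3, Scalar.All(50));
using var blend = new Mat();

// blend = 0.7*a + 0.3*b + 0, saturated to the 8-bit range.
Cv2.AddWeighted(a, 0.7, b, 0.3, 0, blend);
```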
AlignSize(int, int)
Aligns a buffer size to a multiple of the given number of bytes. This small inline function aligns the buffer size by enlarging it.
public static int AlignSize(int sz, int n)
Parameters
sz (int): the buffer size to align.
n (int): the alignment size; must be a power of two.
Returns
- int
The minimum number that is greater than or equal to sz and divisible by n.
ApplyColorMap(InputArray, OutputArray, ColormapTypes)
Applies a GNU Octave/MATLAB equivalent colormap on a given image.
public static void ApplyColorMap(InputArray src, OutputArray dst, ColormapTypes colormap)
Parameters
src (InputArray): the source image, grayscale or colored, of type CV_8UC1 or CV_8UC3.
dst (OutputArray): the result is the colormapped source image. Note: Mat::create is called on dst.
colormap (ColormapTypes): the colormap to apply.
ApplyColorMap(InputArray, OutputArray, InputArray)
Applies a user colormap on a given image.
public static void ApplyColorMap(InputArray src, OutputArray dst, InputArray userColor)
Parameters
src (InputArray): the source image, grayscale or colored, of type CV_8UC1 or CV_8UC3.
dst (OutputArray): the result is the colormapped source image. Note: Mat::create is called on dst.
userColor (InputArray): the colormap to apply, of type CV_8UC1 or CV_8UC3 and size 256.
ApproxPolyDP(InputArray, OutputArray, double, bool)
Approximates a contour or a curve using the Douglas-Peucker algorithm
public static void ApproxPolyDP(InputArray curve, OutputArray approxCurve, double epsilon, bool closed)
Parameters
curve (InputArray): the polygon or curve to approximate; must be a 1 x N or N x 1 matrix of type CV_32SC2 or CV_32FC2.
approxCurve (OutputArray): the result of the approximation; the type should match the type of the input curve.
epsilon (double): specifies the approximation accuracy; this is the maximum distance between the original curve and its approximation.
closed (bool): if true, the approximated curve is closed (i.e. its first and last vertices are connected); otherwise it is not.
ApproxPolyDP(IEnumerable<Point2f>, double, bool)
Approximates a contour or a curve using the Douglas-Peucker algorithm
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Point2f[] ApproxPolyDP(IEnumerable<Point2f> curve, double epsilon, bool closed)
Parameters
curve (IEnumerable<Point2f>): the polygon or curve to approximate.
epsilon (double): specifies the approximation accuracy; this is the maximum distance between the original curve and its approximation.
closed (bool): if true, the approximated curve is closed (i.e. its first and last vertices are connected); otherwise it is not.
Returns
- Point2f[]
The result of the approximation; the type matches the type of the input curve.
ApproxPolyDP(IEnumerable<Point>, double, bool)
Approximates a contour or a curve using the Douglas-Peucker algorithm
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Point[] ApproxPolyDP(IEnumerable<Point> curve, double epsilon, bool closed)
Parameters
curve (IEnumerable<Point>): the polygon or curve to approximate.
epsilon (double): specifies the approximation accuracy; this is the maximum distance between the original curve and its approximation.
closed (bool): if true, the approximated curve is closed (i.e. its first and last vertices are connected); otherwise it is not.
Returns
- Point[]
The result of the approximation; the type matches the type of the input curve.
ArcLength(InputArray, bool)
Calculates a contour perimeter or a curve length.
public static double ArcLength(InputArray curve, bool closed)
Parameters
curve (InputArray): the input vector of 2D points, represented by a CV_32SC2 or CV_32FC2 matrix.
closed (bool): indicates whether the curve is closed or not.
Returns
- double
The calculated perimeter or curve length.
ArcLength(IEnumerable<Point2f>, bool)
Calculates a contour perimeter or a curve length.
public static double ArcLength(IEnumerable<Point2f> curve, bool closed)
Parameters
curve (IEnumerable<Point2f>): the input vector of 2D points.
closed (bool): indicates whether the curve is closed or not.
Returns
- double
The calculated perimeter or curve length.
ArcLength(IEnumerable<Point>, bool)
Calculates a contour perimeter or a curve length.
public static double ArcLength(IEnumerable<Point> curve, bool closed)
Parameters
curve (IEnumerable<Point>): the input vector of 2D points.
closed (bool): indicates whether the curve is closed or not.
Returns
- double
The calculated perimeter or curve length.
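ArcLength and ApproxPolyDP are commonly combined: the perimeter gives a scale-relative epsilon for the approximation. A sketch with a hypothetical noisy contour:

```csharp
using OpenCvSharp;

// A hypothetical noisy quadrilateral contour.
Point[] contour =
{
    new(0, 0), new(50, 2), new(100, 0),
    new(100, 100), new(0, 100),
};

// Epsilon as 1% of the perimeter is a common heuristic; a larger epsilon
// removes more vertices.
double epsilon = 0.01 * Cv2.ArcLength(contour, closed: true);
Point[] approx = Cv2.ApproxPolyDP(contour, epsilon, closed: true);
```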
ArrowedLine(InputOutputArray, Point, Point, Scalar, int, LineTypes, int, double)
Draws an arrow segment pointing from the first point to the second one. The function ArrowedLine draws an arrow between the pt1 and pt2 points in the image. See also cv::line.
public static void ArrowedLine(InputOutputArray img, Point pt1, Point pt2, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0, double tipLength = 0.1)
Parameters
img (InputOutputArray): image.
pt1 (Point): the point the arrow starts from.
pt2 (Point): the point the arrow points to.
color (Scalar): line color.
thickness (int): line thickness.
lineType (LineTypes): type of the line; see cv::LineTypes.
shift (int): number of fractional bits in the point coordinates.
tipLength (double): the length of the arrow tip in relation to the arrow length.
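A drawing sketch on a blank canvas:

```csharp
using OpenCvSharp;

using var canvas = new Mat(200, 300, MatType.CV_8UC3, Scalar.All(0));

// tipLength is relative: 0.2 means the tip spans 20% of the arrow's length.
Cv2.ArrowedLine(canvas, new Point(20, 100), new Point(280, 100),
    Scalar.Red, thickness: 2, LineTypes.AntiAlias, tipLength: 0.2);
```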
BatchDistance(InputArray, InputArray, OutputArray, int, OutputArray, NormTypes, int, InputArray?, int, bool)
Naive (brute-force) nearest-neighbor finder.
public static void BatchDistance(InputArray src1, InputArray src2, OutputArray dist, int dtype, OutputArray nidx, NormTypes normType = NormTypes.L2, int k = 0, InputArray? mask = null, int update = 0, bool crosscheck = false)
Parameters
src1 (InputArray)
src2 (InputArray)
dist (OutputArray)
dtype (int)
nidx (OutputArray)
normType (NormTypes)
k (int)
mask (InputArray)
update (int)
crosscheck (bool)
BilateralFilter(InputArray, OutputArray, int, double, double, BorderTypes)
Applies bilateral filter to the image
public static void BilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, BorderTypes borderType = BorderTypes.Default)
Parameters
src (InputArray): the source 8-bit or floating-point, 1-channel or 3-channel image.
dst (OutputArray): the destination image; will have the same size and the same type as src.
d (int): the diameter of each pixel neighborhood used during filtering. If it is non-positive, it is computed from sigmaSpace.
sigmaColor (double): filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace (double): filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other (as long as their colors are close enough; see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace; otherwise d is proportional to sigmaSpace.
borderType (BorderTypes): the border mode used to extrapolate pixels outside of the image.
BitwiseAnd(InputArray, InputArray, OutputArray, InputArray?)
computes bitwise conjunction of the two arrays (dst = src1 & src2)
public static void BitwiseAnd(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null)
Parameters
src1 (InputArray): first input array or a scalar.
src2 (InputArray): second input array or a scalar.
dst (OutputArray): output array that has the same size and type as the input arrays.
mask (InputArray): optional operation mask, an 8-bit single-channel array that specifies elements of the output array to be changed.
BitwiseNot(InputArray, OutputArray, InputArray?)
inverts each bit of array (dst = ~src)
public static void BitwiseNot(InputArray src, OutputArray dst, InputArray? mask = null)
Parameters
src (InputArray): input array.
dst (OutputArray): output array that has the same size and type as the input.
mask (InputArray): optional operation mask, an 8-bit single-channel array that specifies elements of the output array to be changed.
BitwiseOr(InputArray, InputArray, OutputArray, InputArray?)
computes bitwise disjunction of the two arrays (dst = src1 | src2)
public static void BitwiseOr(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null)
Parameters
src1 (InputArray): first input array or a scalar.
src2 (InputArray): second input array or a scalar.
dst (OutputArray): output array that has the same size and type as the input arrays.
mask (InputArray): optional operation mask, an 8-bit single-channel array that specifies elements of the output array to be changed.
BitwiseXor(InputArray, InputArray, OutputArray, InputArray?)
computes bitwise exclusive-or of the two arrays (dst = src1 ^ src2)
public static void BitwiseXor(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null)
Parameters
src1 (InputArray): first input array or a scalar.
src2 (InputArray): second input array or a scalar.
dst (OutputArray): output array that has the same size and type as the input arrays.
mask (InputArray): optional operation mask, an 8-bit single-channel array that specifies elements of the output array to be changed.
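The four bitwise operations share the same calling pattern; a sketch with single-pixel matrices makes the element-wise semantics visible:

```csharp
using OpenCvSharp;

using var a = new Mat(1, 1, MatType.CV_8UC1, new Scalar(0b1100));
using var b = new Mat(1, 1, MatType.CV_8UC1, new Scalar(0b1010));
using var dst = new Mat();

Cv2.BitwiseAnd(a, b, dst); // element-wise a & b
Cv2.BitwiseOr(a, b, dst);  // element-wise a | b
Cv2.BitwiseXor(a, b, dst); // element-wise a ^ b
Cv2.BitwiseNot(a, dst);    // element-wise ~a
```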
BlendLinear(InputArray, InputArray, InputArray, InputArray, OutputArray)
Performs linear blending of two images: dst(i,j) = weights1(i,j)*src1(i,j) + weights2(i,j)*src2(i,j)
public static void BlendLinear(InputArray src1, InputArray src2, InputArray weights1, InputArray weights2, OutputArray dst)
Parameters
src1 (InputArray): has a type of CV_8UC(n) or CV_32FC(n), where n is a positive integer.
src2 (InputArray): has the same type and size as src1.
weights1 (InputArray): has a type of CV_32FC1 and the same size as src1.
weights2 (InputArray): has a type of CV_32FC1 and the same size as src1.
dst (OutputArray): is created if it does not have the same size and type as src1.
Blur(InputArray, OutputArray, Size, Point?, BorderTypes)
Smoothes image using normalized box filter
public static void Blur(InputArray src, OutputArray dst, Size ksize, Point? anchor = null, BorderTypes borderType = BorderTypes.Default)
Parameters
src (InputArray): the source image.
dst (OutputArray): the destination image; will have the same size and the same type as src.
ksize (Size): the smoothing kernel size.
anchor (Point?): the anchor point. The default value Point(-1,-1) means that the anchor is at the kernel center.
borderType (BorderTypes): the border mode used to extrapolate pixels outside of the image.
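A minimal smoothing sketch (the input path is a hypothetical placeholder):

```csharp
using OpenCvSharp;

using var src = Cv2.ImRead("photo.jpg"); // hypothetical path
using var dst = new Mat();

// A 5x5 normalized box filter; the default anchor Point(-1,-1)
// places the anchor at the kernel center.
Cv2.Blur(src, dst, new Size(5, 5));
```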
BorderInterpolate(int, int, BorderTypes)
Computes the source location of an extrapolated pixel.
public static int BorderInterpolate(int p, int len, BorderTypes borderType)
Parameters
p (int): 0-based coordinate of the extrapolated pixel along one of the axes, likely < 0 or >= len.
len (int): length of the array along the corresponding axis.
borderType (BorderTypes): border type, one of the BorderTypes, except for BORDER_TRANSPARENT and BORDER_ISOLATED. When borderType == BORDER_CONSTANT, the function always returns -1, regardless of p and len.
Returns
- int
The source location of the extrapolated pixel.
BoundingRect(InputArray)
Calculates the up-right bounding rectangle of a point set.
public static Rect BoundingRect(InputArray curve)
Parameters
curve (InputArray): the input 2D point set, represented by a CV_32SC2 or CV_32FC2 matrix.
Returns
- Rect
Minimal up-right bounding rectangle for the specified point set.
BoundingRect(IEnumerable<Point2f>)
Calculates the up-right bounding rectangle of a point set.
public static Rect BoundingRect(IEnumerable<Point2f> curve)
Parameters
curve (IEnumerable<Point2f>): the input 2D point set.
Returns
- Rect
Minimal up-right bounding rectangle for the specified point set.
BoundingRect(IEnumerable<Point>)
Calculates the up-right bounding rectangle of a point set.
public static Rect BoundingRect(IEnumerable<Point> curve)
Parameters
curve (IEnumerable<Point>): the input 2D point set.
Returns
- Rect
Minimal up-right bounding rectangle for the specified point set.
BoxFilter(InputArray, OutputArray, MatType, Size, Point?, bool, BorderTypes)
Smoothes image using box filter
public static void BoxFilter(InputArray src, OutputArray dst, MatType ddepth, Size ksize, Point? anchor = null, bool normalize = true, BorderTypes borderType = BorderTypes.Default)
Parameters
src (InputArray): the source image.
dst (OutputArray): the destination image; will have the same size and the same type as src.
ddepth (MatType): the output image depth.
ksize (Size): the smoothing kernel size.
anchor (Point?): the anchor point. The default value Point(-1,-1) means that the anchor is at the kernel center.
normalize (bool): indicates whether the kernel is normalized by its area or not.
borderType (BorderTypes): the border mode used to extrapolate pixels outside of the image.
BoxPoints(RotatedRect)
Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.
The function finds the four vertices of a rotated rectangle. This function is useful to draw the rectangle. In C++, instead of using this function, you can directly use the RotatedRect::points method. Please visit the tutorial on creating bounding rotated boxes and ellipses for contours for more information.
public static Point2f[] BoxPoints(RotatedRect box)
Parameters
box (RotatedRect): the input rotated rectangle. It may be the output of MinAreaRect.
Returns
- Point2f[]
The output array of four vertices of rectangles.
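A sketch recovering the corners of the minimum-area rectangle around a point set (MinAreaRect is not documented in this excerpt, but is the usual producer of the RotatedRect):

```csharp
using OpenCvSharp;

// A hypothetical 2D point cloud.
Point2f[] cloud =
{
    new(10, 10), new(60, 20), new(55, 70), new(8, 55),
};

// MinAreaRect produces a RotatedRect; BoxPoints recovers its
// four corners, e.g. for drawing with Cv2.Polylines.
RotatedRect box = Cv2.MinAreaRect(cloud);
Point2f[] corners = Cv2.BoxPoints(box);
```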
BoxPoints(RotatedRect, OutputArray)
Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.
The function finds the four vertices of a rotated rectangle. This function is useful to draw the rectangle. In C++, instead of using this function, you can directly use the RotatedRect::points method. Please visit the tutorial on creating bounding rotated boxes and ellipses for contours for more information.
public static void BoxPoints(RotatedRect box, OutputArray points)
Parameters
box (RotatedRect): the input rotated rectangle. It may be the output of MinAreaRect.
points (OutputArray): the output array of four vertices of rectangles.
BuildOpticalFlowPyramid(InputArray, out Mat[], Size, int, bool, BorderTypes, BorderTypes, bool)
Constructs a pyramid which can be used as input for calcOpticalFlowPyrLK
public static int BuildOpticalFlowPyramid(InputArray img, out Mat[] pyramid, Size winSize, int maxLevel, bool withDerivatives = true, BorderTypes pyrBorder = BorderTypes.Default, BorderTypes derivBorder = BorderTypes.Constant, bool tryReuseInputImage = true)
Parameters
img (InputArray): 8-bit input image.
pyramid (Mat[]): output pyramid.
winSize (Size): window size of the optical flow algorithm. Must be not less than the winSize argument of CalcOpticalFlowPyrLK(); it is needed to calculate the required padding for pyramid levels.
maxLevel (int): 0-based maximal pyramid level number.
withDerivatives (bool): set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then CalcOpticalFlowPyrLK() will calculate them internally.
pyrBorder (BorderTypes): the border mode for pyramid layers.
derivBorder (BorderTypes): the border mode for gradients.
tryReuseInputImage (bool): put the ROI of the input image into the pyramid if possible. You can pass false to force data copying.
Returns
- int
number of levels in constructed pyramid. Can be less than maxLevel.
BuildOpticalFlowPyramid(InputArray, OutputArray, Size, int, bool, BorderTypes, BorderTypes, bool)
Constructs a pyramid which can be used as input for calcOpticalFlowPyrLK
public static int BuildOpticalFlowPyramid(InputArray img, OutputArray pyramid, Size winSize, int maxLevel, bool withDerivatives = true, BorderTypes pyrBorder = BorderTypes.Default, BorderTypes derivBorder = BorderTypes.Constant, bool tryReuseInputImage = true)
Parameters
img (InputArray): 8-bit input image.
pyramid (OutputArray): output pyramid.
winSize (Size): window size of the optical flow algorithm. Must be not less than the winSize argument of CalcOpticalFlowPyrLK(); it is needed to calculate the required padding for pyramid levels.
maxLevel (int): 0-based maximal pyramid level number.
withDerivatives (bool): set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then CalcOpticalFlowPyrLK() will calculate them internally.
pyrBorder (BorderTypes): the border mode for pyramid layers.
derivBorder (BorderTypes): the border mode for gradients.
tryReuseInputImage (bool): put the ROI of the input image into the pyramid if possible. You can pass false to force data copying.
Returns
- int
number of levels in constructed pyramid. Can be less than maxLevel.
BuildPyramid(InputArray, VectorOfMat, int, BorderTypes)
Constructs the Gaussian pyramid for an image.
public static void BuildPyramid(InputArray src, VectorOfMat dst, int maxlevel, BorderTypes borderType = BorderTypes.Default)
Parameters
src (InputArray): source image; check PyrDown() for the list of supported types.
dst (VectorOfMat): destination vector of maxlevel+1 images of the same type as src.
maxlevel (int): 0-based index of the last (the smallest) pyramid layer; it must be non-negative.
borderType (BorderTypes): pixel extrapolation method.
CalcBackProject(Mat[], int[], InputArray, OutputArray, Rangef[], bool)
Computes the back projection of a histogram.
public static void CalcBackProject(Mat[] images, int[] channels, InputArray hist, OutputArray backProject, Rangef[] ranges, bool uniform = true)
Parameters
images (Mat[])
channels (int[])
hist (InputArray)
backProject (OutputArray)
ranges (Rangef[])
uniform (bool)
CalcCovarMatrix(InputArray, OutputArray, InputOutputArray, CovarFlags, MatType?)
computes covariation matrix of a set of samples
public static void CalcCovarMatrix(InputArray samples, OutputArray covar, InputOutputArray mean, CovarFlags flags, MatType? ctype = null)
Parameters
samples (InputArray): samples stored as rows/columns of a single matrix.
covar (OutputArray): output covariance matrix of the type ctype and square size.
mean (InputOutputArray): input or output (depending on the flags) array as the average value of the input vectors.
flags (CovarFlags): operation flags as a combination of CovarFlags.
ctype (MatType?): type of the matrix; it equals CV_64F by default.
CalcCovarMatrix(Mat[], Mat, Mat, CovarFlags, MatType?)
computes covariation matrix of a set of samples
public static void CalcCovarMatrix(Mat[] samples, Mat covar, Mat mean, CovarFlags flags, MatType? ctype = null)
Parameters
samples (Mat[]): samples stored as separate matrices.
covar (Mat): output covariance matrix of the type ctype and square size.
mean (Mat): input or output (depending on the flags) array as the average value of the input vectors.
flags (CovarFlags): operation flags as a combination of CovarFlags.
ctype (MatType?): type of the matrix; it equals CV_64F by default.
CalcHist(Mat[], int[], InputArray?, OutputArray, int, int[], Rangef[], bool, bool)
computes the joint dense histogram for a set of images.
public static void CalcHist(Mat[] images, int[] channels, InputArray? mask, OutputArray hist, int dims, int[] histSize, Rangef[] ranges, bool uniform = true, bool accumulate = false)
Parameters
images (Mat[])
channels (int[])
mask (InputArray)
hist (OutputArray)
dims (int)
histSize (int[])
ranges (Rangef[])
uniform (bool)
accumulate (bool)
CalcHist(Mat[], int[], InputArray?, OutputArray, int, int[], float[][], bool, bool)
computes the joint dense histogram for a set of images.
public static void CalcHist(Mat[] images, int[] channels, InputArray? mask, OutputArray hist, int dims, int[] histSize, float[][] ranges, bool uniform = true, bool accumulate = false)
Parameters
images (Mat[])
channels (int[])
mask (InputArray)
hist (OutputArray)
dims (int)
histSize (int[])
ranges (float[][])
uniform (bool)
accumulate (bool)
CalcOpticalFlowFarneback(InputArray, InputArray, InputOutputArray, double, int, int, int, int, double, OpticalFlowFlags)
Computes a dense optical flow using the Gunnar Farneback's algorithm.
public static void CalcOpticalFlowFarneback(InputArray prev, InputArray next, InputOutputArray flow, double pyrScale, int levels, int winsize, int iterations, int polyN, double polySigma, OpticalFlowFlags flags)
Parameters
prev (InputArray): first 8-bit single-channel input image.
next (InputArray): second input image of the same size and the same type as prev.
flow (InputOutputArray): computed flow image that has the same size as prev and type CV_32FC2.
pyrScale (double): parameter specifying the image scale (<1) to build pyramids for each image; pyrScale=0.5 means a classical pyramid, where each next layer is twice smaller than the previous one.
levels (int): number of pyramid layers including the initial image; levels=1 means that no extra layers are created and only the original images are used.
winsize (int): averaging window size; larger values increase the algorithm's robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field.
iterations (int): number of iterations the algorithm does at each pyramid level.
polyN (int): size of the pixel neighborhood used to find polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field; typically polyN = 5 or 7.
polySigma (double): standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion; for polyN=5 you can set polySigma=1.1, for polyN=7 a good value would be polySigma=1.5.
flags (OpticalFlowFlags): operation flags that can be a combination of OPTFLOW_USE_INITIAL_FLOW and/or OPTFLOW_FARNEBACK_GAUSSIAN.
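A dense-flow sketch between two consecutive frames (hypothetical file paths; parameter values follow commonly used defaults from the OpenCV samples):

```csharp
using OpenCvSharp;

using var prev = Cv2.ImRead("frame0.png", ImreadModes.Grayscale); // hypothetical paths
using var next = Cv2.ImRead("frame1.png", ImreadModes.Grayscale);
using var flow = new Mat(); // will become CV_32FC2: per-pixel (dx, dy)

Cv2.CalcOpticalFlowFarneback(prev, next, flow,
    pyrScale: 0.5, levels: 3, winsize: 15, iterations: 3,
    polyN: 5, polySigma: 1.2, OpticalFlowFlags.None);
```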
CalcOpticalFlowPyrLK(InputArray, InputArray, InputArray, InputOutputArray, OutputArray, OutputArray, Size?, int, TermCriteria?, OpticalFlowFlags, double)
computes sparse optical flow using multi-scale Lucas-Kanade algorithm
public static void CalcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg, InputArray prevPts, InputOutputArray nextPts, OutputArray status, OutputArray err, Size? winSize = null, int maxLevel = 3, TermCriteria? criteria = null, OpticalFlowFlags flags = OpticalFlowFlags.None, double minEigThreshold = 0.0001)
Parameters
prevImg (InputArray)
nextImg (InputArray)
prevPts (InputArray)
nextPts (InputOutputArray)
status (OutputArray)
err (OutputArray)
winSize (Size?)
maxLevel (int)
criteria (TermCriteria?)
flags (OpticalFlowFlags)
minEigThreshold (double)
CalcOpticalFlowPyrLK(InputArray, InputArray, Point2f[], ref Point2f[], out byte[], out float[], Size?, int, TermCriteria?, OpticalFlowFlags, double)
computes sparse optical flow using multi-scale Lucas-Kanade algorithm
public static void CalcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg, Point2f[] prevPts, ref Point2f[] nextPts, out byte[] status, out float[] err, Size? winSize = null, int maxLevel = 3, TermCriteria? criteria = null, OpticalFlowFlags flags = OpticalFlowFlags.None, double minEigThreshold = 0.0001)
Parameters
prevImg (InputArray)
nextImg (InputArray)
prevPts (Point2f[])
nextPts (Point2f[])
status (byte[])
err (float[])
winSize (Size?)
maxLevel (int)
criteria (TermCriteria?)
flags (OpticalFlowFlags)
minEigThreshold (double)
CalibrateCamera(IEnumerable<Mat>, IEnumerable<Mat>, Size, InputOutputArray, InputOutputArray, out Mat[], out Mat[], CalibrationFlags, TermCriteria?)
Finds intrinsic and extrinsic camera parameters from several views of a known calibration pattern.
public static double CalibrateCamera(IEnumerable<Mat> objectPoints, IEnumerable<Mat> imagePoints, Size imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs, out Mat[] rvecs, out Mat[] tvecs, CalibrationFlags flags = CalibrationFlags.None, TermCriteria? criteria = null)
Parameters
objectPoints (IEnumerable<Mat>): in the new interface, a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns, or even different patterns in different views; then the vectors will be different. The points are 3D, but since they are in a pattern coordinate system, if the rig is planar it may make sense to put the model on an XY coordinate plane so that the Z-coordinate of each input object point is 0. In the old interface, all the vectors of object points from different views are concatenated together.
imagePoints (IEnumerable<Mat>): in the new interface, a vector of vectors of the projections of calibration pattern points. imagePoints.Count() must be equal to objectPoints.Count(), and imagePoints[i].Count() must be equal to objectPoints[i].Count() for each i.
imageSize (Size): size of the image, used only to initialize the intrinsic camera matrix.
cameraMatrix (InputOutputArray): output 3x3 floating-point camera matrix. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.
distCoeffs (InputOutputArray): output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements.
rvecs (Mat[]): output vector of rotation vectors (see Rodrigues()) estimated for each pattern view. Each k-th rotation vector, together with the corresponding k-th translation vector, brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, the real position of the calibration pattern in the k-th pattern view (k = 0..M-1).
tvecs (Mat[]): output vector of translation vectors estimated for each pattern view.
flags (CalibrationFlags): different flags that may be zero or a combination of the CalibrationFlags values.
criteria (TermCriteria?): termination criteria for the iterative optimization algorithm.
Returns
- double
Root mean square (RMS) re-projection error. A value below 1.0 is generally considered acceptable for a good calibration.
CalibrateCamera(IEnumerable<IEnumerable<Point3f>>, IEnumerable<IEnumerable<Point2f>>, Size, double[,], double[], out Vec3d[], out Vec3d[], CalibrationFlags, TermCriteria?)
Finds intrinsic and extrinsic camera parameters from several views of a known calibration pattern.
public static double CalibrateCamera(IEnumerable<IEnumerable<Point3f>> objectPoints, IEnumerable<IEnumerable<Point2f>> imagePoints, Size imageSize, double[,] cameraMatrix, double[] distCoeffs, out Vec3d[] rvecs, out Vec3d[] tvecs, CalibrationFlags flags = CalibrationFlags.None, TermCriteria? criteria = null)
Parameters
objectPoints (IEnumerable<IEnumerable<Point3f>>): in the new interface, a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns, or even different patterns in different views; then the vectors will be different. The points are 3D, but since they are in a pattern coordinate system, if the rig is planar it may make sense to put the model on an XY coordinate plane so that the Z-coordinate of each input object point is 0. In the old interface, all the vectors of object points from different views are concatenated together.
imagePoints (IEnumerable<IEnumerable<Point2f>>): in the new interface, a vector of vectors of the projections of calibration pattern points. imagePoints.Count() must be equal to objectPoints.Count(), and imagePoints[i].Count() must be equal to objectPoints[i].Count() for each i.
imageSize (Size): size of the image, used only to initialize the intrinsic camera matrix.
cameraMatrix (double[,]): output 3x3 floating-point camera matrix. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.
distCoeffs (double[]): output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements.
rvecs (Vec3d[]): output vector of rotation vectors (see Rodrigues()) estimated for each pattern view. Each k-th rotation vector, together with the corresponding k-th translation vector, brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, the real position of the calibration pattern in the k-th pattern view (k = 0..M-1).
tvecs (Vec3d[]): output vector of translation vectors estimated for each pattern view.
flags (CalibrationFlags): different flags that may be zero or a combination of the CalibrationFlags values.
criteria (TermCriteria?): termination criteria for the iterative optimization algorithm.
Returns
- double
Root mean square (RMS) re-projection error. A value below 1.0 is generally considered acceptable for a good calibration.
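A minimal usage sketch of this overload. The image size and array sizes below are illustrative assumptions; in practice the point lists would be populated per view, e.g. from Cv2.FindChessboardCorners:

```csharp
using System;
using System.Collections.Generic;
using OpenCvSharp;

// One entry per calibration view; filling these from detected pattern
// corners (e.g. Cv2.FindChessboardCorners) is assumed to happen elsewhere.
var objectPoints = new List<List<Point3f>>();
var imagePoints = new List<List<Point2f>>();

var cameraMatrix = new double[3, 3];
var distCoeffs = new double[5];
double rms = Cv2.CalibrateCamera(
    objectPoints, imagePoints, new Size(640, 480),
    cameraMatrix, distCoeffs, out Vec3d[] rvecs, out Vec3d[] tvecs);
Console.WriteLine($"RMS re-projection error: {rms:F4}");
```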
CalibrateHandEye(IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, OutputArray, OutputArray, HandEyeCalibrationMethod)
Computes Hand-Eye calibration.
The function performs the Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions); the following methods are implemented:
- R.Tsai, R.Lenz A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration \cite Tsai89
- F.Park, B.Martin Robot Sensor Calibration: Solving AX = XB on the Euclidean Group \cite Park94
- R.Horaud, F.Dornaika Hand-Eye Calibration \cite Horaud95
Another approach consists in estimating the rotation and the translation simultaneously (simultaneous solutions), with the following implemented methods:
- N.Andreff, R.Horaud, B.Espiau On-line Hand-Eye Calibration \cite Andreff99
- K.Daniilidis Hand-Eye Calibration Using Dual Quaternions \cite Daniilidis98
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void CalibrateHandEye(IEnumerable<Mat> R_gripper2base, IEnumerable<Mat> t_gripper2base, IEnumerable<Mat> R_target2cam, IEnumerable<Mat> t_target2cam, OutputArray R_cam2gripper, OutputArray t_cam2gripper, HandEyeCalibrationMethod method = HandEyeCalibrationMethod.TSAI)
Parameters
R_gripper2baseIEnumerable<Mat>Rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame. This is a vector (vector<Mat>) that contains the rotation matrices for all the transformations from gripper frame to robot base frame.
t_gripper2baseIEnumerable<Mat>Translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame. This is a vector (vector<Mat>) that contains the translation vectors for all the transformations from gripper frame to robot base frame.
R_target2camIEnumerable<Mat>Rotation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame. This is a vector (vector<Mat>) that contains the rotation matrices for all the transformations from calibration target frame to camera frame.
t_target2camIEnumerable<Mat>Translation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame. This is a vector (vector<Mat>) that contains the translation vectors for all the transformations from calibration target frame to camera frame.
R_cam2gripperOutputArrayEstimated rotation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame.
t_cam2gripperOutputArrayEstimated translation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame.
methodHandEyeCalibrationMethodOne of the implemented Hand-Eye calibration methods
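A minimal calling sketch, assuming one (3x3) rotation Mat and one (3x1) translation Mat per robot pose have been collected elsewhere:

```csharp
using System.Collections.Generic;
using OpenCvSharp;

// One (3x3) rotation and (3x1) translation Mat per robot pose (collected elsewhere).
var rGripper2Base = new List<Mat>();
var tGripper2Base = new List<Mat>();
var rTarget2Cam = new List<Mat>();
var tTarget2Cam = new List<Mat>();

using var rCam2Gripper = new Mat();
using var tCam2Gripper = new Mat();
Cv2.CalibrateHandEye(
    rGripper2Base, tGripper2Base, rTarget2Cam, tTarget2Cam,
    rCam2Gripper, tCam2Gripper, HandEyeCalibrationMethod.TSAI);
```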
CalibrateRobotWorldHandEye(IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, OutputArray, OutputArray, OutputArray, OutputArray, RobotWorldHandEyeCalibrationMethod)
Computes Robot-World/Hand-Eye calibration. The function performs the Robot-World/Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions):
- M.Shah, Solving the robot-world/hand-eye calibration problem using the Kronecker product \cite Shah2013SolvingTR
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void CalibrateRobotWorldHandEye(IEnumerable<Mat> R_world2cam, IEnumerable<Mat> t_world2cam, IEnumerable<Mat> R_base2gripper, IEnumerable<Mat> t_base2gripper, OutputArray R_base2world, OutputArray t_base2world, OutputArray R_gripper2cam, OutputArray t_gripper2cam, RobotWorldHandEyeCalibrationMethod method = RobotWorldHandEyeCalibrationMethod.SHAH)
Parameters
R_world2camIEnumerable<Mat>[in] Rotation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame. This is a vector of Mat that contains the (3x3) rotation matrices or (3x1) rotation vectors for all the transformations from world frame to the camera frame.
t_world2camIEnumerable<Mat>[in] Translation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame. This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from world frame to the camera frame.
R_base2gripperIEnumerable<Mat>[in] Rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame. This is a vector (vector<Mat>) that contains the (3x3) rotation matrices or (3x1) rotation vectors for all the transformations from robot base frame to the gripper frame.
t_base2gripperIEnumerable<Mat>[in] Translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame. This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from robot base frame to the gripper frame.
R_base2worldOutputArray[out] Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame.
t_base2worldOutputArray[out] Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame.
R_gripper2camOutputArray[out] Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame.
t_gripper2camOutputArray[out] Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame.
methodRobotWorldHandEyeCalibrationMethodOne of the implemented Robot-World/Hand-Eye calibration methods
CalibrateRobotWorldHandEye(IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, IEnumerable<Mat>, out double[,], out double[], out double[,], out double[], RobotWorldHandEyeCalibrationMethod)
Computes Robot-World/Hand-Eye calibration. The function performs the Robot-World/Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions):
- M.Shah, Solving the robot-world/hand-eye calibration problem using the Kronecker product \cite Shah2013SolvingTR
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void CalibrateRobotWorldHandEye(IEnumerable<Mat> R_world2cam, IEnumerable<Mat> t_world2cam, IEnumerable<Mat> R_base2gripper, IEnumerable<Mat> t_base2gripper, out double[,] R_base2world, out double[] t_base2world, out double[,] R_gripper2cam, out double[] t_gripper2cam, RobotWorldHandEyeCalibrationMethod method = RobotWorldHandEyeCalibrationMethod.SHAH)
Parameters
R_world2camIEnumerable<Mat>[in] Rotation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame. This is a vector of Mat that contains the (3x3) rotation matrices or (3x1) rotation vectors for all the transformations from world frame to the camera frame.
t_world2camIEnumerable<Mat>[in] Translation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame. This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from world frame to the camera frame.
R_base2gripperIEnumerable<Mat>[in] Rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame. This is a vector (vector<Mat>) that contains the (3x3) rotation matrices or (3x1) rotation vectors for all the transformations from robot base frame to the gripper frame.
t_base2gripperIEnumerable<Mat>[in] Translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame. This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from robot base frame to the gripper frame.
R_base2worlddouble[,][out] Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame.
t_base2worlddouble[][out] Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame.
R_gripper2camdouble[,][out] Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame.
t_gripper2camdouble[][out] Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame.
methodRobotWorldHandEyeCalibrationMethodOne of the implemented Robot-World/Hand-Eye calibration methods
CalibrationMatrixValues(InputArray, Size, double, double, out double, out double, out double, out Point2d, out double)
computes several useful camera characteristics from the camera matrix, camera frame resolution and the physical sensor size.
public static void CalibrationMatrixValues(InputArray cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, out double fovx, out double fovy, out double focalLength, out Point2d principalPoint, out double aspectRatio)
Parameters
cameraMatrixInputArrayInput camera matrix that can be estimated by calibrateCamera() or stereoCalibrate() .
imageSizeSizeInput image size in pixels.
apertureWidthdoublePhysical width of the sensor.
apertureHeightdoublePhysical height of the sensor.
fovxdoubleOutput field of view in degrees along the horizontal sensor axis.
fovydoubleOutput field of view in degrees along the vertical sensor axis.
focalLengthdoubleFocal length of the lens in mm.
principalPointPoint2dPrincipal point in pixels.
aspectRatiodoublefy / fx
CalibrationMatrixValues(double[,], Size, double, double, out double, out double, out double, out Point2d, out double)
computes several useful camera characteristics from the camera matrix, camera frame resolution and the physical sensor size.
public static void CalibrationMatrixValues(double[,] cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, out double fovx, out double fovy, out double focalLength, out Point2d principalPoint, out double aspectRatio)
Parameters
cameraMatrixdouble[,]Input camera matrix that can be estimated by calibrateCamera() or stereoCalibrate() .
imageSizeSizeInput image size in pixels.
apertureWidthdoublePhysical width of the sensor.
apertureHeightdoublePhysical height of the sensor.
fovxdoubleOutput field of view in degrees along the horizontal sensor axis.
fovydoubleOutput field of view in degrees along the vertical sensor axis.
focalLengthdoubleFocal length of the lens in mm.
principalPointPoint2dPrincipal point in pixels.
aspectRatiodoublefy / fx
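A usage sketch for the double[,] overload; the camera matrix and sensor dimensions below are illustrative values, not measured data:

```csharp
using System;
using OpenCvSharp;

var cameraMatrix = new double[,]
{
    { 800,   0, 320 },
    {   0, 800, 240 },
    {   0,   0,   1 },
};
Cv2.CalibrationMatrixValues(
    cameraMatrix, new Size(640, 480),
    apertureWidth: 6.4, apertureHeight: 4.8,   // physical sensor size in mm (assumed)
    out double fovx, out double fovy, out double focalLength,
    out Point2d principalPoint, out double aspectRatio);
Console.WriteLine($"FOV: {fovx:F1} x {fovy:F1} deg, f = {focalLength:F2} mm");
```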
CamShift(InputArray, ref Rect, TermCriteria)
Finds an object center, size, and orientation.
public static RotatedRect CamShift(InputArray probImage, ref Rect window, TermCriteria criteria)
Parameters
probImageInputArrayBack projection of the object histogram.
windowRectInitial search window.
criteriaTermCriteriaStop criteria for the underlying MeanShift() .
Returns
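A tracking sketch. The back projection, initial window, and criteria values are assumptions; in a real tracker the back projection would be produced per frame by e.g. Cv2.CalcBackProject:

```csharp
using OpenCvSharp;

// Back projection of the object histogram, assumed filled per frame elsewhere.
using var backProj = new Mat();
var window = new Rect(100, 100, 80, 60);   // initial search window (assumption)
var criteria = new TermCriteria(CriteriaTypes.Eps | CriteriaTypes.Count, 10, 1.0);
RotatedRect box = Cv2.CamShift(backProj, ref window, criteria);
// box holds the estimated center, size, and angle; window is updated in place
```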
Canny(InputArray, InputArray, OutputArray, double, double, bool)
Finds edges in an image using the Canny algorithm with custom image gradient.
public static void Canny(InputArray dx, InputArray dy, OutputArray edges, double threshold1, double threshold2, bool L2gradient = false)
Parameters
dxInputArray16-bit x derivative of input image (CV_16SC1 or CV_16SC3).
dyInputArray16-bit y derivative of input image (same type as dx).
edgesOutputArrayoutput edge map; a single-channel 8-bit image with the same size as the input image.
threshold1doublefirst threshold for the hysteresis procedure.
threshold2doublesecond threshold for the hysteresis procedure.
L2gradientboolIndicates whether the more accurate L2 norm should be used to compute the image gradient magnitude (true), or whether the faster default L1 norm is sufficient (false). [By default this is false]
Canny(InputArray, OutputArray, double, double, int, bool)
Finds edges in an image using Canny algorithm.
public static void Canny(InputArray src, OutputArray edges, double threshold1, double threshold2, int apertureSize = 3, bool L2gradient = false)
Parameters
srcInputArraySingle-channel 8-bit input image
edgesOutputArrayThe output edge map. It will have the same size and the same type as image
threshold1doubleThe first threshold for the hysteresis procedure
threshold2doubleThe second threshold for the hysteresis procedure
apertureSizeintAperture size for the Sobel operator [By default this is ApertureSize.Size3]
L2gradientboolIndicates whether the more accurate L2 norm should be used to compute the image gradient magnitude (true), or whether the faster default L1 norm is sufficient (false). [By default this is false]
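A minimal sketch of the single-image overload (the file names are placeholders):

```csharp
using OpenCvSharp;

using var src = Cv2.ImRead("input.png", ImreadModes.Grayscale);  // placeholder path
using var edges = new Mat();
Cv2.Canny(src, edges, threshold1: 50, threshold2: 150, apertureSize: 3, L2gradient: true);
Cv2.ImWrite("edges.png", edges);
```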
CartToPolar(InputArray, InputArray, OutputArray, OutputArray, bool)
Calculates the magnitude and angle of 2D vectors.
public static void CartToPolar(InputArray x, InputArray y, OutputArray magnitude, OutputArray angle, bool angleInDegrees = false)
Parameters
xInputArrayarray of x-coordinates; this must be a single-precision or double-precision floating-point array.
yInputArrayarray of y-coordinates, that must have the same size and same type as x.
magnitudeOutputArrayoutput array of magnitudes of the same size and type as x.
angleOutputArrayoutput array of angles that has the same size and type as x; the angles are measured in radians (from 0 to 2*Pi) or in degrees (0 to 360 degrees).
angleInDegreesboola flag indicating whether the angles are measured in radians (the default) or in degrees.
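A small self-contained sketch converting three 2D vectors to magnitude/angle form:

```csharp
using OpenCvSharp;

using var x = new Mat(1, 3, MatType.CV_32FC1, new float[] { 1f, 0f, -1f });
using var y = new Mat(1, 3, MatType.CV_32FC1, new float[] { 0f, 1f,  0f });
using var magnitude = new Mat();
using var angle = new Mat();
Cv2.CartToPolar(x, y, magnitude, angle, angleInDegrees: true);
// angle is approximately (0, 90, 180) degrees; each magnitude is 1
```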
CheckChessboard(InputArray, Size)
Checks whether the image contains chessboard of the specific size or not.
public static bool CheckChessboard(InputArray img, Size size)
Parameters
imgInputArrayImage to be checked for a chessboard pattern.
sizeSizeChessboard pattern size.
Returns
CheckHardwareSupport(CpuFeatures)
Returns true if the specified feature is supported by the host hardware. When the user calls setUseOptimized(false), subsequent calls to CheckHardwareSupport() will return false until setUseOptimized(true) is called; this way the user can dynamically switch the optimized code in OpenCV on and off.
public static bool CheckHardwareSupport(CpuFeatures feature)
Parameters
featureCpuFeaturesThe feature of interest, one of cv::CpuFeatures
Returns
CheckRange(InputArray, bool)
checks that each matrix element is within the specified range.
public static bool CheckRange(InputArray src, bool quiet = true)
Parameters
srcInputArrayThe array to check
quietboolThe flag indicating whether the function quietly returns false when the array elements are out of range, or throws an exception.
Returns
CheckRange(InputArray, bool, out Point, double, double)
checks that each matrix element is within the specified range.
public static bool CheckRange(InputArray src, bool quiet, out Point pos, double minVal = -1.7976931348623157E+308, double maxVal = 1.7976931348623157E+308)
Parameters
srcInputArrayThe array to check
quietboolThe flag indicating whether the function quietly returns false when the array elements are out of range, or throws an exception.
posPointThe optional output parameter, where the position of the first outlier is stored.
minValdoubleThe inclusive lower boundary of valid values range
maxValdoubleThe exclusive upper boundary of valid values range
Returns
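A sketch showing how the out-parameter overload reports the first offending element (the matrix contents are illustrative):

```csharp
using OpenCvSharp;

using var m = new Mat(2, 2, MatType.CV_64FC1,
    new double[] { 0.5, 1.5, double.NaN, 3.0 });
bool ok = Cv2.CheckRange(m, quiet: true, out Point badPos, minVal: 0, maxVal: 10);
// ok is false and badPos locates the NaN element
```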
Circle(InputOutputArray, Point, int, Scalar, int, LineTypes, int)
Draws a circle
public static void Circle(InputOutputArray img, Point center, int radius, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayImage where the circle is drawn.
centerPointCenter of the circle.
radiusintRadius of the circle.
colorScalarCircle color.
thicknessintThickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn. [By default this is 1]
lineTypeLineTypesType of the circle boundary. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the center coordinates and radius value. [By default this is 0]
Circle(InputOutputArray, int, int, int, Scalar, int, LineTypes, int)
Draws a circle
public static void Circle(InputOutputArray img, int centerX, int centerY, int radius, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayImage where the circle is drawn.
centerXintX-coordinate of the center of the circle.
centerYintY-coordinate of the center of the circle.
radiusintRadius of the circle.
colorScalarCircle color.
thicknessintThickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn. [By default this is 1]
lineTypeLineTypesType of the circle boundary. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the center coordinates and radius value. [By default this is 0]
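A drawing sketch combining both Circle overloads; canvas size and colors are arbitrary choices:

```csharp
using OpenCvSharp;

using var canvas = new Mat(200, 200, MatType.CV_8UC3, Scalar.All(0));
Cv2.Circle(canvas, new Point(100, 100), 40, Scalar.Red,
    thickness: 2, lineType: LineTypes.AntiAlias);
Cv2.Circle(canvas, 100, 100, 10, Scalar.Green, thickness: Cv2.FILLED);  // filled circle
```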
ClipLine(Rect, ref Point, ref Point)
Clips the line against the image rectangle
public static bool ClipLine(Rect imgRect, ref Point pt1, ref Point pt2)
Parameters
Returns
ClipLine(Size, ref Point, ref Point)
Clips the line against the image rectangle
public static bool ClipLine(Size imgSize, ref Point pt1, ref Point pt2)
Parameters
Returns
ColorChange(InputArray, InputArray?, OutputArray, float, float, float)
Given an original color image, two differently colored versions of this image can be mixed seamlessly. The multiplication factor is between 0.5 and 2.5.
public static void ColorChange(InputArray src, InputArray? mask, OutputArray dst, float redMul = 1, float greenMul = 1, float blueMul = 1)
Parameters
srcInputArrayInput 8-bit 3-channel image.
maskInputArrayInput 8-bit 1 or 3-channel image.
dstOutputArrayOutput image with the same size and type as src.
redMulfloatR-channel multiply factor.
greenMulfloatG-channel multiply factor.
blueMulfloatB-channel multiply factor.
Compare(InputArray, InputArray, OutputArray, CmpType)
Performs the per-element comparison of two arrays or an array and scalar value.
public static void Compare(InputArray src1, InputArray src2, OutputArray dst, CmpType cmpop)
Parameters
src1InputArrayfirst input array or a scalar; when it is an array, it must have a single channel.
src2InputArraysecond input array or a scalar; when it is an array, it must have a single channel.
dstOutputArrayoutput array of type CV_8U that has the same size and the same number of channels as the input arrays.
cmpopCmpTypea flag that specifies the correspondence between the arrays (cv::CmpTypes)
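A small element-wise comparison sketch:

```csharp
using OpenCvSharp;

using var a = new Mat(1, 4, MatType.CV_8UC1, new byte[] { 1, 5, 3, 9 });
using var b = new Mat(1, 4, MatType.CV_8UC1, new byte[] { 2, 5, 1, 9 });
using var dst = new Mat();
Cv2.Compare(a, b, dst, CmpType.GE);
// dst holds 255 where a >= b element-wise, 0 elsewhere: (0, 255, 255, 255)
```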
CompareHist(InputArray, InputArray, HistCompMethods)
compares two histograms stored in dense arrays
public static double CompareHist(InputArray h1, InputArray h2, HistCompMethods method)
Parameters
h1InputArrayThe first compared histogram
h2InputArrayThe second compared histogram of the same size as h1
methodHistCompMethodsThe comparison method
Returns
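A comparison sketch; h1 and h2 are assumed to be histograms computed with Cv2.CalcHist over the same bin layout:

```csharp
using System;
using OpenCvSharp;

using var h1 = new Mat();  // assumed filled by Cv2.CalcHist
using var h2 = new Mat();  // assumed filled by Cv2.CalcHist
double score = Cv2.CompareHist(h1, h2, HistCompMethods.Correl);
Console.WriteLine($"Correlation: {score}");  // 1.0 indicates identical histograms
```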
CompleteSymm(InputOutputArray, bool)
extends the symmetrical matrix from the lower half or from the upper half
public static void CompleteSymm(InputOutputArray mtx, bool lowerToUpper = false)
Parameters
mtxInputOutputArrayInput-output floating-point square matrix
lowerToUpperboolIf true, the lower half is copied to the upper half, otherwise the upper half is copied to the lower half
ComposeRT(InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, OutputArray?, OutputArray?, OutputArray?, OutputArray?, OutputArray?, OutputArray?, OutputArray?, OutputArray?)
composes two [R|t] transformations together. Also computes the derivatives of the result w.r.t. the arguments
public static void ComposeRT(InputArray rvec1, InputArray tvec1, InputArray rvec2, InputArray tvec2, OutputArray rvec3, OutputArray tvec3, OutputArray? dr3dr1 = null, OutputArray? dr3dt1 = null, OutputArray? dr3dr2 = null, OutputArray? dr3dt2 = null, OutputArray? dt3dr1 = null, OutputArray? dt3dt1 = null, OutputArray? dt3dr2 = null, OutputArray? dt3dt2 = null)
Parameters
rvec1InputArrayFirst rotation vector.
tvec1InputArrayFirst translation vector.
rvec2InputArraySecond rotation vector.
tvec2InputArraySecond translation vector.
rvec3OutputArrayOutput rotation vector of the superposition.
tvec3OutputArrayOutput translation vector of the superposition.
dr3dr1OutputArrayOptional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dr3dt1OutputArrayOptional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dr3dr2OutputArrayOptional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dr3dt2OutputArrayOptional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dt3dr1OutputArrayOptional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dt3dt1OutputArrayOptional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dt3dr2OutputArrayOptional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dt3dt2OutputArrayOptional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
ComposeRT(double[], double[], double[], double[], out double[], out double[])
composes two [R|t] transformations together. Also computes the derivatives of the result w.r.t. the arguments
public static void ComposeRT(double[] rvec1, double[] tvec1, double[] rvec2, double[] tvec2, out double[] rvec3, out double[] tvec3)
Parameters
rvec1double[]First rotation vector.
tvec1double[]First translation vector.
rvec2double[]Second rotation vector.
tvec2double[]Second translation vector.
rvec3double[]Output rotation vector of the superposition.
tvec3double[]Output translation vector of the superposition.
ComposeRT(double[], double[], double[], double[], out double[], out double[], out double[,], out double[,], out double[,], out double[,], out double[,], out double[,], out double[,], out double[,])
composes two [R|t] transformations together. Also computes the derivatives of the result w.r.t. the arguments
public static void ComposeRT(double[] rvec1, double[] tvec1, double[] rvec2, double[] tvec2, out double[] rvec3, out double[] tvec3, out double[,] dr3dr1, out double[,] dr3dt1, out double[,] dr3dr2, out double[,] dr3dt2, out double[,] dt3dr1, out double[,] dt3dt1, out double[,] dt3dr2, out double[,] dt3dt2)
Parameters
rvec1double[]First rotation vector.
tvec1double[]First translation vector.
rvec2double[]Second rotation vector.
tvec2double[]Second translation vector.
rvec3double[]Output rotation vector of the superposition.
tvec3double[]Output translation vector of the superposition.
dr3dr1double[,]Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dr3dt1double[,]Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dr3dr2double[,]Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dr3dt2double[,]Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dt3dr1double[,]Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dt3dt1double[,]Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dt3dr2double[,]Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
dt3dt2double[,]Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively.
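A minimal sketch of the simple double[] overload, composing two 90-degree rotations about the Z axis:

```csharp
using System;
using OpenCvSharp;

double[] rvec1 = { 0, 0, Math.PI / 2 };  // 90 degrees about Z
double[] tvec1 = { 1, 0, 0 };
double[] rvec2 = { 0, 0, Math.PI / 2 };  // another 90 degrees about Z
double[] tvec2 = { 0, 0, 0 };
Cv2.ComposeRT(rvec1, tvec1, rvec2, tvec2, out double[] rvec3, out double[] tvec3);
// rvec3 is approximately (0, 0, PI): the rotations compose into 180 degrees about Z
```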
ComputeCorrespondEpilines(InputArray, int, InputArray, OutputArray)
For points in an image of a stereo pair, computes the corresponding epilines in the other image.
public static void ComputeCorrespondEpilines(InputArray points, int whichImage, InputArray F, OutputArray lines)
Parameters
pointsInputArrayInput points. N x 1 or 1 x N matrix of type CV_32FC2 or CV_64FC2.
whichImageintIndex of the image (1 or 2) that contains the points .
FInputArrayFundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() .
linesOutputArrayOutput vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) .
ComputeCorrespondEpilines(IEnumerable<Point2d>, int, double[,])
For points in an image of a stereo pair, computes the corresponding epilines in the other image.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Point3f[] ComputeCorrespondEpilines(IEnumerable<Point2d> points, int whichImage, double[,] F)
Parameters
pointsIEnumerable<Point2d>Input points. N x 1 or 1 x N matrix of type CV_32FC2 or CV_64FC2.
whichImageintIndex of the image (1 or 2) that contains the points .
Fdouble[,]Fundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() .
Returns
- Point3f[]
Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) .
ComputeCorrespondEpilines(IEnumerable<Point3d>, int, double[,])
For points in an image of a stereo pair, computes the corresponding epilines in the other image.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Point3f[] ComputeCorrespondEpilines(IEnumerable<Point3d> points, int whichImage, double[,] F)
Parameters
pointsIEnumerable<Point3d>Input points. N x 1 or 1 x N matrix of type CV_32FC2 or CV_64FC2.
whichImageintIndex of the image (1 or 2) that contains the points .
Fdouble[,]Fundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() .
Returns
- Point3f[]
Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) .
ComputeECC(InputArray, InputArray, InputArray?)
Computes the Enhanced Correlation Coefficient value between two images @cite EP08 .
public static double ComputeECC(InputArray templateImage, InputArray inputImage, InputArray? inputMask = null)
Parameters
templateImageInputArraysingle-channel template image; CV_8U or CV_32F array.
inputImageInputArraysingle-channel input image to be warped to provide an image similar to templateImage, same type as templateImage.
inputMaskInputArrayAn optional mask to indicate valid values of inputImage.
Returns
ComputeRecallPrecisionCurve(DMatch[][], byte[][])
public static Point2f[] ComputeRecallPrecisionCurve(DMatch[][] matches1to2, byte[][] correctMatches1to2Mask)
Parameters
Returns
- Point2f[]
recallPrecisionCurve
ConnectedComponents(InputArray, OutputArray, PixelConnectivity)
computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.
public static int ConnectedComponents(InputArray image, OutputArray labels, PixelConnectivity connectivity = PixelConnectivity.Connectivity8)
Parameters
imageInputArraythe image to be labeled
labelsOutputArraydestination labeled image
connectivityPixelConnectivity8 or 4 for 8-way or 4-way connectivity respectively
Returns
- int
The number of labels
ConnectedComponents(InputArray, OutputArray, PixelConnectivity, MatType)
computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.
public static int ConnectedComponents(InputArray image, OutputArray labels, PixelConnectivity connectivity, MatType ltype)
Parameters
imageInputArraythe image to be labeled
labelsOutputArraydestination labeled image
connectivityPixelConnectivity8 or 4 for 8-way or 4-way connectivity respectively
ltypeMatTypeoutput image label type. Currently CV_32S and CV_16U are supported.
Returns
- int
The number of labels
ConnectedComponents(InputArray, out int[,], PixelConnectivity)
computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.
public static int ConnectedComponents(InputArray image, out int[,] labels, PixelConnectivity connectivity)
Parameters
imageInputArraythe image to be labeled
labelsint[,]destination labeled rectangular array
connectivityPixelConnectivity8 or 4 for 8-way or 4-way connectivity respectively
Returns
- int
The number of labels
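A small sketch: two isolated foreground pixels under 8-way connectivity.

```csharp
using OpenCvSharp;

using var bin = new Mat(3, 3, MatType.CV_8UC1, new byte[]
{
    255, 0, 0,
    0,   0, 0,
    0,   0, 255,
});
using var labels = new Mat();
int n = Cv2.ConnectedComponents(bin, labels, PixelConnectivity.Connectivity8);
// n is 3: background label 0 plus two foreground components
```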
ConnectedComponentsEx(InputArray, PixelConnectivity, ConnectedComponentsAlgorithmsTypes)
computes the connected components labeled image of a boolean image. For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.
public static ConnectedComponents ConnectedComponentsEx(InputArray image, PixelConnectivity connectivity = PixelConnectivity.Connectivity8, ConnectedComponentsAlgorithmsTypes ccltype = ConnectedComponentsAlgorithmsTypes.Default)
Parameters
imageInputArraythe image to be labeled
connectivityPixelConnectivity8 or 4 for 8-way or 4-way connectivity respectively
ccltypeConnectedComponentsAlgorithmsTypes
Returns
ConnectedComponentsWithAlgorithm(InputArray, OutputArray, PixelConnectivity, MatType, ConnectedComponentsAlgorithmsTypes)
Computes the connected components labeled image of boolean image.
For an image with 4- or 8-way connectivity, returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently Grana's (BBDT) and Wu's (SAUF) algorithms are supported, see #ConnectedComponentsAlgorithmsTypes for details. Note that the SAUF algorithm forces a row-major ordering of labels while BBDT does not. This function uses a parallel version of both Grana's and Wu's algorithms if at least one allowed parallel framework is enabled and if the rows of the image are at least twice the number returned by #getNumberOfCPUs.
public static int ConnectedComponentsWithAlgorithm(InputArray image, OutputArray labels, PixelConnectivity connectivity, MatType ltype, ConnectedComponentsAlgorithmsTypes ccltype)
Parameters
imageInputArraythe 8-bit single-channel image to be labeled
labelsOutputArraydestination labeled image
connectivityPixelConnectivity8 or 4 for 8-way or 4-way connectivity respectively
ltypeMatTypeoutput image label type. Currently CV_32S and CV_16U are supported.
ccltypeConnectedComponentsAlgorithmsTypesconnected components algorithm type.
Returns
ConnectedComponentsWithStats(InputArray, OutputArray, OutputArray, OutputArray, PixelConnectivity)
Computes the connected components labeled image of a boolean image with 4- or 8-way connectivity. Returns N, the total number of labels [0, N-1], where 0 represents the background label.
public static int ConnectedComponentsWithStats(InputArray image, OutputArray labels, OutputArray stats, OutputArray centroids, PixelConnectivity connectivity = PixelConnectivity.Connectivity8)
Parameters
imageInputArraythe image to be labeled
labelsOutputArraydestination labeled image
statsOutputArraystatistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes
centroidsOutputArrayfloating point centroid (x,y) output for each label, including the background label
connectivityPixelConnectivity8 or 4 for 8-way or 4-way connectivity respectively
Returns
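A minimal sketch of a typical call (not from the original reference; "input.png" is a placeholder path, and the ConnectedComponentsTypes column names are assumed to match the native OpenCV enum):

```csharp
using System;
using OpenCvSharp;

// Binarize an image, label its blobs, and read per-label statistics.
using var src = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var bin = new Mat();
Cv2.Threshold(src, bin, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);

using var labels = new Mat();
using var stats = new Mat();
using var centroids = new Mat();
int n = Cv2.ConnectedComponentsWithStats(bin, labels, stats, centroids);

// Label 0 is the background; iterate the foreground components only.
for (int i = 1; i < n; i++)
{
    int area = stats.At<int>(i, (int)ConnectedComponentsTypes.Area);
    double cx = centroids.At<double>(i, 0); // centroids are CV_64F
    double cy = centroids.At<double>(i, 1);
    Console.WriteLine($"label {i}: area={area}, centroid=({cx:F1}, {cy:F1})");
}
```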
ConnectedComponentsWithStats(InputArray, OutputArray, OutputArray, OutputArray, PixelConnectivity, MatType)
Computes the connected components labeled image of a boolean image with 4- or 8-way connectivity. Returns N, the total number of labels [0, N-1], where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image.
public static int ConnectedComponentsWithStats(InputArray image, OutputArray labels, OutputArray stats, OutputArray centroids, PixelConnectivity connectivity, MatType ltype)
Parameters
imageInputArraythe image to be labeled
labelsOutputArraydestination labeled image
statsOutputArraystatistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes
centroidsOutputArrayfloating point centroid (x,y) output for each label, including the background label
connectivityPixelConnectivity8 or 4 for 8-way or 4-way connectivity respectively
ltypeMatTypeoutput image label type. Currently CV_32S and CV_16U are supported.
Returns
ConnectedComponentsWithStatsWithAlgorithm(InputArray, OutputArray, OutputArray, OutputArray, PixelConnectivity, MatType, ConnectedComponentsAlgorithmsTypes)
Computes the connected components labeled image of a boolean image and also produces a statistics output for each label.
image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently Grana's (BBDT) and Wu's (SAUF) algorithms are supported, see #ConnectedComponentsAlgorithmsTypes for details. Note that the SAUF algorithm forces a row-major ordering of labels while BBDT does not. This function uses the parallel version of both Grana's and Wu's algorithms (statistics included) if at least one allowed parallel framework is enabled and if the rows of the image are at least twice the number returned by #getNumberOfCPUs.
public static int ConnectedComponentsWithStatsWithAlgorithm(InputArray image, OutputArray labels, OutputArray stats, OutputArray centroids, PixelConnectivity connectivity, MatType ltype, ConnectedComponentsAlgorithmsTypes ccltype)
Parameters
imageInputArraythe 8-bit single-channel image to be labeled
labelsOutputArraydestination labeled image
statsOutputArraystatistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of #ConnectedComponentsTypes. The data type is CV_32S.
centroidsOutputArraycentroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.
connectivityPixelConnectivity8 or 4 for 8-way or 4-way connectivity respectively
ltypeMatTypeoutput image label type. Currently CV_32S and CV_16U are supported.
ccltypeConnectedComponentsAlgorithmsTypesconnected components algorithm type.
Returns
ContourArea(InputArray, bool)
Calculates the contour area
public static double ContourArea(InputArray contour, bool oriented = false)
Parameters
contourInputArrayThe contour vertices, represented by CV_32SC2 or CV_32FC2 matrix
orientedboolOriented area flag. If true, the function returns a signed area value whose sign depends on the contour orientation (clockwise or counter-clockwise); otherwise the absolute value is returned. [By default this is false]
Returns
ContourArea(IEnumerable<Point2f>, bool)
Calculates the contour area
public static double ContourArea(IEnumerable<Point2f> contour, bool oriented = false)
Parameters
contourIEnumerable<Point2f>The contour vertices, represented by CV_32SC2 or CV_32FC2 matrix
orientedboolOriented area flag. If true, the function returns a signed area value whose sign depends on the contour orientation (clockwise or counter-clockwise); otherwise the absolute value is returned. [By default this is false]
Returns
ContourArea(IEnumerable<Point>, bool)
Calculates the contour area
public static double ContourArea(IEnumerable<Point> contour, bool oriented = false)
Parameters
contourIEnumerable<Point>The contour vertices, represented by CV_32SC2 or CV_32FC2 matrix
orientedboolOriented area flag. If true, the function returns a signed area value whose sign depends on the contour orientation (clockwise or counter-clockwise); otherwise the absolute value is returned. [By default this is false]
Returns
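A minimal sketch (not from the original reference) showing the unsigned and oriented variants on a unit square:

```csharp
using OpenCvSharp;

// A unit-square contour: the unsigned area is 1.0; with oriented=true the
// sign of the result reflects the vertex winding order.
var square = new[]
{
    new Point2f(0, 0), new Point2f(1, 0),
    new Point2f(1, 1), new Point2f(0, 1),
};
double area = Cv2.ContourArea(square);                   // 1.0
double signed = Cv2.ContourArea(square, oriented: true); // sign depends on winding
```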
ConvertFp16(InputArray, OutputArray)
Converts an array to half precision floating number.
This function converts FP32 (single-precision floating point) from/to FP16 (half-precision floating point). The CV_16S format is used to represent FP16 data. There are two use modes (src -> dst): CV_32F -> CV_16S and CV_16S -> CV_32F. The input array has to have type CV_32F or CV_16S to represent the bit depth. If the input array is neither of them, the function will raise an error. The format of half-precision floating point is defined in IEEE 754-2008.
public static void ConvertFp16(InputArray src, OutputArray dst)
Parameters
srcInputArrayinput array.
dstOutputArrayoutput array.
ConvertMaps(InputArray, InputArray, OutputArray, OutputArray, MatType, bool)
Converts image transformation maps from one representation to another.
public static void ConvertMaps(InputArray map1, InputArray map2, OutputArray dstmap1, OutputArray dstmap2, MatType dstmap1Type, bool nnInterpolation = false)
Parameters
map1InputArrayThe first input map of type CV_16SC2 , CV_32FC1 , or CV_32FC2 .
map2InputArrayThe second input map of type CV_16UC1 , CV_32FC1 , or none (empty matrix), respectively.
dstmap1OutputArrayThe first output map that has the type dstmap1type and the same size as src.
dstmap2OutputArrayThe second output map.
dstmap1TypeMatTypeType of the first output map that should be CV_16SC2 , CV_32FC1 , or CV_32FC2 .
nnInterpolationboolFlag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation.
ConvertPointsFromHomogeneous(InputArray, OutputArray)
converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z))
public static void ConvertPointsFromHomogeneous(InputArray src, OutputArray dst)
Parameters
srcInputArrayInput vector of N-dimensional points.
dstOutputArrayOutput vector of N-1-dimensional points.
ConvertPointsFromHomogeneous(IEnumerable<Vec3f>)
converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z))
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec2f[] ConvertPointsFromHomogeneous(IEnumerable<Vec3f> src)
Parameters
srcIEnumerable<Vec3f>Input vector of N-dimensional points.
Returns
- Vec2f[]
Output vector of N-1-dimensional points.
ConvertPointsFromHomogeneous(IEnumerable<Vec4f>)
converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z))
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec3f[] ConvertPointsFromHomogeneous(IEnumerable<Vec4f> src)
Parameters
srcIEnumerable<Vec4f>Input vector of N-dimensional points.
Returns
- Vec3f[]
Output vector of N-1-dimensional points.
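A minimal sketch (not from the original reference) of the Vec3f overload; each point is divided by its last coordinate:

```csharp
using OpenCvSharp;

// (x, y, w) -> (x/w, y/w) for each homogeneous point.
var homogeneous = new[] { new Vec3f(4, 6, 2), new Vec3f(3, 9, 3) };
Vec2f[] pixels = Cv2.ConvertPointsFromHomogeneous(homogeneous);
// pixels[0] == (2, 3), pixels[1] == (1, 3)
```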
ConvertPointsHomogeneous(InputArray, OutputArray)
Converts points to/from homogeneous coordinates.
public static void ConvertPointsHomogeneous(InputArray src, OutputArray dst)
Parameters
srcInputArrayInput array or vector of 2D, 3D, or 4D points.
dstOutputArrayOutput vector of 2D, 3D, or 4D points.
ConvertPointsToHomogeneous(InputArray, OutputArray)
converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1))
public static void ConvertPointsToHomogeneous(InputArray src, OutputArray dst)
Parameters
srcInputArrayInput vector of N-dimensional points.
dstOutputArrayOutput vector of N+1-dimensional points.
ConvertPointsToHomogeneous(IEnumerable<Vec2f>)
converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1))
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec3f[] ConvertPointsToHomogeneous(IEnumerable<Vec2f> src)
Parameters
srcIEnumerable<Vec2f>Input vector of N-dimensional points.
Returns
- Vec3f[]
Output vector of N+1-dimensional points.
ConvertPointsToHomogeneous(IEnumerable<Vec3f>)
converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1))
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec4f[] ConvertPointsToHomogeneous(IEnumerable<Vec3f> src)
Parameters
srcIEnumerable<Vec3f>Input vector of N-dimensional points.
Returns
- Vec4f[]
Output vector of N+1-dimensional points.
ConvertScaleAbs(InputArray, OutputArray, double, double)
Scales, computes absolute values and converts the result to 8-bit.
public static void ConvertScaleAbs(InputArray src, OutputArray dst, double alpha = 1, double beta = 0)
Parameters
srcInputArrayThe source array
dstOutputArrayThe destination array
alphadoubleThe optional scale factor. [By default this is 1]
betadoubleThe optional delta added to the scaled values. [By default this is 0]
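A minimal sketch of a common use (not from the original reference; "input.png" is a placeholder path): bringing a signed 16-bit Sobel result back to a displayable 8-bit image.

```csharp
using OpenCvSharp;

// Compute a signed horizontal gradient, then take |value| saturated to 8-bit.
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var grad = new Mat();
Cv2.Sobel(gray, grad, MatType.CV_16S, 1, 0);
using var display = new Mat();
Cv2.ConvertScaleAbs(grad, display); // display is CV_8U
```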
ConvexHull(InputArray, OutputArray, bool, bool)
Computes convex hull for a set of 2D points.
public static void ConvexHull(InputArray points, OutputArray hull, bool clockwise = false, bool returnPoints = true)
Parameters
pointsInputArrayThe input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix
hullOutputArrayThe output convex hull. It is either a vector of points that form the hull (must have the same type as the input points), or a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set).
clockwiseboolIf true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.
returnPointsboolOperation flag. If true, the function returns the hull points; otherwise it returns the 0-based indices of the hull points. [By default this is true]
ConvexHull(IEnumerable<Point2f>, bool)
Computes convex hull for a set of 2D points.
public static Point2f[] ConvexHull(IEnumerable<Point2f> points, bool clockwise = false)
Parameters
pointsIEnumerable<Point2f>The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix
clockwiseboolIf true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.
Returns
- Point2f[]
The output convex hull. It is a vector of points that form the hull (must have the same type as the input points).
ConvexHull(IEnumerable<Point>, bool)
Computes convex hull for a set of 2D points.
public static Point[] ConvexHull(IEnumerable<Point> points, bool clockwise = false)
Parameters
pointsIEnumerable<Point>The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix
clockwiseboolIf true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.
Returns
- Point[]
The output convex hull. It is a vector of points that form the hull (must have the same type as the input points).
ConvexHullIndices(IEnumerable<Point2f>, bool)
Computes convex hull for a set of 2D points.
public static int[] ConvexHullIndices(IEnumerable<Point2f> points, bool clockwise = false)
Parameters
pointsIEnumerable<Point2f>The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix
clockwiseboolIf true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.
Returns
- int[]
The output convex hull. It is a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set).
ConvexHullIndices(IEnumerable<Point>, bool)
Computes convex hull for a set of 2D points.
public static int[] ConvexHullIndices(IEnumerable<Point> points, bool clockwise = false)
Parameters
pointsIEnumerable<Point>The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix
clockwiseboolIf true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards.
Returns
- int[]
The output convex hull. It is a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set).
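A minimal sketch (not from the original reference) contrasting the point-returning and index-returning variants:

```csharp
using OpenCvSharp;

// The interior point (1, 1) is not part of the hull and is dropped.
var pts = new[]
{
    new Point(0, 0), new Point(4, 0), new Point(4, 4),
    new Point(0, 4), new Point(1, 1),
};
Point[] hull = Cv2.ConvexHull(pts);         // the corner points only
int[] hullIdx = Cv2.ConvexHullIndices(pts); // their 0-based indices into pts
```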
ConvexityDefects(InputArray, InputArray, OutputArray)
Computes the contour convexity defects
public static void ConvexityDefects(InputArray contour, InputArray convexHull, OutputArray convexityDefects)
Parameters
contourInputArrayInput contour.
convexHullInputArrayConvex hull obtained using convexHull() that should contain indices of the contour points that make the hull.
convexityDefectsOutputArrayThe output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0.
ConvexityDefects(IEnumerable<Point2f>, IEnumerable<int>)
Computes the contour convexity defects
public static Vec4i[] ConvexityDefects(IEnumerable<Point2f> contour, IEnumerable<int> convexHull)
Parameters
contourIEnumerable<Point2f>Input contour.
convexHullIEnumerable<int>Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull.
Returns
- Vec4i[]
The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0.
ConvexityDefects(IEnumerable<Point>, IEnumerable<int>)
Computes the contour convexity defects
public static Vec4i[] ConvexityDefects(IEnumerable<Point> contour, IEnumerable<int> convexHull)
Parameters
contourIEnumerable<Point>Input contour.
convexHullIEnumerable<int>Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull.
Returns
- Vec4i[]
The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0.
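A minimal sketch (not from the original reference). Note the hull must be passed as indices (e.g. from ConvexHullIndices), not as points:

```csharp
using System;
using OpenCvSharp;

// An arrow-like concave contour; (5, 4) dents the shape inward.
var contour = new[]
{
    new Point(0, 0), new Point(10, 0), new Point(5, 4),
    new Point(10, 8), new Point(0, 8),
};
int[] hull = Cv2.ConvexHullIndices(contour);
Vec4i[] defects = Cv2.ConvexityDefects(contour, hull);
foreach (var d in defects)
{
    // Item0/Item1/Item2 are start, end, and farthest-point indices;
    // Item3 is the fixed-point depth (divide by 256 for the float value).
    Console.WriteLine($"farthest={contour[d.Item2]}, depth={d.Item3 / 256.0}");
}
```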
CopyMakeBorder(InputArray, OutputArray, int, int, int, int, BorderTypes, Scalar?)
Forms a border around the image
public static void CopyMakeBorder(InputArray src, OutputArray dst, int top, int bottom, int left, int right, BorderTypes borderType, Scalar? value = null)
Parameters
srcInputArrayThe source image
dstOutputArrayThe destination image; will have the same type as src and the size Size(src.cols+left+right, src.rows+top+bottom)
topintSpecifies how many pixels in each direction from the source image rectangle to extrapolate
bottomintSpecifies how many pixels in each direction from the source image rectangle to extrapolate
leftintSpecifies how many pixels in each direction from the source image rectangle to extrapolate
rightintSpecifies how many pixels in each direction from the source image rectangle to extrapolate
borderTypeBorderTypesThe border type
valueScalar?The border value if borderType == Constant
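A minimal sketch (not from the original reference): padding an image with a constant black border.

```csharp
using OpenCvSharp;

// Pad a 100x100 image with a 10-pixel constant border on every side.
using var src = new Mat(100, 100, MatType.CV_8UC3, Scalar.All(128));
using var dst = new Mat();
Cv2.CopyMakeBorder(src, dst, 10, 10, 10, 10, BorderTypes.Constant, Scalar.All(0));
// dst is now 120x120, same type as src
```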
CopyTo(InputArray, OutputArray, InputArray?)
Copies the matrix to another one. When the operation mask is specified, if the Mat::create call reallocates the matrix, the newly allocated matrix is initialized with all zeros before copying the data.
public static void CopyTo(InputArray src, OutputArray dst, InputArray? mask = null)
Parameters
srcInputArraySource matrix.
dstOutputArrayDestination matrix. If it does not have a proper size or type before the operation, it is reallocated.
maskInputArrayOperation mask of the same size as *this. Its non-zero elements indicate which matrix elements need to be copied.The mask has to be of type CV_8U and can have 1 or multiple channels.
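A minimal sketch (not from the original reference) of a masked copy; only the pixels selected by the mask are transferred.

```csharp
using OpenCvSharp;

using var src = new Mat(4, 4, MatType.CV_8UC1, Scalar.All(255));
using var mask = new Mat(4, 4, MatType.CV_8UC1, Scalar.All(0));
mask.Set<byte>(1, 1, 255); // select a single pixel
using var dst = new Mat(4, 4, MatType.CV_8UC1, Scalar.All(0));
Cv2.CopyTo(src, dst, mask);
// dst is 0 everywhere except (1, 1), which is now 255
```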
CornerEigenValsAndVecs(InputArray, OutputArray, int, int, BorderTypes)
Computes both eigenvalues and the eigenvectors of the 2x2 derivative covariance matrix at each pixel. The output is stored as a 6-channel matrix.
public static void CornerEigenValsAndVecs(InputArray src, OutputArray dst, int blockSize, int ksize, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayInput single-channel 8-bit or floating-point image.
dstOutputArrayImage to store the results. It has the same size as src and the type CV_32FC6.
blockSizeintNeighborhood size.
ksizeintAperture parameter for the Sobel operator.
borderTypeBorderTypesPixel extrapolation method.
CornerHarris(InputArray, OutputArray, int, int, double, BorderTypes)
Harris corner detector.
public static void CornerHarris(InputArray src, OutputArray dst, int blockSize, int ksize, double k, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayInput single-channel 8-bit or floating-point image.
dstOutputArrayImage to store the Harris detector responses. It has the type CV_32FC1 and the same size as src.
blockSizeintNeighborhood size (see the details on #cornerEigenValsAndVecs ).
ksizeintAperture parameter for the Sobel operator.
kdoubleHarris detector free parameter. See the formula above.
borderTypeBorderTypesPixel extrapolation method. See #BorderTypes. #BORDER_WRAP is not supported.
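A minimal sketch of typical use (not from the original reference; "input.png" is a placeholder path, and the thresholding fraction 0.01 is an arbitrary choice):

```csharp
using OpenCvSharp;

// Compute the Harris response map, then keep only strong responses.
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var response = new Mat();
Cv2.CornerHarris(gray, response, blockSize: 2, ksize: 3, k: 0.04);

// Threshold at a fraction of the maximum response to mark corners.
Cv2.MinMaxLoc(response, out double min, out double max);
using var corners = new Mat();
Cv2.Threshold(response, corners, 0.01 * max, 255, ThresholdTypes.Binary);
```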
CornerMinEigenVal(InputArray, OutputArray, int, int, BorderTypes)
Calculates the minimal eigenvalue of gradient matrices for corner detection.
public static void CornerMinEigenVal(InputArray src, OutputArray dst, int blockSize, int ksize = 3, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayInput single-channel 8-bit or floating-point image.
dstOutputArrayImage to store the minimal eigenvalues. It has the type CV_32FC1 and the same size as src .
blockSizeintNeighborhood size (see the details on #cornerEigenValsAndVecs ).
ksizeintAperture parameter for the Sobel operator.
borderTypeBorderTypesPixel extrapolation method. See #BorderTypes. #BORDER_WRAP is not supported.
CornerSubPix(InputArray, IEnumerable<Point2f>, Size, Size, TermCriteria)
adjusts the corner locations with sub-pixel accuracy to maximize the certain cornerness criteria
public static Point2f[] CornerSubPix(InputArray image, IEnumerable<Point2f> inputCorners, Size winSize, Size zeroZone, TermCriteria criteria)
Parameters
imageInputArrayInput image.
inputCornersIEnumerable<Point2f>Initial coordinates of the input corners and refined coordinates provided for output.
winSizeSizeHalf of the side length of the search window.
zeroZoneSizeHalf of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size.
criteriaTermCriteriaCriteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon on some iteration.
Returns
- Point2f[]
CorrectMatches(InputArray, InputArray, InputArray, OutputArray, OutputArray)
Refines coordinates of corresponding points.
public static void CorrectMatches(InputArray F, InputArray points1, InputArray points2, OutputArray newPoints1, OutputArray newPoints2)
Parameters
FInputArray3x3 fundamental matrix.
points1InputArray1xN array containing the first set of points.
points2InputArray1xN array containing the second set of points.
newPoints1OutputArrayThe optimized points1.
newPoints2OutputArrayThe optimized points2.
CorrectMatches(double[,], IEnumerable<Point2d>, IEnumerable<Point2d>, out Point2d[], out Point2d[])
Refines coordinates of corresponding points.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void CorrectMatches(double[,] F, IEnumerable<Point2d> points1, IEnumerable<Point2d> points2, out Point2d[] newPoints1, out Point2d[] newPoints2)
Parameters
Fdouble[,]3x3 fundamental matrix.
points1IEnumerable<Point2d>1xN array containing the first set of points.
points2IEnumerable<Point2d>1xN array containing the second set of points.
newPoints1Point2d[]The optimized points1.
newPoints2Point2d[]The optimized points2.
CountNonZero(InputArray)
computes the number of nonzero array elements
public static int CountNonZero(InputArray mtx)
Parameters
mtxInputArraySingle-channel array
Returns
- int
number of non-zero elements in mtx
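A minimal sketch (not from the original reference); any non-zero value counts, not only 255:

```csharp
using OpenCvSharp;

using var mask = new Mat(3, 3, MatType.CV_8UC1, Scalar.All(0));
mask.Set<byte>(0, 0, 255);
mask.Set<byte>(2, 2, 1);
int nonZero = Cv2.CountNonZero(mask); // 2
```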
CreateCLAHE(double, Size?)
Creates a predefined CLAHE object
public static CLAHE CreateCLAHE(double clipLimit = 40, Size? tileGridSize = null)
Parameters
clipLimitdoubleThreshold for contrast limiting. [By default this is 40]
tileGridSizeSize?Size of the grid for histogram equalization. The input image will be divided into equally sized rectangular tiles.
Returns
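A minimal sketch (not from the original reference; "input.png" is a placeholder path) applying CLAHE to a grayscale image:

```csharp
using OpenCvSharp;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
// Limit contrast amplification and equalize per 8x8 tile.
using var clahe = Cv2.CreateCLAHE(clipLimit: 2.0, tileGridSize: new Size(8, 8));
using var equalized = new Mat();
clahe.Apply(gray, equalized);
```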
CreateFrameSource_Camera(int)
public static FrameSource CreateFrameSource_Camera(int deviceId)
Parameters
deviceIdint
Returns
CreateFrameSource_Empty()
public static FrameSource CreateFrameSource_Empty()
Returns
CreateFrameSource_Video(string)
public static FrameSource CreateFrameSource_Video(string fileName)
Parameters
fileNamestring
Returns
CreateFrameSource_Video_CUDA(string)
public static FrameSource CreateFrameSource_Video_CUDA(string fileName)
Parameters
fileNamestring
Returns
CreateHanningWindow(InputOutputArray, Size, MatType)
Computes Hanning window coefficients in two dimensions.
public static void CreateHanningWindow(InputOutputArray dst, Size winSize, MatType type)
Parameters
dstInputOutputArrayDestination array to place Hann coefficients in
winSizeSizeThe window size specifications
typeMatTypeCreated array type
CreateOptFlow_Brox_GPU()
public static DenseOpticalFlowExt CreateOptFlow_Brox_GPU()
Returns
CreateOptFlow_DualTVL1_GPU()
public static DenseOpticalFlowExt CreateOptFlow_DualTVL1_GPU()
Returns
CreateOptFlow_Farneback()
public static DenseOpticalFlowExt CreateOptFlow_Farneback()
Returns
CreateOptFlow_Farneback_GPU()
public static DenseOpticalFlowExt CreateOptFlow_Farneback_GPU()
Returns
CreateOptFlow_PyrLK_GPU()
public static DenseOpticalFlowExt CreateOptFlow_PyrLK_GPU()
Returns
CreateSuperResolution_BTVL1()
Create Bilateral TV-L1 Super Resolution.
public static SuperResolution CreateSuperResolution_BTVL1()
Returns
CreateSuperResolution_BTVL1_CUDA()
Create Bilateral TV-L1 Super Resolution.
public static SuperResolution CreateSuperResolution_BTVL1_CUDA()
Returns
CreateTrackbar(string, string, int, TrackbarCallbackNative?, nint)
Creates a trackbar and attaches it to the specified window. The function createTrackbar creates a trackbar (a slider or range control) with the specified name and range, assigns a variable value to be a position synchronized with the trackbar, and specifies the callback function onChange to be called on the trackbar position change. The created trackbar is displayed in the specified window winName.
public static int CreateTrackbar(string trackbarName, string winName, int count, TrackbarCallbackNative? onChange = null, nint userData = 0)
Parameters
trackbarNamestringName of the created trackbar.
winNamestringName of the window that will be used as a parent of the created trackbar.
countintMaximal position of the slider. The minimal position is always 0.
onChangeTrackbarCallbackNativePointer to the function to be called every time the slider changes position. This function should be prototyped as void Foo(int, void*);, where the first parameter is the trackbar position and the second parameter is the user data (see the next parameter). If the callback is the NULL pointer, no callbacks are called, but only value is updated.
userDatanintUser data that is passed as is to the callback. It can be used to handle trackbar events without using global variables.
Returns
CreateTrackbar(string, string, ref int, int, TrackbarCallbackNative?, nint)
Creates a trackbar and attaches it to the specified window. The function createTrackbar creates a trackbar (a slider or range control) with the specified name and range, assigns a variable value to be a position synchronized with the trackbar, and specifies the callback function onChange to be called on the trackbar position change. The created trackbar is displayed in the specified window winName.
public static int CreateTrackbar(string trackbarName, string winName, ref int value, int count, TrackbarCallbackNative? onChange = null, nint userData = 0)
Parameters
trackbarNamestringName of the created trackbar.
winNamestringName of the window that will be used as a parent of the created trackbar.
valueintOptional pointer to an integer variable whose value reflects the position of the slider.Upon creation, the slider position is defined by this variable.
countintMaximal position of the slider. The minimal position is always 0.
onChangeTrackbarCallbackNativePointer to the function to be called every time the slider changes position. This function should be prototyped as void Foo(int, void*);, where the first parameter is the trackbar position and the second parameter is the user data (see the next parameter). If the callback is the NULL pointer, no callbacks are called, but only value is updated.
userDatanintUser data that is passed as is to the callback. It can be used to handle trackbar events without using global variables.
Returns
CubeRoot(float)
computes cube root of the argument
public static float CubeRoot(float val)
Parameters
valfloat
Returns
CvtColor(InputArray, OutputArray, ColorConversionCodes, int)
Converts image from one color space to another
public static void CvtColor(InputArray src, OutputArray dst, ColorConversionCodes code, int dstCn = 0)
Parameters
srcInputArrayThe source image, 8-bit unsigned, 16-bit unsigned or single-precision floating-point
dstOutputArrayThe destination image; will have the same size and the same depth as src
codeColorConversionCodesThe color space conversion code
dstCnintThe number of channels in the destination image; if the parameter is 0, the number of the channels will be derived automatically from src and the code
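A minimal sketch (not from the original reference; "input.png" is a placeholder path) of the most common conversion:

```csharp
using OpenCvSharp;

// BGR -> grayscale; dstCn is left at its default of 0, so the destination
// channel count is derived from the conversion code.
using var bgr = Cv2.ImRead("input.png");
using var gray = new Mat();
Cv2.CvtColor(bgr, gray, ColorConversionCodes.BGR2GRAY);
```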
CvtColorTwoPlane(InputArray, InputArray, OutputArray, ColorConversionCodes)
Converts an image from one color space to another where the source image is stored in two planes. This function only supports YUV420 to RGB conversion as of now.
public static void CvtColorTwoPlane(InputArray src1, InputArray src2, OutputArray dst, ColorConversionCodes code)
Parameters
src1InputArray8-bit image (#CV_8U) of the Y plane.
src2InputArrayimage containing interleaved U/V plane.
dstOutputArrayoutput image.
codeColorConversionCodesSpecifies the type of conversion. It can take any of the following values: - #COLOR_YUV2BGR_NV12 - #COLOR_YUV2RGB_NV12 - #COLOR_YUV2BGRA_NV12 - #COLOR_YUV2RGBA_NV12 - #COLOR_YUV2BGR_NV21 - #COLOR_YUV2RGB_NV21 - #COLOR_YUV2BGRA_NV21 - #COLOR_YUV2RGBA_NV21
Dct(InputArray, OutputArray, DctFlags)
Performs forward or inverse 1D or 2D Discrete Cosine Transformation
public static void Dct(InputArray src, OutputArray dst, DctFlags flags = DctFlags.None)
Parameters
srcInputArrayThe source floating-point array
dstOutputArrayThe destination array; will have the same size and same type as src
flagsDctFlagsTransformation flags, a combination of DctFlag2 values
Decolor(InputArray, OutputArray, OutputArray)
Transforms a color image to a grayscale image. It is a basic tool in digital printing, stylized black-and-white photograph rendering, and in many single channel image processing applications @cite CL12 .
public static void Decolor(InputArray src, OutputArray grayscale, OutputArray colorBoost)
Parameters
srcInputArrayInput 8-bit 3-channel image.
grayscaleOutputArrayOutput 8-bit 1-channel image.
colorBoostOutputArrayOutput 8-bit 3-channel image.
DecomposeHomographyMat(InputArray, InputArray, out Mat[], out Mat[], out Mat[])
Decompose a homography matrix to rotation(s), translation(s) and plane normal(s).
public static int DecomposeHomographyMat(InputArray h, InputArray k, out Mat[] rotations, out Mat[] translations, out Mat[] normals)
Parameters
hInputArrayThe input homography matrix between two images.
kInputArrayThe input intrinsic camera calibration matrix.
rotationsMat[]Array of rotation matrices.
translationsMat[]Array of translation matrices.
normalsMat[]Array of plane normal matrices.
Returns
DecomposeProjectionMatrix(InputArray, OutputArray, OutputArray, OutputArray, OutputArray?, OutputArray?, OutputArray?, OutputArray?)
Decomposes the projection matrix into a camera matrix, a rotation matrix and a translation vector
public static void DecomposeProjectionMatrix(InputArray projMatrix, OutputArray cameraMatrix, OutputArray rotMatrix, OutputArray transVect, OutputArray? rotMatrixX = null, OutputArray? rotMatrixY = null, OutputArray? rotMatrixZ = null, OutputArray? eulerAngles = null)
Parameters
projMatrixInputArray3x4 input projection matrix P.
cameraMatrixOutputArrayOutput 3x3 camera matrix K.
rotMatrixOutputArrayOutput 3x3 external rotation matrix R.
transVectOutputArrayOutput 4x1 translation vector T.
rotMatrixXOutputArrayOptional 3x3 rotation matrix around x-axis.
rotMatrixYOutputArrayOptional 3x3 rotation matrix around y-axis.
rotMatrixZOutputArrayOptional 3x3 rotation matrix around z-axis.
eulerAnglesOutputArrayOptional three-element vector containing three Euler angles of rotation in degrees.
DecomposeProjectionMatrix(double[,], out double[,], out double[,], out double[])
Decomposes the projection matrix into a camera matrix, a rotation matrix and a translation vector
public static void DecomposeProjectionMatrix(double[,] projMatrix, out double[,] cameraMatrix, out double[,] rotMatrix, out double[] transVect)
Parameters
projMatrixdouble[,]3x4 input projection matrix P.
cameraMatrixdouble[,]Output 3x3 camera matrix K.
rotMatrixdouble[,]Output 3x3 external rotation matrix R.
transVectdouble[]Output 4x1 translation vector T.
DecomposeProjectionMatrix(double[,], out double[,], out double[,], out double[], out double[,], out double[,], out double[,], out double[])
Decomposes a projection matrix into a camera matrix, a rotation matrix, and a translation vector
public static void DecomposeProjectionMatrix(double[,] projMatrix, out double[,] cameraMatrix, out double[,] rotMatrix, out double[] transVect, out double[,] rotMatrixX, out double[,] rotMatrixY, out double[,] rotMatrixZ, out double[] eulerAngles)
Parameters
projMatrixdouble[,]3x4 input projection matrix P.
cameraMatrixdouble[,]Output 3x3 camera matrix K.
rotMatrixdouble[,]Output 3x3 external rotation matrix R.
transVectdouble[]Output 4x1 translation vector T.
rotMatrixXdouble[,]Optional 3x3 rotation matrix around x-axis.
rotMatrixYdouble[,]Optional 3x3 rotation matrix around y-axis.
rotMatrixZdouble[,]Optional 3x3 rotation matrix around z-axis.
eulerAnglesdouble[]Optional three-element vector containing three Euler angles of rotation in degrees.
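A minimal sketch of the `double[,]` overload, using a placeholder 3x4 projection matrix P = K [R | t] with an identity pose:

```csharp
using OpenCvSharp;

// Placeholder projection matrix (identity rotation, zero translation).
var p = new double[3, 4]
{
    { 800, 0, 320, 0 },
    { 0, 800, 240, 0 },
    { 0,   0,   1, 0 },
};

Cv2.DecomposeProjectionMatrix(p, out double[,] cameraMatrix,
    out double[,] rotMatrix, out double[] transVect);

// transVect is a homogeneous 4-vector; divide the first three components
// by transVect[3] to obtain the 3D translation.
```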
Demosaicing(InputArray, OutputArray, ColorConversionCodes, int)
main function for all demosaicing processes
public static void Demosaicing(InputArray src, OutputArray dst, ColorConversionCodes code, int dstCn = 0)
Parameters
srcInputArrayinput image: 8-bit unsigned or 16-bit unsigned.
dstOutputArrayoutput image of the same size and depth as src.
codeColorConversionCodesColor space conversion code (see the description below).
dstCnintnumber of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code.
Remarks
The function can do the following transformations:
Demosaicing using bilinear interpolation
#COLOR_BayerBG2BGR , #COLOR_BayerGB2BGR , #COLOR_BayerRG2BGR , #COLOR_BayerGR2BGR #COLOR_BayerBG2GRAY , #COLOR_BayerGB2GRAY , #COLOR_BayerRG2GRAY , #COLOR_BayerGR2GRAY
Demosaicing using Variable Number of Gradients.
#COLOR_BayerBG2BGR_VNG , #COLOR_BayerGB2BGR_VNG , #COLOR_BayerRG2BGR_VNG , #COLOR_BayerGR2BGR_VNG
Edge-Aware Demosaicing.
#COLOR_BayerBG2BGR_EA , #COLOR_BayerGB2BGR_EA , #COLOR_BayerRG2BGR_EA , #COLOR_BayerGR2BGR_EA
Demosaicing with alpha channel
#COLOR_BayerBG2BGRA , #COLOR_BayerGB2BGRA , #COLOR_BayerRG2BGRA , #COLOR_BayerGR2BGRA
DenoiseTVL1(IEnumerable<Mat>, Mat, double, int)
The primal-dual algorithm solves special types of variational problems, that is, it finds a function that minimizes some functional. Since image denoising, in particular, can be posed as a variational problem, the primal-dual algorithm can be used to perform denoising, and this is exactly what is implemented.
public static void DenoiseTVL1(IEnumerable<Mat> observations, Mat result, double lambda = 1, int niters = 30)
Parameters
observationsIEnumerable<Mat>This array should contain one or more noised versions of the image that is to be restored.
resultMatHere the denoised image will be stored. There is no need to do pre-allocation of storage space, as it will be automatically allocated, if necessary.
lambdadoubleCorresponds to \f$\lambda\f$ in the formulas above. As it is enlarged, the smooth (blurred) images are treated more favorably than detailed (but maybe more noised) ones. Roughly speaking, as it becomes smaller, the result will be blurrier, but more severe outliers will be removed.
nitersintNumber of iterations that the algorithm will run. More iterations generally give better results, but it is hard to quantitatively refine this statement, so just use the default and increase it if the results are poor.
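A minimal sketch of calling DenoiseTVL1 with several noisy observations of the same scene (file names are placeholders):

```csharp
using System.Collections.Generic;
using OpenCvSharp;

// Several 8-bit grayscale shots of the same static scene (placeholder paths).
var observations = new List<Mat>
{
    new Mat("noisy0.png", ImreadModes.Grayscale),
    new Mat("noisy1.png", ImreadModes.Grayscale),
    new Mat("noisy2.png", ImreadModes.Grayscale),
};

using var result = new Mat();
// lambda and niters follow the parameter descriptions above.
Cv2.DenoiseTVL1(observations, result, lambda: 1.0, niters: 30);

foreach (var m in observations) m.Dispose();
```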
DestroyAllWindows()
Destroys all of the HighGUI windows.
public static void DestroyAllWindows()
DestroyWindow(string)
Destroys the specified window.
public static void DestroyWindow(string winName)
Parameters
winNamestringName of the window to be destroyed.
DetailEnhance(InputArray, OutputArray, float, float)
This filter enhances the details of a particular image.
public static void DetailEnhance(InputArray src, OutputArray dst, float sigmaS = 10, float sigmaR = 0.15)
Parameters
srcInputArrayInput 8-bit 3-channel image.
dstOutputArrayOutput image with the same size and type as src.
sigmaSfloatRange between 0 and 200.
sigmaRfloatRange between 0 and 1.
Determinant(InputArray)
computes determinant of a square matrix
public static double Determinant(InputArray mtx)
Parameters
mtxInputArrayThe input matrix; must have CV_32FC1 or CV_64FC1 type and square size
Returns
- double
determinant of the specified matrix.
Dft(InputArray, OutputArray, DftFlags, int)
Performs a forward Discrete Fourier transform of 1D or 2D floating-point array.
public static void Dft(InputArray src, OutputArray dst, DftFlags flags = DftFlags.None, int nonzeroRows = 0)
Parameters
srcInputArrayThe source array, real or complex
dstOutputArrayThe destination array, whose size and type depend on the flags
flagsDftFlagsTransformation flags, a combination of the DftFlags values
nonzeroRowsintWhen the parameter != 0, the function assumes that only the first nonzeroRows rows of the input array (when DFT_INVERSE is not set) or only the first nonzeroRows rows of the output array (when DFT_INVERSE is set) contain non-zeros, so the function can handle the rest of the rows more efficiently and save some time. This technique is very useful for computing array cross-correlation or convolution using DFT
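A minimal sketch of a forward DFT of a grayscale image and extraction of the magnitude spectrum (the input path is a placeholder):

```csharp
using OpenCvSharp;

using var gray = new Mat("input.png", ImreadModes.Grayscale);

// Dft expects a floating-point array, so convert first.
using var src32f = new Mat();
gray.ConvertTo(src32f, MatType.CV_32F);

using var spectrum = new Mat();
Cv2.Dft(src32f, spectrum, DftFlags.ComplexOutput);

// Split the complex result into real/imaginary planes and take the magnitude.
Cv2.Split(spectrum, out Mat[] planes);
using var magnitude = new Mat();
Cv2.Magnitude(planes[0], planes[1], magnitude);
```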
Dilate(InputArray, OutputArray, InputArray?, Point?, int, BorderTypes, Scalar?)
Dilates an image by using a specific structuring element.
public static void Dilate(InputArray src, OutputArray dst, InputArray? element, Point? anchor = null, int iterations = 1, BorderTypes borderType = BorderTypes.Constant, Scalar? borderValue = null)
Parameters
srcInputArrayThe source image
dstOutputArrayThe destination image. It will have the same size and the same type as src
elementInputArrayThe structuring element used for dilation. If element=new Mat() , a 3x3 rectangular structuring element is used
anchorPoint?Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center
iterationsintThe number of times dilation is applied. [By default this is 1]
borderTypeBorderTypesThe pixel extrapolation method. [By default this is BorderType.Constant]
borderValueScalar?The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()]
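A minimal sketch of Dilate, first with the default 3x3 kernel and then with an explicit structuring element (the input path is a placeholder):

```csharp
using OpenCvSharp;

using var src = new Mat("binary.png", ImreadModes.Grayscale);
using var dst = new Mat();

// Passing null for element uses the default 3x3 rectangular kernel.
Cv2.Dilate(src, dst, null, iterations: 2);

// An explicit kernel can be built with GetStructuringElement.
using var kernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new Size(5, 5));
Cv2.Dilate(src, dst, kernel);
```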
DistanceTransform(InputArray, OutputArray, DistanceTypes, DistanceTransformMasks, int)
computes the distance transform map
public static void DistanceTransform(InputArray src, OutputArray dst, DistanceTypes distanceType, DistanceTransformMasks maskSize, int dstType = 4)
Parameters
srcInputArray8-bit, single-channel (binary) source image.
dstOutputArrayOutput image with calculated distances. It is an 8-bit or 32-bit floating-point, single-channel image of the same size as src.
distanceTypeDistanceTypesType of distance
maskSizeDistanceTransformMasksSize of the distance transform mask, see #DistanceTransformMasks. In case of the #DIST_L1 or #DIST_C distance type, the parameter is forced to 3 because a 3x3 mask gives the same result as 5x5 or any larger aperture.
dstTypeintType of output image. It can be MatType.CV_8U or MatType.CV_32F. Type CV_8U can be used only for the first variant of the function and distanceType == #DIST_L1.
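A minimal sketch of a distance transform. DistanceTransform expects an 8-bit single-channel binary image, so the input is thresholded first (path and threshold are placeholder assumptions):

```csharp
using OpenCvSharp;

using var gray = new Mat("shapes.png", ImreadModes.Grayscale);
using var bin = new Mat();
Cv2.Threshold(gray, bin, 127, 255, ThresholdTypes.Binary);

using var dist = new Mat();
Cv2.DistanceTransform(bin, dist, DistanceTypes.L2, DistanceTransformMasks.Mask5);

// dist is CV_32F; normalize to [0, 1] for display if needed.
Cv2.Normalize(dist, dist, 0, 1.0, NormTypes.MinMax);
```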
DistanceTransformWithLabels(InputArray, OutputArray, OutputArray, DistanceTypes, DistanceTransformMasks, DistanceTransformLabelTypes)
Calculates the distance to the closest zero pixel for each pixel of the source image.
public static void DistanceTransformWithLabels(InputArray src, OutputArray dst, OutputArray labels, DistanceTypes distanceType, DistanceTransformMasks maskSize, DistanceTransformLabelTypes labelType = DistanceTransformLabelTypes.CComp)
Parameters
srcInputArray8-bit, single-channel (binary) source image.
dstOutputArrayOutput image with calculated distances. It is an 8-bit or 32-bit floating-point, single-channel image of the same size as src.
labelsOutputArrayOutput 2D array of labels (the discrete Voronoi diagram). It has the type CV_32SC1 and the same size as src.
distanceTypeDistanceTypesType of distance
maskSizeDistanceTransformMasksSize of the distance transform mask, see #DistanceTransformMasks. #DIST_MASK_PRECISE is not supported by this variant. In case of the #DIST_L1 or #DIST_C distance type, the parameter is forced to 3 because a 3x3 mask gives the same result as 5x5 or any larger aperture.
labelTypeDistanceTransformLabelTypesType of the label array to build
Divide(InputArray, InputArray, OutputArray, double, MatType?)
Performs per-element division of two arrays or a scalar by an array.
public static void Divide(InputArray src1, InputArray src2, OutputArray dst, double scale = 1, MatType? dtype = null)
Parameters
src1InputArrayThe first source array
src2InputArrayThe second source array; should have the same size and same type as src1
dstOutputArrayThe destination array; will have the same size and same type as src2
scaledoubleScale factor [By default this is 1]
dtypeMatType?
Divide(double, InputArray, OutputArray, int)
Performs per-element division of two arrays or a scalar by an array.
public static void Divide(double scale, InputArray src2, OutputArray dst, int dtype = -1)
Parameters
scaledoubleScale factor
src2InputArrayThe second source array
dstOutputArrayThe destination array; will have the same size and same type as src2
dtypeint
DrawChessboardCorners(InputOutputArray, Size, InputArray, bool)
Renders the detected chessboard corners.
public static void DrawChessboardCorners(InputOutputArray image, Size patternSize, InputArray corners, bool patternWasFound)
Parameters
imageInputOutputArrayDestination image. It must be an 8-bit color image.
patternSizeSizeNumber of inner corners per a chessboard row and column (patternSize = cv::Size(points_per_row,points_per_column)).
cornersInputArrayArray of detected corners, the output of findChessboardCorners.
patternWasFoundboolParameter indicating whether the complete board was found or not. The return value of findChessboardCorners() should be passed here.
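A minimal sketch pairing FindChessboardCorners with DrawChessboardCorners (the image path and the 9x6 pattern size are placeholder assumptions):

```csharp
using OpenCvSharp;

using var img = new Mat("board.jpg", ImreadModes.Color);
using var gray = new Mat();
Cv2.CvtColor(img, gray, ColorConversionCodes.BGR2GRAY);

// Inner-corner count of the target, not the number of squares.
var patternSize = new Size(9, 6);
bool found = Cv2.FindChessboardCorners(gray, patternSize, out Point2f[] corners);

// Pass the FindChessboardCorners result straight through, as the docs advise;
// when found is false the detected corners are drawn as unconnected points.
Cv2.DrawChessboardCorners(img, patternSize, corners, found);
```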
DrawChessboardCorners(InputOutputArray, Size, IEnumerable<Point2f>, bool)
Renders the detected chessboard corners.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void DrawChessboardCorners(InputOutputArray image, Size patternSize, IEnumerable<Point2f> corners, bool patternWasFound)
Parameters
imageInputOutputArrayDestination image. It must be an 8-bit color image.
patternSizeSizeNumber of inner corners per a chessboard row and column (patternSize = cv::Size(points_per_row,points_per_column)).
cornersIEnumerable<Point2f>Array of detected corners, the output of findChessboardCorners.
patternWasFoundboolParameter indicating whether the complete board was found or not. The return value of findChessboardCorners() should be passed here.
DrawContours(InputOutputArray, IEnumerable<Mat>, int, Scalar, int, LineTypes, Mat?, int, Point?)
draws contours in the image
public static void DrawContours(InputOutputArray image, IEnumerable<Mat> contours, int contourIdx, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, Mat? hierarchy = null, int maxLevel = 2147483647, Point? offset = null)
Parameters
imageInputOutputArrayDestination image.
contoursIEnumerable<Mat>All the input contours. Each contour is stored as a point vector.
contourIdxintParameter indicating a contour to draw. If it is negative, all the contours are drawn.
colorScalarColor of the contours.
thicknessintThickness of lines the contours are drawn with. If it is negative (for example, thickness=CV_FILLED ), the contour interiors are drawn.
lineTypeLineTypesLine connectivity.
hierarchyMatOptional information about hierarchy. It is only needed if you want to draw only some of the contours
maxLevelintMaximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available.
offsetPoint?Optional contour shift parameter. Shift all the drawn contours by the specified offset = (dx, dy)
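A minimal sketch pairing FindContours with the Point-based DrawContours overload (input path and threshold are placeholder assumptions):

```csharp
using OpenCvSharp;

using var gray = new Mat("shapes.png", ImreadModes.Grayscale);
using var bin = new Mat();
Cv2.Threshold(gray, bin, 127, 255, ThresholdTypes.Binary);

Cv2.FindContours(bin, out Point[][] contours, out HierarchyIndex[] hierarchy,
    RetrievalModes.Tree, ContourApproximationModes.ApproxSimple);

using var canvas = new Mat(bin.Size(), MatType.CV_8UC3, Scalar.Black);
// contourIdx = -1 draws all contours; a negative thickness would fill them.
Cv2.DrawContours(canvas, contours, -1, new Scalar(0, 255, 0), 2,
    LineTypes.Link8, hierarchy);
```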
DrawContours(InputOutputArray, IEnumerable<IEnumerable<Point>>, int, Scalar, int, LineTypes, IEnumerable<HierarchyIndex>?, int, Point?)
draws contours in the image
public static void DrawContours(InputOutputArray image, IEnumerable<IEnumerable<Point>> contours, int contourIdx, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, IEnumerable<HierarchyIndex>? hierarchy = null, int maxLevel = 2147483647, Point? offset = null)
Parameters
imageInputOutputArrayDestination image.
contoursIEnumerable<IEnumerable<Point>>All the input contours. Each contour is stored as a point vector.
contourIdxintParameter indicating a contour to draw. If it is negative, all the contours are drawn.
colorScalarColor of the contours.
thicknessintThickness of lines the contours are drawn with. If it is negative (for example, thickness=CV_FILLED ), the contour interiors are drawn.
lineTypeLineTypesLine connectivity.
hierarchyIEnumerable<HierarchyIndex>Optional information about hierarchy. It is only needed if you want to draw only some of the contours
maxLevelintMaximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available.
offsetPoint?Optional contour shift parameter. Shift all the drawn contours by the specified offset = (dx, dy)
DrawFrameAxes(InputOutputArray, InputArray, InputArray, InputArray, InputArray, float, int)
Draw axes of the world/object coordinate system from pose estimation.
public static void DrawFrameAxes(InputOutputArray image, InputArray cameraMatrix, InputArray distCoeffs, InputArray rvec, InputArray tvec, float length, int thickness = 3)
Parameters
imageInputOutputArrayInput/output image. It must have 1 or 3 channels. The number of channels is not altered.
cameraMatrixInputArrayInput 3x3 floating-point matrix of camera intrinsic parameters.
distCoeffsInputArrayInput vector of distortion coefficients \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$ of 4, 5, 8, 12 or 14 elements.If the vector is empty, the zero distortion coefficients are assumed.
rvecInputArrayRotation vector (see @ref Rodrigues ) that, together with tvec , brings points from the model coordinate system to the camera coordinate system.
tvecInputArrayTranslation vector.
lengthfloatLength of the painted axes in the same unit as tvec (usually in meters).
thicknessintLine thickness of the painted axes.
Remarks
This function draws the axes of the world/object coordinate system w.r.t. the camera frame. OX is drawn in red, OY in green and OZ in blue.
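A minimal sketch of DrawFrameAxes. This assumes a pose has already been estimated (e.g. with Cv2.SolvePnP); the intrinsics, distortion coefficients, and pose below are placeholder values:

```csharp
using OpenCvSharp;

// Placeholder intrinsics and pose; in practice rvec/tvec come from SolvePnP.
using var cameraMatrix = new Mat(3, 3, MatType.CV_64FC1, new double[]
    { 800, 0, 320, 0, 800, 240, 0, 0, 1 });
using var distCoeffs = Mat.Zeros(5, 1, MatType.CV_64FC1).ToMat();
using var rvec = Mat.Zeros(3, 1, MatType.CV_64FC1).ToMat();
using var tvec = new Mat(3, 1, MatType.CV_64FC1, new double[] { 0, 0, 1.0 });

using var img = new Mat("frame.png", ImreadModes.Color);
// length is in the same unit as tvec (meters here); thickness is in pixels.
Cv2.DrawFrameAxes(img, cameraMatrix, distCoeffs, rvec, tvec, length: 0.1f);
```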
DrawKeypoints(InputArray, IEnumerable<KeyPoint>, InputOutputArray, Scalar?, DrawMatchesFlags)
Draw keypoints.
public static void DrawKeypoints(InputArray image, IEnumerable<KeyPoint> keypoints, InputOutputArray outImage, Scalar? color = null, DrawMatchesFlags flags = DrawMatchesFlags.Default)
Parameters
imageInputArraySource image.
keypointsIEnumerable<KeyPoint>Keypoints from the source image.
outImageInputOutputArrayOutput image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.
colorScalar?Color of keypoints.
flagsDrawMatchesFlagsFlags setting drawing features. Possible flags bit values are defined by DrawMatchesFlags.
DrawMarker(InputOutputArray, Point, Scalar, MarkerTypes, int, int, LineTypes)
Draws a marker on a predefined position in an image.
The function cv::drawMarker draws a marker on a given position in the image. For the moment several marker types are supported; see #MarkerTypes for more information.
public static void DrawMarker(InputOutputArray img, Point position, Scalar color, MarkerTypes markerType = MarkerTypes.Cross, int markerSize = 20, int thickness = 1, LineTypes lineType = LineTypes.Link8)
Parameters
imgInputOutputArrayImage.
positionPointThe point where the crosshair is positioned.
colorScalarLine color.
markerTypeMarkerTypesThe specific type of marker you want to use.
markerSizeintThe length of the marker axis [default = 20 pixels]
thicknessintLine thickness.
lineTypeLineTypesType of the line.
DrawMatches(Mat, IEnumerable<KeyPoint>, Mat, IEnumerable<KeyPoint>, IEnumerable<DMatch>, Mat, Scalar?, Scalar?, IEnumerable<byte>?, DrawMatchesFlags)
Draws the found matches of keypoints from two images.
public static void DrawMatches(Mat img1, IEnumerable<KeyPoint> keypoints1, Mat img2, IEnumerable<KeyPoint> keypoints2, IEnumerable<DMatch> matches1To2, Mat outImg, Scalar? matchColor = null, Scalar? singlePointColor = null, IEnumerable<byte>? matchesMask = null, DrawMatchesFlags flags = DrawMatchesFlags.Default)
Parameters
img1MatFirst source image.
keypoints1IEnumerable<KeyPoint>Keypoints from the first source image.
img2MatSecond source image.
keypoints2IEnumerable<KeyPoint>Keypoints from the second source image.
matches1To2IEnumerable<DMatch>Matches from the first image to the second one, which means that keypoints1[i] has a corresponding point in keypoints2[matches[i]] .
outImgMatOutput image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.
matchColorScalar?Color of matches (lines and connected keypoints). If matchColor==Scalar::all(-1), the color is generated randomly.
singlePointColorScalar?Color of single keypoints (circles), i.e. keypoints that do not have matches. If singlePointColor==Scalar::all(-1), the color is generated randomly.
matchesMaskIEnumerable<byte>Mask determining which matches are drawn. If the mask is empty, all matches are drawn.
flagsDrawMatchesFlagsFlags setting drawing features. Possible flags bit values are defined by DrawMatchesFlags.
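A minimal sketch of DrawMatches using ORB features and brute-force Hamming matching, a common pairing for binary descriptors (image paths are placeholders):

```csharp
using OpenCvSharp;

using var img1 = new Mat("left.png", ImreadModes.Grayscale);
using var img2 = new Mat("right.png", ImreadModes.Grayscale);

// Detect keypoints and compute binary descriptors with ORB.
using var orb = ORB.Create();
using var desc1 = new Mat();
using var desc2 = new Mat();
orb.DetectAndCompute(img1, null, out KeyPoint[] kp1, desc1);
orb.DetectAndCompute(img2, null, out KeyPoint[] kp2, desc2);

// Cross-checked brute-force matching with the Hamming norm.
using var matcher = new BFMatcher(NormTypes.Hamming, crossCheck: true);
DMatch[] matches = matcher.Match(desc1, desc2);

using var outImg = new Mat();
Cv2.DrawMatches(img1, kp1, img2, kp2, matches, outImg);
```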
DrawMatchesKnn(Mat, IEnumerable<KeyPoint>, Mat, IEnumerable<KeyPoint>, IEnumerable<IEnumerable<DMatch>>, Mat, Scalar?, Scalar?, IEnumerable<IEnumerable<byte>>?, DrawMatchesFlags)
Draws the found matches of keypoints from two images.
public static void DrawMatchesKnn(Mat img1, IEnumerable<KeyPoint> keypoints1, Mat img2, IEnumerable<KeyPoint> keypoints2, IEnumerable<IEnumerable<DMatch>> matches1To2, Mat outImg, Scalar? matchColor = null, Scalar? singlePointColor = null, IEnumerable<IEnumerable<byte>>? matchesMask = null, DrawMatchesFlags flags = DrawMatchesFlags.Default)
Parameters
img1MatFirst source image.
keypoints1IEnumerable<KeyPoint>Keypoints from the first source image.
img2MatSecond source image.
keypoints2IEnumerable<KeyPoint>Keypoints from the second source image.
matches1To2IEnumerable<IEnumerable<DMatch>>Matches from the first image to the second one, which means that keypoints1[i] has a corresponding point in keypoints2[matches[i]] .
outImgMatOutput image. Its content depends on the flags value defining what is drawn in the output image. See possible flags bit values below.
matchColorScalar?Color of matches (lines and connected keypoints). If matchColor==Scalar::all(-1), the color is generated randomly.
singlePointColorScalar?Color of single keypoints (circles), i.e. keypoints that do not have matches. If singlePointColor==Scalar::all(-1), the color is generated randomly.
matchesMaskIEnumerable<IEnumerable<byte>>Mask determining which matches are drawn. If the mask is empty, all matches are drawn.
flagsDrawMatchesFlagsFlags setting drawing features. Possible flags bit values are defined by DrawMatchesFlags.
EMD(InputArray, InputArray, DistanceTypes)
Computes the "minimal work" distance between two weighted point configurations.
The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in @cite RubnerSept98, @cite Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, so the complexity is exponential in the worst case, though on average it is much faster. In the case of a real metric, the lower boundary can be calculated even faster (using a linear-time algorithm) and can be used to determine roughly whether the two signatures are far enough apart that they cannot relate to the same object.
public static float EMD(InputArray signature1, InputArray signature2, DistanceTypes distType)
Parameters
signature1InputArrayFirst signature, a \f$\texttt{size1}\times \texttt{dims}+1\f$ floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.
signature2InputArraySecond signature of the same format as signature1, though the number of rows may be different. The total weights may be different; in this case an extra "dummy" point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.
distTypeDistanceTypesUsed metric.
Returns
EMD(InputArray, InputArray, DistanceTypes, InputArray?)
Computes the "minimal work" distance between two weighted point configurations.
The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in @cite RubnerSept98, @cite Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, so the complexity is exponential in the worst case, though on average it is much faster. In the case of a real metric, the lower boundary can be calculated even faster (using a linear-time algorithm) and can be used to determine roughly whether the two signatures are far enough apart that they cannot relate to the same object.
public static float EMD(InputArray signature1, InputArray signature2, DistanceTypes distType, InputArray? cost)
Parameters
signature1InputArrayFirst signature, a \f$\texttt{size1}\times \texttt{dims}+1\f$ floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.
signature2InputArraySecond signature of the same format as signature1, though the number of rows may be different. The total weights may be different; in this case an extra "dummy" point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.
distTypeDistanceTypesUsed metric.
costInputArrayUser-defined size1 x size2 cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.
Returns
EMD(InputArray, InputArray, DistanceTypes, InputArray?, out float, OutputArray?)
Computes the "minimal work" distance between two weighted point configurations.
The function computes the earth mover distance and/or a lower boundary of the distance between the two weighted point configurations. One of the applications described in @cite RubnerSept98, @cite Rubner2000 is multi-dimensional histogram comparison for image retrieval. EMD is a transportation problem that is solved using some modification of a simplex algorithm, so the complexity is exponential in the worst case, though on average it is much faster. In the case of a real metric, the lower boundary can be calculated even faster (using a linear-time algorithm) and can be used to determine roughly whether the two signatures are far enough apart that they cannot relate to the same object.
public static float EMD(InputArray signature1, InputArray signature2, DistanceTypes distType, InputArray? cost, out float lowerBound, OutputArray? flow = null)
Parameters
signature1InputArrayFirst signature, a \f$\texttt{size1}\times \texttt{dims}+1\f$ floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used. The weights must be non-negative and have at least one non-zero value.
signature2InputArraySecond signature of the same format as signature1, though the number of rows may be different. The total weights may be different; in this case an extra "dummy" point is added to either signature1 or signature2. The weights must be non-negative and have at least one non-zero value.
distTypeDistanceTypesUsed metric.
costInputArrayUser-defined size1 x size2 cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.
lowerBoundfloatOptional input/output parameter: lower boundary of a distance between the two signatures, that is, a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of the point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column). You must initialize lowerBound. If the calculated distance between mass centers is greater than or equal to lowerBound (meaning the signatures are far enough apart), the function does not calculate EMD. In any case, lowerBound is set to the calculated distance between mass centers on return. Thus, if you want to calculate both the distance between mass centers and the EMD, lowerBound should be set to 0.
flowOutputArrayResultant size1 x size2 flow matrix: flow[i,j] is a flow from i-th point of signature1 to j-th point of signature2.
Returns
EdgePreservingFilter(InputArray, OutputArray, EdgePreservingMethods, float, float)
Filtering is the fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications @cite EM11 .
public static void EdgePreservingFilter(InputArray src, OutputArray dst, EdgePreservingMethods flags = EdgePreservingMethods.RecursFilter, float sigmaS = 60, float sigmaR = 0.4)
Parameters
srcInputArrayInput 8-bit 3-channel image.
dstOutputArrayOutput 8-bit 3-channel image.
flagsEdgePreservingMethodsEdge preserving filters
sigmaSfloatRange between 0 and 200.
sigmaRfloatRange between 0 and 1.
Eigen(InputArray, OutputArray, OutputArray)
Computes eigenvalues and eigenvectors of a symmetric matrix.
public static bool Eigen(InputArray src, OutputArray eigenvalues, OutputArray eigenvectors)
Parameters
srcInputArrayThe input matrix; must have CV_32FC1 or CV_64FC1 type, square size and be symmetric: src^T == src
eigenvaluesOutputArrayThe output vector of eigenvalues of the same type as src; the eigenvalues are stored in descending order.
eigenvectorsOutputArrayThe output matrix of eigenvectors; It will have the same size and the same type as src; The eigenvectors are stored as subsequent matrix rows, in the same order as the corresponding eigenvalues
Returns
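A minimal sketch of Eigen on a small symmetric matrix (src^T == src, as the method requires):

```csharp
using OpenCvSharp;

// A 2x2 symmetric matrix; its eigenvalues are 3 and 1.
using var src = new Mat(2, 2, MatType.CV_64FC1, new double[] { 2, 1, 1, 2 });

using var eigenvalues = new Mat();
using var eigenvectors = new Mat();
Cv2.Eigen(src, eigenvalues, eigenvectors);

// Eigenvalues arrive in descending order; row i of eigenvectors
// corresponds to eigenvalues.At<double>(i).
```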
EigenNonSymmetric(InputArray, OutputArray, OutputArray)
Calculates eigenvalues and eigenvectors of a non-symmetric matrix (real eigenvalues only).
public static void EigenNonSymmetric(InputArray src, OutputArray eigenvalues, OutputArray eigenvectors)
Parameters
srcInputArrayinput matrix (CV_32FC1 or CV_64FC1 type).
eigenvaluesOutputArrayoutput vector of eigenvalues (type is the same type as src).
eigenvectorsOutputArrayoutput matrix of eigenvectors (type is the same type as src). The eigenvectors are stored as subsequent matrix rows, in the same order as the corresponding eigenvalues.
Ellipse(InputOutputArray, Point, Size, double, double, double, Scalar, int, LineTypes, int)
Draws simple or thick elliptic arc or fills ellipse sector
public static void Ellipse(InputOutputArray img, Point center, Size axes, double angle, double startAngle, double endAngle, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayImage.
centerPointCenter of the ellipse.
axesSizeLength of the ellipse axes.
angledoubleRotation angle.
startAngledoubleStarting angle of the elliptic arc.
endAngledoubleEnding angle of the elliptic arc.
colorScalarEllipse color.
thicknessintThickness of the ellipse arc. [By default this is 1]
lineTypeLineTypesType of the ellipse boundary. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the center coordinates and axes' values. [By default this is 0]
Ellipse(InputOutputArray, RotatedRect, Scalar, int, LineTypes)
Draws simple or thick elliptic arc or fills ellipse sector
public static void Ellipse(InputOutputArray img, RotatedRect box, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8)
Parameters
imgInputOutputArrayImage.
boxRotatedRectThe enclosing box of the ellipse drawn
colorScalarEllipse color.
thicknessintThickness of the ellipse boundary. [By default this is 1]
lineTypeLineTypesType of the ellipse boundary. [By default this is LineType.Link8]
Ellipse2Poly(Point, Size, int, int, int, int)
Approximates an elliptic arc with a polyline. The function ellipse2Poly computes the vertices of a polyline that approximates the specified elliptic arc. It is used by cv::ellipse.
public static Point[] Ellipse2Poly(Point center, Size axes, int angle, int arcStart, int arcEnd, int delta)
Parameters
centerPointCenter of the arc.
axesSizeHalf of the size of the ellipse main axes. See the ellipse for details.
angleintRotation angle of the ellipse in degrees. See the ellipse for details.
arcStartintStarting angle of the elliptic arc in degrees.
arcEndintEnding angle of the elliptic arc in degrees.
deltaintAngle between the subsequent polyline vertices. It defines the approximation accuracy.
Returns
- Point[]
Output vector of polyline vertices.
Ellipse2Poly(Point2d, Size2d, int, int, int, int)
Approximates an elliptic arc with a polyline. The function ellipse2Poly computes the vertices of a polyline that approximates the specified elliptic arc. It is used by cv::ellipse.
public static Point2d[] Ellipse2Poly(Point2d center, Size2d axes, int angle, int arcStart, int arcEnd, int delta)
Parameters
centerPoint2dCenter of the arc.
axesSize2dHalf of the size of the ellipse main axes. See the ellipse for details.
angleintRotation angle of the ellipse in degrees. See the ellipse for details.
arcStartintStarting angle of the elliptic arc in degrees.
arcEndintEnding angle of the elliptic arc in degrees.
deltaintAngle between the subsequent polyline vertices. It defines the approximation accuracy.
Returns
- Point2d[]
Output vector of polyline vertices.
EqualizeHist(InputArray, OutputArray)
normalizes the grayscale image brightness and contrast by normalizing its histogram
public static void EqualizeHist(InputArray src, OutputArray dst)
Parameters
srcInputArrayThe source 8-bit single channel image
dstOutputArrayThe destination image; will have the same size and the same type as src
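A minimal sketch of EqualizeHist (the input path is a placeholder):

```csharp
using OpenCvSharp;

using var gray = new Mat("lowcontrast.png", ImreadModes.Grayscale);
using var equalized = new Mat();

// The input must be 8-bit single-channel; for color images, equalize only
// a luminance channel (e.g. the Y plane of YCrCb) rather than each channel.
Cv2.EqualizeHist(gray, equalized);
```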
Erode(InputArray, OutputArray, InputArray?, Point?, int, BorderTypes, Scalar?)
Erodes an image by using a specific structuring element.
public static void Erode(InputArray src, OutputArray dst, InputArray? element, Point? anchor = null, int iterations = 1, BorderTypes borderType = BorderTypes.Constant, Scalar? borderValue = null)
Parameters
srcInputArrayThe source image
dstOutputArrayThe destination image. It will have the same size and the same type as src
elementInputArrayThe structuring element used for erosion. If element=new Mat(), a 3x3 rectangular structuring element is used
anchorPoint?Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center
iterationsintThe number of times erosion is applied
borderTypeBorderTypesThe pixel extrapolation method
borderValueScalar?The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()]
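A minimal usage sketch (the file path and iteration count are illustrative):

```csharp
using OpenCvSharp;

using var src = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var dst = new Mat();
// Explicit 3x3 rectangular structuring element, applied twice.
using var element = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(3, 3));
Cv2.Erode(src, dst, element, iterations: 2);
```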
EstimateAffine2D(InputArray, InputArray, OutputArray?, RobustEstimationAlgorithms, double, ulong, double, ulong)
Computes an optimal affine transformation between two 2D point sets.
public static Mat? EstimateAffine2D(InputArray from, InputArray to, OutputArray? inliers = null, RobustEstimationAlgorithms method = RobustEstimationAlgorithms.RANSAC, double ransacReprojThreshold = 3, ulong maxIters = 2000, double confidence = 0.99, ulong refineIters = 10)
Parameters
fromInputArrayFirst input 2D point set containing (X,Y).
toInputArraySecond input 2D point set containing (x,y).
inliersOutputArrayOutput vector indicating which points are inliers (1-inlier, 0-outlier).
methodRobustEstimationAlgorithmsRobust method used to compute transformation.
ransacReprojThresholddoubleMaximum reprojection error in the RANSAC algorithm to consider a point as an inlier.Applies only to RANSAC.
maxItersulongThe maximum number of robust method iterations.
confidencedoubleConfidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough.Values too close to 1 can slow down the estimation significantly.Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
refineItersulongMaximum number of iterations of refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be output of robust method.
Returns
- Mat
Output 2D affine transformation matrix 2 x 3, or an empty matrix if the transformation could not be estimated.
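A minimal usage sketch with hypothetical point correspondences (the coordinates below are made up for illustration; `InputArray.Create` is assumed to wrap the managed arrays):

```csharp
using OpenCvSharp;

// Three (or more) matched 2D points per set.
var from = new[] { new Point2f(0, 0), new Point2f(100, 0), new Point2f(0, 100) };
var to   = new[] { new Point2f(10, 5), new Point2f(110, 5), new Point2f(10, 105) };
using var inliers = new Mat();
using Mat? m = Cv2.EstimateAffine2D(InputArray.Create(from), InputArray.Create(to), inliers);
// m is a 2x3 transformation matrix, or null/empty if estimation failed.
```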
EstimateAffine3D(InputArray, InputArray, OutputArray, OutputArray, double, double)
Computes an optimal affine transformation between two 3D point sets.
public static int EstimateAffine3D(InputArray src, InputArray dst, OutputArray outVal, OutputArray inliers, double ransacThreshold = 3, double confidence = 0.99)
Parameters
srcInputArrayFirst input 3D point set.
dstInputArraySecond input 3D point set.
outValOutputArrayOutput 3D affine transformation matrix 3 x 4 .
inliersOutputArrayOutput vector indicating which points are inliers.
ransacThresholddoubleMaximum reprojection error in the RANSAC algorithm to consider a point as an inlier.
confidencedoubleConfidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
Returns
EstimateAffinePartial2D(InputArray, InputArray, OutputArray?, RobustEstimationAlgorithms, double, ulong, double, ulong)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
public static Mat? EstimateAffinePartial2D(InputArray from, InputArray to, OutputArray? inliers = null, RobustEstimationAlgorithms method = RobustEstimationAlgorithms.RANSAC, double ransacReprojThreshold = 3, ulong maxIters = 2000, double confidence = 0.99, ulong refineIters = 10)
Parameters
fromInputArrayFirst input 2D point set.
toInputArraySecond input 2D point set.
inliersOutputArrayOutput vector indicating which points are inliers.
methodRobustEstimationAlgorithmsRobust method used to compute transformation.
ransacReprojThresholddoubleMaximum reprojection error in the RANSAC algorithm to consider a point as an inlier.Applies only to RANSAC.
maxItersulongThe maximum number of robust method iterations.
confidencedoubleConfidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough.Values too close to 1 can slow down the estimation significantly.Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
refineItersulongMaximum number of iterations of the refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method.
Returns
- Mat
Output 2D affine transformation (4 degrees of freedom) matrix 2x3 or empty matrix if transformation could not be estimated.
EvaluateFeatureDetector(Mat, Mat, Mat, ref KeyPoint[], ref KeyPoint[], out float, out int)
public static void EvaluateFeatureDetector(Mat img1, Mat img2, Mat H1to2, ref KeyPoint[] keypoints1, ref KeyPoint[] keypoints2, out float repeatability, out int correspCount)
Parameters
img1Mat
img2Mat
H1to2Mat
keypoints1KeyPoint[]
keypoints2KeyPoint[]
repeatabilityfloat
correspCountint
Exp(InputArray, OutputArray)
computes exponent of each matrix element (dst = e**src)
public static void Exp(InputArray src, OutputArray dst)
Parameters
srcInputArrayThe source array
dstOutputArrayThe destination array; will have the same size and same type as src
ExtractChannel(InputArray, OutputArray, int)
extracts a single channel from src (coi is 0-based index)
public static void ExtractChannel(InputArray src, OutputArray dst, int coi)
Parameters
srcInputArray
dstOutputArray
coiint
FAST(InputArray, int, bool)
Detects corners using the FAST algorithm
public static KeyPoint[] FAST(InputArray image, int threshold, bool nonmaxSupression = true)
Parameters
imageInputArraygrayscale image where keypoints (corners) are detected.
thresholdintthreshold on difference between intensity of the central pixel and pixels of a circle around this pixel.
nonmaxSupressionboolif true, non-maximum suppression is applied to detected corners (keypoints).
Returns
- KeyPoint[]
keypoints detected on the image.
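A minimal usage sketch (the file path and threshold are illustrative):

```csharp
using System;
using OpenCvSharp;

using var gray = Cv2.ImRead("scene.png", ImreadModes.Grayscale);
// Higher threshold -> fewer, stronger corners.
KeyPoint[] keypoints = Cv2.FAST(gray, threshold: 40, nonmaxSupression: true);
Console.WriteLine($"{keypoints.Length} corners detected");
```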
FAST(InputArray, int, bool, FASTType)
Detects corners using the FAST algorithm
public static KeyPoint[] FAST(InputArray image, int threshold, bool nonmaxSupression, FASTType type)
Parameters
imageInputArraygrayscale image where keypoints (corners) are detected.
thresholdintthreshold on difference between intensity of the central pixel and pixels of a circle around this pixel.
nonmaxSupressionboolif true, non-maximum suppression is applied to detected corners (keypoints).
typeFASTTypeone of the three neighborhoods as defined in the paper
Returns
- KeyPoint[]
keypoints detected on the image.
FastAtan2(float, float)
computes the angle in degrees (0..360) of the vector (x,y)
public static float FastAtan2(float y, float x)
Parameters
yfloaty-coordinate of the vector.
xfloatx-coordinate of the vector.
Returns
FastNlMeansDenoising(InputArray, OutputArray, float, int, int)
Perform image denoising using the Non-local Means Denoising algorithm with several computational optimizations. Noise is expected to be Gaussian white noise
public static void FastNlMeansDenoising(InputArray src, OutputArray dst, float h = 3, int templateWindowSize = 7, int searchWindowSize = 21)
Parameters
srcInputArrayInput 8-bit 1-channel, 2-channel or 3-channel image.
dstOutputArrayOutput image with the same size and type as src .
hfloatParameter regulating filter strength. A bigger h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise
templateWindowSizeintSize in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels
searchWindowSizeintSize in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize - greater denoising time. Recommended value 21 pixels
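A minimal usage sketch (the file path and filter strength are illustrative):

```csharp
using OpenCvSharp;

using var noisy = Cv2.ImRead("noisy.png", ImreadModes.Grayscale);
using var denoised = new Mat();
// h = 10 smooths more aggressively than the default h = 3,
// at the cost of some fine detail.
Cv2.FastNlMeansDenoising(noisy, denoised, h: 10);
```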
FastNlMeansDenoisingColored(InputArray, OutputArray, float, float, int, int)
Modification of fastNlMeansDenoising function for colored images
public static void FastNlMeansDenoisingColored(InputArray src, OutputArray dst, float h = 3, float hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21)
Parameters
srcInputArrayInput 8-bit 3-channel image.
dstOutputArrayOutput image with the same size and type as src.
hfloatParameter regulating filter strength for luminance component. Bigger h value perfectly removes noise but also removes image details, smaller h value preserves details but also preserves some noise
hColorfloatThe same as h but for color components. For most images a value of 10 will be enough to remove colored noise without distorting colors
templateWindowSizeintSize in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels
searchWindowSizeintSize in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize - greater denoising time. Recommended value 21 pixels
FastNlMeansDenoisingColoredMulti(IEnumerable<Mat>, OutputArray, int, int, float, float, int, int)
Modification of fastNlMeansDenoisingMulti function for colored images sequences
public static void FastNlMeansDenoisingColoredMulti(IEnumerable<Mat> srcImgs, OutputArray dst, int imgToDenoiseIndex, int temporalWindowSize, float h = 3, float hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21)
Parameters
srcImgsIEnumerable<Mat>Input 8-bit 3-channel images sequence. All images should have the same type and size.
dstOutputArrayOutput image with the same size and type as srcImgs images.
imgToDenoiseIndexintTarget image to denoise index in srcImgs sequence
temporalWindowSizeintNumber of surrounding images to use for target image denoising. Should be odd. Images from imgToDenoiseIndex - temporalWindowSize / 2 to imgToDenoiseIndex + temporalWindowSize / 2 from srcImgs will be used to denoise srcImgs[imgToDenoiseIndex] image.
hfloatParameter regulating filter strength for luminance component. Bigger h value perfectly removes noise but also removes image details, smaller h value preserves details but also preserves some noise.
hColorfloatThe same as h but for color components.
templateWindowSizeintSize in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels
searchWindowSizeintSize in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize - greater denoising time. Recommended value 21 pixels
FastNlMeansDenoisingMulti(IEnumerable<Mat>, OutputArray, int, int, float, int, int)
Modification of the fastNlMeansDenoising function for image sequences where consecutive images have been captured in a small period of time, for example video. This version of the function is for grayscale images or for manual manipulation with colorspaces.
public static void FastNlMeansDenoisingMulti(IEnumerable<Mat> srcImgs, OutputArray dst, int imgToDenoiseIndex, int temporalWindowSize, float h = 3, int templateWindowSize = 7, int searchWindowSize = 21)
Parameters
srcImgsIEnumerable<Mat>Input 8-bit 1-channel, 2-channel or 3-channel images sequence. All images should have the same type and size.
dstOutputArrayOutput image with the same size and type as srcImgs images.
imgToDenoiseIndexintTarget image to denoise index in srcImgs sequence
temporalWindowSizeintNumber of surrounding images to use for target image denoising. Should be odd. Images from imgToDenoiseIndex - temporalWindowSize / 2 to imgToDenoiseIndex + temporalWindowSize / 2 from srcImgs will be used to denoise srcImgs[imgToDenoiseIndex] image.
hfloatParameter regulating filter strength for luminance component. Bigger h value perfectly removes noise but also removes image details, smaller h value preserves details but also preserves some noise
templateWindowSizeintSize in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels
searchWindowSizeintSize in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize - greater denoising time. Recommended value 21 pixels
FillConvexPoly(InputOutputArray, InputArray, Scalar, LineTypes, int)
Fills a convex polygon.
public static void FillConvexPoly(InputOutputArray img, InputArray pts, Scalar color, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayImage
ptsInputArrayThe polygon vertices
colorScalarPolygon color
lineTypeLineTypesType of the polygon boundaries
shiftintThe number of fractional bits in the vertex coordinates
FillConvexPoly(Mat, IEnumerable<Point>, Scalar, LineTypes, int)
Fills a convex polygon.
public static void FillConvexPoly(Mat img, IEnumerable<Point> pts, Scalar color, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgMatImage
ptsIEnumerable<Point>The polygon vertices
colorScalarPolygon color
lineTypeLineTypesType of the polygon boundaries
shiftintThe number of fractional bits in the vertex coordinates
FillPoly(InputOutputArray, InputArray, Scalar, LineTypes, int, Point?)
Fills the area bounded by one or more polygons
public static void FillPoly(InputOutputArray img, InputArray pts, Scalar color, LineTypes lineType = LineTypes.Link8, int shift = 0, Point? offset = null)
Parameters
imgInputOutputArrayImage
ptsInputArrayArray of polygons, each represented as an array of points
colorScalarPolygon color
lineTypeLineTypesType of the polygon boundaries
shiftintThe number of fractional bits in the vertex coordinates
offsetPoint?Optional offset of all points of the polygons
FillPoly(Mat, IEnumerable<IEnumerable<Point>>, Scalar, LineTypes, int, Point?)
Fills the area bounded by one or more polygons
public static void FillPoly(Mat img, IEnumerable<IEnumerable<Point>> pts, Scalar color, LineTypes lineType = LineTypes.Link8, int shift = 0, Point? offset = null)
Parameters
imgMatImage
ptsIEnumerable<IEnumerable<Point>>Array of polygons, each represented as an array of points
colorScalarPolygon color
lineTypeLineTypesType of the polygon boundaries
shiftintThe number of fractional bits in the vertex coordinates
offsetPoint?Optional offset of all points of the polygons
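A minimal usage sketch drawing one filled triangle on a blank canvas:

```csharp
using OpenCvSharp;

using var canvas = new Mat(200, 200, MatType.CV_8UC3, Scalar.Black);
var triangle = new[] { new Point(100, 20), new Point(180, 170), new Point(20, 170) };
// pts is an array of polygons; here a single triangle.
Cv2.FillPoly(canvas, new[] { triangle }, Scalar.White, LineTypes.AntiAlias);
```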
Filter2D(InputArray, OutputArray, MatType, InputArray, Point?, double, BorderTypes)
Convolves an image with the kernel
public static void Filter2D(InputArray src, OutputArray dst, MatType ddepth, InputArray kernel, Point? anchor = null, double delta = 0, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayThe source image
dstOutputArrayThe destination image. It will have the same size and the same number of channels as src
ddepthMatTypeThe desired depth of the destination image. If it is negative, it will be the same as src.depth()
kernelInputArrayConvolution kernel (or rather a correlation kernel), a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually
anchorPoint?The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that the anchor is at the kernel center
deltadoubleThe optional value added to the filtered pixels before storing them in dst
borderTypeBorderTypesThe pixel extrapolation method
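A minimal usage sketch applying a 3x3 sharpening kernel (the file path and kernel values are illustrative; the Mat constructor taking a managed array is assumed):

```csharp
using OpenCvSharp;

using var src = Cv2.ImRead("input.png");
using var dst = new Mat();
// Classic sharpening kernel; ddepth = -1 keeps the source depth.
var kernelData = new float[] { 0, -1, 0, -1, 5, -1, 0, -1, 0 };
using var kernel = new Mat(3, 3, MatType.CV_32FC1, kernelData);
Cv2.Filter2D(src, dst, -1, kernel);
```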
FilterHomographyDecompByVisibleRefpoints(IEnumerable<Mat>, IEnumerable<Mat>, InputArray, InputArray, OutputArray, InputArray?)
Filters homography decompositions based on additional information.
public static void FilterHomographyDecompByVisibleRefpoints(IEnumerable<Mat> rotations, IEnumerable<Mat> normals, InputArray beforePoints, InputArray afterPoints, OutputArray possibleSolutions, InputArray? pointsMask = null)
Parameters
rotationsIEnumerable<Mat>Vector of rotation matrices.
normalsIEnumerable<Mat>Vector of plane normal matrices.
beforePointsInputArrayVector of (rectified) visible reference points before the homography is applied
afterPointsInputArrayVector of (rectified) visible reference points after the homography is applied
possibleSolutionsOutputArrayVector of int indices representing the viable solution set after filtering
pointsMaskInputArrayoptional Mat/Vector of 8u type representing the mask for the inliers as given by the findHomography function
FilterSpeckles(InputOutputArray, double, int, double, InputOutputArray?)
filters off speckles (small regions of incorrectly computed disparity)
public static void FilterSpeckles(InputOutputArray img, double newVal, int maxSpeckleSize, double maxDiff, InputOutputArray? buf = null)
Parameters
imgInputOutputArrayThe input 16-bit signed disparity image
newValdoubleThe disparity value used to paint-off the speckles
maxSpeckleSizeintThe maximum speckle size to consider it a speckle. Larger blobs are not affected by the algorithm
maxDiffdoubleMaximum difference between neighbor disparity pixels to put them into the same blob. Note that since StereoBM, StereoSGBM, and possibly other algorithms return a fixed-point disparity map, where disparity values are multiplied by 16, this scale factor should be taken into account when specifying this parameter value.
bufInputOutputArrayThe optional temporary buffer to avoid memory allocation within the function.
Find4QuadCornerSubpix(InputArray, InputOutputArray, Size)
finds subpixel-accurate positions of the chessboard corners
public static bool Find4QuadCornerSubpix(InputArray img, InputOutputArray corners, Size regionSize)
Parameters
imgInputArray
cornersInputOutputArray
regionSizeSize
Returns
Find4QuadCornerSubpix(InputArray, Point2f[], Size)
finds subpixel-accurate positions of the chessboard corners
public static bool Find4QuadCornerSubpix(InputArray img, Point2f[] corners, Size regionSize)
Parameters
imgInputArray
cornersPoint2f[]
regionSizeSize
Returns
FindChessboardCorners(InputArray, Size, OutputArray, ChessboardFlags)
Finds the positions of internal corners of the chessboard.
public static bool FindChessboardCorners(InputArray image, Size patternSize, OutputArray corners, ChessboardFlags flags = ChessboardFlags.AdaptiveThresh | ChessboardFlags.NormalizeImage)
Parameters
imageInputArraySource chessboard view. It must be an 8-bit grayscale or color image.
patternSizeSizeNumber of inner corners per chessboard row and column ( patternSize = Size(points_per_row, points_per_column) = Size(columns, rows) ).
cornersOutputArrayOutput array of detected corners.
flagsChessboardFlagsVarious operation flags that can be zero or a combination of the ChessboardFlag values
Returns
- bool
The function returns true if all of the corners are found and they are placed in a certain order (row by row, left to right in every row). Otherwise, if the function fails to find all the corners or reorder them, it returns false.
FindChessboardCorners(InputArray, Size, out Point2f[], ChessboardFlags)
Finds the positions of internal corners of the chessboard.
public static bool FindChessboardCorners(InputArray image, Size patternSize, out Point2f[] corners, ChessboardFlags flags = ChessboardFlags.AdaptiveThresh | ChessboardFlags.NormalizeImage)
Parameters
imageInputArraySource chessboard view. It must be an 8-bit grayscale or color image.
patternSizeSizeNumber of inner corners per chessboard row and column ( patternSize = Size(points_per_row, points_per_column) = Size(columns, rows) ).
cornersPoint2f[]Output array of detected corners.
flagsChessboardFlagsVarious operation flags that can be zero or a combination of the ChessboardFlag values
Returns
- bool
The function returns true if all of the corners are found and they are placed in a certain order (row by row, left to right in every row). Otherwise, if the function fails to find all the corners or reorder them, it returns false.
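A minimal usage sketch (the file path and pattern size are illustrative):

```csharp
using OpenCvSharp;

using var board = Cv2.ImRead("chessboard.png", ImreadModes.Grayscale);
var patternSize = new Size(9, 6); // inner corners per row and column
bool found = Cv2.FindChessboardCorners(board, patternSize, out Point2f[] corners);
if (found)
{
    // Optionally refine to subpixel accuracy.
    Cv2.Find4QuadCornerSubpix(board, corners, new Size(3, 3));
}
```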
FindChessboardCornersSB(InputArray, Size, OutputArray, ChessboardFlags)
Finds the positions of internal corners of the chessboard using a sector based approach.
public static bool FindChessboardCornersSB(InputArray image, Size patternSize, OutputArray corners, ChessboardFlags flags = ChessboardFlags.None)
Parameters
imageInputArraySource chessboard view. It must be an 8-bit grayscale or color image.
patternSizeSizeNumber of inner corners per a chessboard row and column (patternSize = Size(points_per_row, points_per_column) = Size(columns, rows) ).
cornersOutputArrayOutput array of detected corners.
flagsChessboardFlagsVarious operation flags that can be zero or a combination of the ChessboardFlags values.
Returns
FindChessboardCornersSB(InputArray, Size, out Point2f[], ChessboardFlags)
Finds the positions of internal corners of the chessboard using a sector based approach.
public static bool FindChessboardCornersSB(InputArray image, Size patternSize, out Point2f[] corners, ChessboardFlags flags = ChessboardFlags.None)
Parameters
imageInputArraySource chessboard view. It must be an 8-bit grayscale or color image.
patternSizeSizeNumber of inner corners per a chessboard row and column (patternSize = Size(points_per_row, points_per_column) = Size(columns, rows) ).
cornersPoint2f[]Output array of detected corners.
flagsChessboardFlagsVarious operation flags that can be zero or a combination of the ChessboardFlags values.
Returns
FindCirclesGrid(InputArray, Size, OutputArray, FindCirclesGridFlags, Feature2D?)
Finds centers in the grid of circles.
public static bool FindCirclesGrid(InputArray image, Size patternSize, OutputArray centers, FindCirclesGridFlags flags = FindCirclesGridFlags.SymmetricGrid, Feature2D? blobDetector = null)
Parameters
imageInputArraygrid view of input circles; it must be an 8-bit grayscale or color image.
patternSizeSizenumber of circles per row and column ( patternSize = Size(points_per_row, points_per_column) ).
centersOutputArrayoutput array of detected centers.
flagsFindCirclesGridFlagsvarious operation flags that can be one of the FindCirclesGridFlag values
blobDetectorFeature2Dfeature detector that finds blobs like dark circles on light background.
Returns
FindCirclesGrid(InputArray, Size, out Point2f[], FindCirclesGridFlags, Feature2D?)
Finds centers in the grid of circles.
public static bool FindCirclesGrid(InputArray image, Size patternSize, out Point2f[] centers, FindCirclesGridFlags flags = FindCirclesGridFlags.SymmetricGrid, Feature2D? blobDetector = null)
Parameters
imageInputArraygrid view of input circles; it must be an 8-bit grayscale or color image.
patternSizeSizenumber of circles per row and column ( patternSize = Size(points_per_row, points_per_column) ).
centersPoint2f[]output array of detected centers.
flagsFindCirclesGridFlagsvarious operation flags that can be one of the FindCirclesGridFlag values
blobDetectorFeature2Dfeature detector that finds blobs like dark circles on light background.
Returns
FindContours(InputArray, out Mat[], OutputArray, RetrievalModes, ContourApproximationModes, Point?)
Finds contours in a binary image.
public static void FindContours(InputArray image, out Mat[] contours, OutputArray hierarchy, RetrievalModes mode, ContourApproximationModes method, Point? offset = null)
Parameters
imageInputArraySource, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours.
contoursMat[]Detected contours. Each contour is stored as a vector of points.
hierarchyOutputArrayOptional output vector, containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the members of the elements hierarchy[i] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.
modeRetrievalModesContour retrieval mode
methodContourApproximationModesContour approximation method
offsetPoint?Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.
FindContours(InputArray, out Point[][], out HierarchyIndex[], RetrievalModes, ContourApproximationModes, Point?)
Finds contours in a binary image.
public static void FindContours(InputArray image, out Point[][] contours, out HierarchyIndex[] hierarchy, RetrievalModes mode, ContourApproximationModes method, Point? offset = null)
Parameters
imageInputArraySource, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours.
contoursPoint[][]Detected contours. Each contour is stored as a vector of points.
hierarchyHierarchyIndex[]Optional output vector, containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the members of the elements hierarchy[i] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative.
modeRetrievalModesContour retrieval mode
methodContourApproximationModesContour approximation method
offsetPoint?Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.
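A minimal usage sketch (the file path is a placeholder; note that the source image is modified during extraction):

```csharp
using System;
using OpenCvSharp;

using var binary = Cv2.ImRead("mask.png", ImreadModes.Grayscale);
Cv2.FindContours(binary, out Point[][] contours, out HierarchyIndex[] hierarchy,
    RetrievalModes.External, ContourApproximationModes.ApproxSimple);
Console.WriteLine($"{contours.Length} outer contours found");
```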
FindContoursAsArray(InputArray, RetrievalModes, ContourApproximationModes, Point?)
Finds contours in a binary image.
public static Point[][] FindContoursAsArray(InputArray image, RetrievalModes mode, ContourApproximationModes method, Point? offset = null)
Parameters
imageInputArraySource, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours.
modeRetrievalModesContour retrieval mode
methodContourApproximationModesContour approximation method
offsetPoint?Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.
Returns
- Point[][]
Detected contours. Each contour is stored as a vector of points.
FindContoursAsMat(InputArray, RetrievalModes, ContourApproximationModes, Point?)
Finds contours in a binary image.
public static Mat<Point>[] FindContoursAsMat(InputArray image, RetrievalModes mode, ContourApproximationModes method, Point? offset = null)
Parameters
imageInputArraySource, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours.
modeRetrievalModesContour retrieval mode
methodContourApproximationModesContour approximation method
offsetPoint?Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context.
Returns
FindEssentialMat(InputArray, InputArray, InputArray, EssentialMatMethod, double, double, OutputArray?)
Calculates an essential matrix from the corresponding points in two images.
public static Mat FindEssentialMat(InputArray points1, InputArray points2, InputArray cameraMatrix, EssentialMatMethod method = EssentialMatMethod.Ransac, double prob = 0.999, double threshold = 1, OutputArray? mask = null)
Parameters
points1InputArrayArray of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2InputArrayArray of the second image points of the same size and format as points1 .
cameraMatrixInputArrayCamera matrix K = [fx 0 cx; 0 fy cy; 0 0 1]. Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix.
methodEssentialMatMethodMethod for computing an essential matrix. RANSAC for the RANSAC algorithm. LMEDS for the LMedS algorithm.
probdoubleParameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
thresholddoubleParameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
maskOutputArrayOutput array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.
Returns
- Mat
essential matrix
FindEssentialMat(InputArray, InputArray, double, Point2d, EssentialMatMethod, double, double, OutputArray?)
Calculates an essential matrix from the corresponding points in two images.
public static Mat FindEssentialMat(InputArray points1, InputArray points2, double focal, Point2d pp, EssentialMatMethod method = EssentialMatMethod.Ransac, double prob = 0.999, double threshold = 1, OutputArray? mask = null)
Parameters
points1InputArrayArray of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2InputArrayArray of the second image points of the same size and format as points1 .
focaldoubleFocal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.
ppPoint2dprincipal point of the camera.
methodEssentialMatMethodMethod for computing an essential matrix. RANSAC for the RANSAC algorithm. LMEDS for the LMedS algorithm.
probdoubleParameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
thresholddoubleParameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
maskOutputArrayOutput array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.
Returns
- Mat
essential matrix
FindFundamentalMat(InputArray, InputArray, FundamentalMatMethods, double, double, OutputArray?)
Calculates a fundamental matrix from the corresponding points in two images.
public static Mat FindFundamentalMat(InputArray points1, InputArray points2, FundamentalMatMethods method = FundamentalMatMethods.Ransac, double param1 = 3, double param2 = 0.99, OutputArray? mask = null)
Parameters
points1InputArrayArray of N points from the first image. The point coordinates should be floating-point (single or double precision).
points2InputArrayArray of the second image points of the same size and format as points1 .
methodFundamentalMatMethodsMethod for computing a fundamental matrix.
param1doubleParameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
param2doubleParameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
maskOutputArrayOutput array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s.
Returns
- Mat
fundamental matrix
FindFundamentalMat(IEnumerable<Point2d>, IEnumerable<Point2d>, FundamentalMatMethods, double, double, OutputArray?)
Calculates a fundamental matrix from the corresponding points in two images.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Mat FindFundamentalMat(IEnumerable<Point2d> points1, IEnumerable<Point2d> points2, FundamentalMatMethods method = FundamentalMatMethods.Ransac, double param1 = 3, double param2 = 0.99, OutputArray? mask = null)
Parameters
points1IEnumerable<Point2d>Array of N points from the first image. The point coordinates should be floating-point (single or double precision).
points2IEnumerable<Point2d>Array of the second image points of the same size and format as points1 .
methodFundamentalMatMethodsMethod for computing a fundamental matrix.
param1doubleParameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
param2doubleParameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
maskOutputArrayOutput array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s.
Returns
- Mat
fundamental matrix
FindFundamentalMat(IEnumerable<Point2f>, IEnumerable<Point2f>, FundamentalMatMethods, double, double, OutputArray?)
Calculates a fundamental matrix from the corresponding points in two images.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Mat FindFundamentalMat(IEnumerable<Point2f> points1, IEnumerable<Point2f> points2, FundamentalMatMethods method = FundamentalMatMethods.Ransac, double param1 = 3, double param2 = 0.99, OutputArray? mask = null)
Parameters
points1IEnumerable<Point2f>Array of N points from the first image. The point coordinates should be floating-point (single or double precision).
points2IEnumerable<Point2f>Array of the second image points of the same size and format as points1 .
methodFundamentalMatMethodsMethod for computing a fundamental matrix.
param1doubleParameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
param2doubleParameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
maskOutputArrayOutput array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s.
Returns
- Mat
fundamental matrix
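A minimal usage sketch for the IEnumerable&lt;Point2d&gt; overload above, using synthetic correspondences (the camera geometry and all numeric values are illustrative, not from any real dataset):

```csharp
using System;
using System.Collections.Generic;
using OpenCvSharp;

// Synthetic correspondences: random 3D points projected into two views
// whose cameras differ by a small translation along X.
var rng = new Random(42);
var pts1 = new List<Point2d>();
var pts2 = new List<Point2d>();
for (var i = 0; i < 20; i++)
{
    double x = rng.NextDouble() * 2 - 1;
    double y = rng.NextDouble() * 2 - 1;
    double z = rng.NextDouble() + 2;
    // Pinhole projection, f = 500, principal point (320, 240).
    pts1.Add(new Point2d(500 * x / z + 320, 500 * y / z + 240));
    pts2.Add(new Point2d(500 * (x - 0.1) / z + 320, 500 * y / z + 240));
}

using var mask = new Mat();
using Mat f = Cv2.FindFundamentalMat(
    pts1, pts2, FundamentalMatMethods.Ransac, 3, 0.99, mask);
// f is 3x3 (CV_64F); mask holds one entry per point: 1 = inlier, 0 = outlier.
```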
FindHomography(InputArray, InputArray, HomographyMethods, double, OutputArray?, int, double)
computes the best-fit perspective transformation mapping srcPoints to dstPoints.
public static Mat FindHomography(InputArray srcPoints, InputArray dstPoints, HomographyMethods method = HomographyMethods.None, double ransacReprojThreshold = 3, OutputArray? mask = null, int maxIters = 2000, double confidence = 0.995)
Parameters
srcPointsInputArrayCoordinates of the points in the original plane, a matrix of the type CV_32FC2
dstPointsInputArrayCoordinates of the points in the target plane, a matrix of the type CV_32FC2
methodHomographyMethodsMethod used to compute a homography matrix.
ransacReprojThresholddoubleMaximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only)
maskOutputArrayOptional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored.
maxItersintThe maximum number of RANSAC iterations.
confidencedoubleConfidence level, between 0 and 1.
Returns
FindHomography(InputArray, InputArray, OutputArray, UsacParams?)
computes the best-fit perspective transformation mapping srcPoints to dstPoints.
public static Mat FindHomography(InputArray srcPoints, InputArray dstPoints, OutputArray mask, UsacParams? @params)
Parameters
srcPointsInputArrayCoordinates of the points in the original plane, a matrix of the type CV_32FC2
dstPointsInputArrayCoordinates of the points in the target plane, a matrix of the type CV_32FC2
maskOutputArrayOptional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored.
paramsUsacParams
Returns
Exceptions
FindHomography(IEnumerable<Point2d>, IEnumerable<Point2d>, HomographyMethods, double, OutputArray?, int, double)
computes the best-fit perspective transformation mapping srcPoints to dstPoints.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Mat FindHomography(IEnumerable<Point2d> srcPoints, IEnumerable<Point2d> dstPoints, HomographyMethods method = HomographyMethods.None, double ransacReprojThreshold = 3, OutputArray? mask = null, int maxIters = 2000, double confidence = 0.995)
Parameters
srcPointsIEnumerable<Point2d>Coordinates of the points in the original plane
dstPointsIEnumerable<Point2d>Coordinates of the points in the target plane
methodHomographyMethodsMethod used to compute a homography matrix.
ransacReprojThresholddoubleMaximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only)
maskOutputArrayOptional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored.
maxItersintThe maximum number of RANSAC iterations.
confidencedoubleConfidence level, between 0 and 1.
Returns
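A minimal sketch of the IEnumerable&lt;Point2d&gt; overload with synthetic data, where the true mapping is a pure translation (all point values are illustrative):

```csharp
using System.Linq;
using OpenCvSharp;

// dst is src translated by (10, 20), so the expected homography
// is a pure translation.
var src = new[]
{
    new Point2d(0, 0), new Point2d(100, 0), new Point2d(0, 100),
    new Point2d(100, 100), new Point2d(50, 25),
};
var dst = src.Select(p => new Point2d(p.X + 10, p.Y + 20)).ToArray();

using Mat h = Cv2.FindHomography(src, dst, HomographyMethods.Ransac, 3);
// h is 3x3 (CV_64F), approximately [1 0 10; 0 1 20; 0 0 1].
```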
FindNonZero(InputArray, OutputArray)
returns the list of locations of non-zero pixels
public static void FindNonZero(InputArray src, OutputArray idx)
Parameters
srcInputArraySingle-channel source array.
idxOutputArrayOutput array containing the locations of the non-zero pixels.
FindTransformECC(InputArray, InputArray, InputOutputArray, MotionTypes, TermCriteria, InputArray?, int)
Finds the geometric transform (warp) between two images in terms of the ECC criterion @cite EP08 .
public static double FindTransformECC(InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, MotionTypes motionType, TermCriteria criteria, InputArray? inputMask = null, int gaussFiltSize = 5)
Parameters
templateImageInputArraysingle-channel template image; CV_8U or CV_32F array.
inputImageInputArraysingle-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage.
warpMatrixInputOutputArrayfloating-point 2×3 or 3×3 mapping matrix (warp).
motionTypeMotionTypesparameter, specifying the type of motion
criteriaTermCriteriaparameter, specifying the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations (a negative criteria.epsilon makes criteria.maxCount the only termination criterion). Default values are shown in the declaration above.
inputMaskInputArrayAn optional mask to indicate valid values of inputImage.
gaussFiltSizeintAn optional value indicating the size of the Gaussian blur filter (default: 5).
Returns
FindTransformECC(InputArray, InputArray, InputOutputArray, MotionTypes, TermCriteria?, InputArray?)
Finds the geometric transform (warp) between two images in terms of the ECC criterion @cite EP08 .
public static double FindTransformECC(InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, MotionTypes motionType = MotionTypes.Affine, TermCriteria? criteria = null, InputArray? inputMask = null)
Parameters
templateImageInputArraysingle-channel template image; CV_8U or CV_32F array.
inputImageInputArraysingle-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage.
warpMatrixInputOutputArrayfloating-point 2×3 or 3×3 mapping matrix (warp).
motionTypeMotionTypesparameter, specifying the type of motion
criteriaTermCriteria?parameter, specifying the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations (a negative criteria.epsilon makes criteria.maxCount the only termination criterion). Default values are shown in the declaration above.
inputMaskInputArrayAn optional mask to indicate valid values of inputImage.
Returns
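A hedged sketch of recovering a known translation with FindTransformECC; the image content, sizes, and termination criteria are illustrative:

```csharp
using OpenCvSharp;

// Template: a bright disc on a dark background; input: the same image
// shifted by (3, 2). ECC should recover that translation.
using var template = new Mat(64, 64, MatType.CV_32FC1, Scalar.All(0));
Cv2.Circle(template, new Point(32, 32), 10, Scalar.All(1), -1);

using var shift = new Mat(2, 3, MatType.CV_32FC1, new float[] { 1, 0, 3, 0, 1, 2 });
using var input = new Mat();
Cv2.WarpAffine(template, input, shift, template.Size());

// Identity initial guess; refined in place by the call.
using var warp = Mat.Eye(2, 3, MatType.CV_32FC1).ToMat();
double cc = Cv2.FindTransformECC(
    template, input, warp, MotionTypes.Translation,
    new TermCriteria(CriteriaTypes.Count | CriteriaTypes.Eps, 50, 1e-6));
// warp.At<float>(0, 2) should approximate 3 and warp.At<float>(1, 2)
// should approximate 2; cc is the final correlation coefficient.
```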
FitEllipse(InputArray)
Fits ellipse to the set of 2D points.
public static RotatedRect FitEllipse(InputArray points)
Parameters
pointsInputArrayInput 2D point set
Returns
FitEllipse(IEnumerable<Point2f>)
Fits ellipse to the set of 2D points.
public static RotatedRect FitEllipse(IEnumerable<Point2f> points)
Parameters
pointsIEnumerable<Point2f>Input 2D point set
Returns
FitEllipse(IEnumerable<Point>)
Fits ellipse to the set of 2D points.
public static RotatedRect FitEllipse(IEnumerable<Point> points)
Parameters
pointsIEnumerable<Point>Input 2D point set
Returns
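A minimal sketch for the IEnumerable&lt;Point2f&gt; overload, fitting points sampled from a known ellipse (all coordinates are synthetic):

```csharp
using System;
using System.Linq;
using OpenCvSharp;

// Points on an axis-aligned ellipse centred at (100, 50)
// with semi-axes 40 and 20.
Point2f[] pts = Enumerable.Range(0, 36)
    .Select(i => i * Math.PI / 18)
    .Select(t => new Point2f(
        (float)(100 + 40 * Math.Cos(t)),
        (float)(50 + 20 * Math.Sin(t))))
    .ToArray();

RotatedRect box = Cv2.FitEllipse(pts);
// box.Center is near (100, 50); box.Size is near (80, 40) —
// RotatedRect stores full axis lengths, not semi-axes.
```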
FitEllipseAMS(InputArray)
Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Approximate Mean Square (AMS) method proposed by @cite Taubin1991 is used.
public static RotatedRect FitEllipseAMS(InputArray points)
Parameters
pointsInputArrayInput 2D point set
Returns
FitEllipseAMS(IEnumerable<Point2f>)
Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Approximate Mean Square (AMS) method proposed by @cite Taubin1991 is used.
public static RotatedRect FitEllipseAMS(IEnumerable<Point2f> points)
Parameters
pointsIEnumerable<Point2f>Input 2D point set
Returns
FitEllipseAMS(IEnumerable<Point>)
Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Approximate Mean Square (AMS) method proposed by @cite Taubin1991 is used.
public static RotatedRect FitEllipseAMS(IEnumerable<Point> points)
Parameters
pointsIEnumerable<Point>Input 2D point set
Returns
FitEllipseDirect(InputArray)
Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Direct least squares (Direct) method by @cite Fitzgibbon1999 is used.
public static RotatedRect FitEllipseDirect(InputArray points)
Parameters
pointsInputArrayInput 2D point set
Returns
FitEllipseDirect(IEnumerable<Point2f>)
Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Direct least squares (Direct) method by @cite Fitzgibbon1999 is used.
public static RotatedRect FitEllipseDirect(IEnumerable<Point2f> points)
Parameters
pointsIEnumerable<Point2f>Input 2D point set
Returns
FitEllipseDirect(IEnumerable<Point>)
Fits an ellipse around a set of 2D points.
The function calculates the ellipse that fits a set of 2D points. It returns the rotated rectangle in which the ellipse is inscribed. The Direct least squares (Direct) method by @cite Fitzgibbon1999 is used.
public static RotatedRect FitEllipseDirect(IEnumerable<Point> points)
Parameters
pointsIEnumerable<Point>Input 2D point set
Returns
FitLine(InputArray, OutputArray, DistanceTypes, double, double, double)
Fits line to the set of 2D points using M-estimator algorithm
public static void FitLine(InputArray points, OutputArray line, DistanceTypes distType, double param, double reps, double aeps)
Parameters
pointsInputArrayInput vector of 2D or 3D points
lineOutputArrayOutput line parameters. In case of 2D fitting, it should be a vector of 4 elements (like Vec4f) - (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. In case of 3D fitting, it should be a vector of 6 elements (like Vec6f) - (vx, vy, vz, x0, y0, z0), where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is a point on the line.
distTypeDistanceTypesDistance used by the M-estimator
paramdoubleNumerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.
repsdoubleSufficient accuracy for the radius (distance between the coordinate origin and the line).
aepsdoubleSufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.
FitLine(IEnumerable<Point2f>, DistanceTypes, double, double, double)
Fits line to the set of 2D points using M-estimator algorithm
public static Line2D FitLine(IEnumerable<Point2f> points, DistanceTypes distType, double param, double reps, double aeps)
Parameters
pointsIEnumerable<Point2f>Input vector of 2D or 3D points
distTypeDistanceTypesDistance used by the M-estimator
paramdoubleNumerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.
repsdoubleSufficient accuracy for the radius (distance between the coordinate origin and the line).
aepsdoubleSufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.
Returns
- Line2D
Output line parameters.
FitLine(IEnumerable<Point3f>, DistanceTypes, double, double, double)
Fits line to the set of 3D points using M-estimator algorithm
public static Line3D FitLine(IEnumerable<Point3f> points, DistanceTypes distType, double param, double reps, double aeps)
Parameters
pointsIEnumerable<Point3f>Input vector of 2D or 3D points
distTypeDistanceTypesDistance used by the M-estimator
paramdoubleNumerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.
repsdoubleSufficient accuracy for the radius (distance between the coordinate origin and the line).
aepsdoubleSufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.
Returns
- Line3D
Output line parameters.
FitLine(IEnumerable<Point3i>, DistanceTypes, double, double, double)
Fits line to the set of 3D points using M-estimator algorithm
public static Line3D FitLine(IEnumerable<Point3i> points, DistanceTypes distType, double param, double reps, double aeps)
Parameters
pointsIEnumerable<Point3i>Input vector of 2D or 3D points
distTypeDistanceTypesDistance used by the M-estimator
paramdoubleNumerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.
repsdoubleSufficient accuracy for the radius (distance between the coordinate origin and the line).
aepsdoubleSufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.
Returns
- Line3D
Output line parameters.
FitLine(IEnumerable<Point>, DistanceTypes, double, double, double)
Fits line to the set of 2D points using M-estimator algorithm
public static Line2D FitLine(IEnumerable<Point> points, DistanceTypes distType, double param, double reps, double aeps)
Parameters
pointsIEnumerable<Point>Input vector of 2D or 3D points
distTypeDistanceTypesDistance used by the M-estimator
paramdoubleNumerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen.
repsdoubleSufficient accuracy for the radius (distance between the coordinate origin and the line).
aepsdoubleSufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps.
Returns
- Line2D
Output line parameters.
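A minimal sketch for the IEnumerable&lt;Point2f&gt; overload with noise-free synthetic points; it assumes Line2D exposes the direction as (Vx, Vy) and a point on the line as (X1, Y1):

```csharp
using System.Linq;
using OpenCvSharp;

// Points on the line y = 2x + 1.
Point2f[] pts = Enumerable.Range(0, 10)
    .Select(i => new Point2f(i, 2 * i + 1))
    .ToArray();

Line2D line = Cv2.FitLine(pts, DistanceTypes.L2, 0, 0.01, 0.01);
// (line.Vx, line.Vy) is a unit direction vector collinear with the line;
// (line.X1, line.Y1) is a point on the fitted line.
```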
Flip(InputArray, OutputArray, FlipMode)
reverses the order of the rows, columns or both in a matrix
public static void Flip(InputArray src, OutputArray dst, FlipMode flipCode)
Parameters
srcInputArrayThe source array
dstOutputArrayThe destination array; will have the same size and same type as src
flipCodeFlipModeSpecifies how to flip the array: 0 means flipping around the x-axis, positive (e.g., 1) means flipping around the y-axis, and negative (e.g., -1) means flipping around both axes.
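A minimal sketch on a 2×2 matrix (values are illustrative):

```csharp
using OpenCvSharp;

using var src = new Mat(2, 2, MatType.CV_8UC1, new byte[] { 1, 2, 3, 4 });
using var dst = new Mat();

Cv2.Flip(src, dst, FlipMode.X);  // flip around the x-axis: row order reversed
// dst is now [3 4; 1 2]; FlipMode.Y reverses columns, FlipMode.XY both.
```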
FloodFill(InputOutputArray, InputOutputArray, Point, Scalar)
Fills a connected component with the given color.
public static int FloodFill(InputOutputArray image, InputOutputArray mask, Point seedPoint, Scalar newVal)
Parameters
imageInputOutputArrayInput/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
maskInputOutputArray(For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller. The function uses and updates the mask, so you are responsible for initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled area does not overlap.
seedPointPointStarting point.
newValScalarNew value of the repainted domain pixels.
Returns
FloodFill(InputOutputArray, InputOutputArray, Point, Scalar, out Rect, Scalar?, Scalar?, FloodFillFlags)
Fills a connected component with the given color.
public static int FloodFill(InputOutputArray image, InputOutputArray mask, Point seedPoint, Scalar newVal, out Rect rect, Scalar? loDiff = null, Scalar? upDiff = null, FloodFillFlags flags = FloodFillFlags.Link4)
Parameters
imageInputOutputArrayInput/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
maskInputOutputArray(For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller. The function uses and updates the mask, so you are responsible for initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled area does not overlap.
seedPointPointStarting point.
newValScalarNew value of the repainted domain pixels.
rectRectOptional output parameter set by the function to the minimum bounding rectangle of the repainted domain.
loDiffScalar?Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
upDiffScalar?Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
flagsFloodFillFlagsOperation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Using FloodFillFlags.MaskOnly will fill in the mask using the grey value 255 (white).
Returns
FloodFill(InputOutputArray, Point, Scalar)
Fills a connected component with the given color.
public static int FloodFill(InputOutputArray image, Point seedPoint, Scalar newVal)
Parameters
imageInputOutputArrayInput/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
seedPointPointStarting point.
newValScalarNew value of the repainted domain pixels.
Returns
FloodFill(InputOutputArray, Point, Scalar, out Rect, Scalar?, Scalar?, FloodFillFlags)
Fills a connected component with the given color.
public static int FloodFill(InputOutputArray image, Point seedPoint, Scalar newVal, out Rect rect, Scalar? loDiff = null, Scalar? upDiff = null, FloodFillFlags flags = FloodFillFlags.Link4)
Parameters
imageInputOutputArrayInput/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below.
seedPointPointStarting point.
newValScalarNew value of the repainted domain pixels.
rectRectOptional output parameter set by the function to the minimum bounding rectangle of the repainted domain.
loDiffScalar?Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
upDiffScalar?Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component.
flagsFloodFillFlagsOperation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Using FloodFillFlags.MaskOnly will fill in the mask using the grey value 255 (white).
Returns
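A minimal sketch of the maskless overload on a synthetic image (sizes, values, and tolerances are illustrative):

```csharp
using OpenCvSharp;

// 8-bit single-channel image: dark background with a 3x3 block of value 200.
using var img = new Mat(10, 10, MatType.CV_8UC1, Scalar.All(0));
img[new Rect(2, 2, 3, 3)].SetTo(Scalar.All(200));

// Repaint the connected component containing the seed with 255. With
// loDiff/upDiff of 10, the fill cannot leak into the 0-valued background.
int count = Cv2.FloodFill(img, new Point(3, 3), Scalar.All(255),
    out Rect region, Scalar.All(10), Scalar.All(10));
// count = 9 repainted pixels; region is their bounding rectangle.
```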
Format(InputArray, FormatType)
Returns a string representation of the specified matrix.
public static string Format(InputArray mtx, FormatType format = FormatType.Default)
Parameters
mtxInputArrayThe matrix to format.
formatFormatTypeThe format type.
Returns
GaussianBlur(InputArray, OutputArray, Size, double, double, BorderTypes)
Blurs an image using a Gaussian filter.
public static void GaussianBlur(InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY = 0, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayinput image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dstOutputArrayoutput image of the same size and type as src.
ksizeSizeGaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or, they can be zeros, and then they are computed from sigma*.
sigmaXdoubleGaussian kernel standard deviation in X direction.
sigmaYdoubleGaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see getGaussianKernel() for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.
borderTypeBorderTypespixel extrapolation method
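A minimal sketch blurring a synthetic impulse image (the size and kernel choice are illustrative):

```csharp
using OpenCvSharp;

// Test image: zeros with a single bright pixel in the middle.
using var src = new Mat(9, 9, MatType.CV_32FC1, Scalar.All(0));
src.Set(4, 4, 1.0f);

using var dst = new Mat();
// 5x5 kernel; sigmaX = 0 lets OpenCV derive sigma from the kernel size.
Cv2.GaussianBlur(src, dst, new Size(5, 5), 0);
// dst now holds the separable Gaussian kernel centred at (4, 4).
```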
Gemm(InputArray, InputArray, double, InputArray, double, OutputArray, GemmFlags)
implements generalized matrix product algorithm GEMM from BLAS
public static void Gemm(InputArray src1, InputArray src2, double alpha, InputArray src3, double gamma, OutputArray dst, GemmFlags flags = GemmFlags.None)
Parameters
src1InputArrayfirst multiplied input matrix.
src2InputArraysecond multiplied input matrix of the same type as src1.
alphadoubleweight of the matrix product.
src3InputArraythird optional delta matrix added to the matrix product; it should have the same type as src1 and src2.
gammadoubleweight of src3.
dstOutputArrayoutput matrix; it has the proper size and the same type as the input matrices.
flagsGemmFlagsoperation flags, such as transposing any of the input matrices.
GetAffineTransform(InputArray, InputArray)
Calculates an affine transform from three pairs of the corresponding points. The function calculates the 2×3 matrix of an affine transform.
public static Mat GetAffineTransform(InputArray src, InputArray dst)
Parameters
srcInputArrayCoordinates of triangle vertices in the source image.
dstInputArrayCoordinates of the corresponding triangle vertices in the destination image.
Returns
GetAffineTransform(IEnumerable<Point2f>, IEnumerable<Point2f>)
Calculates an affine transform from three pairs of the corresponding points. The function calculates the 2×3 matrix of an affine transform.
public static Mat GetAffineTransform(IEnumerable<Point2f> src, IEnumerable<Point2f> dst)
Parameters
srcIEnumerable<Point2f>Coordinates of triangle vertices in the source image.
dstIEnumerable<Point2f>Coordinates of the corresponding triangle vertices in the destination image.
Returns
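A minimal sketch for the IEnumerable&lt;Point2f&gt; overload, where the destination triangle is a known translation of the source (coordinates are illustrative):

```csharp
using OpenCvSharp;

// Destination triangle is the source translated by (10, 5).
var src = new[] { new Point2f(0, 0), new Point2f(1, 0), new Point2f(0, 1) };
var dst = new[] { new Point2f(10, 5), new Point2f(11, 5), new Point2f(10, 6) };

using Mat m = Cv2.GetAffineTransform(src, dst);
// m is 2x3 (CV_64F): [1 0 10; 0 1 5], usable directly with Cv2.WarpAffine.
```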
GetBuildInformation()
Returns the full configuration-time CMake output.
The returned value is raw CMake output, including the version control system revision, compiler version, compiler flags, enabled modules and third-party libraries, etc. The output format depends on the target architecture.
public static string GetBuildInformation()
Returns
GetCpuFeaturesLine()
Returns the list of CPU features enabled during compilation. The returned value is a string containing a space-separated list of CPU features with the following markers:
- no marker - baseline features
- prefix * - features enabled in dispatcher
- suffix ? - features enabled but not available in HW
public static string GetCpuFeaturesLine()
Returns
Examples
SSE SSE2 SSE3 *SSE4.1 *SSE4.2 *FP16 *AVX *AVX2 *AVX512-SKX?
GetCpuTickCount()
Returns the number of CPU ticks.
The function returns the current number of CPU ticks on some architectures (such as x86, x64, and PowerPC). On other platforms the function is equivalent to GetTickCount. It can also be used for very accurate time measurements, as well as for RNG initialization. Note that on multi-CPU systems a thread from which GetCpuTickCount is called can be suspended and resumed on another CPU with its own counter, so, theoretically (and practically), subsequent calls to the function do not necessarily return monotonically increasing values. Also, since a modern CPU varies the CPU frequency depending on the load, the number of CPU clocks spent in some code cannot be directly converted to time units. Therefore, GetTickCount is generally the preferable solution for measuring execution time.
public static long GetCpuTickCount()
Returns
GetDefaultNewCameraMatrix(InputArray, Size?, bool)
returns the default new camera matrix (by default it is the same as cameraMatrix unless centerPrincipalPoint=true)
public static Mat GetDefaultNewCameraMatrix(InputArray cameraMatrix, Size? imgSize = null, bool centerPrincipalPoint = false)
Parameters
cameraMatrixInputArrayInput camera matrix.
imgSizeSize?Camera view image size in pixels.
centerPrincipalPointboolLocation of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not.
Returns
- Mat
the camera matrix that is either an exact copy of the input cameraMatrix (when centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true).
GetDerivKernels(OutputArray, OutputArray, int, int, int, bool, MatType?)
Returns filter coefficients for computing spatial image derivatives.
public static void GetDerivKernels(OutputArray kx, OutputArray ky, int dx, int dy, int ksize, bool normalize = false, MatType? ktype = null)
Parameters
kxOutputArrayOutput matrix of row filter coefficients. It has the type ktype.
kyOutputArrayOutput matrix of column filter coefficients. It has the type ktype.
dxintDerivative order in respect of x.
dyintDerivative order in respect of y.
ksizeintAperture size. It can be CV_SCHARR, 1, 3, 5, or 7.
normalizeboolFlag indicating whether to normalize (scale down) the filter coefficients or not. Theoretically, the coefficients should have the denominator 2^(ksize*2-dx-dy-2). If you are going to filter floating-point images, you are likely to use the normalized kernels. But if you compute derivatives of an 8-bit image, store the results in a 16-bit image, and wish to preserve all the fractional bits, you may want to set normalize = false.
ktypeMatType?Type of filter coefficients. It can be CV_32F or CV_64F.
GetFontScaleFromHeight(HersheyFonts, int, int)
Calculates the font-specific size to use to achieve a given height in pixels.
public static double GetFontScaleFromHeight(HersheyFonts fontFace, int pixelHeight, int thickness = 1)
Parameters
fontFaceHersheyFontsFont to use, see cv::HersheyFonts.
pixelHeightintPixel height to compute the fontScale for
thicknessintThickness of lines used to render the text. See putText for details.
Returns
- double
The fontSize to use for cv::putText
GetGaborKernel(Size, double, double, double, double, double, int)
Returns Gabor filter coefficients.
public static Mat GetGaborKernel(Size ksize, double sigma, double theta, double lambd, double gamma, double psi, int ktype)
Parameters
ksizeSizeSize of the filter returned.
sigmadoubleStandard deviation of the Gaussian envelope.
thetadoubleOrientation of the normal to the parallel stripes of a Gabor function.
lambddoubleWavelength of the sinusoidal factor.
gammadoubleSpatial aspect ratio.
psidoublePhase offset.
ktypeintType of filter coefficients. It can be CV_32F or CV_64F.
Returns
Remarks
For more details about gabor filter equations and parameters, see: https://en.wikipedia.org/wiki/Gabor_filter
GetGaussianKernel(int, double, MatType?)
Returns Gaussian filter coefficients.
public static Mat? GetGaussianKernel(int ksize, double sigma, MatType? ktype = null)
Parameters
ksizeintAperture size. It should be odd and positive.
sigmadoubleGaussian standard deviation. If it is non-positive, it is computed from ksize as sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8.
ktypeMatType?Type of filter coefficients. It can be CV_32F or CV_64F.
Returns
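A minimal sketch (the kernel size is illustrative):

```csharp
using OpenCvSharp;

// 5-tap kernel; a non-positive sigma is derived from ksize as documented.
using Mat? kernel = Cv2.GetGaussianKernel(5, -1);
// kernel is a 5x1 CV_64F column vector whose coefficients sum to 1.
```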
GetHardwareFeatureName(CpuFeatures)
Returns feature name by ID. Returns empty string if feature is not defined
public static string GetHardwareFeatureName(CpuFeatures feature)
Parameters
featureCpuFeatures
Returns
GetLogLevel()
Get global logging level
public static LogLevel GetLogLevel()
Returns
- LogLevel
logging level
GetMouseWheelDelta(MouseEventFlags)
Gets the mouse-wheel motion delta, when handling mouse-wheel events cv::EVENT_MOUSEWHEEL and cv::EVENT_MOUSEHWHEEL.
For regular mice with a scroll-wheel, delta will be a multiple of 120. The value 120 corresponds to a one-notch rotation of the wheel or the threshold for an action to be taken, and one such action should occur for each delta. Some high-precision mice with higher-resolution freely-rotating wheels may generate smaller values.
For cv::EVENT_MOUSEWHEEL, positive and negative values mean forward and backward scrolling, respectively. For cv::EVENT_MOUSEHWHEEL, where available, positive and negative values mean right and left scrolling, respectively.
public static int GetMouseWheelDelta(MouseEventFlags flags)
Parameters
flagsMouseEventFlagsThe mouse callback flags parameter.
Returns
GetNearestPoint(IEnumerable<Point2f>, float)
public static int GetNearestPoint(IEnumerable<Point2f> recallPrecisionCurve, float lPrecision)
Parameters
recallPrecisionCurveIEnumerable<Point2f>
lPrecisionfloat
Returns
GetNumThreads()
Returns the number of threads used by OpenCV for parallel regions.
Always returns 1 if OpenCV is built without threading support. The exact meaning of the return value depends on the threading framework used by the OpenCV library:
- TBB - The number of threads that OpenCV will try to use for parallel regions. If there is any tbb::thread_scheduler_init in user code conflicting with OpenCV, then the function returns the default number of threads used by the TBB library.
- OpenMP - An upper bound on the number of threads that could be used to form a new team.
- Concurrency - The number of threads that OpenCV will try to use for parallel regions.
- GCD - Unsupported; returns the GCD thread pool limit (512) for compatibility.
- C= - The number of threads that OpenCV will try to use for parallel regions, if setNumThreads was called earlier with threads > 0; otherwise returns the number of logical CPUs available to the process.
public static int GetNumThreads()
Returns
GetNumberOfCpus()
Returns the number of logical CPUs available for the process.
public static int GetNumberOfCpus()
Returns
GetOptimalDFTSize(int)
Returns the optimal DFT size for a given vector size.
public static int GetOptimalDFTSize(int vecSize)
Parameters
vecSizeintvector size.
Returns
GetOptimalNewCameraMatrix(InputArray, InputArray?, Size, double, Size, out Rect, bool)
Returns the new camera matrix based on the free scaling parameter.
public static Mat GetOptimalNewCameraMatrix(InputArray cameraMatrix, InputArray? distCoeffs, Size imageSize, double alpha, Size newImgSize, out Rect validPixROI, bool centerPrincipalPoint = false)
Parameters
cameraMatrixInputArrayInput camera matrix.
distCoeffsInputArrayInput vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the array is null, the zero distortion coefficients are assumed.
imageSizeSizeOriginal image size.
alphadoubleFree scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image).
newImgSizeSizeImage size after rectification. By default, it is set to imageSize.
validPixROIRectOptional output rectangle that outlines all-good-pixels region in the undistorted image. See roi1, roi2 description in stereoRectify() .
centerPrincipalPointboolOptional flag that indicates whether in the new camera matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image.
Returns
- Mat
optimal new camera matrix
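A minimal sketch of the undistortion workflow; the camera matrix and distortion coefficients below are placeholder values that would normally come from Cv2.CalibrateCamera, and the file path is illustrative.

```csharp
using OpenCvSharp;

// Undistort an image while keeping all source pixels (alpha = 1).
using var cameraMatrix = new Mat(3, 3, MatType.CV_64FC1,
    new double[] { 800, 0, 320, 0, 800, 240, 0, 0, 1 });
using var distCoeffs = new Mat(1, 5, MatType.CV_64FC1,
    new double[] { -0.2, 0.05, 0, 0, 0 });
using var src = Cv2.ImRead("distorted.png");

var newMatrix = Cv2.GetOptimalNewCameraMatrix(
    cameraMatrix, distCoeffs, src.Size(), alpha: 1,
    src.Size(), out Rect validRoi);

using var dst = new Mat();
Cv2.Undistort(src, dst, cameraMatrix, distCoeffs, newMatrix);

// validRoi outlines the region containing only valid (non-extrapolated) pixels.
using var cropped = new Mat(dst, validRoi);
```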
GetOptimalNewCameraMatrix(double[,], double[], Size, double, Size, out Rect, bool)
Returns the new camera matrix based on the free scaling parameter.
public static double[,]? GetOptimalNewCameraMatrix(double[,] cameraMatrix, double[] distCoeffs, Size imageSize, double alpha, Size newImgSize, out Rect validPixROI, bool centerPrincipalPoint = false)
Parameters
cameraMatrixdouble[,]Input camera matrix.
distCoeffsdouble[]Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the array is null, the zero distortion coefficients are assumed.
imageSizeSizeOriginal image size.
alphadoubleFree scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image).
newImgSizeSizeImage size after rectification. By default, it is set to imageSize.
validPixROIRectOptional output rectangle that outlines all-good-pixels region in the undistorted image. See roi1, roi2 description in stereoRectify() .
centerPrincipalPointboolOptional flag that indicates whether in the new camera matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image.
Returns
- double[,]
optimal new camera matrix
GetPerspectiveTransform(InputArray, InputArray)
Calculates a perspective transform from four pairs of the corresponding points. The function calculates the 3×3 matrix of a perspective transform.
public static Mat GetPerspectiveTransform(InputArray src, InputArray dst)
Parameters
srcInputArrayCoordinates of quadrangle vertices in the source image.
dstInputArrayCoordinates of the corresponding quadrangle vertices in the destination image.
Returns
GetPerspectiveTransform(IEnumerable<Point2f>, IEnumerable<Point2f>)
Calculates a perspective transform from four pairs of the corresponding points. The function calculates the 3×3 matrix of a perspective transform.
public static Mat GetPerspectiveTransform(IEnumerable<Point2f> src, IEnumerable<Point2f> dst)
Parameters
srcIEnumerable<Point2f>Coordinates of quadrangle vertices in the source image.
dstIEnumerable<Point2f>Coordinates of the corresponding quadrangle vertices in the destination image.
Returns
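A sketch of a common use: rectifying a photographed quadrilateral (e.g. a document) to a flat rectangle. The corner coordinates and file path are placeholders.

```csharp
using OpenCvSharp;

// Map four source corners onto a 300x400 rectangle.
var srcQuad = new[]
{
    new Point2f(56, 65), new Point2f(368, 52),
    new Point2f(389, 390), new Point2f(28, 387),
};
var dstQuad = new[]
{
    new Point2f(0, 0), new Point2f(300, 0),
    new Point2f(300, 400), new Point2f(0, 400),
};

using var m = Cv2.GetPerspectiveTransform(srcQuad, dstQuad);
using var src = Cv2.ImRead("document.png");
using var warped = new Mat();
Cv2.WarpPerspective(src, warped, m, new Size(300, 400));
```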
GetRecall(IEnumerable<Point2f>, float)
Computes the recall value at the given precision from a recall-precision curve.
public static float GetRecall(IEnumerable<Point2f> recallPrecisionCurve, float lPrecision)
Parameters
recallPrecisionCurveIEnumerable<Point2f>
lPrecisionfloat
Returns
GetRectSubPix(InputArray, Size, Point2f, OutputArray, int)
Retrieves a pixel rectangle from an image with sub-pixel accuracy.
public static void GetRectSubPix(InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType = -1)
Parameters
imageInputArraySource image.
patchSizeSizeSize of the extracted patch.
centerPoint2fFloating point coordinates of the center of the extracted rectangle within the source image. The center must be inside the image.
patchOutputArrayExtracted patch that has the size patchSize and the same number of channels as src .
patchTypeintDepth of the extracted pixels. By default, they have the same depth as src.
GetRotationMatrix2D(Point2f, double, double)
Calculates an affine matrix of 2D rotation.
public static Mat GetRotationMatrix2D(Point2f center, double angle, double scale)
Parameters
centerPoint2fCenter of the rotation in the source image.
angledoubleRotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner).
scaledoubleIsotropic scale factor.
Returns
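The returned 2x3 matrix is typically passed straight to WarpAffine; a minimal sketch (file path is a placeholder):

```csharp
using OpenCvSharp;

// Rotate an image 30 degrees counter-clockwise about its center.
using var src = Cv2.ImRead("input.png");
var center = new Point2f(src.Cols / 2f, src.Rows / 2f);
using var rot = Cv2.GetRotationMatrix2D(center, angle: 30, scale: 1.0);

using var dst = new Mat();
Cv2.WarpAffine(src, dst, rot, src.Size());
```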
GetStructuringElement(MorphShapes, Size)
Returns a structuring element of the specified size and shape for morphological operations. The function constructs and returns the structuring element that can be further passed to erode, dilate or morphologyEx. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.
public static Mat GetStructuringElement(MorphShapes shape, Size ksize)
Parameters
shapeMorphShapesElement shape that could be one of MorphShapes
ksizeSizeSize of the structuring element.
Returns
GetStructuringElement(MorphShapes, Size, Point)
Returns a structuring element of the specified size and shape for morphological operations. The function constructs and returns the structuring element that can be further passed to erode, dilate or morphologyEx. But you can also construct an arbitrary binary mask yourself and use it as the structuring element.
public static Mat GetStructuringElement(MorphShapes shape, Size ksize, Point anchor)
Parameters
shapeMorphShapesElement shape that could be one of MorphShapes
ksizeSizeSize of the structuring element.
anchorPointAnchor position within the element. The default value (−1,−1) means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted.
Returns
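A typical pairing with MorphologyEx, sketched below (kernel size and file path are illustrative):

```csharp
using OpenCvSharp;

// Morphological opening with a 5x5 elliptical kernel to remove small speckles.
using var src = Cv2.ImRead("mask.png", ImreadModes.Grayscale);
using var kernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new Size(5, 5));

using var opened = new Mat();
Cv2.MorphologyEx(src, opened, MorphTypes.Open, kernel);
```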
GetTextSize(string, HersheyFonts, double, int, out int)
returns bounding box of the text string
public static Size GetTextSize(string text, HersheyFonts fontFace, double fontScale, int thickness, out int baseLine)
Parameters
textstringInput text string.
fontFaceHersheyFontsFont to use, see #HersheyFonts.
fontScaledoubleFont scale factor that is multiplied by the font-specific base size.
thicknessintThickness of lines used to render the text. See #putText for details.
baseLineintbaseLine y-coordinate of the baseline relative to the bottom-most text
Returns
- Size
The size of a box that contains the specified text.
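A sketch of the usual pattern: measure the string first, then use the size and baseline to position it with PutText. The text and margins are placeholders.

```csharp
using OpenCvSharp;

// Center a string horizontally near the bottom of a blank image.
using var img = new Mat(240, 320, MatType.CV_8UC3, Scalar.Black);
const string text = "Hello";
var size = Cv2.GetTextSize(text, HersheyFonts.HersheySimplex,
    fontScale: 1.0, thickness: 2, out int baseline);

var org = new Point((img.Cols - size.Width) / 2, img.Rows - baseline - 10);
Cv2.PutText(img, text, org, HersheyFonts.HersheySimplex, 1.0,
    Scalar.White, thickness: 2);
```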
GetTheRNG()
Returns the thread-local random number generator
public static RNG GetTheRNG()
Returns
GetThreadNum()
Returns the index of the currently executed thread within the current parallel region. Always returns 0 if called outside of a parallel region. @deprecated The current implementation does not correspond to this documentation. The exact meaning of the return value depends on the threading framework used by OpenCV library:
- TBB - Unsupported with the current 4.1 TBB release. May be supported in the future.
- OpenMP - The thread number, within the current team, of the calling thread.
- Concurrency - An ID for the virtual processor that the current context is executing on (0 for the master thread and a unique number for other threads, but not necessarily 1, 2, 3, ...).
- GCD - System calling thread's ID. Never returns 0 inside a parallel region.
- C= - The index of the current parallel task.
public static int GetThreadNum()
Returns
GetTickCount()
Returns the number of ticks. The function returns the number of ticks after a certain event (for example, when the machine was turned on). It can be used to initialize an RNG or to measure function execution time by reading the tick count before and after the function call.
public static long GetTickCount()
Returns
GetTickFrequency()
Returns the number of ticks per second. The function returns the number of ticks per second.That is, the following code computes the execution time in seconds:
public static double GetTickFrequency()
Returns
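The timing pattern referred to above can be sketched like this (the blurred operation is just an example workload):

```csharp
using System;
using OpenCvSharp;

// Measure execution time in seconds using tick count / tick frequency.
long t0 = Cv2.GetTickCount();

using var src = new Mat(1000, 1000, MatType.CV_8UC1, Scalar.All(128));
using var dst = new Mat();
Cv2.GaussianBlur(src, dst, new Size(9, 9), 2);

double seconds = (Cv2.GetTickCount() - t0) / Cv2.GetTickFrequency();
Console.WriteLine($"GaussianBlur took {seconds:F4} s");
```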
GetTrackbarPos(string, string)
Returns the trackbar position.
public static int GetTrackbarPos(string trackbarName, string winName)
Parameters
trackbarNamestringName of the trackbar.
winNamestringName of the window that is the parent of the trackbar.
Returns
- int
trackbar position
GetValidDisparityROI(Rect, Rect, int, int, int)
computes valid disparity ROI from the valid ROIs of the rectified images (that are returned by cv::stereoRectify())
public static Rect GetValidDisparityROI(Rect roi1, Rect roi2, int minDisparity, int numberOfDisparities, int SADWindowSize)
Parameters
Returns
GetVersionMajor()
Returns major library version
public static int GetVersionMajor()
Returns
GetVersionMinor()
Returns minor library version
public static int GetVersionMinor()
Returns
GetVersionRevision()
Returns revision field of the library version
public static int GetVersionRevision()
Returns
GetVersionString()
Returns library version string. For example "3.4.1-dev".
public static string? GetVersionString()
Returns
GetWindowHandle(string)
Gets the native window handle (HWND on Win32, Widget on X Window).
public static nint GetWindowHandle(string windowName)
Parameters
windowNamestring
Returns
GetWindowImageRect(string)
Provides rectangle of image in the window. The function getWindowImageRect returns the client screen coordinates, width and height of the image rendering area.
public static Rect GetWindowImageRect(string winName)
Parameters
winNamestringName of the window.
Returns
GetWindowProperty(string, WindowPropertyFlags)
Provides parameters of a window.
public static double GetWindowProperty(string winName, WindowPropertyFlags propId)
Parameters
winNamestringName of the window.
propIdWindowPropertyFlagsWindow property to retrieve.
Returns
GoodFeaturesToTrack(InputArray, int, double, double, InputArray, int, bool, double)
Finds strong corners, i.e. points where cornerMinEigenVal() or cornerHarris() report local maxima.
public static Point2f[] GoodFeaturesToTrack(InputArray src, int maxCorners, double qualityLevel, double minDistance, InputArray mask, int blockSize, bool useHarrisDetector, double k)
Parameters
srcInputArrayInput 8-bit or floating-point 32-bit, single-channel image.
maxCornersintMaximum number of corners to return. If more corners are found than this limit, the strongest of them are returned.
qualityLeveldoubleParameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue or the Harris function response (see cornerHarris() ). The corners with the quality measure less than the product are rejected. For example, if the best corner has the quality measure = 1500, and the qualityLevel=0.01, then all the corners with the quality measure less than 15 are rejected.
minDistancedoubleMinimum possible Euclidean distance between the returned corners.
maskInputArrayOptional region of interest. If the image is not empty (it needs to have the type CV_8UC1 and the same size as image ), it specifies the region in which the corners are detected.
blockSizeintSize of an average block for computing a derivative covariation matrix over each pixel neighborhood.
useHarrisDetectorboolParameter indicating whether to use a Harris detector
kdoubleFree parameter of the Harris detector.
Returns
- Point2f[]
Output vector of detected corners.
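A sketch of typical usage, drawing the detected corners; the file path and parameter values are illustrative, and an empty Mat is passed for mask to search the whole image.

```csharp
using OpenCvSharp;

// Detect up to 100 strong corners on a grayscale image and mark them.
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var mask = new Mat();  // empty mask = whole image

Point2f[] corners = Cv2.GoodFeaturesToTrack(
    gray, maxCorners: 100, qualityLevel: 0.01, minDistance: 10,
    mask, blockSize: 3, useHarrisDetector: false, k: 0.04);

using var vis = gray.CvtColor(ColorConversionCodes.GRAY2BGR);
foreach (var c in corners)
    Cv2.Circle(vis, new Point((int)c.X, (int)c.Y), 4, Scalar.Red, thickness: -1);
```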
GrabCut(InputArray, InputOutputArray, Rect, InputOutputArray, InputOutputArray, int, GrabCutModes)
Segments the image using GrabCut algorithm
public static void GrabCut(InputArray img, InputOutputArray mask, Rect rect, InputOutputArray bgdModel, InputOutputArray fgdModel, int iterCount, GrabCutModes mode)
Parameters
imgInputArrayInput 8-bit 3-channel image.
maskInputOutputArrayInput/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have Cv2.GC_BGD / Cv2.GC_FGD / Cv2.GC_PR_BGD / Cv2.GC_PR_FGD
rectRectROI containing a segmented object. The pixels outside of the ROI are marked as "obvious background". The parameter is only used when mode==GC_INIT_WITH_RECT.
bgdModelInputOutputArrayTemporary array for the background model. Do not modify it while you are processing the same image.
fgdModelInputOutputArrayTemporary arrays for the foreground model. Do not modify it while you are processing the same image.
iterCountintNumber of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode==GC_INIT_WITH_MASK or mode==GC_EVAL .
modeGrabCutModesOperation mode that could be one of GrabCutFlag value.
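A sketch of rectangle-initialized segmentation; the image path and ROI are placeholders. The label trick relies on GC_FGD (1) and GC_PR_FGD (3) both being odd.

```csharp
using OpenCvSharp;

// Rectangle-initialized GrabCut, then extract the foreground pixels.
using var img = Cv2.ImRead("photo.png");
using var mask = new Mat();
using var bgdModel = new Mat();
using var fgdModel = new Mat();
var rect = new Rect(50, 50, 200, 300);  // placeholder ROI around the object

Cv2.GrabCut(img, mask, rect, bgdModel, fgdModel,
    iterCount: 5, GrabCutModes.InitWithRect);

// GC_FGD = 1 and GC_PR_FGD = 3 are the odd labels, so AND-ing with 1
// selects both definite and probable foreground.
using var ones = new Mat(mask.Size(), mask.Type(), new Scalar(1));
using var fgMask = new Mat();
Cv2.BitwiseAnd(mask, ones, fgMask);

using var fg = new Mat();
img.CopyTo(fg, fgMask);  // any non-zero mask value keeps the pixel
```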
GroupRectangles(IList<Rect>, int, double)
Groups the object candidate rectangles.
public static void GroupRectangles(IList<Rect> rectList, int groupThreshold, double eps = 0.2)
Parameters
rectListIList<Rect>Input/output vector of rectangles. Output vector includes retained and grouped rectangles.
groupThresholdintMinimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.
epsdouble
GroupRectangles(IList<Rect>, int, double, out int[], out double[])
Groups the object candidate rectangles.
public static void GroupRectangles(IList<Rect> rectList, int groupThreshold, double eps, out int[] weights, out double[] levelWeights)
Parameters
GroupRectangles(IList<Rect>, out int[], out double[], int, double)
Groups the object candidate rectangles.
public static void GroupRectangles(IList<Rect> rectList, out int[] rejectLevels, out double[] levelWeights, int groupThreshold, double eps = 0.2)
Parameters
GroupRectangles(IList<Rect>, out int[], int, double)
Groups the object candidate rectangles.
public static void GroupRectangles(IList<Rect> rectList, out int[] weights, int groupThreshold, double eps = 0.2)
Parameters
rectListIList<Rect>Input/output vector of rectangles. Output vector includes retained and grouped rectangles.
weightsint[]
groupThresholdintMinimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.
epsdoubleRelative difference between sides of the rectangles to merge them into a group.
GroupRectanglesMeanshift(IList<Rect>, out double[], out double[], double, Size?)
public static void GroupRectanglesMeanshift(IList<Rect> rectList, out double[] foundWeights, out double[] foundScales, double detectThreshold = 0, Size? winDetSize = null)
Parameters
rectListIList<Rect>
foundWeightsdouble[]
foundScalesdouble[]
detectThresholddouble
winDetSizeSize?
HConcat(InputArray, InputArray, OutputArray)
Applies horizontal concatenation to given matrices.
public static void HConcat(InputArray src1, InputArray src2, OutputArray dst)
Parameters
src1InputArrayfirst input array to be considered for horizontal concatenation.
src2InputArraysecond input array to be considered for horizontal concatenation.
dstOutputArrayoutput array. It has the same number of rows and depth as the src1 and src2, and the sum of cols of the src1 and src2.
HConcat(IEnumerable<Mat>, OutputArray)
Applies horizontal concatenation to given matrices.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void HConcat(IEnumerable<Mat> src, OutputArray dst)
Parameters
srcIEnumerable<Mat>input array or vector of matrices. All of the matrices must have the same number of rows and the same depth.
dstOutputArrayoutput array. It has the same number of rows and depth as the src, and the sum of cols of the src.
HaveImageReader(string)
Returns true if an image with the specified filename can be decoded by OpenCV.
public static bool HaveImageReader(string fileName)
Parameters
fileNamestring
Returns
HaveImageWriter(string)
Returns true if an image with the specified filename can be encoded by OpenCV.
public static bool HaveImageWriter(string fileName)
Parameters
fileNamestring
Returns
HoughCircles(InputArray, HoughModes, double, double, double, double, int, int)
Finds circles in a grayscale image using a Hough transform.
public static CircleSegment[] HoughCircles(InputArray image, HoughModes method, double dp, double minDist, double param1 = 100, double param2 = 100, int minRadius = 0, int maxRadius = 0)
Parameters
imageInputArrayThe 8-bit, single-channel, grayscale input image
methodHoughModesThe available methods are HoughMethods.Gradient and HoughMethods.GradientAlt
dpdoubleThe inverse ratio of the accumulator resolution to the image resolution.
minDistdoubleMinimum distance between the centers of the detected circles.
param1doubleThe first method-specific parameter. [By default this is 100]
param2doubleThe second method-specific parameter. [By default this is 100]
minRadiusintMinimum circle radius. [By default this is 0]
maxRadiusintMaximum circle radius. [By default this is 0]
Returns
- CircleSegment[]
The output vector of found circles. Each circle is encoded as a 3-element floating-point vector (x, y, radius)
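A sketch of typical usage; the file path and parameter values (dp, minDist, param1/param2, radii) are illustrative and usually need tuning per image.

```csharp
using OpenCvSharp;

// Detect circles on a median-blurred grayscale image.
using var gray = Cv2.ImRead("coins.png", ImreadModes.Grayscale);
Cv2.MedianBlur(gray, gray, 5);  // smoothing reduces false detections

CircleSegment[] circles = Cv2.HoughCircles(
    gray, HoughModes.Gradient, dp: 1, minDist: gray.Rows / 8.0,
    param1: 100, param2: 30, minRadius: 5, maxRadius: 80);

using var vis = gray.CvtColor(ColorConversionCodes.GRAY2BGR);
foreach (var c in circles)
    Cv2.Circle(vis, new Point((int)c.Center.X, (int)c.Center.Y),
        (int)c.Radius, Scalar.Red, thickness: 2);
```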
HoughLines(InputArray, double, double, int, double, double)
Finds lines in a binary image using standard Hough transform.
public static LineSegmentPolar[] HoughLines(InputArray image, double rho, double theta, int threshold, double srn = 0, double stn = 0)
Parameters
imageInputArrayThe 8-bit, single-channel, binary source image. The image may be modified by the function
rhodoubleDistance resolution of the accumulator in pixels
thetadoubleAngle resolution of the accumulator in radians
thresholdintThe accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold )
srndoubleFor the multi-scale Hough transform it is the divisor for the distance resolution rho. [By default this is 0]
stndoubleFor the multi-scale Hough transform it is the divisor for the distance resolution theta. [By default this is 0]
Returns
- LineSegmentPolar[]
The output vector of lines. Each line is represented by a two-element vector (rho, theta) . rho is the distance from the coordinate origin (0,0) (top-left corner of the image) and theta is the line rotation angle in radians
HoughLinesP(InputArray, double, double, int, double, double)
Finds line segments in a binary image using the probabilistic Hough transform.
public static LineSegmentPoint[] HoughLinesP(InputArray image, double rho, double theta, int threshold, double minLineLength = 0, double maxLineGap = 0)
Parameters
imageInputArray
rhodoubleDistance resolution of the accumulator in pixels
thetadoubleAngle resolution of the accumulator in radians
thresholdintThe accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold )
minLineLengthdoubleThe minimum line length. Line segments shorter than that will be rejected. [By default this is 0]
maxLineGapdoubleThe maximum allowed gap between points on the same line to link them. [By default this is 0]
Returns
- LineSegmentPoint[]
The output lines. Each line is represented by a 4-element vector (x1, y1, x2, y2)
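A sketch of the common edge-then-Hough pipeline; the file path and thresholds are placeholders.

```csharp
using System;
using OpenCvSharp;

// Canny edges followed by the probabilistic Hough transform.
using var gray = Cv2.ImRead("building.png", ImreadModes.Grayscale);
using var edges = new Mat();
Cv2.Canny(gray, edges, 50, 150);

LineSegmentPoint[] lines = Cv2.HoughLinesP(
    edges, rho: 1, theta: Math.PI / 180, threshold: 80,
    minLineLength: 30, maxLineGap: 10);

using var vis = gray.CvtColor(ColorConversionCodes.GRAY2BGR);
foreach (var line in lines)
    Cv2.Line(vis, line.P1, line.P2, Scalar.Red, thickness: 2);
```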
HoughLinesPointSet(InputArray, OutputArray, int, int, double, double, double, double, double, double)
Finds lines in a set of points using the standard Hough transform. The function finds lines in a set of points using a modification of the Hough transform.
public static void HoughLinesPointSet(InputArray point, OutputArray lines, int linesMax, int threshold, double minRho, double maxRho, double rhoStep, double minTheta, double maxTheta, double thetaStep)
Parameters
pointInputArrayInput vector of points. Each vector must be encoded as a Point vector (x, y). Type must be CV_32FC2 or CV_32SC2.
linesOutputArrayOutput vector of found lines. Each vector is encoded as a vector<Vec3d>
linesMaxintMax count of hough lines.
thresholdintAccumulator threshold parameter. Only those lines are returned that get enough votes
minRhodoubleMinimum Distance value of the accumulator in pixels.
maxRhodoubleMaximum Distance value of the accumulator in pixels.
rhoStepdoubleDistance resolution of the accumulator in pixels.
minThetadoubleMinimum angle value of the accumulator in radians.
maxThetadoubleMaximum angle value of the accumulator in radians.
thetaStepdoubleAngle resolution of the accumulator in radians.
Idct(InputArray, OutputArray, DctFlags)
Performs inverse 1D or 2D Discrete Cosine Transformation
public static void Idct(InputArray src, OutputArray dst, DctFlags flags = DctFlags.None)
Parameters
srcInputArrayThe source floating-point array
dstOutputArrayThe destination array; will have the same size and same type as src
flagsDctFlagsTransformation flags, a combination of DctFlag2 values
Idft(InputArray, OutputArray, DftFlags, int)
Performs an inverse Discrete Fourier transform of 1D or 2D floating-point array.
public static void Idft(InputArray src, OutputArray dst, DftFlags flags = DftFlags.None, int nonzeroRows = 0)
Parameters
srcInputArrayThe source array, real or complex
dstOutputArrayThe destination array, which size and type depends on the flags
flagsDftFlagsTransformation flags, a combination of the DftFlag2 values
nonzeroRowsintWhen the parameter != 0, the function assumes that only the first nonzeroRows rows of the input array ( DFT_INVERSE is not set) or only the first nonzeroRows of the output array ( DFT_INVERSE is set) contain non-zeros, thus the function can handle the rest of the rows more efficiently and thus save some time. This technique is very useful for computing array cross-correlation or convolution using DFT
IlluminationChange(InputArray, InputArray?, OutputArray, float, float)
Applies an appropriate non-linear transformation to the gradient field inside the selection and then integrates back with a Poisson solver, locally modifying the apparent illumination of the image.
public static void IlluminationChange(InputArray src, InputArray? mask, OutputArray dst, float alpha = 0.2, float beta = 0.4)
Parameters
srcInputArrayInput 8-bit 3-channel image.
maskInputArrayInput 8-bit 1 or 3-channel image.
dstOutputArrayOutput image with the same size and type as src.
alphafloatValue ranges between 0-2.
betafloatValue ranges between 0-2.
Remarks
This is useful to highlight under-exposed foreground objects or to reduce specular reflections.
ImDecode(InputArray, ImreadModes)
Reads image from the specified buffer in memory.
public static Mat ImDecode(InputArray buf, ImreadModes flags)
Parameters
bufInputArrayThe input array or vector of bytes.
flagsImreadModesThe same flags as in imread
Returns
ImDecode(Mat, ImreadModes)
Reads image from the specified buffer in memory.
public static Mat ImDecode(Mat buf, ImreadModes flags)
Parameters
bufMatThe input array or vector of bytes.
flagsImreadModesThe same flags as in imread
Returns
ImDecode(byte[], ImreadModes)
Reads image from the specified buffer in memory.
public static Mat ImDecode(byte[] buf, ImreadModes flags)
Parameters
bufbyte[]The input array or vector of bytes.
flagsImreadModesThe same flags as in imread
Returns
ImDecode(ReadOnlySpan<byte>, ImreadModes)
Reads image from the specified buffer in memory.
public static Mat ImDecode(ReadOnlySpan<byte> span, ImreadModes flags)
Parameters
spanReadOnlySpan<byte>The input slice of bytes.
flagsImreadModesThe same flags as in imread
Returns
ImEncode(string, InputArray, out byte[], params ImageEncodingParam[])
Compresses the image and stores it in the memory buffer
public static void ImEncode(string ext, InputArray img, out byte[] buf, params ImageEncodingParam[] prms)
Parameters
extstringThe file extension that defines the output format
imgInputArrayThe image to be written
bufbyte[]Output buffer resized to fit the compressed image.
prmsImageEncodingParam[]Format-specific parameters.
ImEncode(string, InputArray, out byte[], int[]?)
Compresses the image and stores it in the memory buffer
public static bool ImEncode(string ext, InputArray img, out byte[] buf, int[]? prms = null)
Parameters
extstringThe file extension that defines the output format
imgInputArrayThe image to be written
bufbyte[]Output buffer resized to fit the compressed image.
prmsint[]Format-specific parameters.
Returns
ImRead(string, ImreadModes)
Loads an image from a file.
public static Mat ImRead(string fileName, ImreadModes flags = ImreadModes.Color)
Parameters
fileNamestringName of file to be loaded.
flagsImreadModesSpecifies color type of the loaded image
Returns
ImReadMulti(string, out Mat[], ImreadModes)
Loads a multi-page image from a file.
public static bool ImReadMulti(string filename, out Mat[] mats, ImreadModes flags = ImreadModes.AnyColor)
Parameters
filenamestringName of file to be loaded.
matsMat[]A vector of Mat objects holding each page, if more than one.
flagsImreadModesFlag that can take values of @ref cv::ImreadModes, default with IMREAD_ANYCOLOR.
Returns
ImShow(string, Mat)
Displays the image in the specified window
public static void ImShow(string winName, Mat mat)
Parameters
ImWrite(string, Mat, params ImageEncodingParam[])
Saves an image to a specified file.
public static bool ImWrite(string fileName, Mat img, params ImageEncodingParam[] prms)
Parameters
fileNamestringName of the file.
imgMatImage to be saved.
prmsImageEncodingParam[]Format-specific save parameters encoded as pairs
Returns
ImWrite(string, Mat, int[]?)
Saves an image to a specified file.
public static bool ImWrite(string fileName, Mat img, int[]? prms = null)
Parameters
fileNamestringName of the file.
imgMatImage to be saved.
prmsint[]Format-specific save parameters encoded as pairs
Returns
ImWrite(string, IEnumerable<Mat>, params ImageEncodingParam[])
Saves an image to a specified file.
public static bool ImWrite(string fileName, IEnumerable<Mat> img, params ImageEncodingParam[] prms)
Parameters
fileNamestringName of the file.
imgIEnumerable<Mat>Image to be saved.
prmsImageEncodingParam[]Format-specific save parameters encoded as pairs
Returns
ImWrite(string, IEnumerable<Mat>, int[]?)
Saves an image to a specified file.
public static bool ImWrite(string fileName, IEnumerable<Mat> img, int[]? prms = null)
Parameters
fileNamestringName of the file.
imgIEnumerable<Mat>Image to be saved.
prmsint[]Format-specific save parameters encoded as pairs
Returns
InRange(InputArray, InputArray, InputArray, OutputArray)
Checks if array elements lie between the elements of two other arrays.
public static void InRange(InputArray src, InputArray lowerb, InputArray upperb, OutputArray dst)
Parameters
srcInputArrayfirst input array.
lowerbInputArrayinclusive lower boundary array or a scalar.
upperbInputArrayinclusive upper boundary array or a scalar.
dstOutputArrayoutput array of the same size as src and CV_8U type.
InRange(InputArray, Scalar, Scalar, OutputArray)
Checks if array elements lie between the elements of two other arrays.
public static void InRange(InputArray src, Scalar lowerb, Scalar upperb, OutputArray dst)
Parameters
srcInputArrayfirst input array.
lowerbScalarinclusive lower boundary array or a scalar.
upperbScalarinclusive upper boundary array or a scalar.
dstOutputArrayoutput array of the same size as src and CV_8U type.
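A sketch of a typical color-thresholding use of the Scalar overload; the HSV bounds below are illustrative, not canonical values for any particular color.

```csharp
using OpenCvSharp;

// Isolate pixels falling inside an HSV range (roughly blue-ish here).
using var bgr = Cv2.ImRead("scene.png");
using var hsv = new Mat();
Cv2.CvtColor(bgr, hsv, ColorConversionCodes.BGR2HSV);

using var mask = new Mat();
Cv2.InRange(hsv, new Scalar(100, 50, 50), new Scalar(130, 255, 255), mask);

using var isolated = new Mat();
bgr.CopyTo(isolated, mask);  // keep only pixels inside the range
```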
InitCameraMatrix2D(IEnumerable<Mat>, IEnumerable<Mat>, Size, double)
initializes camera matrix from a few 3D points and the corresponding projections.
public static Mat InitCameraMatrix2D(IEnumerable<Mat> objectPoints, IEnumerable<Mat> imagePoints, Size imageSize, double aspectRatio = 1)
Parameters
objectPointsIEnumerable<Mat>Vector of vectors (vector<vector<Point3d>>) of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated.
imagePointsIEnumerable<Mat>Vector of vectors (vector<vector<Point2d>>) of the projections of the calibration pattern points. In the old interface all the per-view vectors are concatenated.
imageSizeSizeImage size in pixels used to initialize the principal point.
aspectRatiodoubleIf it is zero or negative, both f_x and f_y are estimated independently. Otherwise, f_x = f_y * aspectRatio .
Returns
InitCameraMatrix2D(IEnumerable<IEnumerable<Point3f>>, IEnumerable<IEnumerable<Point2f>>, Size, double)
initializes camera matrix from a few 3D points and the corresponding projections.
public static Mat InitCameraMatrix2D(IEnumerable<IEnumerable<Point3f>> objectPoints, IEnumerable<IEnumerable<Point2f>> imagePoints, Size imageSize, double aspectRatio = 1)
Parameters
objectPointsIEnumerable<IEnumerable<Point3f>>Vector of vectors of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated.
imagePointsIEnumerable<IEnumerable<Point2f>>Vector of vectors of the projections of the calibration pattern points. In the old interface all the per-view vectors are concatenated.
imageSizeSizeImage size in pixels used to initialize the principal point.
aspectRatiodoubleIf it is zero or negative, both f_x and f_y are estimated independently. Otherwise, f_x = f_y * aspectRatio .
Returns
InitUndistortRectifyMap(InputArray, InputArray, InputArray, InputArray, Size, MatType, OutputArray, OutputArray)
initializes maps for cv::remap() to correct lens distortion and optionally rectify the image
public static void InitUndistortRectifyMap(InputArray cameraMatrix, InputArray distCoeffs, InputArray r, InputArray newCameraMatrix, Size size, MatType m1Type, OutputArray map1, OutputArray map2)
Parameters
cameraMatrixInputArray
distCoeffsInputArray
rInputArray
newCameraMatrixInputArray
sizeSize
m1TypeMatType
map1OutputArray
map2OutputArray
InitWideAngleProjMap(InputArray, InputArray, Size, int, MatType, OutputArray, OutputArray, ProjectionType, double)
initializes maps for cv::remap() for wide-angle
public static float InitWideAngleProjMap(InputArray cameraMatrix, InputArray distCoeffs, Size imageSize, int destImageWidth, MatType m1Type, OutputArray map1, OutputArray map2, ProjectionType projType, double alpha = 0)
Parameters
cameraMatrixInputArray
distCoeffsInputArray
imageSizeSize
destImageWidthint
m1TypeMatType
map1OutputArray
map2OutputArray
projTypeProjectionType
alphadouble
Returns
Inpaint(InputArray, InputArray, OutputArray, double, InpaintMethod)
Restores the selected region in an image using the region neighborhood.
public static void Inpaint(InputArray src, InputArray inpaintMask, OutputArray dst, double inpaintRadius, InpaintMethod flags)
Parameters
srcInputArrayInput 8-bit, 16-bit unsigned or 32-bit float 1-channel or 8-bit 3-channel image.
inpaintMaskInputArrayInpainting mask, 8-bit 1-channel image. Non-zero pixels indicate the area that needs to be inpainted.
dstOutputArrayOutput image with the same size and type as src.
inpaintRadiusdoubleRadius of a circular neighborhood of each point inpainted that is considered by the algorithm.
flagsInpaintMethodInpainting method that could be cv::INPAINT_NS or cv::INPAINT_TELEA
InsertChannel(InputArray, InputOutputArray, int)
Inserts a single channel into dst (coi is a 0-based index).
public static void InsertChannel(InputArray src, InputOutputArray dst, int coi)
Parameters
srcInputArray
dstInputOutputArray
coiint
Integral(InputArray, OutputArray, OutputArray, OutputArray, MatType?, MatType?)
Calculates the integral of an image. The function calculates one or more integral images for the source image.
public static void Integral(InputArray src, OutputArray sum, OutputArray sqsum, OutputArray tilted, MatType? sdepth = null, MatType? sqdepth = null)
Parameters
srcInputArrayinput image as W×H, 8-bit or floating-point (32f or 64f).
sumOutputArrayintegral image as (W+1)×(H+1) , 32-bit integer or floating-point (32f or 64f).
sqsumOutputArrayintegral image for squared pixel values; it is (W+1)×(H+1), double-precision floating-point (64f) array.
tiltedOutputArrayintegral for the image rotated by 45 degrees; it is (W+1)×(H+1) array with the same data type as sum.
sdepthMatType?desired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F.
sqdepthMatType?desired depth of the integral image of squared pixel values, CV_32F or CV_64F.
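The integral image lets you sum any axis-aligned rectangle in O(1): sum(rect) = S(y2, x2) - S(y1, x2) - S(y2, x1) + S(y1, x1), where S is the (W+1)x(H+1) sum image and (x2, y2) is exclusive. A minimal sketch on a constant image:

```csharp
using OpenCvSharp;

// Sum a rectangle in O(1) via the integral image.
using var src = new Mat(100, 100, MatType.CV_8UC1, Scalar.All(3));
using var sum = new Mat();
Cv2.Integral(src, sum);  // CV_32S by default for 8-bit input

var r = new Rect(10, 20, 30, 40);
int total = sum.At<int>(r.Bottom, r.Right)
          - sum.At<int>(r.Top, r.Right)
          - sum.At<int>(r.Bottom, r.Left)
          + sum.At<int>(r.Top, r.Left);
// Every pixel is 3, so total == 3 * 30 * 40 == 3600.
```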
Integral(InputArray, OutputArray, OutputArray, MatType?)
Calculates the integral of an image. The function calculates one or more integral images for the source image.
public static void Integral(InputArray src, OutputArray sum, OutputArray sqsum, MatType? sdepth = null)
Parameters
srcInputArray
sumOutputArray
sqsumOutputArray
sdepthMatType?
Integral(InputArray, OutputArray, MatType?)
Calculates the integral of an image. The function calculates one or more integral images for the source image.
public static void Integral(InputArray src, OutputArray sum, MatType? sdepth = null)
Parameters
srcInputArray
sumOutputArray
sdepthMatType?
IntersectConvexConvex(InputArray, InputArray, OutputArray, bool)
finds intersection of two convex polygons
public static float IntersectConvexConvex(InputArray p1, InputArray p2, OutputArray p12, bool handleNested = true)
Parameters
p1InputArrayFirst convex polygon.
p2InputArraySecond convex polygon.
p12OutputArrayOutput polygon describing the intersecting area.
handleNestedboolWhen true, an intersection is found if one of the polygons is fully enclosed in the other.
Returns
IntersectConvexConvex(IEnumerable<Point2f>, IEnumerable<Point2f>, out Point2f[], bool)
finds intersection of two convex polygons
public static float IntersectConvexConvex(IEnumerable<Point2f> p1, IEnumerable<Point2f> p2, out Point2f[] p12, bool handleNested = true)
Parameters
p1IEnumerable<Point2f>First convex polygon.
p2IEnumerable<Point2f>Second convex polygon.
p12Point2f[]Output polygon describing the intersecting area.
handleNestedboolWhen true, an intersection is found if one of the polygons is fully enclosed in the other.
Returns
IntersectConvexConvex(IEnumerable<Point>, IEnumerable<Point>, out Point[], bool)
finds intersection of two convex polygons
public static float IntersectConvexConvex(IEnumerable<Point> p1, IEnumerable<Point> p2, out Point[] p12, bool handleNested = true)
Parameters
p1IEnumerable<Point>First convex polygon.
p2IEnumerable<Point>Second convex polygon.
p12Point[]Output polygon describing the intersecting area.
handleNestedboolWhen true, an intersection is found if one of the polygons is fully enclosed in the other.
Returns
Invert(InputArray, OutputArray, DecompTypes)
computes inverse or pseudo-inverse matrix
public static double Invert(InputArray src, OutputArray dst, DecompTypes flags = DecompTypes.LU)
Parameters
srcInputArrayThe source floating-point MxN matrix
dstOutputArrayThe destination matrix; will have NxM size and the same type as src
flagsDecompTypesThe inversion method
Returns
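A minimal sketch of Invert (the matrix values are assumptions for the example): invert a small matrix and verify that the product with the original is approximately the identity.

```csharp
using System;
using OpenCvSharp;

// Sketch: matrix inversion with LU decomposition.
using var a = new Mat(2, 2, MatType.CV_64FC1);
a.Set(0, 0, 4.0); a.Set(0, 1, 7.0);
a.Set(1, 0, 2.0); a.Set(1, 1, 6.0);

using var inv = new Mat();
// For LU, the return value is 0 only when src is singular.
double ret = Cv2.Invert(a, inv, DecompTypes.LU);

using var prod = (a * inv).ToMat();    // should be ~identity
Console.WriteLine(prod.At<double>(0, 0));
```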
InvertAffineTransform(InputArray, OutputArray)
Inverts an affine transformation.
public static void InvertAffineTransform(InputArray m, OutputArray im)
Parameters
mInputArrayOriginal affine transformation.
imOutputArrayOutput reverse affine transformation.
IsContourConvex(InputArray)
returns true if the contour is convex. Does not support contours with self-intersection
public static bool IsContourConvex(InputArray contour)
Parameters
contourInputArrayInput vector of 2D points
Returns
IsContourConvex(IEnumerable<Point2f>)
returns true if the contour is convex. Does not support contours with self-intersection
public static bool IsContourConvex(IEnumerable<Point2f> contour)
Parameters
contourIEnumerable<Point2f>Input vector of 2D points
Returns
IsContourConvex(IEnumerable<Point>)
returns true if the contour is convex. Does not support contours with self-intersection
public static bool IsContourConvex(IEnumerable<Point> contour)
Parameters
contourIEnumerable<Point>Input vector of 2D points
Returns
Kmeans(InputArray, int, InputOutputArray, TermCriteria, int, KMeansFlags, OutputArray?)
Finds centers of clusters and groups input samples around the clusters.
public static double Kmeans(InputArray data, int k, InputOutputArray bestLabels, TermCriteria criteria, int attempts, KMeansFlags flags, OutputArray? centers = null)
Parameters
dataInputArrayData for clustering. An array of N-Dimensional points with float coordinates is needed.
kintNumber of clusters to split the set by.
bestLabelsInputOutputArrayInput/output integer array that stores the cluster indices for every sample.
criteriaTermCriteriaThe algorithm termination criteria, that is, the maximum number of iterations and/or the desired accuracy. The accuracy is specified as criteria.epsilon. As soon as each of the cluster centers moves by less than criteria.epsilon on some iteration, the algorithm stops.
attemptsintFlag to specify the number of times the algorithm is executed using different initial labellings. The algorithm returns the labels that yield the best compactness (see the last function parameter).
flagsKMeansFlagsFlag that can take values of cv::KmeansFlags
centersOutputArrayOutput matrix of the cluster centers, one row per each cluster center.
Returns
- double
The function returns the compactness measure that is computed as the sum over all samples of the squared distance to the assigned center, Σ_i ||samples_i − centers_{labels_i}||², after every attempt. The best (minimum) value is chosen and the corresponding labels and the compactness value are returned by the function. Basically, you can use only the core of the function, set the number of attempts to 1, initialize labels each time using a custom algorithm, pass them with the (flags = KMEANS_USE_INITIAL_LABELS) flag, and then choose the best (most-compact) clustering.
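A minimal usage sketch (the point coordinates are assumptions chosen for the example): cluster six well-separated 2D points into k = 2 groups.

```csharp
using System;
using OpenCvSharp;

// Sketch: k-means on six 2D points forming two obvious clusters.
float[] pts =
{
    1.0f, 1.0f,  1.2f, 0.9f,  0.8f, 1.1f,   // cluster near (1, 1)
    8.0f, 8.0f,  8.2f, 7.9f,  7.8f, 8.1f,   // cluster near (8, 8)
};
using var data = new Mat(6, 2, MatType.CV_32FC1, pts);
using var labels = new Mat();
using var centers = new Mat();

var criteria = new TermCriteria(CriteriaTypes.Eps | CriteriaTypes.MaxIter, 10, 1.0);
double compactness = Cv2.Kmeans(
    data, 2, labels, criteria, 3, KMeansFlags.PpCenters, centers);

// Points 0..2 share one label, points 3..5 share the other.
Console.WriteLine(labels.At<int>(0) == labels.At<int>(1));
```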
LUT(InputArray, InputArray, OutputArray)
transforms array of numbers using a lookup table: dst(i)=lut(src(i))
public static void LUT(InputArray src, InputArray lut, OutputArray dst)
Parameters
srcInputArraySource array of 8-bit elements
lutInputArrayLook-up table of 256 elements. In the case of multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array
dstOutputArrayDestination array; will have the same size and the same number of channels as src, and the same depth as lut
LUT(InputArray, byte[], OutputArray)
transforms array of numbers using a lookup table: dst(i)=lut(src(i))
public static void LUT(InputArray src, byte[] lut, OutputArray dst)
Parameters
srcInputArraySource array of 8-bit elements
lutbyte[]Look-up table of 256 elements. In the case of multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array
dstOutputArrayDestination array; will have the same size and the same number of channels as src, and the same depth as lut
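A classic use of the byte[] overload is gamma correction, where the per-pixel power function is precomputed into a 256-entry table (the gamma value below is an assumption for the example):

```csharp
using System;
using OpenCvSharp;

// Sketch: gamma correction via a 256-entry lookup table.
byte[] lut = new byte[256];
const double gamma = 2.2;
for (int i = 0; i < 256; i++)
    lut[i] = (byte)Math.Round(255.0 * Math.Pow(i / 255.0, 1.0 / gamma));

using var src = new Mat(2, 2, MatType.CV_8UC1, Scalar.All(64));
using var dst = new Mat();
Cv2.LUT(src, lut, dst);     // every pixel value v becomes lut[v]
Console.WriteLine(dst.At<byte>(0, 0) == lut[64]);
```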
Laplacian(InputArray, OutputArray, MatType, int, double, double, BorderTypes)
Calculates the Laplacian of an image
public static void Laplacian(InputArray src, OutputArray dst, MatType ddepth, int ksize = 1, double scale = 1, double delta = 0, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArraySource image
dstOutputArrayDestination image; will have the same size and the same number of channels as src
ddepthMatTypeThe desired depth of the destination image
ksizeintThe aperture size used to compute the second-derivative filters
scaledoubleThe optional scale factor for the computed Laplacian values (by default, no scaling is applied)
deltadoubleThe optional delta value, added to the results prior to storing them in dst
borderTypeBorderTypesThe pixel extrapolation method
Line(InputOutputArray, Point, Point, Scalar, int, LineTypes, int)
Draws a line segment connecting two points
public static void Line(InputOutputArray img, Point pt1, Point pt2, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayThe image.
pt1PointFirst point of the line segment.
pt2PointSecond point of the line segment.
colorScalarLine color.
thicknessintLine thickness. [By default this is 1]
lineTypeLineTypesType of the line. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the point coordinates. [By default this is 0]
Line(InputOutputArray, int, int, int, int, Scalar, int, LineTypes, int)
Draws a line segment connecting two points
public static void Line(InputOutputArray img, int pt1X, int pt1Y, int pt2X, int pt2Y, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayThe image.
pt1XintFirst point's x-coordinate of the line segment.
pt1YintFirst point's y-coordinate of the line segment.
pt2XintSecond point's x-coordinate of the line segment.
pt2YintSecond point's y-coordinate of the line segment.
colorScalarLine color.
thicknessintLine thickness. [By default this is 1]
lineTypeLineTypesType of the line. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the point coordinates. [By default this is 0]
LinearPolar(InputArray, OutputArray, Point2f, double, InterpolationFlags)
Remaps an image to polar space.
public static void LinearPolar(InputArray src, OutputArray dst, Point2f center, double maxRadius, InterpolationFlags flags)
Parameters
srcInputArraySource image
dstOutputArrayDestination image
centerPoint2fThe transformation center
maxRadiusdoubleInverse magnitude scale parameter
flagsInterpolationFlagsA combination of interpolation methods, see cv::InterpolationFlags
Log(InputArray, OutputArray)
computes natural logarithm of absolute value of each matrix element: dst = log(abs(src))
public static void Log(InputArray src, OutputArray dst)
Parameters
srcInputArrayThe source array
dstOutputArrayThe destination array; will have the same size and same type as src
LogPolar(InputArray, OutputArray, Point2f, double, InterpolationFlags)
Remaps an image to log-polar space.
public static void LogPolar(InputArray src, OutputArray dst, Point2f center, double m, InterpolationFlags flags)
Parameters
srcInputArraySource image
dstOutputArrayDestination image
centerPoint2fThe transformation center; where the output precision is maximal
mdoubleMagnitude scale parameter.
flagsInterpolationFlagsA combination of interpolation methods, see cv::InterpolationFlags
Magnitude(InputArray, InputArray, OutputArray)
Calculates the magnitude of 2D vectors.
public static void Magnitude(InputArray x, InputArray y, OutputArray magnitude)
Parameters
xInputArrayfloating-point array of x-coordinates of the vectors.
yInputArrayfloating-point array of y-coordinates of the vectors; it must have the same size as x.
magnitudeOutputArrayoutput array of the same size and type as x.
Mahalanobis(InputArray, InputArray, InputArray)
Calculates the Mahalanobis distance between two vectors.
public static double Mahalanobis(InputArray v1, InputArray v2, InputArray icovar)
Parameters
v1InputArrayfirst 1D input vector.
v2InputArraysecond 1D input vector.
icovarInputArrayinverse covariance matrix.
Returns
MatMulDeriv(InputArray, InputArray, OutputArray, OutputArray)
computes derivatives of the matrix product w.r.t each of the multiplied matrix coefficients
public static void MatMulDeriv(InputArray a, InputArray b, OutputArray dABdA, OutputArray dABdB)
Parameters
aInputArrayFirst multiplied matrix.
bInputArraySecond multiplied matrix.
dABdAOutputArrayFirst output derivative matrix d(AB)/dA of size A.rows*B.cols × A.rows*A.cols.
dABdBOutputArraySecond output derivative matrix d(AB)/dB of size A.rows*B.cols × B.rows*B.cols.
MatchShapes(InputArray, InputArray, ShapeMatchModes, double)
Compares two shapes.
public static double MatchShapes(InputArray contour1, InputArray contour2, ShapeMatchModes method, double parameter = 0)
Parameters
contour1InputArrayFirst contour or grayscale image.
contour2InputArraySecond contour or grayscale image.
methodShapeMatchModesComparison method
parameterdoubleMethod-specific parameter (not supported now)
Returns
MatchShapes(IEnumerable<Point>, IEnumerable<Point>, ShapeMatchModes, double)
Compares two shapes.
public static double MatchShapes(IEnumerable<Point> contour1, IEnumerable<Point> contour2, ShapeMatchModes method, double parameter = 0)
Parameters
contour1IEnumerable<Point>First contour or grayscale image.
contour2IEnumerable<Point>Second contour or grayscale image.
methodShapeMatchModesComparison method
parameterdoubleMethod-specific parameter (not supported now)
Returns
MatchTemplate(InputArray, InputArray, OutputArray, TemplateMatchModes, InputArray?)
Computes the proximity map for the raster template and the image where the template is searched for
public static void MatchTemplate(InputArray image, InputArray templ, OutputArray result, TemplateMatchModes method, InputArray? mask = null)
Parameters
imageInputArrayImage where the search is running; should be 8-bit or 32-bit floating-point
templInputArraySearched template; must not be greater than the source image and must have the same data type
resultOutputArrayA map of comparison results; will be single-channel 32-bit floating-point. If image is WxH and templ is wxh then result will be (W-w+1) x (H-h+1).
methodTemplateMatchModesSpecifies the comparison method
maskInputArrayMask of searched template. It must have the same datatype and size with templ. It is not set by default.
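A minimal sketch of the usual pattern (the image contents are assumptions for the example): run MatchTemplate, then read the best-match position from the result map with MinMaxLoc.

```csharp
using System;
using OpenCvSharp;

// Sketch: locate a bright 3x3 patch inside a dark 10x10 image.
using var image = new Mat(10, 10, MatType.CV_8UC1, Scalar.All(0));
image[new Rect(5, 4, 3, 3)].SetTo(Scalar.All(255));   // patch at x=5, y=4

using var templ = new Mat(3, 3, MatType.CV_8UC1, Scalar.All(255));
using var result = new Mat();                         // (W-w+1) x (H-h+1)
Cv2.MatchTemplate(image, templ, result, TemplateMatchModes.SqDiff);

// With SqDiff, the *minimum* of the result map marks the best match.
Cv2.MinMaxLoc(result, out _, out _, out Point minLoc, out _);
Console.WriteLine(minLoc);   // x=5, y=4
```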
Max(InputArray, InputArray, OutputArray)
computes per-element maximum of two arrays (dst = max(src1, src2))
public static void Max(InputArray src1, InputArray src2, OutputArray dst)
Parameters
src1InputArrayThe first source array.
src2InputArrayThe second source array of the same size and type as src1.
dstOutputArrayThe destination array; will have the same size and type as src1.
Max(Mat, Mat, Mat)
computes per-element maximum of two arrays (dst = max(src1, src2))
public static void Max(Mat src1, Mat src2, Mat dst)
Parameters
Max(Mat, double, Mat)
computes per-element maximum of array and scalar (dst = max(src1, src2))
public static void Max(Mat src1, double src2, Mat dst)
Parameters
Mean(InputArray, InputArray?)
computes mean value of selected array elements
public static Scalar Mean(InputArray src, InputArray? mask = null)
Parameters
srcInputArrayThe source array; it should have 1 to 4 channels (so that the result can be stored in Scalar)
maskInputArrayThe optional operation mask
Returns
MeanShift(InputArray, ref Rect, TermCriteria)
Finds an object on a back projection image.
public static int MeanShift(InputArray probImage, ref Rect window, TermCriteria criteria)
Parameters
probImageInputArrayBack projection of the object histogram.
windowRectInitial search window.
criteriaTermCriteriaStop criteria for the iterative search algorithm.
Returns
- int
Number of iterations the mean-shift procedure took to converge.
MeanStdDev(InputArray, OutputArray, OutputArray, InputArray?)
computes mean value and standard deviation of all or selected array elements
public static void MeanStdDev(InputArray src, OutputArray mean, OutputArray stddev, InputArray? mask = null)
Parameters
srcInputArrayThe source array; it should have 1 to 4 channels (so that the results can be stored in Scalar's)
meanOutputArrayThe output parameter: computed mean value
stddevOutputArrayThe output parameter: computed standard deviation
maskInputArrayThe optional operation mask
MeanStdDev(InputArray, out Scalar, out Scalar, InputArray?)
computes mean value and standard deviation of all or selected array elements
public static void MeanStdDev(InputArray src, out Scalar mean, out Scalar stddev, InputArray? mask = null)
Parameters
srcInputArrayThe source array; it should have 1 to 4 channels (so that the results can be stored in Scalar's)
meanScalarThe output parameter: computed mean value
stddevScalarThe output parameter: computed standard deviation
maskInputArrayThe optional operation mask
MedianBlur(InputArray, OutputArray, int)
Smoothes image using median filter
public static void MedianBlur(InputArray src, OutputArray dst, int ksize)
Parameters
srcInputArrayThe source 1-, 3- or 4-channel image. When ksize is 3 or 5, the image depth should be CV_8U , CV_16U or CV_32F. For larger aperture sizes it can only be CV_8U
dstOutputArrayThe destination array; will have the same size and the same type as src
ksizeintThe aperture linear size. It must be odd and greater than 1, e.g. 3, 5, 7, ...
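A minimal sketch (the pixel values are assumptions for the example): median filtering removes isolated salt noise while preserving the surrounding level.

```csharp
using System;
using OpenCvSharp;

// Sketch: a single outlier pixel is replaced by the neighborhood median.
using var src = new Mat(5, 5, MatType.CV_8UC1, Scalar.All(10));
src.Set(2, 2, (byte)255);               // one "salt" pixel

using var dst = new Mat();
Cv2.MedianBlur(src, dst, 3);            // 3x3 aperture
Console.WriteLine(dst.At<byte>(2, 2));  // 10 - the outlier is gone
```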
Merge(Mat[], Mat)
makes multi-channel array out of several single-channel arrays
public static void Merge(Mat[] mv, Mat dst)
Parameters
Min(InputArray, InputArray, OutputArray)
computes per-element minimum of two arrays (dst = min(src1, src2))
public static void Min(InputArray src1, InputArray src2, OutputArray dst)
Parameters
src1InputArrayThe first source array.
src2InputArrayThe second source array of the same size and type as src1.
dstOutputArrayThe destination array; will have the same size and type as src1.
Min(Mat, Mat, Mat)
computes per-element minimum of two arrays (dst = min(src1, src2))
public static void Min(Mat src1, Mat src2, Mat dst)
Parameters
Min(Mat, double, Mat)
computes per-element minimum of array and scalar (dst = min(src1, src2))
public static void Min(Mat src1, double src2, Mat dst)
Parameters
MinAreaRect(InputArray)
Finds the minimum area rotated rectangle enclosing a 2D point set.
public static RotatedRect MinAreaRect(InputArray points)
Parameters
pointsInputArrayThe input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.
Returns
MinAreaRect(IEnumerable<Point2f>)
Finds the minimum area rotated rectangle enclosing a 2D point set.
public static RotatedRect MinAreaRect(IEnumerable<Point2f> points)
Parameters
pointsIEnumerable<Point2f>The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.
Returns
MinAreaRect(IEnumerable<Point>)
Finds the minimum area rotated rectangle enclosing a 2D point set.
public static RotatedRect MinAreaRect(IEnumerable<Point> points)
Parameters
pointsIEnumerable<Point>The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.
Returns
MinEnclosingCircle(InputArray, out Point2f, out float)
Finds the minimum area circle enclosing a 2D point set.
public static void MinEnclosingCircle(InputArray points, out Point2f center, out float radius)
Parameters
pointsInputArrayThe input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.
centerPoint2fThe output center of the circle
radiusfloatThe output radius of the circle
MinEnclosingCircle(IEnumerable<Point2f>, out Point2f, out float)
Finds the minimum area circle enclosing a 2D point set.
public static void MinEnclosingCircle(IEnumerable<Point2f> points, out Point2f center, out float radius)
Parameters
pointsIEnumerable<Point2f>The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.
centerPoint2fThe output center of the circle
radiusfloatThe output radius of the circle
MinEnclosingCircle(IEnumerable<Point>, out Point2f, out float)
Finds the minimum area circle enclosing a 2D point set.
public static void MinEnclosingCircle(IEnumerable<Point> points, out Point2f center, out float radius)
Parameters
pointsIEnumerable<Point>The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix.
centerPoint2fThe output center of the circle
radiusfloatThe output radius of the circle
MinEnclosingTriangle(InputArray, OutputArray)
Finds a triangle of minimum area enclosing a 2D point set and returns its area.
public static double MinEnclosingTriangle(InputArray points, OutputArray triangle)
Parameters
pointsInputArrayInput vector of 2D points with depth CV_32S or CV_32F, stored in std::vector or Mat
triangleOutputArrayOutput vector of three 2D points defining the vertices of the triangle. The depth of the output vector must be CV_32F.
Returns
- double
Triangle area
MinEnclosingTriangle(IEnumerable<Point2f>, out Point2f[])
Finds a triangle of minimum area enclosing a 2D point set and returns its area.
public static double MinEnclosingTriangle(IEnumerable<Point2f> points, out Point2f[] triangle)
Parameters
pointsIEnumerable<Point2f>Input vector of 2D points with depth CV_32S or CV_32F, stored in std::vector or Mat
trianglePoint2f[]Output array of three 2D points defining the vertices of the triangle.
Returns
- double
Triangle area
MinEnclosingTriangle(IEnumerable<Point>, out Point2f[])
Finds a triangle of minimum area enclosing a 2D point set and returns its area.
public static double MinEnclosingTriangle(IEnumerable<Point> points, out Point2f[] triangle)
Parameters
pointsIEnumerable<Point>Input vector of 2D points with depth CV_32S or CV_32F, stored in std::vector or Mat
trianglePoint2f[]Output array of three 2D points defining the vertices of the triangle.
Returns
- double
Triangle area
MinMaxIdx(InputArray, out double, out double)
finds global minimum and maximum array elements and returns their values and their locations
public static void MinMaxIdx(InputArray src, out double minVal, out double maxVal)
Parameters
srcInputArrayThe source single-channel array
minValdoublePointer to returned minimum value
maxValdoublePointer to returned maximum value
MinMaxIdx(InputArray, out double, out double, int[], int[], InputArray?)
finds global minimum and maximum array elements and returns their values and their locations
public static void MinMaxIdx(InputArray src, out double minVal, out double maxVal, int[] minIdx, int[] maxIdx, InputArray? mask = null)
Parameters
srcInputArrayThe source single-channel array
minValdoublePointer to returned minimum value
maxValdoublePointer to returned maximum value
minIdxint[]The output array of indices of the minimum element.
maxIdxint[]The output array of indices of the maximum element.
maskInputArrayThe optional mask used to select a sub-array.
MinMaxIdx(InputArray, int[], int[])
finds global minimum and maximum array elements and returns their values and their locations
public static void MinMaxIdx(InputArray src, int[] minIdx, int[] maxIdx)
Parameters
srcInputArrayThe source single-channel array
minIdxint[]The output array of indices of the minimum element.
maxIdxint[]The output array of indices of the maximum element.
MinMaxLoc(InputArray, out Point, out Point)
finds global minimum and maximum array elements and returns their values and their locations
public static void MinMaxLoc(InputArray src, out Point minLoc, out Point maxLoc)
Parameters
srcInputArrayThe source single-channel array
minLocPointPointer to returned minimum location
maxLocPointPointer to returned maximum location
MinMaxLoc(InputArray, out double, out double)
finds global minimum and maximum array elements and returns their values and their locations
public static void MinMaxLoc(InputArray src, out double minVal, out double maxVal)
Parameters
srcInputArrayThe source single-channel array
minValdoublePointer to returned minimum value
maxValdoublePointer to returned maximum value
MinMaxLoc(InputArray, out double, out double, out Point, out Point, InputArray?)
finds global minimum and maximum array elements and returns their values and their locations
public static void MinMaxLoc(InputArray src, out double minVal, out double maxVal, out Point minLoc, out Point maxLoc, InputArray? mask = null)
Parameters
srcInputArrayThe source single-channel array
minValdoublePointer to returned minimum value
maxValdoublePointer to returned maximum value
minLocPointPointer to returned minimum location
maxLocPointPointer to returned maximum location
maskInputArrayThe optional mask used to select a sub-array
MixChannels(Mat[], Mat[], int[])
copies selected channels from the input arrays to the selected channels of the output arrays
public static void MixChannels(Mat[] src, Mat[] dst, int[] fromTo)
Parameters
Moments(InputArray, bool)
Calculates all of the moments up to the third order of a polygon or rasterized shape.
public static Moments Moments(InputArray array, bool binaryImage = false)
Parameters
arrayInputArrayA raster image (single-channel, 8-bit or floating-point 2D array) or an array ( 1xN or Nx1 ) of 2D points ( Point or Point2f )
binaryImageboolIf it is true, then all the non-zero image pixels are treated as 1’s
Returns
Moments(byte[,], bool)
Calculates all of the moments up to the third order of a polygon or rasterized shape.
public static Moments Moments(byte[,] array, bool binaryImage = false)
Parameters
arraybyte[,]A raster image (8-bit) 2D array
binaryImageboolIf it is true, then all the non-zero image pixels are treated as 1’s
Returns
Moments(IEnumerable<Point2f>, bool)
Calculates all of the moments up to the third order of a polygon or rasterized shape.
public static Moments Moments(IEnumerable<Point2f> array, bool binaryImage = false)
Parameters
arrayIEnumerable<Point2f>Array of 2D points
binaryImageboolIf it is true, then all the non-zero image pixels are treated as 1’s
Returns
Moments(IEnumerable<Point>, bool)
Calculates all of the moments up to the third order of a polygon or rasterized shape.
public static Moments Moments(IEnumerable<Point> array, bool binaryImage = false)
Parameters
arrayIEnumerable<Point>Array of 2D points
binaryImageboolIf it is true, then all the non-zero image pixels are treated as 1’s
Returns
Moments(float[,], bool)
Calculates all of the moments up to the third order of a polygon or rasterized shape.
public static Moments Moments(float[,] array, bool binaryImage = false)
Parameters
arrayfloat[,]A raster image (floating-point) 2D array
binaryImageboolIf it is true, then all the non-zero image pixels are treated as 1’s
Returns
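A common application of the computed moments is the blob centroid, (M10/M00, M01/M00). A minimal sketch (the blob placement is an assumption for the example):

```csharp
using System;
using OpenCvSharp;

// Sketch: centroid of a binary blob from raw moments.
using var img = new Mat(10, 10, MatType.CV_8UC1, Scalar.All(0));
img[new Rect(2, 4, 4, 2)].SetTo(Scalar.All(255));   // 4x2 blob

Moments m = Cv2.Moments(img, binaryImage: true);
double cx = m.M10 / m.M00;   // 3.5 (x spans 2..5)
double cy = m.M01 / m.M00;   // 4.5 (y spans 4..5)
Console.WriteLine($"{cx}, {cy}");
```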
MorphologyDefaultBorderValue()
Default borderValue for Dilate/Erode
public static Scalar MorphologyDefaultBorderValue()
Returns
MorphologyEx(InputArray, OutputArray, MorphTypes, InputArray?, Point?, int, BorderTypes, Scalar?)
Performs advanced morphological transformations
public static void MorphologyEx(InputArray src, OutputArray dst, MorphTypes op, InputArray? element, Point? anchor = null, int iterations = 1, BorderTypes borderType = BorderTypes.Constant, Scalar? borderValue = null)
Parameters
srcInputArraySource image
dstOutputArrayDestination image. It will have the same size and the same type as src
opMorphTypesType of morphological operation
elementInputArrayStructuring element
anchorPoint?Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center
iterationsintNumber of times erosion and dilation are applied. [By default this is 1]
borderTypeBorderTypesThe pixel extrapolation method. [By default this is BorderType.Constant]
borderValueScalar?The border value in case of a constant border. The default value has a special meaning. [By default this is Cv2.MorphologyDefaultBorderValue()]
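A minimal sketch (the image contents are assumptions for the example): morphological opening removes specks smaller than the structuring element while larger regions survive.

```csharp
using System;
using OpenCvSharp;

// Sketch: opening (erode then dilate) with a 3x3 rectangular element.
using var src = new Mat(9, 9, MatType.CV_8UC1, Scalar.All(0));
src[new Rect(1, 1, 4, 4)].SetTo(Scalar.All(255));   // 4x4 square survives
src.Set(7, 7, (byte)255);                           // lone pixel is removed

using var kernel = Cv2.GetStructuringElement(MorphShapes.Rect, new Size(3, 3));
using var dst = new Mat();
Cv2.MorphologyEx(src, dst, MorphTypes.Open, kernel);

Console.WriteLine(dst.At<byte>(7, 7));   // 0   - the speck is gone
Console.WriteLine(dst.At<byte>(2, 2));   // 255 - the square remains
```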
MoveWindow(string, int, int)
Moves window to the specified position
public static void MoveWindow(string winName, int x, int y)
Parameters
winNamestringWindow name
xintThe new x-coordinate of the window
yintThe new y-coordinate of the window
MulSpectrums(InputArray, InputArray, OutputArray, DftFlags, bool)
Performs the per-element multiplication of two Fourier spectrums.
public static void MulSpectrums(InputArray a, InputArray b, OutputArray c, DftFlags flags, bool conjB = false)
Parameters
aInputArrayfirst input array.
bInputArraysecond input array of the same size and type as src1.
cOutputArrayoutput array of the same size and type as src1.
flagsDftFlagsoperation flags; currently, the only supported flag is cv::DFT_ROWS, which indicates that each row of src1 and src2 is an independent 1D Fourier spectrum. If you do not want to use this flag, simply pass 0 as the value.
conjBbooloptional flag that conjugates the second input array before the multiplication (true) or not (false).
MulTransposed(InputArray, OutputArray, bool, InputArray?, double, int)
multiplies matrix by its transposition from the left or from the right
public static void MulTransposed(InputArray src, OutputArray dst, bool aTa, InputArray? delta = null, double scale = 1, int dtype = -1)
Parameters
srcInputArrayThe source matrix
dstOutputArrayThe destination square matrix
aTaboolSpecifies the multiplication ordering; see the description below
deltaInputArrayThe optional delta matrix, subtracted from src before the multiplication. When the matrix is empty ( delta=Mat() ), it’s assumed to be zero, i.e. nothing is subtracted, otherwise if it has the same size as src, then it’s simply subtracted, otherwise it is "repeated" to cover the full src and then subtracted. Type of the delta matrix, when it's not empty, must be the same as the type of created destination matrix, see the rtype description
scaledoubleThe optional scale factor for the matrix product
dtypeintWhen it’s negative, the destination matrix will have the same type as src . Otherwise, it will have type=CV_MAT_DEPTH(rtype), which should be either CV_32F or CV_64F
Multiply(InputArray, InputArray, OutputArray, double, int)
Calculates the per-element scaled product of two arrays
public static void Multiply(InputArray src1, InputArray src2, OutputArray dst, double scale = 1, int dtype = -1)
Parameters
src1InputArrayThe first source array
src2InputArrayThe second source array of the same size and the same type as src1
dstOutputArrayThe destination array; will have the same size and the same type as src1
scaledoubleThe optional scale factor. [By default this is 1]
dtypeint
NamedWindow(string, WindowFlags)
Creates a window.
public static void NamedWindow(string winName, WindowFlags flags = WindowFlags.GuiExpanded)
Parameters
winNamestringName of the window in the window caption that may be used as a window identifier.
flagsWindowFlagsFlags of the window. Currently the only supported flag is CV_WINDOW_AUTOSIZE. If this is set, the window size is automatically adjusted to fit the displayed image (see imshow), and the user cannot change the window size manually.
Norm(InputArray, InputArray, NormTypes, InputArray?)
computes norm of selected part of the difference between two arrays
public static double Norm(InputArray src1, InputArray src2, NormTypes normType = NormTypes.L2, InputArray? mask = null)
Parameters
src1InputArrayThe first source array
src2InputArrayThe second source array of the same size and the same type as src1
normTypeNormTypesType of the norm
maskInputArrayThe optional operation mask
Returns
Norm(InputArray, NormTypes, InputArray?)
Calculates absolute array norm, absolute difference norm, or relative difference norm.
public static double Norm(InputArray src1, NormTypes normType = NormTypes.L2, InputArray? mask = null)
Parameters
src1InputArrayThe first source array
normTypeNormTypesType of the norm
maskInputArrayThe optional operation mask
Returns
Normalize(InputArray, InputOutputArray, double, double, NormTypes, int, InputArray?)
scales and shifts array elements so that either the specified norm (alpha) or the minimum (alpha) and maximum (beta) array values get the specified values
public static void Normalize(InputArray src, InputOutputArray dst, double alpha = 1, double beta = 0, NormTypes normType = NormTypes.L2, int dtype = -1, InputArray? mask = null)
Parameters
srcInputArrayThe source array
dstInputOutputArrayThe destination array; will have the same size as src
alphadoubleThe norm value to normalize to or the lower range boundary in the case of range normalization
betadoubleThe upper range boundary in the case of range normalization; not used for norm normalization
normTypeNormTypesThe normalization type
dtypeintWhen the parameter is negative, the destination array will have the same type as src, otherwise it will have the same number of channels as src and the depth =CV_MAT_DEPTH(rtype)
maskInputArrayThe optional operation mask
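A minimal sketch of range normalization (the input values are assumptions for the example): with NormTypes.MinMax, alpha and beta become the lower and upper bounds of the output range.

```csharp
using System;
using OpenCvSharp;

// Sketch: min-max normalization stretches values to [0, 255].
float[] vals = { 2f, 4f, 6f, 8f };
using var src = new Mat(1, 4, MatType.CV_32FC1, vals);
using var dst = new Mat();

Cv2.Normalize(src, dst, 0, 255, NormTypes.MinMax);
Console.WriteLine(dst.At<float>(0, 0));   // 0   (the minimum)
Console.WriteLine(dst.At<float>(0, 3));   // 255 (the maximum)
```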
PCABackProject(InputArray, InputArray, InputArray, OutputArray)
Reconstructs vectors from their PC projections.
public static void PCABackProject(InputArray data, InputArray mean, InputArray eigenvectors, OutputArray result)
Parameters
dataInputArrayinput samples stored as the matrix rows or as the matrix columns.
meanInputArrayoptional mean value; if the matrix is empty (noArray()), the mean is computed from the data.
eigenvectorsInputArrayeigenvectors of the covariation matrix
resultOutputArrayoutput vectors
PCACompute(InputArray, InputOutputArray, OutputArray, OutputArray, int)
PCA of the supplied dataset.
public static void PCACompute(InputArray data, InputOutputArray mean, OutputArray eigenvectors, OutputArray eigenvalues, int maxComponents = 0)
Parameters
dataInputArrayinput samples stored as the matrix rows or as the matrix columns.
meanInputOutputArrayoptional mean value; if the matrix is empty (noArray()), the mean is computed from the data.
eigenvectorsOutputArrayeigenvectors of the covariation matrix
eigenvaluesOutputArrayeigenvalues of the covariation matrix
maxComponentsintmaximum number of components that PCA should retain; by default, all the components are retained.
PCACompute(InputArray, InputOutputArray, OutputArray, int)
PCA of the supplied dataset.
public static void PCACompute(InputArray data, InputOutputArray mean, OutputArray eigenvectors, int maxComponents = 0)
Parameters
dataInputArrayinput samples stored as the matrix rows or as the matrix columns.
meanInputOutputArrayoptional mean value; if the matrix is empty (noArray()), the mean is computed from the data.
eigenvectorsOutputArrayeigenvectors of the covariation matrix
maxComponentsintmaximum number of components that PCA should retain; by default, all the components are retained.
PCAComputeVar(InputArray, InputOutputArray, OutputArray, OutputArray, double)
PCA of the supplied dataset.
public static void PCAComputeVar(InputArray data, InputOutputArray mean, OutputArray eigenvectors, OutputArray eigenvalues, double retainedVariance)
Parameters
dataInputArrayinput samples stored as the matrix rows or as the matrix columns.
meanInputOutputArrayoptional mean value; if the matrix is empty (noArray()), the mean is computed from the data.
eigenvectorsOutputArrayeigenvectors of the covariation matrix
eigenvaluesOutputArrayeigenvalues of the covariation matrix
retainedVariancedoublePercentage of variance that PCA should retain. Using this parameter will let the PCA decide how many components to retain, but it will always keep at least 2.
PCAComputeVar(InputArray, InputOutputArray, OutputArray, double)
PCA of the supplied dataset.
public static void PCAComputeVar(InputArray data, InputOutputArray mean, OutputArray eigenvectors, double retainedVariance)
Parameters
dataInputArrayinput samples stored as the matrix rows or as the matrix columns.
meanInputOutputArrayoptional mean value; if the matrix is empty (noArray()), the mean is computed from the data.
eigenvectorsOutputArrayeigenvectors of the covariation matrix
retainedVariancedoublePercentage of variance that PCA should retain. Using this parameter lets PCA decide how many components to retain, but it will always keep at least 2.
PCAProject(InputArray, InputArray, InputArray, OutputArray)
Projects vector(s) to the principal component subspace.
public static void PCAProject(InputArray data, InputArray mean, InputArray eigenvectors, OutputArray result)
Parameters
dataInputArrayinput samples stored as the matrix rows or as the matrix columns.
meanInputArrayoptional mean value; if the matrix is empty (noArray()), the mean is computed from the data.
eigenvectorsInputArrayeigenvectors of the covariation matrix
resultOutputArrayoutput vectors
PSNR(InputArray, InputArray, double)
Computes the Peak Signal-to-Noise Ratio (PSNR) image quality metric.
This function calculates the Peak Signal-to-Noise Ratio (PSNR) image quality metric in decibels (dB) between two input arrays src1 and src2. The arrays must have the same type.
public static double PSNR(InputArray src1, InputArray src2, double r = 255)
Parameters
src1InputArrayfirst input array.
src2InputArraysecond input array of the same size as src1.
rdoublethe maximum pixel value (255 by default)
Returns
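The metric this method computes is PSNR = 10·log10(r²/MSE), where MSE is the mean squared error between the two arrays. A pure-Python sketch of that formula (illustrative only; the function name and list-based inputs are not OpenCvSharp API):

```python
import math

def psnr(src1, src2, r=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(src1, src2)) / len(src1)
    if mse == 0:
        return float("inf")  # identical inputs: infinite PSNR
    return 10.0 * math.log10(r * r / mse)

print(psnr([10, 20], [12, 18]))  # ~42.11 dB: MSE = 4, r = 255
```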
Partition<T>(IEnumerable<T>, out int[], PartitionPredicate<T>)
Splits an element set into equivalency classes. Consider using LINQ's GroupBy instead.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static int Partition<T>(IEnumerable<T> vec, out int[] labels, Cv2.PartitionPredicate<T> predicate)
Parameters
vecIEnumerable<T>Set of elements stored as a vector.
labelsint[]Output vector of labels. It contains as many elements as vec. Each label labels[i] is a 0-based cluster index of vec[i] .
predicateCv2.PartitionPredicate<T>Equivalence predicate (a boolean function of two arguments). The predicate returns true when the elements are certainly in the same class, and returns false if they may or may not be in the same class.
Returns
Type Parameters
T
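The equivalence-class labelling this method performs can be sketched with a union-find structure in pure Python (illustrative, not the OpenCvSharp implementation):

```python
def partition(vec, predicate):
    """Group elements into equivalence classes via union-find.
    Returns (number_of_classes, labels), labels[i] being the
    0-based class index of vec[i]."""
    n = len(vec)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Union every pair the predicate declares equivalent.
    for i in range(n):
        for j in range(i + 1, n):
            if predicate(vec[i], vec[j]):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri

    # Relabel roots as consecutive 0-based class indices.
    roots = {}
    labels = [0] * n
    for i in range(n):
        labels[i] = roots.setdefault(find(i), len(roots))
    return len(roots), labels
```

For example, grouping integers whose difference is at most 1 splits `[1, 2, 10, 11]` into two classes.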
PatchNaNs(InputOutputArray, double)
converts NaN's to the given number
public static void PatchNaNs(InputOutputArray a, double val = 0)
Parameters
aInputOutputArrayinput/output matrix (CV_32F type); NaN elements are replaced in place.valdoublevalue to convert the NaNs to (0 by default).
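The operation itself is a simple in-place replacement, sketched here in pure Python (illustrative names, not the OpenCvSharp API):

```python
import math

def patch_nans(a, val=0.0):
    """Replace every NaN in the mutable sequence a with val, in place."""
    for i, v in enumerate(a):
        if math.isnan(v):
            a[i] = val
    return a
```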
PencilSketch(InputArray, OutputArray, OutputArray, float, float, float)
Pencil-like non-photorealistic line drawing
public static void PencilSketch(InputArray src, OutputArray dst1, OutputArray dst2, float sigmaS = 60, float sigmaR = 0.07, float shadeFactor = 0.02)
Parameters
srcInputArrayInput 8-bit 3-channel image.
dst1OutputArrayOutput 8-bit 1-channel image.
dst2OutputArrayOutput image with the same size and type as src.
sigmaSfloatRange between 0 to 200.
sigmaRfloatRange between 0 to 1.
shadeFactorfloatRange between 0 to 0.1.
PerspectiveTransform(InputArray, OutputArray, InputArray)
performs perspective transformation of each element of multi-channel input matrix
public static void PerspectiveTransform(InputArray src, OutputArray dst, InputArray m)
Parameters
srcInputArrayThe source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed
dstOutputArrayThe destination array; it will have the same size and same type as src
mInputArray3x3 or 4x4 transformation matrix
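For 2D points and a 3x3 matrix m, each point maps to (x', y') = ((m00·x + m01·y + m02)/w, (m10·x + m11·y + m12)/w) with w = m20·x + m21·y + m22. A pure-Python sketch of that mapping (illustrative, not the OpenCvSharp API):

```python
def perspective_transform_2d(points, m):
    """Apply a 3x3 homography m (list of 3 row lists) to (x, y) points."""
    out = []
    for x, y in points:
        w = m[2][0] * x + m[2][1] * y + m[2][2]  # projective coordinate
        out.append(((m[0][0] * x + m[0][1] * y + m[0][2]) / w,
                    (m[1][0] * x + m[1][1] * y + m[1][2]) / w))
    return out
```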
PerspectiveTransform(IEnumerable<Point2d>, Mat)
performs perspective transformation of each element of multi-channel input matrix
public static Point2d[] PerspectiveTransform(IEnumerable<Point2d> src, Mat m)
Parameters
srcIEnumerable<Point2d>The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed
mMat3x3 or 4x4 transformation matrix
Returns
- Point2d[]
The destination array; it will have the same size and same type as src
PerspectiveTransform(IEnumerable<Point2f>, Mat)
performs perspective transformation of each element of multi-channel input matrix
public static Point2f[] PerspectiveTransform(IEnumerable<Point2f> src, Mat m)
Parameters
srcIEnumerable<Point2f>The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed
mMat3x3 or 4x4 transformation matrix
Returns
- Point2f[]
The destination array; it will have the same size and same type as src
PerspectiveTransform(IEnumerable<Point3d>, Mat)
performs perspective transformation of each element of multi-channel input matrix
public static Point3d[] PerspectiveTransform(IEnumerable<Point3d> src, Mat m)
Parameters
srcIEnumerable<Point3d>The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed
mMat3x3 or 4x4 transformation matrix
Returns
- Point3d[]
The destination array; it will have the same size and same type as src
PerspectiveTransform(IEnumerable<Point3f>, Mat)
performs perspective transformation of each element of multi-channel input matrix
public static Point3f[] PerspectiveTransform(IEnumerable<Point3f> src, Mat m)
Parameters
srcIEnumerable<Point3f>The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed
mMat3x3 or 4x4 transformation matrix
Returns
- Point3f[]
The destination array; it will have the same size and same type as src
Phase(InputArray, InputArray, OutputArray, bool)
Calculates the rotation angle of 2D vectors.
public static void Phase(InputArray x, InputArray y, OutputArray angle, bool angleInDegrees = false)
Parameters
xInputArrayinput floating-point array of x-coordinates of 2D vectors.
yInputArrayinput array of y-coordinates of 2D vectors; it must have the same size and the same type as x.
angleOutputArrayoutput array of vector angles; it has the same size and same type as x.
angleInDegreesboolwhen true, the function calculates the angle in degrees, otherwise, they are measured in radians.
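Each angle is atan2(y, x), shifted into [0, 2π) (or [0, 360) in degrees). A pure-Python sketch (illustrative, not the OpenCvSharp API):

```python
import math

def phase(xs, ys, angle_in_degrees=False):
    """Rotation angle of each 2D vector (x, y), in [0, 2*pi) or [0, 360)."""
    out = []
    for x, y in zip(xs, ys):
        a = math.atan2(y, x)
        if a < 0:
            a += 2 * math.pi  # shift atan2's (-pi, pi] range to [0, 2*pi)
        out.append(math.degrees(a) if angle_in_degrees else a)
    return out
```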
PhaseCorrelate(InputArray, InputArray, InputArray, out double)
The function is used to detect translational shifts that occur between two images.
The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation.
Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.
public static Point2d PhaseCorrelate(InputArray src1, InputArray src2, InputArray window, out double response)
Parameters
src1InputArraySource floating point array (CV_32FC1 or CV_64FC1)
src2InputArraySource floating point array (CV_32FC1 or CV_64FC1)
windowInputArrayFloating point array with windowing coefficients to reduce edge effects (optional).
responsedoubleSignal power within the 5x5 centroid around the peak, between 0 and 1 (optional).
Returns
- Point2d
detected phase shift(sub-pixel) between the two arrays.
PointPolygonTest(InputArray, Point2f, bool)
Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary
public static double PointPolygonTest(InputArray contour, Point2f pt, bool measureDist)
Parameters
contourInputArrayInput contour.ptPoint2fPoint tested against the contour.measureDistboolIf true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not.
Returns
PointPolygonTest(IEnumerable<Point2f>, Point2f, bool)
Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary.
public static double PointPolygonTest(IEnumerable<Point2f> contour, Point2f pt, bool measureDist)
Parameters
contourIEnumerable<Point2f>Input contour.
ptPoint2fPoint tested against the contour.
measureDistboolIf true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not.
Returns
- double
Positive (inside), negative (outside), or zero (on an edge) value.
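The measureDist=false case reduces to a standard even-odd ray-casting test, sketched here in pure Python (illustrative only; it returns a bool rather than the signed value, and does not handle the on-edge case):

```python
def point_in_polygon(contour, pt):
    """Even-odd ray-casting test: True when pt lies strictly inside contour,
    given as a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray through pt
            # x-coordinate of the crossing point
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```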
PointPolygonTest(IEnumerable<Point>, Point2f, bool)
Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary
public static double PointPolygonTest(IEnumerable<Point> contour, Point2f pt, bool measureDist)
Parameters
contourIEnumerable<Point>Input contour.ptPoint2fPoint tested against the contour.measureDistboolIf true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not.
Returns
PolarToCart(InputArray, InputArray, OutputArray, OutputArray, bool)
Calculates x and y coordinates of 2D vectors from their magnitude and angle.
public static void PolarToCart(InputArray magnitude, InputArray angle, OutputArray x, OutputArray y, bool angleInDegrees = false)
Parameters
magnitudeInputArrayinput floating-point array of magnitudes of 2D vectors; it can be an empty matrix (=Mat()), in which case the function assumes that all the magnitudes are = 1; if it is not empty, it must have the same size and type as angle.
angleInputArrayinput floating-point array of angles of 2D vectors.
xOutputArrayoutput array of x-coordinates of 2D vectors; it has the same size and type as angle.
yOutputArrayoutput array of y-coordinates of 2D vectors; it has the same size and type as angle.
angleInDegreesboolwhen true, the input angles are measured in degrees, otherwise, they are measured in radians.
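The conversion is x = m·cos(a), y = m·sin(a) per element, sketched in pure Python (illustrative, not the OpenCvSharp API):

```python
import math

def polar_to_cart(magnitudes, angles, angle_in_degrees=False):
    """Cartesian coordinates from (magnitude, angle) pairs:
    x = m*cos(a), y = m*sin(a)."""
    xs, ys = [], []
    for m, a in zip(magnitudes, angles):
        if angle_in_degrees:
            a = math.radians(a)
        xs.append(m * math.cos(a))
        ys.append(m * math.sin(a))
    return xs, ys
```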
Polylines(InputOutputArray, InputArray, bool, Scalar, int, LineTypes, int)
draws one or more polygonal curves
public static void Polylines(InputOutputArray img, InputArray pts, bool isClosed, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayImage.ptsInputArrayArray of polygonal curves.isClosedboolFlag indicating whether the drawn polylines are closed or not. If they are closed, the function draws a line from the last vertex of each curve to its first vertex.colorScalarPolyline color.thicknessintThickness of the polyline edges.lineTypeLineTypesType of the line segments.shiftintNumber of fractional bits in the vertex coordinates.
Polylines(Mat, IEnumerable<IEnumerable<Point>>, bool, Scalar, int, LineTypes, int)
draws one or more polygonal curves
public static void Polylines(Mat img, IEnumerable<IEnumerable<Point>> pts, bool isClosed, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgMatImage.ptsIEnumerable<IEnumerable<Point>>Array of polygonal curves.isClosedboolFlag indicating whether the drawn polylines are closed or not. If they are closed, the function draws a line from the last vertex of each curve to its first vertex.colorScalarPolyline color.thicknessintThickness of the polyline edges.lineTypeLineTypesType of the line segments.shiftintNumber of fractional bits in the vertex coordinates.
Pow(InputArray, double, OutputArray)
raises the input matrix elements to the specified power (b = a**power)
public static void Pow(InputArray src, double power, OutputArray dst)
Parameters
srcInputArrayThe source array
powerdoubleThe exponent of power
dstOutputArrayThe destination array; will have the same size and the same type as src
PreCornerDetect(InputArray, OutputArray, int, BorderTypes)
computes another complex cornerness criteria at each pixel
public static void PreCornerDetect(InputArray src, OutputArray dst, int ksize, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArraySource single-channel 8-bit or floating-point image.dstOutputArrayOutput image; it has type CV_32F and the same size as src.ksizeintAperture size of the Sobel operator.borderTypeBorderTypesPixel extrapolation method.
ProjectPoints(InputArray, InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray?, double)
projects points from the model coordinate space to the image coordinates. Also computes derivatives of the image coordinates w.r.t the intrinsic and extrinsic camera parameters
public static void ProjectPoints(InputArray objectPoints, InputArray rvec, InputArray tvec, InputArray cameraMatrix, InputArray distCoeffs, OutputArray imagePoints, OutputArray? jacobian = null, double aspectRatio = 0)
Parameters
objectPointsInputArrayArray of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points in the view.
rvecInputArrayRotation vector (3x1).
tvecInputArrayTranslation vector (3x1).
cameraMatrixInputArrayCamera matrix (3x3)
distCoeffsInputArrayInput vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
imagePointsOutputArrayOutput array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel
jacobianOutputArrayOptional output 2Nx(10 + numDistCoeffs) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.
aspectRatiodoubleOptional “fixed aspect ratio” parameter. If the parameter is not 0, the function assumes that the aspect ratio (fx/fy) is fixed and correspondingly adjusts the jacobian matrix.
ProjectPoints(IEnumerable<Point3f>, double[], double[], double[,], double[], out Point2f[], out double[,], double)
projects points from the model coordinate space to the image coordinates. Also computes derivatives of the image coordinates w.r.t the intrinsic and extrinsic camera parameters
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void ProjectPoints(IEnumerable<Point3f> objectPoints, double[] rvec, double[] tvec, double[,] cameraMatrix, double[] distCoeffs, out Point2f[] imagePoints, out double[,] jacobian, double aspectRatio = 0)
Parameters
objectPointsIEnumerable<Point3f>Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points in the view.
rvecdouble[]Rotation vector (3x1).
tvecdouble[]Translation vector (3x1).
cameraMatrixdouble[,]Camera matrix (3x3)
distCoeffsdouble[]Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
imagePointsPoint2f[]Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel
jacobiandouble[,]Optional output 2Nx(10 + numDistCoeffs) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.
aspectRatiodoubleOptional “fixed aspect ratio” parameter. If the parameter is not 0, the function assumes that the aspect ratio (fx/fy) is fixed and correspondingly adjusts the jacobian matrix.
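With an identity rotation and zero distortion, the projection reduces to u = fx·X'/Z' + cx, v = fy·Y'/Z' + cy after translating by tvec. A simplified pure-Python sketch of that special case (illustrative; it ignores rvec, distortion, and the jacobian):

```python
def project_points(object_points, tvec, fx, fy, cx, cy):
    """Pinhole projection with identity rotation and no lens distortion."""
    out = []
    for X, Y, Z in object_points:
        # translate into the camera frame, then divide by depth
        Xc, Yc, Zc = X + tvec[0], Y + tvec[1], Z + tvec[2]
        out.append((fx * Xc / Zc + cx, fy * Yc / Zc + cy))
    return out
```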
PutText(InputOutputArray, string, Point, HersheyFonts, double, Scalar, int, LineTypes, bool)
renders text string in the image
public static void PutText(InputOutputArray img, string text, Point org, HersheyFonts fontFace, double fontScale, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, bool bottomLeftOrigin = false)
Parameters
imgInputOutputArrayImage.
textstringText string to be drawn.
orgPointBottom-left corner of the text string in the image.
fontFaceHersheyFontsFont type, see #HersheyFonts.
fontScaledoubleFont scale factor that is multiplied by the font-specific base size.
colorScalarText color.
thicknessintThickness of the lines used to draw a text.
lineTypeLineTypesLine type. See #LineTypes
bottomLeftOriginboolWhen true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.
PyrDown(InputArray, OutputArray, Size?, BorderTypes)
Blurs an image and downsamples it.
public static void PyrDown(InputArray src, OutputArray dst, Size? dstSize = null, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayinput image.
dstOutputArrayoutput image; it has the specified size and the same type as src.
dstSizeSize?size of the output image; by default, it is computed as Size((src.cols+1)/2, (src.rows+1)/2).
borderTypeBorderTypesPixel extrapolation method.
PyrMeanShiftFiltering(InputArray, OutputArray, double, double, int, TermCriteria?)
Performs initial step of meanshift segmentation of an image.
public static void PyrMeanShiftFiltering(InputArray src, OutputArray dst, double sp, double sr, int maxLevel = 1, TermCriteria? termcrit = null)
Parameters
srcInputArrayThe source 8-bit, 3-channel image.
dstOutputArrayThe destination image of the same format and the same size as the source.
spdoubleThe spatial window radius.
srdoubleThe color window radius.
maxLevelintMaximum level of the pyramid for the segmentation.
termcritTermCriteria?Termination criteria: when to stop meanshift iterations.
PyrUp(InputArray, OutputArray, Size?, BorderTypes)
Upsamples an image and then blurs it.
public static void PyrUp(InputArray src, OutputArray dst, Size? dstSize = null, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayinput image.
dstOutputArrayoutput image. It has the specified size and the same type as src.
dstSizeSize?size of the output image; by default, it is computed as Size(src.cols*2, src.rows*2).
borderTypeBorderTypesPixel extrapolation method.
RQDecomp3x3(InputArray, OutputArray, OutputArray, OutputArray?, OutputArray?, OutputArray?)
Computes RQ decomposition of 3x3 matrix
public static Vec3d RQDecomp3x3(InputArray src, OutputArray mtxR, OutputArray mtxQ, OutputArray? qx = null, OutputArray? qy = null, OutputArray? qz = null)
Parameters
srcInputArray3x3 input matrix.
mtxROutputArrayOutput 3x3 upper-triangular matrix.
mtxQOutputArrayOutput 3x3 orthogonal matrix.
qxOutputArrayOptional output 3x3 rotation matrix around x-axis.
qyOutputArrayOptional output 3x3 rotation matrix around y-axis.
qzOutputArrayOptional output 3x3 rotation matrix around z-axis.
Returns
RQDecomp3x3(double[,], out double[,], out double[,])
Computes RQ decomposition of 3x3 matrix
public static Vec3d RQDecomp3x3(double[,] src, out double[,] mtxR, out double[,] mtxQ)
Parameters
srcdouble[,]3x3 input matrix.
mtxRdouble[,]Output 3x3 upper-triangular matrix.
mtxQdouble[,]Output 3x3 orthogonal matrix.
Returns
RQDecomp3x3(double[,], out double[,], out double[,], out double[,], out double[,], out double[,])
Computes RQ decomposition of 3x3 matrix
public static Vec3d RQDecomp3x3(double[,] src, out double[,] mtxR, out double[,] mtxQ, out double[,] qx, out double[,] qy, out double[,] qz)
Parameters
srcdouble[,]3x3 input matrix.
mtxRdouble[,]Output 3x3 upper-triangular matrix.
mtxQdouble[,]Output 3x3 orthogonal matrix.
qxdouble[,]Optional output 3x3 rotation matrix around x-axis.
qydouble[,]Optional output 3x3 rotation matrix around y-axis.
qzdouble[,]Optional output 3x3 rotation matrix around z-axis.
Returns
RandShuffle(InputOutputArray, double)
shuffles the input array elements
public static void RandShuffle(InputOutputArray dst, double iterFactor)
Parameters
dstInputOutputArrayThe input/output numerical 1D array
iterFactordoubleThe scale factor that determines the number of random swap operations.
RandShuffle(InputOutputArray, double, ref RNG)
shuffles the input array elements
public static void RandShuffle(InputOutputArray dst, double iterFactor, ref RNG rng)
Parameters
dstInputOutputArrayThe input/output numerical 1D array
iterFactordoubleThe scale factor that determines the number of random swap operations.
rngRNGThe optional random number generator used for shuffling. If it is null, theRNG() is used instead.
Randn(InputOutputArray, InputArray, InputArray)
fills array with normally-distributed random numbers with the specified mean and the standard deviation
public static void Randn(InputOutputArray dst, InputArray mean, InputArray stddev)
Parameters
dstInputOutputArrayThe output array of random numbers. The array must be pre-allocated and have 1 to 4 channels
meanInputArrayThe mean value (expectation) of the generated random numbers
stddevInputArrayThe standard deviation of the generated random numbers
Randn(InputOutputArray, Scalar, Scalar)
fills array with normally-distributed random numbers with the specified mean and the standard deviation
public static void Randn(InputOutputArray dst, Scalar mean, Scalar stddev)
Parameters
dstInputOutputArrayThe output array of random numbers. The array must be pre-allocated and have 1 to 4 channels
meanScalarThe mean value (expectation) of the generated random numbers
stddevScalarThe standard deviation of the generated random numbers
Randu(InputOutputArray, InputArray, InputArray)
fills array with uniformly-distributed random numbers from the range [low, high)
public static void Randu(InputOutputArray dst, InputArray low, InputArray high)
Parameters
dstInputOutputArrayThe output array of random numbers. The array must be pre-allocated and have 1 to 4 channels
lowInputArrayThe inclusive lower boundary of the generated random numbers
highInputArrayThe exclusive upper boundary of the generated random numbers
Randu(InputOutputArray, Scalar, Scalar)
fills array with uniformly-distributed random numbers from the range [low, high)
public static void Randu(InputOutputArray dst, Scalar low, Scalar high)
Parameters
dstInputOutputArrayThe output array of random numbers. The array must be pre-allocated and have 1 to 4 channels
lowScalarThe inclusive lower boundary of the generated random numbers
highScalarThe exclusive upper boundary of the generated random numbers
RecoverPose(InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, InputOutputArray?)
Recover relative camera rotation and translation from an estimated essential matrix and the corresponding points in two images, using the cheirality check. Returns the number of inliers that pass the check.
public static int RecoverPose(InputArray E, InputArray points1, InputArray points2, InputArray cameraMatrix, OutputArray R, OutputArray t, InputOutputArray? mask = null)
Parameters
EInputArrayThe input essential matrix.
points1InputArrayArray of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2InputArrayArray of the second image points of the same size and format as points1.
cameraMatrixInputArrayCamera matrix K = [fx 0 cx; 0 fy cy; 0 0 1]. Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix.
ROutputArrayRecovered relative rotation.
tOutputArrayRecovered relative translation.
maskInputOutputArrayInput/output mask for inliers in points1 and points2. If it is not empty, it marks inliers in points1 and points2 for the given essential matrix E; only these inliers will be used to recover the pose. In the output mask, only the inliers that pass the cheirality check are kept. This function decomposes an essential matrix using decomposeEssentialMat and then verifies possible pose hypotheses by doing the cheirality check, which basically means that the triangulated 3D points should have positive depth.
Returns
RecoverPose(InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, double, InputOutputArray?, OutputArray?)
Recover relative camera rotation and translation from an estimated essential matrix and the corresponding points in two images, using the cheirality check. Returns the number of inliers that pass the check.
public static int RecoverPose(InputArray E, InputArray points1, InputArray points2, InputArray cameraMatrix, OutputArray R, OutputArray t, double distanceTresh, InputOutputArray? mask = null, OutputArray? triangulatedPoints = null)
Parameters
EInputArrayThe input essential matrix.
points1InputArrayArray of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2InputArrayArray of the second image points of the same size and format as points1.
cameraMatrixInputArrayCamera matrix K = [fx 0 cx; 0 fy cy; 0 0 1]. Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix.
ROutputArrayRecovered relative rotation.
tOutputArrayRecovered relative translation.
distanceTreshdoubleThreshold distance used to filter out far-away points (i.e. infinite points).
maskInputOutputArrayInput/output mask for inliers in points1 and points2. If it is not empty, it marks inliers in points1 and points2 for the given essential matrix E; only these inliers will be used to recover the pose. In the output mask, only the inliers that pass the cheirality check are kept. This function decomposes an essential matrix using decomposeEssentialMat and then verifies possible pose hypotheses by doing the cheirality check, which basically means that the triangulated 3D points should have positive depth.
triangulatedPointsOutputArray3d points which were reconstructed by triangulation.
Returns
RecoverPose(InputArray, InputArray, InputArray, OutputArray, OutputArray, double, Point2d, InputOutputArray?)
Recover relative camera rotation and translation from an estimated essential matrix and the corresponding points in two images, using the cheirality check. Returns the number of inliers that pass the check.
public static int RecoverPose(InputArray E, InputArray points1, InputArray points2, OutputArray R, OutputArray t, double focal, Point2d pp, InputOutputArray? mask = null)
Parameters
EInputArrayThe input essential matrix.
points1InputArrayArray of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2InputArrayArray of the second image points of the same size and format as points1.
ROutputArrayRecovered relative rotation.
tOutputArrayRecovered relative translation.
focaldoubleFocal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.
ppPoint2dprincipal point of the camera.
maskInputOutputArrayInput/output mask for inliers in points1 and points2. If it is not empty, it marks inliers in points1 and points2 for the given essential matrix E; only these inliers will be used to recover the pose. In the output mask, only the inliers that pass the cheirality check are kept. This function decomposes an essential matrix using decomposeEssentialMat and then verifies possible pose hypotheses by doing the cheirality check, which basically means that the triangulated 3D points should have positive depth.
Returns
Rectangle(InputOutputArray, Point, Point, Scalar, int, LineTypes, int)
Draws simple, thick or filled rectangle
public static void Rectangle(InputOutputArray img, Point pt1, Point pt2, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayImage.
pt1PointOne of the rectangle vertices.
pt2PointOpposite rectangle vertex.
colorScalarLine color (RGB) or brightness (grayscale image).
thicknessintThickness of lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. [By default this is 1]
lineTypeLineTypesType of the line, see cvLine description. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the point coordinates. [By default this is 0]
Rectangle(InputOutputArray, Rect, Scalar, int, LineTypes, int)
Draws simple, thick or filled rectangle
public static void Rectangle(InputOutputArray img, Rect rect, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgInputOutputArrayImage.
rectRectRectangle.
colorScalarLine color (RGB) or brightness (grayscale image).
thicknessintThickness of lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. [By default this is 1]
lineTypeLineTypesType of the line, see cvLine description. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the point coordinates. [By default this is 0]
Rectangle(Mat, Point, Point, Scalar, int, LineTypes, int)
Draws simple, thick or filled rectangle
public static void Rectangle(Mat img, Point pt1, Point pt2, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgMatImage.
pt1PointOne of the rectangle vertices.
pt2PointOpposite rectangle vertex.
colorScalarLine color (RGB) or brightness (grayscale image).
thicknessintThickness of lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. [By default this is 1]
lineTypeLineTypesType of the line, see cvLine description. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the point coordinates. [By default this is 0]
Rectangle(Mat, Rect, Scalar, int, LineTypes, int)
Draws simple, thick or filled rectangle
public static void Rectangle(Mat img, Rect rect, Scalar color, int thickness = 1, LineTypes lineType = LineTypes.Link8, int shift = 0)
Parameters
imgMatImage.
rectRectRectangle.
colorScalarLine color (RGB) or brightness (grayscale image).
thicknessintThickness of lines that make up the rectangle. Negative values cause the function to draw a filled rectangle. [By default this is 1]
lineTypeLineTypesType of the line, see cvLine description. [By default this is LineType.Link8]
shiftintNumber of fractional bits in the point coordinates. [By default this is 0]
Rectify3Collinear(InputArray, InputArray, InputArray, InputArray, InputArray, InputArray, IEnumerable<InputArray>, IEnumerable<InputArray>, Size, InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, double, Size, out Rect, out Rect, StereoRectificationFlags)
computes the rectification transformations for 3-head camera, where all the heads are on the same line.
public static float Rectify3Collinear(InputArray cameraMatrix1, InputArray distCoeffs1, InputArray cameraMatrix2, InputArray distCoeffs2, InputArray cameraMatrix3, InputArray distCoeffs3, IEnumerable<InputArray> imgpt1, IEnumerable<InputArray> imgpt3, Size imageSize, InputArray R12, InputArray T12, InputArray R13, InputArray T13, OutputArray R1, OutputArray R2, OutputArray R3, OutputArray P1, OutputArray P2, OutputArray P3, OutputArray Q, double alpha, Size newImgSize, out Rect roi1, out Rect roi2, StereoRectificationFlags flags)
Parameters
cameraMatrix1InputArraydistCoeffs1InputArraycameraMatrix2InputArraydistCoeffs2InputArraycameraMatrix3InputArraydistCoeffs3InputArrayimgpt1IEnumerable<InputArray>imgpt3IEnumerable<InputArray>imageSizeSizeR12InputArrayT12InputArrayR13InputArrayT13InputArrayR1OutputArrayR2OutputArrayR3OutputArrayP1OutputArrayP2OutputArrayP3OutputArrayQOutputArrayalphadoublenewImgSizeSizeroi1Rectroi2RectflagsStereoRectificationFlags
Returns
Reduce(InputArray, OutputArray, ReduceDimension, ReduceTypes, int)
transforms 2D matrix to 1D row or column vector by taking sum, minimum, maximum or mean value over all the rows
public static void Reduce(InputArray src, OutputArray dst, ReduceDimension dim, ReduceTypes rtype, int dtype)
Parameters
srcInputArrayThe source 2D matrix
dstOutputArrayThe destination vector. Its size and type is defined by dim and dtype parameters
dimReduceDimensionThe dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row and 1 means that the matrix is reduced to a single column
rtypeReduceTypesThe reduction operation: sum, average, maximum, or minimum.dtypeintWhen it is negative, the destination vector will have the same type as the source matrix, otherwise, its type will be CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels())
ReduceArgMax(InputArray, OutputArray, int, bool)
Finds indices of max elements along provided axis
public static void ReduceArgMax(InputArray src, OutputArray dst, int axis, bool lastIndex = false)
Parameters
srcInputArrayInput single-channel array
dstOutputArrayOutput array of type CV_32SC1 with the same dimensionality as src, except for axis being reduced - it should be set to 1.
axisintAxis to reduce along
lastIndexboolWhether to get the index of first or last occurrence of max
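The reduction picks, per row (axis 1) or per column (axis 0), the index of the maximal element, with lastIndex choosing the last occurrence on ties. A pure-Python sketch over 2D lists (illustrative, not the OpenCvSharp API):

```python
def reduce_arg_max(src, axis, last_index=False):
    """Index of the max element per column (axis=0) or per row (axis=1)
    of a 2D list; last_index picks the last occurrence on ties."""
    if axis == 1:
        rows = src
    else:
        rows = list(map(list, zip(*src)))  # transpose to reduce columns
    out = []
    for row in rows:
        best = max(row)
        idxs = [i for i, v in enumerate(row) if v == best]
        out.append(idxs[-1] if last_index else idxs[0])
    return out
```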
ReduceArgMin(InputArray, OutputArray, int, bool)
Finds indices of min elements along provided axis
public static void ReduceArgMin(InputArray src, OutputArray dst, int axis, bool lastIndex = false)
Parameters
srcInputArrayInput single-channel array
dstOutputArrayOutput array of type CV_32SC1 with the same dimensionality as src, except for axis being reduced - it should be set to 1.
axisintAxis to reduce along
lastIndexboolWhether to get the index of first or last occurrence of min
Remap(InputArray, OutputArray, InputArray, InputArray, InterpolationFlags, BorderTypes, Scalar?)
Applies a generic geometrical transformation to an image.
public static void Remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, InterpolationFlags interpolation = InterpolationFlags.Linear, BorderTypes borderMode = BorderTypes.Constant, Scalar? borderValue = null)
Parameters
srcInputArraySource image.
dstOutputArrayDestination image. It has the same size as map1 and the same type as src
map1InputArrayThe first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2.
map2InputArrayThe second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.
interpolationInterpolationFlagsInterpolation method. The method INTER_AREA is not supported by this function.
borderModeBorderTypesPixel extrapolation method. When borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image that correspond to the "outliers" in the source image are not modified by the function.
borderValueScalar?Value used in case of a constant border. By default, it is 0.
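A minimal nearest-neighbor sketch of the remapping rule dst(x, y) = src(map_y(x, y), map_x(x, y)), in NumPy for illustration (border handling omitted; the real function also supports the packed CV_16SC2 map format and other interpolation modes):

```python
import numpy as np

src = np.arange(16, dtype=np.float32).reshape(4, 4)

# Build float maps (the CV_32FC1 map1/map2 case): a horizontal flip.
ys, xs = np.mgrid[0:4, 0:4]
map_x = (3 - xs).astype(np.float32)
map_y = ys.astype(np.float32)

# Nearest-neighbor lookup; Linear interpolation would blend 4 neighbors instead.
dst = src[np.rint(map_y).astype(int), np.rint(map_x).astype(int)]
```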
Repeat(InputArray, int, int, OutputArray)
replicates the input matrix the specified number of times in the horizontal and/or vertical direction
public static void Repeat(InputArray src, int ny, int nx, OutputArray dst)
Parameters
srcInputArrayThe source array to replicate
nyintHow many times the src is repeated along the vertical axis
nxintHow many times the src is repeated along the horizontal axis
dstOutputArrayThe destination array; will have the same type as src
Repeat(Mat, int, int)
replicates the input matrix the specified number of times in the horizontal and/or vertical direction
public static Mat Repeat(Mat src, int ny, int nx)
Parameters
srcMatThe source array to replicate
nyintHow many times the src is repeated along the vertical axis
nxintHow many times the src is repeated along the horizontal axis
Returns
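Repeat corresponds directly to NumPy's tile with the (ny, nx) pair; a sketch for illustration:

```python
import numpy as np

src = np.array([[1, 2],
                [3, 4]])

# ny = 2 vertical copies, nx = 3 horizontal copies -> a 4x6 result
dst = np.tile(src, (2, 3))
```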
ReprojectImageTo3D(InputArray, OutputArray, InputArray, bool, int)
reprojects disparity image to 3D: (x,y,d)->(X,Y,Z) using the matrix Q returned by cv::stereoRectify
public static void ReprojectImageTo3D(InputArray disparity, OutputArray _3dImage, InputArray Q, bool handleMissingValues = false, int ddepth = -1)
Parameters
disparityInputArrayInput single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image.
_3dImageOutputArrayOutput 3-channel floating-point image of the same size as disparity. Each element of _3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map.
QInputArray4 x 4 perspective transformation matrix that can be obtained with stereoRectify().
handleMissingValuesboolIndicates, whether the function should handle missing values (i.e. points where the disparity was not computed). If handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoBM::operator() ) are transformed to 3D points with a very large Z value (currently set to 10000).
ddepthintThe optional output array depth. If it is -1, the output image will have CV_32F depth. ddepth can also be set to CV_16S, CV_32S or CV_32F.
Resize(InputArray, OutputArray, Size, double, double, InterpolationFlags)
Resizes an image.
public static void Resize(InputArray src, OutputArray dst, Size dsize, double fx = 0, double fy = 0, InterpolationFlags interpolation = InterpolationFlags.Linear)
Parameters
srcInputArrayinput image.
dstOutputArrayoutput image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src.
dsizeSizeoutput image size; if it equals zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)). Either dsize or both fx and fy must be non-zero.
fxdoublescale factor along the horizontal axis; when it equals 0, it is computed as: (double)dsize.width/src.cols
fydoublescale factor along the vertical axis; when it equals 0, it is computed as: (double)dsize.height/src.rows
interpolationInterpolationFlagsinterpolation method
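A nearest-neighbor sketch of the dsize/fx/fy relationship, in NumPy for illustration (InterpolationFlags.Linear and the other modes interpolate instead of snapping to the nearest source pixel):

```python
import numpy as np

def resize_nearest(src, dsize):
    w, h = dsize                                 # dsize = Size(width, height)
    fx, fy = w / src.shape[1], h / src.shape[0]  # the implied scale factors
    ys = np.minimum((np.arange(h) / fy).astype(int), src.shape[0] - 1)
    xs = np.minimum((np.arange(w) / fx).astype(int), src.shape[1] - 1)
    return src[ys[:, None], xs[None, :]]

small = np.array([[1, 2],
                  [3, 4]])
big = resize_nearest(small, (4, 4))              # fx = fy = 2
```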
ResizeWindow(string, Size)
Resizes window to the specified size
public static void ResizeWindow(string winName, Size size)
Parameters
winNamestringName of the window.
sizeSizeThe new window size.
ResizeWindow(string, int, int)
Resizes window to the specified size
public static void ResizeWindow(string winName, int width, int height)
Parameters
winNamestringName of the window.
widthintThe new window width.
heightintThe new window height.
Rodrigues(InputArray, OutputArray, OutputArray?)
converts rotation vector to rotation matrix or vice versa using Rodrigues transformation
public static void Rodrigues(InputArray src, OutputArray dst, OutputArray? jacobian = null)
Parameters
srcInputArrayInput rotation vector (3x1 or 1x3) or rotation matrix (3x3).
dstOutputArrayOutput rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively.
jacobianOutputArrayOptional output Jacobian matrix, 3x9 or 9x3, which is a matrix of partial derivatives of the output array components with respect to the input array components.
Rodrigues(double[,], out double[], out double[,])
converts rotation matrix to rotation vector using Rodrigues transformation
public static void Rodrigues(double[,] matrix, out double[] vector, out double[,] jacobian)
Parameters
matrixdouble[,]Input rotation matrix (3x3).
vectordouble[]Output rotation vector (3x1).
jacobiandouble[,]Optional output Jacobian matrix, 3x9, which is a matrix of partial derivatives of the output array components with respect to the input array components.
Rodrigues(double[], out double[,], out double[,])
converts rotation vector to rotation matrix using Rodrigues transformation
public static void Rodrigues(double[] vector, out double[,] matrix, out double[,] jacobian)
Parameters
vectordouble[]Input rotation vector (3x1).
matrixdouble[,]Output rotation matrix (3x3).
jacobiandouble[,]Optional output Jacobian matrix, 3x9, which is a matrix of partial derivatives of the output array components with respect to the input array components.
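The vector-to-matrix direction is the classic Rodrigues formula R = I + sin(theta)*K + (1 - cos(theta))*K^2, where theta = |rvec| and K is the cross-product matrix of the unit axis. A NumPy sketch for illustration (the OpenCV function additionally handles the inverse direction and the Jacobian):

```python
import numpy as np

def rodrigues(rvec):
    theta = np.linalg.norm(rvec)
    if theta == 0:
        return np.eye(3)
    kx, ky, kz = rvec / theta
    K = np.array([[0, -kz, ky],
                  [kz, 0, -kx],
                  [-ky, kx, 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# 90-degree rotation about the z-axis
R = rodrigues(np.array([0.0, 0.0, np.pi / 2]))
```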
Rotate(InputArray, OutputArray, RotateFlags)
Rotates a 2D array in multiples of 90 degrees.
public static void Rotate(InputArray src, OutputArray dst, RotateFlags rotateCode)
Parameters
srcInputArrayinput array.
dstOutputArrayoutput array of the same type as src. The size is the same with ROTATE_180, and the rows and cols are switched for ROTATE_90_CLOCKWISE and ROTATE_90_COUNTERCLOCKWISE.
rotateCodeRotateFlagsan enum to specify how to rotate the array.
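The three RotateFlags values map onto NumPy's rot90 (which rotates counterclockwise for positive k); sketched for illustration:

```python
import numpy as np

src = np.array([[1, 2],
                [3, 4]])

cw90 = np.rot90(src, k=-1)   # RotateFlags.Rotate90Clockwise (rows/cols switch)
full = np.rot90(src, k=2)    # RotateFlags.Rotate180 (same size as src)
ccw90 = np.rot90(src, k=1)   # RotateFlags.Rotate90Counterclockwise
```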
RotatedRectangleIntersection(RotatedRect, RotatedRect, OutputArray)
Finds out if there is any intersection between two rotated rectangles; if there is, the vertices of the intersecting region are returned as well.
public static RectanglesIntersectTypes RotatedRectangleIntersection(RotatedRect rect1, RotatedRect rect2, OutputArray intersectingRegion)
Parameters
rect1RotatedRectFirst rectangle
rect2RotatedRectSecond rectangle
intersectingRegionOutputArrayThe output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as std::vector<cv::Point2f> or cv::Mat as Mx1 of type CV_32FC2.
Returns
RotatedRectangleIntersection(RotatedRect, RotatedRect, out Point2f[])
Finds out if there is any intersection between two rotated rectangles; if there is, the vertices of the intersecting region are returned as well.
public static RectanglesIntersectTypes RotatedRectangleIntersection(RotatedRect rect1, RotatedRect rect2, out Point2f[] intersectingRegion)
Parameters
rect1RotatedRectFirst rectangle
rect2RotatedRectSecond rectangle
intersectingRegionPoint2f[]The output array of the vertices of the intersecting region. It returns at most 8 vertices.
Returns
SVBackSubst(InputArray, InputArray, InputArray, InputArray, OutputArray)
performs back substitution for the previously computed SVD
public static void SVBackSubst(InputArray w, InputArray u, InputArray vt, InputArray rhs, OutputArray dst)
Parameters
wInputArraycalculated singular values
uInputArraycalculated left singular vectors
vtInputArraytransposed matrix of right singular vectors
rhsInputArrayright-hand side of the linear system (u*w*v')*dst = rhs to be solved, where the left-hand-side matrix has been previously decomposed.
dstOutputArrayoutput
SVDecomp(InputArray, OutputArray, OutputArray, OutputArray, Flags)
decomposes matrix and stores the results to user-provided matrices
public static void SVDecomp(InputArray src, OutputArray w, OutputArray u, OutputArray vt, SVD.Flags flags = Flags.None)
Parameters
srcInputArraydecomposed matrix. The depth has to be CV_32F or CV_64F.
wOutputArraycalculated singular values
uOutputArraycalculated left singular vectors
vtOutputArraytransposed matrix of right singular vectors
flagsSVD.Flagsoperation flags - see SVD::Flags.
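How the two calls fit together: SVDecomp factors A = u*diag(w)*vt, and SVBackSubst then solves A*dst = rhs as dst = v*diag(1/w)*u'*rhs. A NumPy sketch for illustration:

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [0.0, 2.0]])
rhs = np.array([8.0, 2.0])

u, w, vt = np.linalg.svd(A)   # SVDecomp: singular values w, vectors u and vt
x = vt.T @ ((u.T @ rhs) / w)  # SVBackSubst: x = v * diag(1/w) * u' * rhs
```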
SampsonDistance(InputArray, InputArray, InputArray)
Calculates the Sampson Distance between two points.
public static double SampsonDistance(InputArray pt1, InputArray pt2, InputArray f)
Parameters
pt1InputArrayfirst homogeneous 2d point
pt2InputArraysecond homogeneous 2d point
fInputArrayF fundamental matrix
Returns
- double
The computed Sampson distance.
Remarks
SampsonDistance(Point3d, Point3d, double[,])
Calculates the Sampson Distance between two points.
public static double SampsonDistance(Point3d pt1, Point3d pt2, double[,] f)
Parameters
pt1Point3dfirst homogeneous 2d point
pt2Point3dsecond homogeneous 2d point
fdouble[,]F fundamental matrix
Returns
- double
The computed Sampson distance.
Remarks
ScaleAdd(InputArray, double, InputArray, OutputArray)
adds scaled array to another one (dst = alpha*src1 + src2)
public static void ScaleAdd(InputArray src1, double alpha, InputArray src2, OutputArray dst)
Parameters
src1InputArrayfirst input array
alphadoublescale factor for the first array
src2InputArraysecond input array of the same size and type as src1
dstOutputArrayoutput array of the same size and type as src1
Scharr(InputArray, OutputArray, MatType, int, int, double, double, BorderTypes)
Calculates the first x- or y- image derivative using Scharr operator
public static void Scharr(InputArray src, OutputArray dst, MatType ddepth, int xorder, int yorder, double scale = 1, double delta = 0, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayThe source image
dstOutputArrayThe destination image; will have the same size and the same number of channels as src
ddepthMatTypeThe destination image depth
xorderintOrder of the derivative x
yorderintOrder of the derivative y
scaledoubleThe optional scale factor for the computed derivative values (by default, no scaling is applied)
deltadoubleThe optional delta value, added to the results prior to storing them in dst
borderTypeBorderTypesThe pixel extrapolation method
SeamlessClone(InputArray, InputArray, InputArray?, Point, OutputArray, SeamlessCloneMethods)
Image editing tasks concern either global changes (color/intensity corrections, filters, deformations) or local changes confined to a selection. Here we are interested in achieving local changes, restricted to a manually selected region (ROI), in a seamless and effortless manner. The extent of the changes ranges from slight distortions to complete replacement by novel content @cite PM03 .
public static void SeamlessClone(InputArray src, InputArray dst, InputArray? mask, Point p, OutputArray blend, SeamlessCloneMethods flags)
Parameters
srcInputArrayInput 8-bit 3-channel image.
dstInputArrayInput 8-bit 3-channel image.
maskInputArrayInput 8-bit 1 or 3-channel image.
pPointPoint in dst image where object is placed.
blendOutputArrayOutput image with the same size and type as dst.
flagsSeamlessCloneMethodsCloning method
SelectROI(InputArray, bool, bool)
Selects ROI on the given image.
Function creates a window and allows user to select a ROI using mouse.
Controls: use space or enter to finish selection, use key c to cancel selection (function will return the zero cv::Rect).
public static Rect SelectROI(InputArray img, bool showCrosshair = true, bool fromCenter = false)
Parameters
imgInputArrayimage to select a ROI.
showCrosshairboolif true crosshair of selection rectangle will be shown.
fromCenterboolif true center of selection will match initial mouse position. In opposite case a corner of selection rectangle will correspond to the initial mouse position.
Returns
- Rect
selected ROI or empty rect if selection canceled.
SelectROI(string, InputArray, bool, bool)
Selects ROI on the given image.
Function creates a window and allows user to select a ROI using mouse.
Controls: use space or enter to finish selection, use key c to cancel selection (function will return the zero cv::Rect).
public static Rect SelectROI(string windowName, InputArray img, bool showCrosshair = true, bool fromCenter = false)
Parameters
windowNamestringname of the window where selection process will be shown.
imgInputArrayimage to select a ROI.
showCrosshairboolif true crosshair of selection rectangle will be shown.
fromCenterboolif true center of selection will match initial mouse position. In opposite case a corner of selection rectangle will correspond to the initial mouse position.
Returns
- Rect
selected ROI or empty rect if selection canceled.
SelectROIs(string, InputArray, bool, bool)
Selects ROIs on the given image.
Function creates a window and allows the user to select ROIs using the mouse.
Controls: use space or enter to finish current selection and start a new one,
use esc to terminate multiple ROI selection process.
public static Rect[] SelectROIs(string windowName, InputArray img, bool showCrosshair = true, bool fromCenter = false)
Parameters
windowNamestringname of the window where selection process will be shown.
imgInputArrayimage to select a ROI.
showCrosshairboolif true crosshair of selection rectangle will be shown.
fromCenterboolif true center of selection will match initial mouse position. In opposite case a corner of selection rectangle will correspond to the initial mouse position.
Returns
- Rect[]
selected ROIs.
SepFilter2D(InputArray, OutputArray, MatType, InputArray, InputArray, Point?, double, BorderTypes)
Applies separable linear filter to an image
public static void SepFilter2D(InputArray src, OutputArray dst, MatType ddepth, InputArray kernelX, InputArray kernelY, Point? anchor = null, double delta = 0, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayThe source image
dstOutputArrayThe destination image; will have the same size and the same number of channels as src
ddepthMatTypeThe destination image depth
kernelXInputArrayThe coefficients for filtering each row
kernelYInputArrayThe coefficients for filtering each column
anchorPoint?The anchor position within the kernel; The default value (-1, -1) means that the anchor is at the kernel center
deltadoubleThe value added to the filtered results before storing them
borderTypeBorderTypesThe pixel extrapolation method
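Separable filtering applies kernelX along every row and then kernelY along every column, which is equivalent to one 2D filter kernelY*kernelX^T at a fraction of the cost. A NumPy sketch for illustration (border handling simplified to np.convolve's zero padding):

```python
import numpy as np

src = np.arange(25, dtype=np.float64).reshape(5, 5)
kernel_x = np.array([1.0, 2.0, 1.0]) / 4.0   # row kernel
kernel_y = np.array([1.0, 2.0, 1.0]) / 4.0   # column kernel

# Pass 1: filter each row; pass 2: filter each column of the intermediate.
rows = np.apply_along_axis(lambda r: np.convolve(r, kernel_x, mode="same"), 1, src)
dst = np.apply_along_axis(lambda c: np.convolve(c, kernel_y, mode="same"), 0, rows)
```

On this linear ramp the smoothing kernel preserves interior values, which makes the result easy to check by hand.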
SetBreakOnError(bool)
Sets/resets the break-on-error mode. When the break-on-error mode is set, the default error handler issues a hardware exception, which can make debugging more convenient.
public static bool SetBreakOnError(bool flag)
Parameters
flagboolThe new break-on-error mode value.
Returns
- bool
the previous state
SetIdentity(InputOutputArray, Scalar?)
initializes scaled identity matrix
public static void SetIdentity(InputOutputArray mtx, Scalar? s = null)
Parameters
mtxInputOutputArrayThe matrix to initialize (not necessarily square)
sScalar?The value to assign to the diagonal elements
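The effect for a non-square matrix, sketched in NumPy for illustration:

```python
import numpy as np

# SetIdentity(mtx, s) writes s on the main diagonal and 0 elsewhere,
# even when the matrix is not square.
mtx = np.zeros((2, 4))
np.fill_diagonal(mtx, 5.0)   # s = 5
```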
SetLogLevel(LogLevel)
Set global logging level
public static LogLevel SetLogLevel(LogLevel logLevel)
Parameters
logLevelLogLevellogging level
Returns
- LogLevel
previous logging level
SetMouseCallback(string, MouseCallback, nint)
Sets the callback function for mouse events occurring within the specified window.
public static void SetMouseCallback(string windowName, MouseCallback onMouse, nint userData = 0)
Parameters
windowNamestringName of the window.
onMouseMouseCallbackReference to the function to be called every time mouse event occurs in the specified window.
userDatanintOptional user data that is passed to the callback.
SetNumThreads(int)
OpenCV will try to set the number of threads for the next parallel region. If threads == 0, OpenCV will disable threading optimizations and run all its functions sequentially. Passing threads < 0 will reset the number of threads to the system default. This function must be called outside of any parallel region. OpenCV will try to run its functions with the specified number of threads, but some behaviour differs from framework to framework:
TBB - User-defined parallel constructions will run with the same number of threads, if another is not specified. If the user later creates their own scheduler, OpenCV will use it.
OpenMP - No special defined behaviour.
Concurrency - If threads == 1, OpenCV will disable threading optimizations and run its functions sequentially.
GCD - Supports only values <= 0.
C= - No special defined behaviour.
public static void SetNumThreads(int nThreads)
Parameters
nThreadsintNumber of threads used by OpenCV.
SetTheRNG(ulong)
Sets the thread-local Random number generator
public static RNG SetTheRNG(ulong state)
Parameters
stateulongThe new state for the default random number generator.
Returns
SetTrackbarMax(string, string, int)
Sets the trackbar maximum position. The function sets the maximum position of the specified trackbar in the specified window.
public static void SetTrackbarMax(string trackbarName, string winName, int maxVal)
Parameters
trackbarNamestringName of the trackbar.
winNamestringName of the window that is the parent of trackbar.
maxValintNew maximum position.
SetTrackbarMin(string, string, int)
Sets the trackbar minimum position. The function sets the minimum position of the specified trackbar in the specified window.
public static void SetTrackbarMin(string trackbarName, string winName, int minVal)
Parameters
trackbarNamestringName of the trackbar.
winNamestringName of the window that is the parent of trackbar.
minValintNew minimum position.
SetTrackbarPos(string, string, int)
Sets the trackbar position.
public static void SetTrackbarPos(string trackbarName, string winName, int pos)
Parameters
trackbarNamestringName of the trackbar.
winNamestringName of the window that is the parent of trackbar.
posintNew position.
SetUseOptimized(bool)
Turns on/off available optimization. The function turns on or off the optimized code in OpenCV. Some optimization can not be enabled or disabled, but, for example, most of SSE code in OpenCV can be temporarily turned on or off this way.
public static void SetUseOptimized(bool onoff)
Parameters
onoffbooltrue to enable the optimized code; false to disable it.
SetWindowProperty(string, WindowPropertyFlags, double)
Changes parameters of a window dynamically.
public static void SetWindowProperty(string winName, WindowPropertyFlags propId, double propValue)
Parameters
winNamestringName of the window.
propIdWindowPropertyFlagsWindow property to edit.
propValuedoubleNew value of the window property.
SetWindowTitle(string, string)
Updates window title
public static void SetWindowTitle(string winName, string title)
Parameters
winNamestringName of the window.
titlestringNew title.
Sobel(InputArray, OutputArray, MatType, int, int, int, double, double, BorderTypes)
Calculates the first, second, third or mixed image derivatives using an extended Sobel operator
public static void Sobel(InputArray src, OutputArray dst, MatType ddepth, int xorder, int yorder, int ksize = 3, double scale = 1, double delta = 0, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayThe source image
dstOutputArrayThe destination image; will have the same size and the same number of channels as src
ddepthMatTypeThe destination image depth
xorderintOrder of the derivative x
yorderintOrder of the derivative y
ksizeintSize of the extended Sobel kernel, must be 1, 3, 5 or 7
scaledoubleThe optional scale factor for the computed derivative values (by default, no scaling is applied)
deltadoubleThe optional delta value, added to the results prior to storing them in dst
borderTypeBorderTypesThe pixel extrapolation method
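For the common case xorder = 1, yorder = 0, ksize = 3, the operator is a correlation with the 3x3 Sobel x-kernel; a hand-rolled NumPy sketch on a horizontal ramp (illustrative only, border pixels skipped):

```python
import numpy as np

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

src = np.tile(np.arange(5.0), (5, 1))   # intensity grows by 1 per pixel in x

# Valid-region correlation: each output pixel is the weighted window sum.
h, w = src.shape
dst = np.zeros((h - 2, w - 2))
for y in range(h - 2):
    for x in range(w - 2):
        dst[y, x] = np.sum(src[y:y + 3, x:x + 3] * sobel_x)
```

On a unit-slope ramp every response is (1 + 2 + 1) * 2 = 8: the kernel's weight sum times the per-pixel step.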
Solve(InputArray, InputArray, OutputArray, DecompTypes)
solves linear system or a least-square problem
public static bool Solve(InputArray src1, InputArray src2, OutputArray dst, DecompTypes flags = DecompTypes.LU)
Parameters
src1InputArrayinput matrix on the left-hand side of the system
src2InputArrayinput matrix on the right-hand side of the system
dstOutputArrayoutput solution
flagsDecompTypessolution (matrix inversion) method
Returns
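With DecompTypes.LU on a square, non-singular system this is an exact solve; with the normal-equations/QR/SVD flags an over-determined system is solved in the least-squares sense. NumPy analogues for illustration:

```python
import numpy as np

src1 = np.array([[2.0, 1.0],
                 [1.0, 3.0]])
src2 = np.array([3.0, 5.0])

dst = np.linalg.solve(src1, src2)                     # exact (LU-style) solve
dst_ls, *_ = np.linalg.lstsq(src1, src2, rcond=None)  # least-squares route
```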
SolveCubic(InputArray, OutputArray)
finds real roots of a cubic polynomial
public static int SolveCubic(InputArray coeffs, OutputArray roots)
Parameters
coeffsInputArrayThe equation coefficients, an array of 3 or 4 elements
rootsOutputArrayThe destination array of real roots which will have 1 or 3 elements
Returns
SolveLP(InputArray, InputArray, OutputArray)
Solve given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method).
public static SolveLPResult SolveLP(InputArray func, InputArray constr, OutputArray z)
Parameters
funcInputArrayThis row-vector corresponds to c in the LP problem formulation (see above). It should contain 32- or 64-bit floating point numbers. As a convenience, a column-vector may also be submitted, in which case it is understood to correspond to c^T.
constrInputArraym-by-(n+1) matrix, whose rightmost column corresponds to b in the formulation above and the remaining columns to A. It should contain 32- or 64-bit floating point numbers.
zOutputArrayThe solution will be returned here as a column-vector; it corresponds to x in the formulation above. It will contain 64-bit floating point numbers.
Returns
SolvePnP(InputArray, InputArray, InputArray, InputArray, OutputArray, OutputArray, bool, SolvePnPFlags)
Finds an object pose from 3D-2D point correspondences.
public static void SolvePnP(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess = false, SolvePnPFlags flags = SolvePnPFlags.Iterative)
Parameters
objectPointsInputArrayArray of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can be also passed here.
imagePointsInputArrayArray of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can be also passed here.
cameraMatrixInputArrayInput camera matrix
distCoeffsInputArrayInput vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
rvecOutputArrayOutput rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.
tvecOutputArrayOutput translation vector.
useExtrinsicGuessboolIf true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
flagsSolvePnPFlagsMethod for solving a PnP problem:
SolvePnP(IEnumerable<Point3f>, IEnumerable<Point2f>, double[,], IEnumerable<double>?, ref double[], ref double[], bool, SolvePnPFlags)
Finds an object pose from 3D-2D point correspondences.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void SolvePnP(IEnumerable<Point3f> objectPoints, IEnumerable<Point2f> imagePoints, double[,] cameraMatrix, IEnumerable<double>? distCoeffs, ref double[] rvec, ref double[] tvec, bool useExtrinsicGuess = false, SolvePnPFlags flags = SolvePnPFlags.Iterative)
Parameters
objectPointsIEnumerable<Point3f>Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can be also passed here.
imagePointsIEnumerable<Point2f>Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can be also passed here.
cameraMatrixdouble[,]Input camera matrix
distCoeffsIEnumerable<double>Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
rvecdouble[]Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.
tvecdouble[]Output translation vector.
useExtrinsicGuessboolIf true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
flagsSolvePnPFlagsMethod for solving a PnP problem
SolvePnPRansac(InputArray, InputArray, InputArray, InputArray, OutputArray?, OutputArray?, bool, int, float, double, OutputArray?, SolvePnPFlags)
computes the camera pose from a few 3D points and the corresponding projections. The outliers are possible.
public static void SolvePnPRansac(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray? rvec, OutputArray? tvec, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8, double confidence = 0.99, OutputArray? inliers = null, SolvePnPFlags flags = SolvePnPFlags.Iterative)
Parameters
objectPointsInputArrayArray of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here.
imagePointsInputArrayArray of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here.
cameraMatrixInputArrayInput 3x3 camera matrix
distCoeffsInputArrayInput vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
rvecOutputArrayOutput rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.
tvecOutputArrayOutput translation vector.
useExtrinsicGuessboolIf true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
iterationsCountintNumber of iterations.
reprojectionErrorfloatInlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.
confidencedoubleThe probability that the algorithm produces a useful result.
inliersOutputArrayOutput vector that contains indices of inliers in objectPoints and imagePoints .
flagsSolvePnPFlagsMethod for solving a PnP problem
SolvePnPRansac(IEnumerable<Point3f>, IEnumerable<Point2f>, double[,], IEnumerable<double>, out double[], out double[])
computes the camera pose from a few 3D points and the corresponding projections. The outliers are possible.
public static void SolvePnPRansac(IEnumerable<Point3f> objectPoints, IEnumerable<Point2f> imagePoints, double[,] cameraMatrix, IEnumerable<double> distCoeffs, out double[] rvec, out double[] tvec)
Parameters
objectPointsIEnumerable<Point3f>Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here.
imagePointsIEnumerable<Point2f>Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here.
cameraMatrixdouble[,]Input 3x3 camera matrix
distCoeffsIEnumerable<double>Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
rvecdouble[]Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.
tvecdouble[]Output translation vector.
SolvePnPRansac(IEnumerable<Point3f>, IEnumerable<Point2f>, double[,], IEnumerable<double>?, out double[], out double[], out int[], bool, int, float, double, SolvePnPFlags)
computes the camera pose from a few 3D points and the corresponding projections. The outliers are possible.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void SolvePnPRansac(IEnumerable<Point3f> objectPoints, IEnumerable<Point2f> imagePoints, double[,] cameraMatrix, IEnumerable<double>? distCoeffs, out double[] rvec, out double[] tvec, out int[] inliers, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8, double confidence = 0.99, SolvePnPFlags flags = SolvePnPFlags.Iterative)
Parameters
objectPointsIEnumerable<Point3f>Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here.
imagePointsIEnumerable<Point2f>Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here.
cameraMatrixdouble[,]Input 3x3 camera matrix
distCoeffsIEnumerable<double>Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
rvecdouble[]Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system.
tvecdouble[]Output translation vector.
inliersint[]Output vector that contains indices of inliers in objectPoints and imagePoints .
useExtrinsicGuessboolIf true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
iterationsCountintNumber of iterations.
reprojectionErrorfloatInlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.
confidencedoubleThe probability that the algorithm produces a useful result.
flagsSolvePnPFlagsMethod for solving a PnP problem
SolvePoly(InputArray, OutputArray, int)
finds real and complex roots of a polynomial
public static double SolvePoly(InputArray coeffs, OutputArray roots, int maxIters = 300)
Parameters
coeffsInputArrayThe array of polynomial coefficients
rootsOutputArrayThe destination (complex) array of roots
maxItersintThe maximum number of iterations the algorithm does
Returns
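Both root finders take coefficients from the highest degree down; NumPy's roots uses the same convention, illustrated here on (x-1)(x-2)(x-3):

```python
import numpy as np

coeffs = [1.0, -6.0, 11.0, -6.0]        # x^3 - 6x^2 + 11x - 6
roots = np.sort(np.roots(coeffs).real)  # three real roots, ascending
```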
Sort(InputArray, OutputArray, SortFlags)
sorts independently each matrix row or each matrix column
public static void Sort(InputArray src, OutputArray dst, SortFlags flags)
Parameters
srcInputArrayThe source single-channel array
dstOutputArrayThe destination array of the same size and the same type as src
flagsSortFlagsThe operation flags, a combination of the SortFlag values
SortIdx(InputArray, OutputArray, SortFlags)
sorts independently each matrix row or each matrix column
public static void SortIdx(InputArray src, OutputArray dst, SortFlags flags)
Parameters
srcInputArrayThe source single-channel array
dstOutputArrayThe destination integer array of the same size as src
flagsSortFlagsThe operation flags, a combination of SortFlag values
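Sort and SortIdx behave like NumPy's sort and argsort along an axis; sketched for the every-row, ascending case:

```python
import numpy as np

src = np.array([[3, 1, 2],
                [9, 7, 8]])

sorted_rows = np.sort(src, axis=1)   # Sort: SortFlags.EveryRow | Ascending
idx_rows = np.argsort(src, axis=1)   # SortIdx: the permutation indices instead
```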
SpatialGradient(InputArray, OutputArray, OutputArray, int, BorderTypes)
Calculates the first order image derivative in both x and y using a Sobel operator
public static void SpatialGradient(InputArray src, OutputArray dx, OutputArray dy, int ksize = 3, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayinput image.
dxOutputArrayoutput image with first-order derivative in x.
dyOutputArrayoutput image with first-order derivative in y.
ksizeintsize of Sobel kernel. It must be 3.
borderTypeBorderTypespixel extrapolation method
Split(Mat)
Copies each plane of a multi-channel array to a dedicated array
public static Mat[] Split(Mat src)
Parameters
srcMatThe source multi-channel array
Returns
- Mat[]
An array of single-channel matrices, one per channel of src; its length equals src.channels().
Split(Mat, out Mat[])
Copies each plane of a multi-channel array to a dedicated array
public static void Split(Mat src, out Mat[] mv)
Parameters
srcMatThe source multi-channel array
mvMat[]The destination array or vector of arrays; The number of arrays must match mtx.channels() . The arrays themselves will be reallocated if needed
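Conceptually, Split turns one interleaved H x W x C array into C single-channel planes; a NumPy sketch for illustration:

```python
import numpy as np

# A 2x2 image with 3 channels (constant values 10, 20, 30 per channel)
src = np.dstack([np.full((2, 2), v) for v in (10, 20, 30)])

planes = [src[:, :, c] for c in range(src.shape[2])]  # one plane per channel
```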
SqrBoxFilter(InputArray, OutputArray, int, Size, Point?, bool, BorderTypes)
Calculates the normalized sum of squares of the pixel values overlapping the filter.
For every pixel f(x, y) in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel f(x, y).
The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.
public static void SqrBoxFilter(InputArray src, OutputArray dst, int ddepth, Size ksize, Point? anchor = null, bool normalize = true, BorderTypes borderType = BorderTypes.Default)
Parameters
srcInputArrayinput image
dstOutputArrayoutput image of the same size and type as src
ddepthintthe output image depth (-1 to use src.depth())
ksizeSizekernel size
anchorPoint?kernel anchor point. The default value Point(-1, -1) denotes that the anchor is at the kernel center.
normalizeboolflag, specifying whether the kernel is normalized by its area or not
borderTypeBorderTypesborder mode used to extrapolate pixels outside of the image
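As the description notes, the unnormalized square box filter supports local-variance computation via var = E[x²] − (E[x])². A sketch combining `BoxFilter` and `SqrBoxFilter` (`"input.png"` is a placeholder path; CV_32F depth avoids saturation):

```csharp
using OpenCvSharp;

// Local variance over a 5x5 window: mean of squares minus square of mean.
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var mean = new Mat();
using var sqMean = new Mat();
Cv2.BoxFilter(gray, mean, MatType.CV_32F, new Size(5, 5));
Cv2.SqrBoxFilter(gray, sqMean, MatType.CV_32F, new Size(5, 5));
using var meanSq = mean.Mul(mean).ToMat();
using var variance = new Mat();
Cv2.Subtract(sqMean, meanSq, variance);
```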
Sqrt(InputArray, OutputArray)
computes square root of each matrix element (dst = src**0.5)
public static void Sqrt(InputArray src, OutputArray dst)
Parameters
srcInputArrayThe source floating-point array
dstOutputArrayThe destination array; will have the same size and the same type as src
StartWindowThread()
public static int StartWindowThread()
Returns
StereoCalibrate(IEnumerable<InputArray>, IEnumerable<InputArray>, IEnumerable<InputArray>, InputOutputArray, InputOutputArray, InputOutputArray, InputOutputArray, Size, OutputArray, OutputArray, OutputArray, OutputArray, CalibrationFlags, TermCriteria?)
finds intrinsic and extrinsic parameters of a stereo camera
public static double StereoCalibrate(IEnumerable<InputArray> objectPoints, IEnumerable<InputArray> imagePoints1, IEnumerable<InputArray> imagePoints2, InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1, InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2, Size imageSize, OutputArray R, OutputArray T, OutputArray E, OutputArray F, CalibrationFlags flags = CalibrationFlags.FixIntrinsic, TermCriteria? criteria = null)
Parameters
objectPointsIEnumerable<InputArray>Vector of vectors of the calibration pattern points.
imagePoints1IEnumerable<InputArray>Vector of vectors of the projections of the calibration pattern points, observed by the first camera.
imagePoints2IEnumerable<InputArray>Vector of vectors of the projections of the calibration pattern points, observed by the second camera.
cameraMatrix1InputOutputArrayInput/output first camera matrix
distCoeffs1InputOutputArrayInput/output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. The output vector length depends on the flags.
cameraMatrix2InputOutputArrayInput/output second camera matrix. The parameter is similar to cameraMatrix1 .
distCoeffs2InputOutputArrayInput/output lens distortion coefficients for the second camera. The parameter is similar to distCoeffs1 .
imageSizeSizeSize of the image used only to initialize intrinsic camera matrix.
ROutputArrayOutput rotation matrix between the 1st and the 2nd camera coordinate systems.
TOutputArrayOutput translation vector between the coordinate systems of the cameras.
EOutputArrayOutput essential matrix.
FOutputArrayOutput fundamental matrix.
flagsCalibrationFlagsDifferent flags that may be zero or a combination of the CalibrationFlag values
criteriaTermCriteria?Termination criteria for the iterative optimization algorithm.
Returns
StereoCalibrate(IEnumerable<IEnumerable<Point3f>>, IEnumerable<IEnumerable<Point2f>>, IEnumerable<IEnumerable<Point2f>>, double[,], double[], double[,], double[], Size, OutputArray, OutputArray, OutputArray, OutputArray, CalibrationFlags, TermCriteria?)
finds intrinsic and extrinsic parameters of a stereo camera
public static double StereoCalibrate(IEnumerable<IEnumerable<Point3f>> objectPoints, IEnumerable<IEnumerable<Point2f>> imagePoints1, IEnumerable<IEnumerable<Point2f>> imagePoints2, double[,] cameraMatrix1, double[] distCoeffs1, double[,] cameraMatrix2, double[] distCoeffs2, Size imageSize, OutputArray R, OutputArray T, OutputArray E, OutputArray F, CalibrationFlags flags = CalibrationFlags.FixIntrinsic, TermCriteria? criteria = null)
Parameters
objectPointsIEnumerable<IEnumerable<Point3f>>Vector of vectors of the calibration pattern points.
imagePoints1IEnumerable<IEnumerable<Point2f>>Vector of vectors of the projections of the calibration pattern points, observed by the first camera.
imagePoints2IEnumerable<IEnumerable<Point2f>>Vector of vectors of the projections of the calibration pattern points, observed by the second camera.
cameraMatrix1double[,]Input/output first camera matrix
distCoeffs1double[]Input/output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. The output vector length depends on the flags.
cameraMatrix2double[,]Input/output second camera matrix. The parameter is similar to cameraMatrix1 .
distCoeffs2double[]Input/output lens distortion coefficients for the second camera. The parameter is similar to distCoeffs1 .
imageSizeSizeSize of the image used only to initialize intrinsic camera matrix.
ROutputArrayOutput rotation matrix between the 1st and the 2nd camera coordinate systems.
TOutputArrayOutput translation vector between the coordinate systems of the cameras.
EOutputArrayOutput essential matrix.
FOutputArrayOutput fundamental matrix.
flagsCalibrationFlagsDifferent flags that may be zero or a combination of the CalibrationFlag values
criteriaTermCriteria?Termination criteria for the iterative optimization algorithm.
Returns
StereoRectify(InputArray, InputArray, InputArray, InputArray, Size, InputArray, InputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, StereoRectificationFlags, double, Size, out Rect, out Rect)
computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters
public static void StereoRectify(InputArray cameraMatrix1, InputArray distCoeffs1, InputArray cameraMatrix2, InputArray distCoeffs2, Size imageSize, InputArray R, InputArray T, OutputArray R1, OutputArray R2, OutputArray P1, OutputArray P2, OutputArray Q, StereoRectificationFlags flags, double alpha, Size newImageSize, out Rect validPixROI1, out Rect validPixROI2)
Parameters
cameraMatrix1InputArrayFirst camera matrix.
distCoeffs1InputArrayFirst camera distortion parameters.
cameraMatrix2InputArraySecond camera matrix.
distCoeffs2InputArraySecond camera distortion parameters.
imageSizeSizeSize of the image used for stereo calibration.
RInputArrayRotation matrix between the coordinate systems of the first and the second cameras.
TInputArrayTranslation vector between coordinate systems of the cameras.
R1OutputArrayOutput 3x3 rectification transform (rotation matrix) for the first camera.
R2OutputArrayOutput 3x3 rectification transform (rotation matrix) for the second camera.
P1OutputArrayOutput 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.
P2OutputArrayOutput 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.
QOutputArrayOutput 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ).
flagsStereoRectificationFlagsOperation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.
alphadoubleFree scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases.
newImageSizeSizeNew image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to larger value can help you preserve details in the original image, especially when there is a big radial distortion.
validPixROI1RectOptional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller.
validPixROI2RectOptional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller.
StereoRectify(InputArray, InputArray, InputArray, InputArray, Size, InputArray, InputArray, OutputArray, OutputArray, OutputArray, OutputArray, OutputArray, StereoRectificationFlags, double, Size?)
computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters
public static void StereoRectify(InputArray cameraMatrix1, InputArray distCoeffs1, InputArray cameraMatrix2, InputArray distCoeffs2, Size imageSize, InputArray R, InputArray T, OutputArray R1, OutputArray R2, OutputArray P1, OutputArray P2, OutputArray Q, StereoRectificationFlags flags = StereoRectificationFlags.ZeroDisparity, double alpha = -1, Size? newImageSize = null)
Parameters
cameraMatrix1InputArrayFirst camera matrix.
distCoeffs1InputArrayFirst camera distortion parameters.
cameraMatrix2InputArraySecond camera matrix.
distCoeffs2InputArraySecond camera distortion parameters.
imageSizeSizeSize of the image used for stereo calibration.
RInputArrayRotation matrix between the coordinate systems of the first and the second cameras.
TInputArrayTranslation vector between coordinate systems of the cameras.
R1OutputArrayOutput 3x3 rectification transform (rotation matrix) for the first camera.
R2OutputArrayOutput 3x3 rectification transform (rotation matrix) for the second camera.
P1OutputArrayOutput 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.
P2OutputArrayOutput 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.
QOutputArrayOutput 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ).
flagsStereoRectificationFlagsOperation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.
alphadoubleFree scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases.
newImageSizeSize?New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to larger value can help you preserve details in the original image, especially when there is a big radial distortion.
StereoRectify(double[,], double[], double[,], double[], Size, double[,], double[], out double[,], out double[,], out double[,], out double[,], out double[,], StereoRectificationFlags, double, Size, out Rect, out Rect)
computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters
public static void StereoRectify(double[,] cameraMatrix1, double[] distCoeffs1, double[,] cameraMatrix2, double[] distCoeffs2, Size imageSize, double[,] R, double[] T, out double[,] R1, out double[,] R2, out double[,] P1, out double[,] P2, out double[,] Q, StereoRectificationFlags flags, double alpha, Size newImageSize, out Rect validPixROI1, out Rect validPixROI2)
Parameters
cameraMatrix1double[,]First camera matrix.
distCoeffs1double[]First camera distortion parameters.
cameraMatrix2double[,]Second camera matrix.
distCoeffs2double[]Second camera distortion parameters.
imageSizeSizeSize of the image used for stereo calibration.
Rdouble[,]Rotation matrix between the coordinate systems of the first and the second cameras.
Tdouble[]Translation vector between coordinate systems of the cameras.
R1double[,]Output 3x3 rectification transform (rotation matrix) for the first camera.
R2double[,]Output 3x3 rectification transform (rotation matrix) for the second camera.
P1double[,]Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.
P2double[,]Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.
Qdouble[,]Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ).
flagsStereoRectificationFlagsOperation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.
alphadoubleFree scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases.
newImageSizeSizeNew image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to larger value can help you preserve details in the original image, especially when there is a big radial distortion.
validPixROI1RectOptional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller.
validPixROI2RectOptional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller.
StereoRectify(double[,], double[], double[,], double[], Size, double[,], double[], out double[,], out double[,], out double[,], out double[,], out double[,], StereoRectificationFlags, double, Size?)
computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters
public static void StereoRectify(double[,] cameraMatrix1, double[] distCoeffs1, double[,] cameraMatrix2, double[] distCoeffs2, Size imageSize, double[,] R, double[] T, out double[,] R1, out double[,] R2, out double[,] P1, out double[,] P2, out double[,] Q, StereoRectificationFlags flags = StereoRectificationFlags.ZeroDisparity, double alpha = -1, Size? newImageSize = null)
Parameters
cameraMatrix1double[,]First camera matrix.
distCoeffs1double[]First camera distortion parameters.
cameraMatrix2double[,]Second camera matrix.
distCoeffs2double[]Second camera distortion parameters.
imageSizeSizeSize of the image used for stereo calibration.
Rdouble[,]Rotation matrix between the coordinate systems of the first and the second cameras.
Tdouble[]Translation vector between coordinate systems of the cameras.
R1double[,]Output 3x3 rectification transform (rotation matrix) for the first camera.
R2double[,]Output 3x3 rectification transform (rotation matrix) for the second camera.
P1double[,]Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera.
P2double[,]Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera.
Qdouble[,]Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ).
flagsStereoRectificationFlagsOperation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.
alphadoubleFree scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases.
newImageSizeSize?New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to larger value can help you preserve details in the original image, especially when there is a big radial distortion.
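A rectification sketch using the `double[,]` overload. All intrinsics and extrinsics below are hypothetical placeholders; in practice they come from `StereoCalibrate`.

```csharp
using OpenCvSharp;

// Hypothetical calibration results: identical pinhole cameras, zero
// distortion, 60 mm baseline along x.
var K1 = new double[,] { { 800, 0, 320 }, { 0, 800, 240 }, { 0, 0, 1 } };
var K2 = new double[,] { { 800, 0, 320 }, { 0, 800, 240 }, { 0, 0, 1 } };
var D1 = new double[] { 0, 0, 0, 0, 0 };
var D2 = new double[] { 0, 0, 0, 0, 0 };
var R = new double[,] { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
var T = new double[] { -60, 0, 0 };
var imageSize = new Size(640, 480);

Cv2.StereoRectify(K1, D1, K2, D2, imageSize, R, T,
    out double[,] R1, out double[,] R2, out double[,] P1, out double[,] P2,
    out double[,] Q, StereoRectificationFlags.ZeroDisparity, -1, imageSize,
    out Rect roi1, out Rect roi2);
// R1/R2 rotate each camera into the rectified frame, P1/P2 are the new
// projection matrices, and Q maps disparity to depth (ReprojectImageTo3D).
```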
StereoRectifyUncalibrated(InputArray, InputArray, InputArray, Size, OutputArray, OutputArray, double)
computes the rectification transformation for an uncalibrated stereo camera (zero distortion is assumed)
public static bool StereoRectifyUncalibrated(InputArray points1, InputArray points2, InputArray F, Size imgSize, OutputArray H1, OutputArray H2, double threshold = 5)
Parameters
points1InputArrayArray of feature points in the first image.
points2InputArrayThe corresponding points in the second image. The same formats as in findFundamentalMat() are supported.
FInputArrayInput fundamental matrix. It can be computed from the same set of point pairs using findFundamentalMat() .
imgSizeSizeSize of the image.
H1OutputArrayOutput rectification homography matrix for the first image.
H2OutputArrayOutput rectification homography matrix for the second image.
thresholddoubleOptional threshold used to filter out the outliers. If the parameter is greater than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points for which |points2[i]^T * F * points1[i]| > threshold ) are rejected prior to computing the homographies. Otherwise, all the points are considered inliers.
Returns
StereoRectifyUncalibrated(IEnumerable<Point2d>, IEnumerable<Point2d>, double[,], Size, out double[,], out double[,], double)
computes the rectification transformation for an uncalibrated stereo camera (zero distortion is assumed)
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static bool StereoRectifyUncalibrated(IEnumerable<Point2d> points1, IEnumerable<Point2d> points2, double[,] F, Size imgSize, out double[,] H1, out double[,] H2, double threshold = 5)
Parameters
points1IEnumerable<Point2d>Array of feature points in the first image.
points2IEnumerable<Point2d>The corresponding points in the second image. The same formats as in findFundamentalMat() are supported.
Fdouble[,]Input fundamental matrix. It can be computed from the same set of point pairs using findFundamentalMat() .
imgSizeSizeSize of the image.
H1double[,]Output rectification homography matrix for the first image.
H2double[,]Output rectification homography matrix for the second image.
thresholddoubleOptional threshold used to filter out the outliers. If the parameter is greater than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points for which |points2[i]^T * F * points1[i]| > threshold ) are rejected prior to computing the homographies. Otherwise, all the points are considered inliers.
Returns
Stylization(InputArray, OutputArray, float, float)
Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features.
public static void Stylization(InputArray src, OutputArray dst, float sigmaS = 60, float sigmaR = 0.45f)
Parameters
srcInputArrayInput 8-bit 3-channel image.
dstOutputArrayOutput image with the same size and type as src.
sigmaSfloatRange between 0 to 200.
sigmaRfloatRange between 0 to 1.
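A minimal call: larger sigmaS smooths over larger spatial neighborhoods, while sigmaR controls how much of the color range is flattened. `"photo.jpg"` is a placeholder path.

```csharp
using OpenCvSharp;

// Edge-aware stylization with the default parameters made explicit.
using var src = Cv2.ImRead("photo.jpg", ImreadModes.Color);
using var dst = new Mat();
Cv2.Stylization(src, dst, sigmaS: 60, sigmaR: 0.45f);
```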
Subtract(InputArray, InputArray, OutputArray, InputArray?, int)
Calculates per-element difference between two arrays or array and a scalar
public static void Subtract(InputArray src1, InputArray src2, OutputArray dst, InputArray? mask = null, int dtype = -1)
Parameters
src1InputArrayThe first source array
src2InputArrayThe second source array. It must have the same size and same type as src1
dstOutputArrayThe destination array; it will have the same size and same type as src1
maskInputArrayThe optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null]
dtypeint
Subtract(InputArray, Scalar, OutputArray, InputArray?, int)
Calculates per-element difference between two arrays or array and a scalar
public static void Subtract(InputArray src1, Scalar src2, OutputArray dst, InputArray? mask = null, int dtype = -1)
Parameters
src1InputArrayThe first source array
src2ScalarThe second source array. It must have the same size and same type as src1
dstOutputArrayThe destination array; it will have the same size and same type as src1
maskInputArrayThe optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null]
dtypeint
Subtract(Scalar, InputArray, OutputArray, InputArray?, int)
Calculates per-element difference between two arrays or array and a scalar
public static void Subtract(Scalar src1, InputArray src2, OutputArray dst, InputArray? mask = null, int dtype = -1)
Parameters
src1ScalarThe first source array
src2InputArrayThe second source array. It must have the same size and same type as src1
dstOutputArrayThe destination array; it will have the same size and same type as src1
maskInputArrayThe optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null]
dtypeint
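The dtype parameter matters for unsigned inputs: with 8-bit sources the result saturates at 0 unless a signed or wider output type is requested. A small sketch:

```csharp
using OpenCvSharp;

// Saturating vs. widened subtraction on 8-bit data.
using var a = new Mat(1, 3, MatType.CV_8UC1, new byte[] { 10, 20, 30 });
using var b = new Mat(1, 3, MatType.CV_8UC1, new byte[] { 40, 5, 30 });
using var sat = new Mat();
using var wide = new Mat();
Cv2.Subtract(a, b, sat);                         // {0, 15, 0} — clipped at 0
Cv2.Subtract(a, b, wide, null, MatType.CV_16S);  // {-30, 15, 0} — true result
```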
Sum(InputArray)
computes sum of array elements
public static Scalar Sum(InputArray src)
Parameters
srcInputArrayThe source array; must have 1 to 4 channels
Returns
TextureFlattening(InputArray, InputArray?, OutputArray, float, float, int)
By retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here the Canny edge detector is used.
public static void TextureFlattening(InputArray src, InputArray? mask, OutputArray dst, float lowThreshold = 30, float highThreshold = 45, int kernelSize = 3)
Parameters
srcInputArrayInput 8-bit 3-channel image.
maskInputArrayInput 8-bit 1 or 3-channel image.
dstOutputArrayOutput image with the same size and type as src.
lowThresholdfloatRange from 0 to 100.
highThresholdfloatValue > 100.
kernelSizeintThe size of the Sobel kernel to be used.
Threshold(InputArray, OutputArray, double, double, ThresholdTypes)
Applies a fixed-level threshold to each array element.
public static double Threshold(InputArray src, OutputArray dst, double thresh, double maxval, ThresholdTypes type)
Parameters
srcInputArrayinput array (single-channel, 8-bit or 32-bit floating point).
dstOutputArrayoutput array of the same size and type as src.
threshdoublethreshold value.
maxvaldoublemaximum value to use with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.
typeThresholdTypesthresholding type (see the details below).
Returns
- double
the computed threshold value when type == OTSU
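A common use of the return value: combining `ThresholdTypes.Otsu` lets the function pick the threshold automatically and report it. `"input.png"` is a placeholder path.

```csharp
using OpenCvSharp;

// Binarize with an automatically chosen (Otsu) threshold.
using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var bin = new Mat();
double otsu = Cv2.Threshold(gray, bin, 0, 255,
    ThresholdTypes.Binary | ThresholdTypes.Otsu);
// The thresh argument (0 here) is ignored when Otsu is set;
// 'otsu' holds the computed threshold value.
```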
Trace(InputArray)
computes trace of a matrix
public static Scalar Trace(InputArray mtx)
Parameters
mtxInputArrayThe source matrix
Returns
Transform(InputArray, OutputArray, InputArray)
performs affine transformation of each element of multi-channel input matrix
public static void Transform(InputArray src, OutputArray dst, InputArray m)
Parameters
srcInputArrayThe source array; must have as many channels (1 to 4) as mtx.cols or mtx.cols-1
dstOutputArrayThe destination array; will have the same size and depth as src and as many channels as mtx.rows
mInputArrayThe transformation matrix
Transpose(InputArray, OutputArray)
transposes the matrix
public static void Transpose(InputArray src, OutputArray dst)
Parameters
srcInputArrayThe source array
dstOutputArrayThe destination array of the same type as src
TriangulatePoints(InputArray, InputArray, InputArray, InputArray, OutputArray)
Reconstructs points by triangulation.
public static void TriangulatePoints(InputArray projMatr1, InputArray projMatr2, InputArray projPoints1, InputArray projPoints2, OutputArray points4D)
Parameters
projMatr1InputArray3x4 projection matrix of the first camera.
projMatr2InputArray3x4 projection matrix of the second camera.
projPoints1InputArray2xN array of feature points in the first image. In the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.
projPoints2InputArray2xN array of corresponding points in the second image. In the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.
points4DOutputArray4xN array of reconstructed points in homogeneous coordinates.
TriangulatePoints(double[,], double[,], IEnumerable<Point2d>, IEnumerable<Point2d>)
Reconstructs points by triangulation.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static Vec4d[] TriangulatePoints(double[,] projMatr1, double[,] projMatr2, IEnumerable<Point2d> projPoints1, IEnumerable<Point2d> projPoints2)
Parameters
projMatr1double[,]3x4 projection matrix of the first camera.
projMatr2double[,]3x4 projection matrix of the second camera.
projPoints1IEnumerable<Point2d>2xN array of feature points in the first image. In the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.
projPoints2IEnumerable<Point2d>2xN array of corresponding points in the second image. In the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1.
Returns
- Vec4d[]
4xN array of reconstructed points in homogeneous coordinates.
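A triangulation sketch using the `double[,]` overload with two hypothetical normalized cameras that differ by a pure x-translation. The returned points are homogeneous, so they must be divided by the fourth component.

```csharp
using OpenCvSharp;
using System.Collections.Generic;

// P1 = [I | 0], P2 = [I | (-1, 0, 0)]: second camera at x = +1, baseline 1.
var P1 = new double[,] { { 1, 0, 0, 0 }, { 0, 1, 0, 0 }, { 0, 0, 1, 0 } };
var P2 = new double[,] { { 1, 0, 0, -1 }, { 0, 1, 0, 0 }, { 0, 0, 1, 0 } };
var pts1 = new List<Point2d> { new Point2d(0.1, 0.2) };
var pts2 = new List<Point2d> { new Point2d(0.0, 0.2) };

Vec4d[] homog = Cv2.TriangulatePoints(P1, P2, pts1, pts2);

// Dehomogenize: divide by w. For this noise-free pair the result
// should be close to (1, 2, 10).
var p = new Point3d(homog[0].Item0 / homog[0].Item3,
                    homog[0].Item1 / homog[0].Item3,
                    homog[0].Item2 / homog[0].Item3);
```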
Undistort(InputArray, OutputArray, InputArray, InputArray, InputArray?)
corrects lens distortion for the given camera matrix and distortion coefficients
public static void Undistort(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray? newCameraMatrix = null)
Parameters
srcInputArrayInput (distorted) image.
dstOutputArrayOutput (corrected) image that has the same size and type as src .
cameraMatrixInputArrayInput camera matrix
distCoeffsInputArrayInput vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
newCameraMatrixInputArrayCamera matrix of the distorted image. By default, it is the same as cameraMatrix but you may additionally scale and shift the result by using a different matrix.
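An undistortion sketch. The camera matrix and distortion coefficients below are hypothetical placeholders; in practice they come from CalibrateCamera. `"distorted.png"` is a placeholder path.

```csharp
using OpenCvSharp;

// Correct radial distortion with a known camera model.
using var src = Cv2.ImRead("distorted.png");
using var dst = new Mat();
using var K = new Mat(3, 3, MatType.CV_64FC1,
    new double[] { 800, 0, 320, 0, 800, 240, 0, 0, 1 });
using var dist = new Mat(1, 5, MatType.CV_64FC1,
    new double[] { -0.25, 0.07, 0, 0, 0 });   // k1, k2, p1, p2, k3
Cv2.Undistort(src, dst, K, dist);
```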
UndistortPoints(InputArray, OutputArray, InputArray, InputArray, InputArray?, InputArray?)
Computes the ideal point coordinates from the observed point coordinates.
public static void UndistortPoints(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray? r = null, InputArray? p = null)
Parameters
srcInputArrayObserved point coordinates, 1xN or Nx1 2-channel (CV_32FC2 or CV_64FC2).
dstOutputArrayOutput ideal point coordinates after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.
cameraMatrixInputArrayCamera matrix
distCoeffsInputArrayInput vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
rInputArrayRectification transformation in the object space (3x3 matrix). R1 or R2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity transformation is used.
pInputArrayNew camera matrix (3x3) or new projection matrix (3x4). P1 or P2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity new camera matrix is used.
UndistortPointsIter(InputArray, OutputArray, InputArray, InputArray, InputArray?, InputArray?, TermCriteria?)
Computes the ideal point coordinates from the observed point coordinates.
public static void UndistortPointsIter(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray? r = null, InputArray? p = null, TermCriteria? termCriteria = null)
Parameters
srcInputArrayObserved point coordinates, 1xN or Nx1 2-channel (CV_32FC2 or CV_64FC2).
dstOutputArrayOutput ideal point coordinates after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.
cameraMatrixInputArrayCamera matrix
distCoeffsInputArrayInput vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
rInputArrayRectification transformation in the object space (3x3 matrix). R1 or R2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity transformation is used.
pInputArrayNew camera matrix (3x3) or new projection matrix (3x4). P1 or P2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity new camera matrix is used.
termCriteriaTermCriteria?
UseOptimized()
Returns the current optimization status, which is controlled by cv::setUseOptimized().
public static bool UseOptimized()
Returns
VConcat(InputArray, InputArray, OutputArray)
Applies vertical concatenation to given matrices.
public static void VConcat(InputArray src1, InputArray src2, OutputArray dst)
Parameters
src1InputArrayfirst input array to be considered for vertical concatenation.
src2InputArraysecond input array to be considered for vertical concatenation.
dstOutputArrayoutput array. It has the same number of cols and depth as the src1 and src2, and the sum of rows of the src1 and src2.
VConcat(IEnumerable<Mat>, OutputArray)
Applies vertical concatenation to given matrices.
[SuppressMessage("Maintainability", "CA1508: Avoid dead conditional code")]
public static void VConcat(IEnumerable<Mat> src, OutputArray dst)
Parameters
srcIEnumerable<Mat>input array or vector of matrices. all of the matrices must have the same number of cols and the same depth.
dstOutputArrayoutput array. It has the same number of cols and depth as the src, and the sum of rows of the src.
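A minimal example: both inputs must have the same number of columns and the same depth, and the output stacks their rows.

```csharp
using OpenCvSharp;

// Stack two 1x3 rows vertically into a 2x3 matrix.
using var a = new Mat(1, 3, MatType.CV_8UC1, new byte[] { 1, 2, 3 });
using var b = new Mat(1, 3, MatType.CV_8UC1, new byte[] { 4, 5, 6 });
using var dst = new Mat();
Cv2.VConcat(a, b, dst);   // dst is 2x3
```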
ValidateDisparity(InputOutputArray, InputArray, int, int, int)
validates disparity using the left-right check. The matrix "cost" should be computed by the stereo correspondence algorithm
public static void ValidateDisparity(InputOutputArray disparity, InputArray cost, int minDisparity, int numberOfDisparities, int disp12MaxDisp = 1)
Parameters
disparityInputOutputArraydisparity map to be validated
costInputArraycost matrix computed by the stereo correspondence algorithm
minDisparityintminimum disparity value
numberOfDisparitiesintnumber of disparities
disp12MaxDispintmaximum allowed difference between the directly computed and the re-projected disparities
WaitKey(int)
Waits for a pressed key.
public static int WaitKey(int delay = 0)
Parameters
delayintDelay in milliseconds. 0 is the special value that means "forever".
Returns
- int
Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
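Examples

A minimal display sketch (the window name and image contents are illustrative; assumes OpenCvSharp and a GUI-capable environment):

```csharp
using System;
using OpenCvSharp;

using var image = new Mat(200, 300, MatType.CV_8UC3, Scalar.All(64));
Cv2.ImShow("preview", image);
int key = Cv2.WaitKey(0);       // 0 = block until any key is pressed
if (key == 27)                  // 27 is the ASCII code of ESC
    Console.WriteLine("ESC pressed");
Cv2.DestroyAllWindows();
```

WaitKey also pumps the HighGUI event loop, so it must be called periodically (e.g. WaitKey(30) inside a playback loop) for ImShow windows to repaint.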
WaitKeyEx(int)
Waits for a pressed key. Similar to WaitKey, but returns the full key code. The key code is implementation specific and depends on the backend used: Qt/GTK/Win32/etc.
public static int WaitKeyEx(int delay = 0)
Parameters
delayintDelay in milliseconds. 0 is the special value that means "forever".
Returns
- int
Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
WarpAffine(InputArray, OutputArray, InputArray, Size, InterpolationFlags, BorderTypes, Scalar?)
Applies an affine transformation to an image.
public static void WarpAffine(InputArray src, OutputArray dst, InputArray m, Size dsize, InterpolationFlags flags = InterpolationFlags.Linear, BorderTypes borderMode = BorderTypes.Constant, Scalar? borderValue = null)
Parameters
srcInputArrayinput image.
dstOutputArrayoutput image that has the size dsize and the same type as src.
mInputArray2x3 transformation matrix.
dsizeSizesize of the output image.
flagsInterpolationFlagscombination of interpolation methods and the optional flag WARP_INVERSE_MAP that means that M is the inverse transformation (dst -> src) .
borderModeBorderTypespixel extrapolation method; when borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function.
borderValueScalar?value used in case of a constant border; by default, it is 0.
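Examples

For illustration, a sketch (assuming OpenCvSharp; the synthetic image size and the 30-degree angle are arbitrary) of rotating an image about its center:

```csharp
using OpenCvSharp;

using var src = new Mat(240, 320, MatType.CV_8UC3, Scalar.All(200));
var center = new Point2f(src.Cols / 2f, src.Rows / 2f);
using Mat m = Cv2.GetRotationMatrix2D(center, 30, 1.0);   // 2x3 affine matrix
using var dst = new Mat();
Cv2.WarpAffine(src, dst, m, src.Size(),
    InterpolationFlags.Linear, BorderTypes.Constant, Scalar.All(0));
// dst has the requested dsize (here, the source size) and the type of src.
```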
WarpPerspective(InputArray, OutputArray, InputArray, Size, InterpolationFlags, BorderTypes, Scalar?)
Applies a perspective transformation to an image.
public static void WarpPerspective(InputArray src, OutputArray dst, InputArray m, Size dsize, InterpolationFlags flags = InterpolationFlags.Linear, BorderTypes borderMode = BorderTypes.Constant, Scalar? borderValue = null)
Parameters
srcInputArrayinput image.
dstOutputArrayoutput image that has the size dsize and the same type as src.
mInputArray3x3 transformation matrix.
dsizeSizesize of the output image.
flagsInterpolationFlagscombination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, that sets M as the inverse transformation (dst -> src).
borderModeBorderTypespixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE).
borderValueScalar?value used in case of a constant border; by default, it equals 0.
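Examples

A hedged sketch (the corner coordinates are invented for illustration) of rectifying a quadrilateral region onto a 300x300 square:

```csharp
using OpenCvSharp;

using var src = new Mat(400, 400, MatType.CV_8UC3, Scalar.All(255));
// Source quadrilateral corners and their destination positions.
Point2f[] srcQuad = { new(56, 65), new(368, 52), new(389, 390), new(28, 387) };
Point2f[] dstQuad = { new(0, 0), new(300, 0), new(300, 300), new(0, 300) };
using Mat h = Cv2.GetPerspectiveTransform(srcQuad, dstQuad);  // 3x3 homography
using var dst = new Mat();
Cv2.WarpPerspective(src, dst, h, new Size(300, 300));
```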
WarpPerspective(InputArray, OutputArray, float[,], Size, InterpolationFlags, BorderTypes, Scalar?)
Applies a perspective transformation to an image.
public static void WarpPerspective(InputArray src, OutputArray dst, float[,] m, Size dsize, InterpolationFlags flags = InterpolationFlags.Linear, BorderTypes borderMode = BorderTypes.Constant, Scalar? borderValue = null)
Parameters
srcInputArrayinput image.
dstOutputArrayoutput image that has the size dsize and the same type as src.
mfloat[,]3x3 transformation matrix.
dsizeSizesize of the output image.
flagsInterpolationFlagscombination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, that sets M as the inverse transformation (dst -> src).
borderModeBorderTypespixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE).
borderValueScalar?value used in case of a constant border; by default, it equals 0.
WarpPolar(InputArray, OutputArray, Size, Point2f, double, InterpolationFlags, WarpPolarMode)
Remaps an image to polar or semilog-polar coordinates space.
public static void WarpPolar(InputArray src, OutputArray dst, Size dsize, Point2f center, double maxRadius, InterpolationFlags interpolationFlags, WarpPolarMode warpPolarMode)
Parameters
srcInputArraySource image.
dstOutputArrayDestination image. It will have the same type as src.
dsizeSizeThe destination image size (see description for valid options).
centerPoint2fThe transformation center.
maxRadiusdoubleThe radius of the bounding circle to transform. It determines the inverse magnitude scale parameter too.
interpolationFlagsInterpolationFlagsinterpolation methods.
warpPolarModeWarpPolarModepolar mapping mode: linear or semilog.
Remarks
- The function cannot operate in-place.
- To calculate magnitude and angle in degrees, cartToPolar is used internally; thus angles are measured from 0 to 360 with an accuracy of about 0.3 degrees.
- This function uses remap. Due to current implementation limitations, the input and output image sizes must be less than 32767x32767.
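Examples

A sketch (assuming OpenCvSharp; the synthetic image is a stand-in for, e.g., a circular dial to unwrap) of the linear polar mapping:

```csharp
using System;
using OpenCvSharp;

using var src = new Mat(256, 256, MatType.CV_8UC3, Scalar.All(100));
var center = new Point2f(src.Cols / 2f, src.Rows / 2f);
double maxRadius = Math.Min(src.Cols, src.Rows) / 2.0;
using var dst = new Mat();
Cv2.WarpPolar(src, dst, new Size(), center, maxRadius,
    InterpolationFlags.Linear, WarpPolarMode.Linear);
// An empty Size() lets OpenCV estimate the destination size; pass
// WarpPolarMode.SemiLog for the semilog mapping instead.
```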
Watershed(InputArray, InputOutputArray)
Performs a marker-based image segmentation using the watershed algorithm.
public static void Watershed(InputArray image, InputOutputArray markers)
Parameters
imageInputArrayInput 8-bit 3-channel image.
markersInputOutputArrayInput/output 32-bit single-channel image (map) of markers. It should have the same size as image.
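Examples

A sketch of seeding Watershed from connected components of a thresholded mask (the synthetic two-blob scene and the threshold value are illustrative only):

```csharp
using OpenCvSharp;

// Synthetic scene: two bright blobs on a dark background.
using var image = new Mat(200, 200, MatType.CV_8UC3, Scalar.All(0));
Cv2.Circle(image, new Point(60, 100), 30, Scalar.All(255), -1);
Cv2.Circle(image, new Point(140, 100), 30, Scalar.All(255), -1);

using var gray = new Mat();
Cv2.CvtColor(image, gray, ColorConversionCodes.BGR2GRAY);
using var fg = new Mat();
Cv2.Threshold(gray, fg, 128, 255, ThresholdTypes.Binary);
using var markers = new Mat();
Cv2.ConnectedComponents(fg, markers);   // 32-bit labels serve as seed markers
Cv2.Watershed(image, markers);
// After the call, boundary pixels in markers are set to -1.
```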