Class CvXImgProc

Namespace
OpenCvSharp.XImgProc
Assembly
OpenCvSharp.dll

cv::ximgproc functions

public static class CvXImgProc
Inheritance
CvXImgProc
Inherited Members

Methods

AMFilter(InputArray, InputArray, OutputArray, double, double, bool)

Simple one-line Adaptive Manifold Filter call.

public static void AMFilter(InputArray joint, InputArray src, OutputArray dst, double sigmaS, double sigmaR, bool adjustOutliers = false)

Parameters

joint InputArray

Joint (also called guided) image or array of images with any number of channels.

src InputArray

Image to filter, with any number of channels.

dst OutputArray

output image.

sigmaS double

spatial standard deviation.

sigmaR double

Color space standard deviation; similar to the color-space sigma in bilateralFilter.

adjustOutliers bool

Optional flag specifying whether to perform the outlier-adjustment operation (Eq. 9 in the original paper).
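A minimal usage sketch of the one-line call, with the source acting as its own guide (the file paths are placeholders and the parameter values are illustrative, not recommended defaults):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Edge-preserving smoothing guided by the image itself.
using var src = Cv2.ImRead("input.png", ImreadModes.Color);
using var dst = new Mat();

// sigmaS controls the spatial extent of the manifolds, sigmaR the range (color) sensitivity.
CvXImgProc.AMFilter(joint: src, src: src, dst: dst, sigmaS: 16.0, sigmaR: 0.2, adjustOutliers: true);

Cv2.ImWrite("amfilter_out.png", dst);
```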

AnisotropicDiffusion(InputArray, OutputArray, float, float, int)

Performs anisotropic diffusion on an image. The function applies Perona-Malik anisotropic diffusion to the image.

public static void AnisotropicDiffusion(InputArray src, OutputArray dst, float alpha, float k, int niters)

Parameters

src InputArray

Grayscale Source image.

dst OutputArray

Destination image of the same size and the same number of channels as src.

alpha float

The amount of time to step forward by on each iteration (normally, it's between 0 and 1).

k float

sensitivity to the edges

niters int

The number of iterations
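A hedged sketch of a call (the file path and parameter values are illustrative only):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var src = Cv2.ImRead("input.png", ImreadModes.Color);
using var dst = new Mat();

// alpha: time step per iteration (normally in (0, 1]), k: edge sensitivity,
// niters: number of diffusion steps to run.
CvXImgProc.AnisotropicDiffusion(src, dst, alpha: 0.15f, k: 0.02f, niters: 10);
```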

BilateralTextureFilter(InputArray, OutputArray, int, int, double, double)

Applies the bilateral texture filter to an image, performing structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.

public static void BilateralTextureFilter(InputArray src, OutputArray dst, int fr = 3, int numIter = 1, double sigmaAlpha = -1, double sigmaAvg = -1)

Parameters

src InputArray

Source image whose depth is 8-bit UINT or 32-bit FLOAT

dst OutputArray

Destination image of the same size and type as src.

fr int

Radius of the kernel to be used for filtering. It should be a positive integer.

numIter int

Number of iterations of the algorithm. It should be a positive integer.

sigmaAlpha double

Controls the sharpness of the weight transition from edges to smooth/texture regions, where a bigger value means a sharper transition. When the value is negative, it is automatically calculated.

sigmaAvg double

Range blur parameter for texture blurring. Larger value makes result to be more blurred. When the value is negative, it is automatically calculated as described in the paper.
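A minimal sketch of the call (file path is a placeholder); the negative sigma defaults let the filter compute them automatically as described in the paper:

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var src = Cv2.ImRead("texture.png");
using var dst = new Mat();

// fr is the kernel radius; sigmaAlpha and sigmaAvg stay at -1 so they are auto-derived.
CvXImgProc.BilateralTextureFilter(src, dst, fr: 5, numIter: 2);
```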

BrightEdges(Mat, Mat, int, int, int)

public static void BrightEdges(Mat original, Mat edgeView, int contrast = 1, int shortRange = 3, int longRange = 9)

Parameters

original Mat
edgeView Mat
contrast int
shortRange int
longRange int

ColorMatchTemplate(InputArray, InputArray, OutputArray)

Compares a color template against overlapped color image regions.

public static void ColorMatchTemplate(InputArray img, InputArray templ, OutputArray result)

Parameters

img InputArray

Image where the search is running. It must be a 3-channel image.

templ InputArray

Searched template. It must be no larger than the source image and have 3 channels.

result OutputArray

Map of comparison results. It must be single-channel 64-bit floating-point

CovarianceEstimation(InputArray, OutputArray, int, int)

Computes the estimated covariance matrix of an image using the sliding window formulation.

public static void CovarianceEstimation(InputArray src, OutputArray dst, int windowRows, int windowCols)

Parameters

src InputArray

The source image. Input image must be of a complex type.

dst OutputArray

The destination estimated covariance matrix. The output matrix will be of size (windowRows*windowCols, windowRows*windowCols).

windowRows int

The number of rows in the window.

windowCols int

The number of cols in the window.

Remarks

The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom-right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the size of the window impacts the number of samples and the number of elements in the estimated covariance matrix.
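Since the input must be of a complex type, a zero imaginary channel can be merged in first. A hedged sketch (the file path is a placeholder):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var real = new Mat();
gray.ConvertTo(real, MatType.CV_32F);

// The input must be complex: merge a zero imaginary channel to obtain CV_32FC2.
using var imag = Mat.Zeros(real.Rows, real.Cols, MatType.CV_32F).ToMat();
using var complexSrc = new Mat();
Cv2.Merge(new[] { real, imag }, complexSrc);

// A 3x3 window yields a 9x9 estimated covariance matrix.
using var cov = new Mat();
CvXImgProc.CovarianceEstimation(complexSrc, cov, windowRows: 3, windowCols: 3);
```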

CreateAMFilter(double, double, bool)

Factory method; creates an instance of AdaptiveManifoldFilter and runs its initialization routines.

public static AdaptiveManifoldFilter CreateAMFilter(double sigmaS, double sigmaR, bool adjustOutliers = false)

Parameters

sigmaS double

spatial standard deviation.

sigmaR double

Color space standard deviation; similar to the color-space sigma in bilateralFilter.

adjustOutliers bool

Optional flag specifying whether to perform the outlier-adjustment operation (Eq. 9 in the original paper).

Returns

AdaptiveManifoldFilter

CreateDTFilter(InputArray, double, double, EdgeAwareFiltersList, int)

Factory method; creates an instance of DTFilter and runs its initialization routines.

public static DTFilter CreateDTFilter(InputArray guide, double sigmaSpatial, double sigmaColor, EdgeAwareFiltersList mode = EdgeAwareFiltersList.DTF_NC, int numIters = 3)

Parameters

guide InputArray

guided image (used to build transformed distance, which describes edge structure of guided image).

sigmaSpatial double

The sigma_H parameter from the original article; similar to the coordinate-space sigma in bilateralFilter.

sigmaColor double

The sigma_r parameter from the original article; similar to the color-space sigma in bilateralFilter.

mode EdgeAwareFiltersList

One of three modes, DTF_NC, DTF_RF, and DTF_IC, which correspond to the three modes for filtering 2D signals described in the article.

numIters int

Optional number of iterations used for filtering; 3 is usually enough.

Returns

DTFilter
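The factory is worth using when several frames share one guide, since the domain transform is built only once. A hedged sketch, assuming the returned DTFilter exposes a Filter(src, dst) method as in the underlying cv::ximgproc::DTFilter (file names are placeholders):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var guide = Cv2.ImRead("guide.png");
using var filter = CvXImgProc.CreateDTFilter(guide, sigmaSpatial: 40, sigmaColor: 0.6,
    mode: EdgeAwareFiltersList.DTF_RF);

// Reuse the initialized filter for several images sharing the same guide.
foreach (var path in new[] { "frame1.png", "frame2.png" })  // placeholder file names
{
    using var src = Cv2.ImRead(path);
    using var dst = new Mat();
    filter.Filter(src, dst);
}
```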

CreateEdgeBoxes(float, float, float, float, int, float, float, float, float, float, float, float)

Creates an EdgeBoxes object.

public static EdgeBoxes CreateEdgeBoxes(float alpha = 0.65, float beta = 0.75, float eta = 1, float minScore = 0.01, int maxBoxes = 10000, float edgeMinMag = 0.1, float edgeMergeThr = 0.5, float clusterMinMag = 0.5, float maxAspectRatio = 3, float minBoxArea = 1000, float gamma = 2, float kappa = 1.5)

Parameters

alpha float

step size of sliding window search.

beta float

nms threshold for object proposals.

eta float

adaptation rate for nms threshold.

minScore float

min score of boxes to detect.

maxBoxes int

max number of boxes to detect.

edgeMinMag float

edge min magnitude. Increase to trade off accuracy for speed.

edgeMergeThr float

edge merge threshold. Increase to trade off accuracy for speed.

clusterMinMag float

cluster min magnitude. Increase to trade off accuracy for speed.

maxAspectRatio float

max aspect ratio of boxes.

minBoxArea float

minimum area of boxes.

gamma float

affinity sensitivity.

kappa float

scale sensitivity.

Returns

EdgeBoxes

CreateFastGlobalSmootherFilter(InputArray, double, double, double, int)

Factory method; creates an instance of FastGlobalSmootherFilter and runs its initialization routines.

public static FastGlobalSmootherFilter CreateFastGlobalSmootherFilter(InputArray guide, double lambda, double sigmaColor, double lambdaAttenuation = 0.25, int numIter = 3)

Parameters

guide InputArray

image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

lambda double

parameter defining the amount of regularization

sigmaColor double

Parameter similar to the color-space sigma in bilateralFilter.

lambdaAttenuation double

internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

numIter int

number of iterations used for filtering, 3 is usually enough.

Returns

FastGlobalSmootherFilter

CreateFastLineDetector(int, float, double, double, int, bool)

Creates a smart pointer to a FastLineDetector object and initializes it

public static FastLineDetector CreateFastLineDetector(int lengthThreshold = 10, float distanceThreshold = 1.4142135, double cannyTh1 = 50, double cannyTh2 = 50, int cannyApertureSize = 3, bool doMerge = false)

Parameters

lengthThreshold int

Segment shorter than this will be discarded

distanceThreshold float

A point farther from a hypothesized line segment than this distance is regarded as an outlier.

cannyTh1 double

First threshold for hysteresis procedure in Canny()

cannyTh2 double

Second threshold for hysteresis procedure in Canny()

cannyApertureSize int

Aperture size for the Sobel operator in Canny()

doMerge bool

If true, incremental merging of segments will be performed

Returns

FastLineDetector
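A hedged sketch of detecting and drawing segments, assuming the wrapper's FastLineDetector exposes Detect and DrawSegments as in the underlying cv::ximgproc::FastLineDetector (file path is a placeholder):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var fld = CvXImgProc.CreateFastLineDetector(lengthThreshold: 20, doMerge: true);

// Each detected segment holds the endpoints (x1, y1, x2, y2).
var segments = fld.Detect(gray);

using var vis = gray.CvtColor(ColorConversionCodes.GRAY2BGR);
fld.DrawSegments(vis, segments);
```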

CreateGuidedFilter(InputArray, int, double)

Factory method; creates an instance of GuidedFilter and runs its initialization routines.

public static GuidedFilter CreateGuidedFilter(InputArray guide, int radius, double eps)

Parameters

guide InputArray

Guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 are used.

radius int

radius of Guided Filter.

eps double

Regularization term of the Guided Filter; eps^2 is similar to the color-space sigma in bilateralFilter.

Returns

GuidedFilter

CreateQuaternionImage(InputArray, OutputArray)

Creates a quaternion image.

public static void CreateQuaternionImage(InputArray img, OutputArray qimg)

Parameters

img InputArray

Source 8-bit, 32-bit, or 64-bit 3-channel image.

qimg OutputArray

Result CV_64FC4 quaternion image (4 channels: a zero channel and B, G, R).

CreateRFFeatureGetter()

Creates an RFFeatureGetter object.

public static RFFeatureGetter CreateRFFeatureGetter()

Returns

RFFeatureGetter

CreateStructuredEdgeDetection(string, RFFeatureGetter?)

Creates a StructuredEdgeDetection

public static StructuredEdgeDetection CreateStructuredEdgeDetection(string model, RFFeatureGetter? howToGetFeatures = null)

Parameters

model string

name of the file where the model is stored

howToGetFeatures RFFeatureGetter

optional object inheriting from RFFeatureGetter. You need it only if you would like to train your own forest, pass null otherwise

Returns

StructuredEdgeDetection

CreateSuperpixelLSC(InputArray, int, float)

Creates an object implementing the LSC (Linear Spectral Clustering) superpixels algorithm.

The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are region_size and ratio, and preallocates some buffers for future computation iterations over the given image. For enhanced results on color images, it is recommended to preprocess the image with a slight Gaussian blur using a small 3 x 3 kernel and to convert it into the CIELab color space.

public static SuperpixelLSC CreateSuperpixelLSC(InputArray image, int regionSize = 10, float ratio = 0.075)

Parameters

image InputArray

Image to segment.

regionSize int

Chooses an average superpixel size measured in pixels

ratio float

Chooses the enforcement of the superpixel compactness factor.

Returns

SuperpixelLSC
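A hedged sketch following the recommended preprocessing, assuming the wrapper's SuperpixelLSC exposes Iterate and GetLabelContourMask as in the underlying cv::ximgproc class (file path is a placeholder):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var src = Cv2.ImRead("input.png");

// Recommended preprocessing: a slight 3x3 Gaussian blur and conversion to CIELab.
using var blurred = src.GaussianBlur(new Size(3, 3), 0);
using var lab = blurred.CvtColor(ColorConversionCodes.BGR2Lab);

using var lsc = CvXImgProc.CreateSuperpixelLSC(lab, regionSize: 20);
lsc.Iterate(10);                       // run the clustering iterations

using var contours = new Mat();
lsc.GetLabelContourMask(contours);     // mask marking superpixel boundaries
```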

CreateSuperpixelSEEDS(int, int, int, int, int, int, int, bool)

Initializes a SuperpixelSEEDS object.

The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.

The number of levels in num_levels defines the amount of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level.

public static SuperpixelSEEDS CreateSuperpixelSEEDS(int imageWidth, int imageHeight, int imageChannels, int numSuperpixels, int numLevels, int prior = 2, int histogramBins = 5, bool doubleStep = false)

Parameters

imageWidth int

Image width.

imageHeight int

Image height.

imageChannels int

Number of channels of the image.

numSuperpixels int

Desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use getNumberOfSuperpixels() to get the actual number.

numLevels int

Number of block levels. The more levels, the more accurate is the segmentation, but needs more memory and CPU time.

prior int

Enables the 3x3 shape smoothing term if > 0. A larger value leads to smoother shapes. prior must be in the range [0, 5].

histogramBins int

Number of histogram bins.

doubleStep bool

If true, iterate each block level twice for higher accuracy.

Returns

SuperpixelSEEDS

DTFilter(InputArray, InputArray, OutputArray, double, double, EdgeAwareFiltersList, int)

Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations in the initialization stage.

public static void DTFilter(InputArray guide, InputArray src, OutputArray dst, double sigmaSpatial, double sigmaColor, EdgeAwareFiltersList mode = EdgeAwareFiltersList.DTF_NC, int numIters = 3)

Parameters

guide InputArray

Guided image (also called the joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

src InputArray

Image to filter, with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.

dst OutputArray

destination image

sigmaSpatial double

The sigma_H parameter from the original article; similar to the coordinate-space sigma in bilateralFilter.

sigmaColor double

The sigma_r parameter from the original article; similar to the color-space sigma in bilateralFilter.

mode EdgeAwareFiltersList

One of three modes, DTF_NC, DTF_RF, and DTF_IC, which correspond to the three modes for filtering 2D signals described in the article.

numIters int

Optional number of iterations used for filtering; 3 is usually enough.

EdgePreservingFilter(InputArray, OutputArray, int, double)

Smoothes an image using the Edge-Preserving filter.

public static void EdgePreservingFilter(InputArray src, OutputArray dst, int d, double threshold)

Parameters

src InputArray

Source 8-bit 3-channel image.

dst OutputArray

Destination image of the same size and type as src.

d int

Diameter of each pixel neighborhood that is used during filtering. Must be greater than or equal to 3.

threshold double

Threshold, which distinguishes between noise, outliers, and data.

FastBilateralSolverFilter(InputArray, InputArray, InputArray, OutputArray, double, double, double, double, int, double)

Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide, use the FastBilateralSolverFilter interface to avoid extra computations.

public static void FastBilateralSolverFilter(InputArray guide, InputArray src, InputArray confidence, OutputArray dst, double sigmaSpatial = 8, double sigmaLuma = 8, double sigmaChroma = 8, double lambda = 128, int numIter = 25, double maxTol = 1E-05)

Parameters

guide InputArray

image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

src InputArray

source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

confidence InputArray

confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.

dst OutputArray

destination image.

sigmaSpatial double

Parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.

sigmaLuma double

Parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.

sigmaChroma double

Parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.

lambda double

smoothness strength parameter for solver.

numIter int

number of iterations used for solver, 25 is usually enough.

maxTol double

convergence tolerance used for solver.

FastGlobalSmootherFilter(InputArray, InputArray, OutputArray, double, double, double, int)

Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide, use the FastGlobalSmootherFilter interface to avoid extra computations.

public static void FastGlobalSmootherFilter(InputArray guide, InputArray src, OutputArray dst, double lambda, double sigmaColor, double lambdaAttenuation = 0.25, int numIter = 3)

Parameters

guide InputArray

image serving as guide for filtering. It should have 8-bit depth and either 1 or 3 channels.

src InputArray

source image for filtering with unsigned 8-bit or signed 16-bit or floating-point 32-bit depth and up to 4 channels.

dst OutputArray

destination image.

lambda double

parameter defining the amount of regularization

sigmaColor double

Parameter similar to the color-space sigma in bilateralFilter.

lambdaAttenuation double

internal parameter, defining how much lambda decreases after each iteration. Normally, it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.

numIter int

number of iterations used for filtering, 3 is usually enough.
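A minimal sketch of the one-line call, using the image as its own guide (file path and values are illustrative only):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var src = Cv2.ImRead("input.png");
using var dst = new Mat();

// The image serves as its own guide; lambda sets the regularization strength.
CvXImgProc.FastGlobalSmootherFilter(guide: src, src: src, dst: dst, lambda: 100, sigmaColor: 5);
```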

FastHoughTransform(InputArray, OutputArray, MatType, AngleRangeOption, HoughOP, HoughDeskewOption)

Calculates 2D Fast Hough transform of an image.

public static void FastHoughTransform(InputArray src, OutputArray dst, MatType dstMatDepth, AngleRangeOption angleRange = AngleRangeOption.ARO_315_135, HoughOP op = HoughOP.FHT_ADD, HoughDeskewOption makeSkew = HoughDeskewOption.DESKEW)

Parameters

src InputArray

The source (input) image.

dst OutputArray

The destination image, result of transformation.

dstMatDepth MatType

The depth of destination image

angleRange AngleRangeOption

The part of Hough space to calculate, see cv::AngleRangeOption

op HoughOP

The operation to be applied, see cv::HoughOp

makeSkew HoughDeskewOption

Specifies to do or not to do image skewing, see cv::HoughDeskewOption

GradientDericheX(InputArray, OutputArray, double, double)

Applies the Deriche filter in the X direction to an image.

public static void GradientDericheX(InputArray op, OutputArray dst, double alpha, double omega)

Parameters

op InputArray

Source 8-bit or 16-bit, 1-channel or 3-channel image.

dst OutputArray

Result CV_32F image with the same number of channels as op.

alpha double

See the original paper for details.

omega double

See the original paper for details.

GradientDericheY(InputArray, OutputArray, double, double)

Applies the Deriche filter in the Y direction to an image.

public static void GradientDericheY(InputArray op, OutputArray dst, double alpha, double omega)

Parameters

op InputArray

Source 8-bit or 16-bit, 1-channel or 3-channel image.

dst OutputArray

Result CV_32F image with the same number of channels as op.

alpha double

See the original paper for details.

omega double

See the original paper for details.

GradientPaillouX(InputArray, OutputArray, double, double)

Applies the Paillou filter in the X direction to an image.

public static void GradientPaillouX(InputArray op, OutputArray dst, double alpha, double omega)

Parameters

op InputArray

Source CV_8U(S) or CV_16U(S), 1-channel or 3-channel image.

dst OutputArray

Result CV_32F image with the same number of channels as op.

alpha double

See the original paper for details.

omega double

See the original paper for details.

GradientPaillouY(InputArray, OutputArray, double, double)

Applies the Paillou filter in the Y direction to an image.

public static void GradientPaillouY(InputArray op, OutputArray dst, double alpha, double omega)

Parameters

op InputArray

Source CV_8U(S) or CV_16U(S), 1-channel or 3-channel image.

dst OutputArray

Result CV_32F image with the same number of channels as op.

alpha double

See the original paper for details.

omega double

See the original paper for details.

GuidedFilter(InputArray, InputArray, OutputArray, int, double, int)

Simple one-line Guided Filter call.

If you have multiple images to filter with the same guide image, use the GuidedFilter interface to avoid extra computations in the initialization stage.

public static void GuidedFilter(InputArray guide, InputArray src, OutputArray dst, int radius, double eps, int dDepth = -1)

Parameters

guide InputArray

Guided image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 are used.

src InputArray

Image to filter, with any number of channels.

dst OutputArray

output image.

radius int

radius of Guided Filter.

eps double

Regularization term of the Guided Filter; eps^2 is similar to the color-space sigma in bilateralFilter.

dDepth int

optional depth of the output image.

HoughPoint2Line(Point, InputArray, AngleRangeOption, HoughDeskewOption, RulesOption)

Calculates coordinates of line segment corresponded by point in Hough space.

public static Vec4i HoughPoint2Line(Point houghPoint, InputArray srcImgInfo, AngleRangeOption angleRange = AngleRangeOption.ARO_315_135, HoughDeskewOption makeSkew = HoughDeskewOption.DESKEW, RulesOption rules = RulesOption.IGNORE_BORDERS)

Parameters

houghPoint Point

Point in Hough space.

srcImgInfo InputArray

The source (input) image of Hough transform.

angleRange AngleRangeOption

The part of Hough space where point is situated, see cv::AngleRangeOption

makeSkew HoughDeskewOption

Specifies to do or not to do image skewing, see cv::HoughDeskewOption

rules RulesOption

Specifies strictness of line segment calculating, see cv::RulesOption

Returns

Vec4i

Coordinates of line segment corresponded by point in Hough space.

Remarks

If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. If it is set to RO_WEAK and the point belongs to an incorrect part of the Hough image, the returned line will not intersect the source image.

JointBilateralFilter(InputArray, InputArray, OutputArray, int, double, double, BorderTypes)

Applies the joint bilateral filter to an image.

public static void JointBilateralFilter(InputArray joint, InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, BorderTypes borderType = BorderTypes.Default)

Parameters

joint InputArray

Joint 8-bit or floating-point, 1-channel or 3-channel image.

src InputArray

Source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as joint image.

dst OutputArray

Destination image of the same size and type as src.

d int

Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace.

sigmaColor double

Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.

sigmaSpace double

Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

borderType BorderTypes

Border mode used to extrapolate pixels outside of the image; see cv::BorderTypes.

L0Smooth(InputArray, OutputArray, double, double)

Global image smoothing via L0 gradient minimization.

public static void L0Smooth(InputArray src, OutputArray dst, double lambda = 0.02, double kappa = 2)

Parameters

src InputArray

source image for filtering with unsigned 8-bit or signed 16-bit or floating-point depth.

dst OutputArray

destination image.

lambda double

parameter defining the smooth term weight.

kappa double

parameter defining the increasing factor of the weight of the gradient data term.
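A minimal sketch of the call (file path is a placeholder; the values shown are the documented defaults):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var src = Cv2.ImRead("input.png");
using var dst = new Mat();

// A larger lambda yields stronger smoothing; kappa controls how quickly the
// gradient-term weight increases between inner iterations.
CvXImgProc.L0Smooth(src, dst, lambda: 0.02, kappa: 2.0);
```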

NiblackThreshold(InputArray, OutputArray, double, ThresholdTypes, int, double, LocalBinarizationMethods, double)

Applies Niblack thresholding to input image.

public static void NiblackThreshold(InputArray src, OutputArray dst, double maxValue, ThresholdTypes type, int blockSize, double k, LocalBinarizationMethods binarizationMethod = LocalBinarizationMethods.Niblack, double r = 128)

Parameters

src InputArray

Source 8-bit single-channel image.

dst OutputArray

Destination image of the same size and the same type as src.

maxValue double

Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.

type ThresholdTypes

Thresholding type, see cv::ThresholdTypes.

blockSize int

Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on.

k double

The user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.

binarizationMethod LocalBinarizationMethods

Binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.

r double

The user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation.

Remarks

The function transforms a grayscale image to a binary image according to the formulae:

  • THRESH_BINARY: dst(x, y) = maxValue if src(x, y) > T(x, y), and 0 otherwise.
  • THRESH_BINARY_INV: dst(x, y) = 0 if src(x, y) > T(x, y), and maxValue otherwise.

Here T(x, y) is a threshold calculated individually for each pixel: the mean minus k times the standard deviation of the blockSize x blockSize neighborhood of (x, y). The function cannot process the image in-place.
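A hedged sketch binarizing a scanned page with the Sauvola variant (file path and parameter values are illustrative only):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var gray = Cv2.ImRead("document.png", ImreadModes.Grayscale);
using var bin = new Mat();

// Sauvola local binarization; r = 128 is the dynamic range of standard
// deviation that suits 8-bit inputs.
CvXImgProc.NiblackThreshold(gray, bin, maxValue: 255, type: ThresholdTypes.Binary,
    blockSize: 25, k: 0.2, binarizationMethod: LocalBinarizationMethods.Sauvola, r: 128);
```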

PeiLinNormalization(InputArray)

Calculates an affine transformation that normalizes the given image using Pei & Lin normalization.

public static double[,] PeiLinNormalization(InputArray i)

Parameters

i InputArray

Given transformed image.

Returns

double[,]

Transformation matrix corresponding to the inverse image transformation.

PeiLinNormalization(InputArray, OutputArray)

Calculates an affine transformation that normalizes the given image using Pei & Lin normalization.

public static void PeiLinNormalization(InputArray i, OutputArray t)

Parameters

i InputArray

Given transformed image.

t OutputArray

Inverse image transformation.

QConj(InputArray, OutputArray)

Calculates the conjugate of a quaternion image.

public static void QConj(InputArray qimg, OutputArray qcimg)

Parameters

qimg InputArray

quaternion image.

qcimg OutputArray

conjugate of qimg

QDft(InputArray, OutputArray, DftFlags, bool)

Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.

public static void QDft(InputArray img, OutputArray qimg, DftFlags flags, bool sideLeft)

Parameters

img InputArray

quaternion image.

qimg OutputArray

quaternion image in dual space.

flags DftFlags

Transform flags; only the DFT_INVERSE flag is supported.

sideLeft bool

If true, the hypercomplex exponential is multiplied on the left (if false, on the right).

QMultiply(InputArray, InputArray, OutputArray)

Calculates the per-element quaternion product of two arrays

public static void QMultiply(InputArray src1, InputArray src2, OutputArray dst)

Parameters

src1 InputArray

quaternion image.

src2 InputArray

quaternion image.

dst OutputArray

The per-element product: dst(I) = src1(I) · src2(I).

QUnitary(InputArray, OutputArray)

Divides each element by its modulus.

public static void QUnitary(InputArray qimg, OutputArray qnimg)

Parameters

qimg InputArray

quaternion image.

qnimg OutputArray

Normalized quaternion image (each element of qimg divided by its modulus).

RollingGuidanceFilter(InputArray, OutputArray, int, double, double, int, BorderTypes)

Applies the rolling guidance filter to an image.

public static void RollingGuidanceFilter(InputArray src, OutputArray dst, int d = -1, double sigmaColor = 25, double sigmaSpace = 3, int numOfIter = 4, BorderTypes borderType = BorderTypes.Default)

Parameters

src InputArray

8-bit or floating-point, 1-channel or 3-channel image.

dst OutputArray

Destination image of the same size and type as src.

d int

Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace.

sigmaColor double

Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.

sigmaSpace double

Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

numOfIter int

Number of iterations of joint edge-preserving filtering applied on the source image.

borderType BorderTypes

Border mode used to extrapolate pixels outside of the image; see cv::BorderTypes.

Thinning(InputArray, OutputArray, ThinningTypes)

Applies a binary blob thinning operation to achieve a skeletonization of the input image. The function transforms a binary blob image into a skeletonized form using the Zhang-Suen technique.

public static void Thinning(InputArray src, OutputArray dst, ThinningTypes thinningType = ThinningTypes.ZHANGSUEN)

Parameters

src InputArray

Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values.

dst OutputArray

Destination image of the same size and the same type as src. The function can work in-place.

thinningType ThinningTypes

Value that defines which thinning algorithm should be used.
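Since the input blobs must have the value 255, it is natural to binarize first. A minimal sketch (file path is a placeholder):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var gray = Cv2.ImRead("shapes.png", ImreadModes.Grayscale);

// Blobs must have the value 255, so threshold the grayscale input first.
using var bin = new Mat();
Cv2.Threshold(gray, bin, 127, 255, ThresholdTypes.Binary);

using var skeleton = new Mat();
CvXImgProc.Thinning(bin, skeleton, ThinningTypes.ZHANGSUEN);
```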

WeightedMedianFilter(InputArray, InputArray, OutputArray, int, double, WMFWeightType, Mat?)

Applies weighted median filter to an image.

public static void WeightedMedianFilter(InputArray joint, InputArray src, OutputArray dst, int r, double sigma = 25.5, WMFWeightType weightType = WMFWeightType.EXP, Mat? mask = null)

Parameters

joint InputArray

Joint 8-bit, 1-channel or 3-channel image.

src InputArray

Source 8-bit or floating-point, 1-channel or 3-channel image.

dst OutputArray

Destination image.

r int

Radius of filtering kernel, should be a positive integer.

sigma double

Filter range standard deviation for the joint image.

weightType WMFWeightType

The type of weight definition, see WMFWeightType

mask Mat

A 0-1 mask of the same size as src. This mask is used to ignore the effect of some pixels: if the pixel value in the mask is 0, the pixel is ignored when maintaining the joint histogram. This is useful for applications like optical-flow occlusion handling.

Remarks

For more details about this implementation, please see @cite zhang2014100+
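A minimal sketch of the call, using the source as its own joint image (file path and values are illustrative only):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

using var src = Cv2.ImRead("input.png");
using var dst = new Mat();

// r is the kernel radius in pixels; sigma keeps the documented default of 25.5.
CvXImgProc.WeightedMedianFilter(joint: src, src: src, dst: dst, r: 7);
```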