Class CvXImgProc
- Namespace
- OpenCvSharp.XImgProc
- Assembly
- OpenCvSharp.dll
cv::ximgproc functions
public static class CvXImgProc
- Inheritance
- CvXImgProc
Methods
AMFilter(InputArray, InputArray, OutputArray, double, double, bool)
Simple one-line Adaptive Manifold Filter call.
public static void AMFilter(InputArray joint, InputArray src, OutputArray dst, double sigmaS, double sigmaR, bool adjustOutliers = false)
Parameters
- joint (InputArray): joint (also called guide) image or array of images with any number of channels.
- src (InputArray): filtering image with any number of channels.
- dst (OutputArray): output image.
- sigmaS (double): spatial standard deviation.
- sigmaR (double): color space standard deviation; similar to the color-space sigma in bilateralFilter.
- adjustOutliers (bool): optional; specifies whether to perform the outlier-adjustment operation (Eq. 9 in the original paper).
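A minimal usage sketch, with the image serving as its own joint (guide) image; the file names and sigma values below are illustrative, not prescribed by the API:

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Edge-preserving smoothing with the Adaptive Manifold filter.
// "input.png" is a placeholder path; sigmaS/sigmaR values are illustrative.
using var src = Cv2.ImRead("input.png", ImreadModes.Color);
using var dst = new Mat();
// src is used as its own joint (guide) image here.
CvXImgProc.AMFilter(src, src, dst, sigmaS: 16.0, sigmaR: 0.2, adjustOutliers: true);
Cv2.ImWrite("am_filtered.png", dst);
```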
AnisotropicDiffusion(InputArray, OutputArray, float, float, int)
Performs anisotropic diffusion on an image. The function applies Perona-Malik anisotropic diffusion to an image.
public static void AnisotropicDiffusion(InputArray src, OutputArray dst, float alpha, float k, int niters)
Parameters
- src (InputArray): grayscale source image.
- dst (OutputArray): destination image of the same size and number of channels as src.
- alpha (float): the amount of time to step forward on each iteration (normally between 0 and 1).
- k (float): sensitivity to the edges.
- niters (int): the number of iterations.
BilateralTextureFilter(InputArray, OutputArray, int, int, double, double)
Applies the bilateral texture filter to an image. It performs structure-preserving texture filtering. For more details about this filter, see @cite Cho2014.
public static void BilateralTextureFilter(InputArray src, OutputArray dst, int fr = 3, int numIter = 1, double sigmaAlpha = -1, double sigmaAvg = -1)
Parameters
- src (InputArray): source image whose depth is 8-bit UINT or 32-bit FLOAT.
- dst (OutputArray): destination image of the same size and type as src.
- fr (int): radius of the kernel used for filtering. It should be a positive integer.
- numIter (int): number of iterations of the algorithm. It should be a positive integer.
- sigmaAlpha (double): controls the sharpness of the weight transition from edges to smooth/texture regions; a bigger value means a sharper transition. When the value is negative, it is automatically calculated.
- sigmaAvg (double): range blur parameter for texture blurring. A larger value makes the result more blurred. When the value is negative, it is automatically calculated as described in the paper.
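A minimal call sketch; the file names are placeholders and the parameter choices are illustrative (leaving the sigmas at their negative defaults triggers automatic calculation):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Structure-preserving texture smoothing; "textured.png" is a placeholder path.
using var src = Cv2.ImRead("textured.png", ImreadModes.Color);
using var dst = new Mat();
// sigmaAlpha and sigmaAvg stay at -1, so the filter computes them automatically.
CvXImgProc.BilateralTextureFilter(src, dst, fr: 4, numIter: 2);
Cv2.ImWrite("texture_smoothed.png", dst);
```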
BrightEdges(Mat, Mat, int, int, int)
public static void BrightEdges(Mat original, Mat edgeView, int contrast = 1, int shortRange = 3, int longRange = 9)
Parameters
ColorMatchTemplate(InputArray, InputArray, OutputArray)
Compares a color template against overlapped color image regions.
public static void ColorMatchTemplate(InputArray img, InputArray templ, OutputArray result)
Parameters
- img (InputArray): image where the search is running. It must be a 3-channel image.
- templ (InputArray): searched template. It must be no greater than the source image and have 3 channels.
- result (OutputArray): map of comparison results. It must be single-channel 64-bit floating-point.
CovarianceEstimation(InputArray, OutputArray, int, int)
Computes the estimated covariance matrix of an image using the sliding window formulation.
public static void CovarianceEstimation(InputArray src, OutputArray dst, int windowRows, int windowCols)
Parameters
- src (InputArray): the source image. The input image must be of a complex type.
- dst (OutputArray): the destination estimated covariance matrix. The output matrix will be of size (windowRows*windowCols, windowRows*windowCols).
- windowRows (int): the number of rows in the window.
- windowCols (int): the number of columns in the window.
Remarks
The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom-right corner. Each location of the window represents a sample. If the window is the size of the image, this gives the exact covariance matrix. For all other cases, the size of the window affects the number of samples and the number of elements in the estimated covariance matrix.
CreateAMFilter(double, double, bool)
Factory method; creates an instance of AdaptiveManifoldFilter and performs some initialization routines.
public static AdaptiveManifoldFilter CreateAMFilter(double sigmaS, double sigmaR, bool adjustOutliers = false)
Parameters
- sigmaS (double): spatial standard deviation.
- sigmaR (double): color space standard deviation; similar to the color-space sigma in bilateralFilter.
- adjustOutliers (bool): optional; specifies whether to perform the outlier-adjustment operation (Eq. 9 in the original paper).
Returns
CreateDTFilter(InputArray, double, double, EdgeAwareFiltersList, int)
Factory method; creates an instance of DTFilter and performs initialization routines.
public static DTFilter CreateDTFilter(InputArray guide, double sigmaSpatial, double sigmaColor, EdgeAwareFiltersList mode = EdgeAwareFiltersList.DTF_NC, int numIters = 3)
Parameters
- guide (InputArray): guide image (used to build the transformed distance, which describes the edge structure of the guide image).
- sigmaSpatial (double): sigma_H parameter in the original article; similar to the coordinate-space sigma in bilateralFilter.
- sigmaColor (double): sigma_r parameter in the original article; similar to the color-space sigma in bilateralFilter.
- mode (EdgeAwareFiltersList): one of three modes, DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article.
- numIters (int): optional number of iterations used for filtering; 3 is usually enough.
Returns
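The point of the factory is to amortize the guide-image preprocessing across many calls. A sketch of filtering several frames against one guide (the file names and sigma values are illustrative, and the `Filter` method on the returned DTFilter instance is assumed):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Build the filter once from a single guide image, then reuse it per frame.
using var guide = Cv2.ImRead("guide.png", ImreadModes.Color);
using var dtf = CvXImgProc.CreateDTFilter(guide, sigmaSpatial: 40, sigmaColor: 30);
foreach (var name in new[] { "frame0.png", "frame1.png" })
{
    using var src = Cv2.ImRead(name, ImreadModes.Color);
    using var dst = new Mat();
    dtf.Filter(src, dst);   // the guide preprocessing cost is paid only once
    Cv2.ImWrite("dt_" + name, dst);
}
```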
CreateEdgeBoxes(float, float, float, float, int, float, float, float, float, float, float, float)
Creates an EdgeBoxes object.
public static EdgeBoxes CreateEdgeBoxes(float alpha = 0.65, float beta = 0.75, float eta = 1, float minScore = 0.01, int maxBoxes = 10000, float edgeMinMag = 0.1, float edgeMergeThr = 0.5, float clusterMinMag = 0.5, float maxAspectRatio = 3, float minBoxArea = 1000, float gamma = 2, float kappa = 1.5)
Parameters
- alpha (float): step size of the sliding window search.
- beta (float): NMS threshold for object proposals.
- eta (float): adaptation rate for the NMS threshold.
- minScore (float): minimum score of boxes to detect.
- maxBoxes (int): maximum number of boxes to detect.
- edgeMinMag (float): edge minimum magnitude. Increase to trade off accuracy for speed.
- edgeMergeThr (float): edge merge threshold. Increase to trade off accuracy for speed.
- clusterMinMag (float): cluster minimum magnitude. Increase to trade off accuracy for speed.
- maxAspectRatio (float): maximum aspect ratio of boxes.
- minBoxArea (float): minimum area of boxes.
- gamma (float): affinity sensitivity.
- kappa (float): scale sensitivity.
Returns
CreateFastGlobalSmootherFilter(InputArray, double, double, double, int)
Factory method; creates an instance of FastGlobalSmootherFilter and executes the initialization routines.
public static FastGlobalSmootherFilter CreateFastGlobalSmootherFilter(InputArray guide, double lambda, double sigmaColor, double lambdaAttenuation = 0.25, int numIter = 3)
Parameters
- guide (InputArray): image serving as a guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
- lambda (double): parameter defining the amount of regularization.
- sigmaColor (double): parameter similar to the color-space sigma in bilateralFilter.
- lambdaAttenuation (double): internal parameter defining how much lambda decreases after each iteration. Normally it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.
- numIter (int): number of iterations used for filtering; 3 is usually enough.
Returns
CreateFastLineDetector(int, float, double, double, int, bool)
Creates a smart pointer to a FastLineDetector object and initializes it
public static FastLineDetector CreateFastLineDetector(int lengthThreshold = 10, float distanceThreshold = 1.4142135, double cannyTh1 = 50, double cannyTh2 = 50, int cannyApertureSize = 3, bool doMerge = false)
Parameters
- lengthThreshold (int): segments shorter than this will be discarded.
- distanceThreshold (float): a point placed farther from a hypothesis line segment than this will be regarded as an outlier.
- cannyTh1 (double): first threshold for the hysteresis procedure in Canny().
- cannyTh2 (double): second threshold for the hysteresis procedure in Canny().
- cannyApertureSize (int): aperture size for the Sobel operator in Canny().
- doMerge (bool): if true, incremental merging of segments will be performed.
Returns
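A sketch of detecting and drawing segments (the `Detect` overload returning `Vec4f[]` on the FastLineDetector instance is assumed; the file name and thresholds are illustrative):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Detect line segments in a grayscale image; "building.png" is a placeholder.
using var gray = Cv2.ImRead("building.png", ImreadModes.Grayscale);
using var fld = CvXImgProc.CreateFastLineDetector(lengthThreshold: 15, doMerge: true);
Vec4f[] lines = fld.Detect(gray);   // each element holds (x1, y1, x2, y2)

// Draw the detected segments on a color copy for inspection.
using var vis = new Mat();
Cv2.CvtColor(gray, vis, ColorConversionCodes.GRAY2BGR);
foreach (var l in lines)
    Cv2.Line(vis,
        new Point((int)l.Item0, (int)l.Item1),
        new Point((int)l.Item2, (int)l.Item3),
        Scalar.Red);
Cv2.ImWrite("lines.png", vis);
```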
CreateGuidedFilter(InputArray, int, double)
Factory method; creates an instance of GuidedFilter and performs initialization routines.
public static GuidedFilter CreateGuidedFilter(InputArray guide, int radius, double eps)
Parameters
- guide (InputArray): guide image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.
- radius (int): radius of the Guided Filter.
- eps (double): regularization term of the Guided Filter; eps^2 is similar to the color-space sigma in bilateralFilter.
Returns
CreateQuaternionImage(InputArray, OutputArray)
Creates a quaternion image.
public static void CreateQuaternionImage(InputArray img, OutputArray qimg)
Parameters
- img (InputArray): source 8-bit, 32-bit or 64-bit 3-channel image.
- qimg (OutputArray): resulting CV_64FC4 quaternion image (4 channels: a zero channel and B, G, R).
CreateRFFeatureGetter()
Creates an RFFeatureGetter object.
public static RFFeatureGetter CreateRFFeatureGetter()
Returns
CreateStructuredEdgeDetection(string, RFFeatureGetter?)
Creates a StructuredEdgeDetection object.
public static StructuredEdgeDetection CreateStructuredEdgeDetection(string model, RFFeatureGetter? howToGetFeatures = null)
Parameters
- model (string): name of the file where the model is stored.
- howToGetFeatures (RFFeatureGetter): optional object inheriting from RFFeatureGetter. You need it only if you would like to train your own forest; pass null otherwise.
Returns
CreateSuperpixelLSC(InputArray, int, float)
Class implementing the LSC (Linear Spectral Clustering) superpixels.
The function initializes a SuperpixelLSC object for the input image. It sets the parameters of the superpixel algorithm, which are: region_size and ruler. It preallocates some buffers for future computing iterations over the given image. For enhanced results with color images, it is recommended to preprocess the image with a slight Gaussian blur using a small 3 x 3 kernel and to convert it into the CIELAB color space.
public static SuperpixelLSC CreateSuperpixelLSC(InputArray image, int regionSize = 10, float ratio = 0.075)
Parameters
- image (InputArray): image to segment.
- regionSize (int): chooses an average superpixel size measured in pixels.
- ratio (float): chooses the enforcement of the superpixel compactness factor.
Returns
CreateSuperpixelSEEDS(int, int, int, int, int, int, int, bool)
Initializes a SuperpixelSEEDS object.
The function initializes a SuperpixelSEEDS object for the input image. It stores the parameters of the image: image_width, image_height and image_channels. It also sets the parameters of the SEEDS superpixel algorithm, which are: num_superpixels, num_levels, use_prior, histogram_bins and double_step.
The number of levels in num_levels defines the number of block levels that the algorithm uses in the optimization. The initialization is a grid in which the superpixels are equally distributed through the width and the height of the image. The larger blocks correspond to the superpixel size, and the levels with smaller blocks are formed by dividing the larger blocks into 2 x 2 blocks of pixels, recursively until the smallest block level.
public static SuperpixelSEEDS CreateSuperpixelSEEDS(int imageWidth, int imageHeight, int imageChannels, int numSuperpixels, int numLevels, int prior = 2, int histogramBins = 5, bool doubleStep = false)
Parameters
- imageWidth (int): image width.
- imageHeight (int): image height.
- imageChannels (int): number of channels of the image.
- numSuperpixels (int): desired number of superpixels. Note that the actual number may be smaller due to restrictions (depending on the image size and num_levels). Use GetNumberOfSuperpixels() to get the actual number.
- numLevels (int): number of block levels. The more levels, the more accurate the segmentation, but more memory and CPU time are needed.
- prior (int): enables a 3x3 shape smoothing term if > 0. A larger value leads to smoother shapes. prior must be in the range [0, 5].
- histogramBins (int): number of histogram bins.
- doubleStep (bool): if true, iterate each block level twice for higher accuracy.
Returns
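A sketch of the typical call sequence on the returned object (the Iterate, GetLabels and GetNumberOfSuperpixels members of SuperpixelSEEDS are assumed; the file name and counts are illustrative):

```csharp
using System;
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Segment an image into roughly 400 superpixels; "scene.png" is a placeholder.
using var img = Cv2.ImRead("scene.png", ImreadModes.Color);
using var seeds = CvXImgProc.CreateSuperpixelSEEDS(
    img.Width, img.Height, img.Channels(),
    numSuperpixels: 400, numLevels: 4);
seeds.Iterate(img, 10);               // more iterations refine the boundaries
using var labels = new Mat();
seeds.GetLabels(labels);              // CV_32SC1 map of superpixel indices
Console.WriteLine($"actual superpixels: {seeds.GetNumberOfSuperpixels()}");
```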
DTFilter(InputArray, InputArray, OutputArray, double, double, EdgeAwareFiltersList, int)
Simple one-line Domain Transform filter call. If you have multiple images to filter with the same guide image, use the DTFilter interface to avoid extra computations on the initialization stage.
public static void DTFilter(InputArray guide, InputArray src, OutputArray dst, double sigmaSpatial, double sigmaColor, EdgeAwareFiltersList mode = EdgeAwareFiltersList.DTF_NC, int numIters = 3)
Parameters
- guide (InputArray): guide image (also called the joint image) with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.
- src (InputArray): filtering image with unsigned 8-bit or floating-point 32-bit depth and up to 4 channels.
- dst (OutputArray): destination image.
- sigmaSpatial (double): sigma_H parameter in the original article; similar to the coordinate-space sigma in bilateralFilter.
- sigmaColor (double): sigma_r parameter in the original article; similar to the color-space sigma in bilateralFilter.
- mode (EdgeAwareFiltersList): one of three modes, DTF_NC, DTF_RF and DTF_IC, which correspond to the three modes for filtering 2D signals in the article.
- numIters (int): optional number of iterations used for filtering; 3 is usually enough.
EdgePreservingFilter(InputArray, OutputArray, int, double)
Smoothes an image using the Edge-Preserving filter.
public static void EdgePreservingFilter(InputArray src, OutputArray dst, int d, double threshold)
Parameters
- src (InputArray): source 8-bit 3-channel image.
- dst (OutputArray): destination image of the same size and type as src.
- d (int): diameter of each pixel neighborhood used during filtering. Must be greater than or equal to 3.
- threshold (double): threshold, which distinguishes between noise, outliers, and data.
FastBilateralSolverFilter(InputArray, InputArray, InputArray, OutputArray, double, double, double, double, int, double)
Simple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use FastBilateralSolverFilter interface to avoid extra computations.
public static void FastBilateralSolverFilter(InputArray guide, InputArray src, InputArray confidence, OutputArray dst, double sigmaSpatial = 8, double sigmaLuma = 8, double sigmaChroma = 8, double lambda = 128, int numIter = 25, double maxTol = 1E-05)
Parameters
- guide (InputArray): image serving as a guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
- src (InputArray): source image for filtering with unsigned 8-bit, signed 16-bit or floating-point 32-bit depth and up to 4 channels.
- confidence (InputArray): confidence image with unsigned 8-bit or floating-point 32-bit confidence and 1 channel.
- dst (OutputArray): destination image.
- sigmaSpatial (double): parameter similar to the spatial-space sigma (bandwidth) in bilateralFilter.
- sigmaLuma (double): parameter similar to the luma-space sigma (bandwidth) in bilateralFilter.
- sigmaChroma (double): parameter similar to the chroma-space sigma (bandwidth) in bilateralFilter.
- lambda (double): smoothness strength parameter for the solver.
- numIter (int): number of iterations used for the solver; 25 is usually enough.
- maxTol (double): convergence tolerance used for the solver.
FastGlobalSmootherFilter(InputArray, InputArray, OutputArray, double, double, double, int)
Simple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use FastGlobalSmootherFilter interface to avoid extra computations.
public static void FastGlobalSmootherFilter(InputArray guide, InputArray src, OutputArray dst, double lambda, double sigmaColor, double lambdaAttenuation = 0.25, int numIter = 3)
Parameters
- guide (InputArray): image serving as a guide for filtering. It should have 8-bit depth and either 1 or 3 channels.
- src (InputArray): source image for filtering with unsigned 8-bit, signed 16-bit or floating-point 32-bit depth and up to 4 channels.
- dst (OutputArray): destination image.
- lambda (double): parameter defining the amount of regularization.
- sigmaColor (double): parameter similar to the color-space sigma in bilateralFilter.
- lambdaAttenuation (double): internal parameter defining how much lambda decreases after each iteration. Normally it should be 0.25. Setting it to 1.0 may lead to streaking artifacts.
- numIter (int): number of iterations used for filtering; 3 is usually enough.
FastHoughTransform(InputArray, OutputArray, MatType, AngleRangeOption, HoughOP, HoughDeskewOption)
Calculates 2D Fast Hough transform of an image.
public static void FastHoughTransform(InputArray src, OutputArray dst, MatType dstMatDepth, AngleRangeOption angleRange = AngleRangeOption.ARO_315_135, HoughOP op = HoughOP.FHT_ADD, HoughDeskewOption makeSkew = HoughDeskewOption.DESKEW)
Parameters
- src (InputArray): the source (input) image.
- dst (OutputArray): the destination image, result of the transformation.
- dstMatDepth (MatType): the depth of the destination image.
- angleRange (AngleRangeOption): the part of the Hough space to calculate, see cv::AngleRangeOption.
- op (HoughOP): the operation to be applied, see cv::HoughOp.
- makeSkew (HoughDeskewOption): specifies whether to deskew the image, see cv::HoughDeskewOption.
GradientDericheX(InputArray, OutputArray, double, double)
Applies X Deriche filter to an image.
public static void GradientDericheX(InputArray op, OutputArray dst, double alpha, double omega)
Parameters
- op (InputArray): source 8-bit or 16-bit, 1-channel or 3-channel image.
- dst (OutputArray): resulting CV_32F image with the same number of channels as op.
- alpha (double): see the paper.
- omega (double): see the paper.
GradientDericheY(InputArray, OutputArray, double, double)
Applies Y Deriche filter to an image.
public static void GradientDericheY(InputArray op, OutputArray dst, double alpha, double omega)
Parameters
- op (InputArray): source 8-bit or 16-bit, 1-channel or 3-channel image.
- dst (OutputArray): resulting CV_32F image with the same number of channels as op.
- alpha (double): see the paper.
- omega (double): see the paper.
GradientPaillouX(InputArray, OutputArray, double, double)
Applies Paillou filter to an image.
public static void GradientPaillouX(InputArray op, OutputArray dst, double alpha, double omega)
Parameters
- op (InputArray): source CV_8U(S) or CV_16U(S), 1-channel or 3-channel image.
- dst (OutputArray): resulting CV_32F image with the same number of channels as op.
- alpha (double): see the paper.
- omega (double): see the paper.
GradientPaillouY(InputArray, OutputArray, double, double)
Applies Paillou filter to an image.
public static void GradientPaillouY(InputArray op, OutputArray dst, double alpha, double omega)
Parameters
- op (InputArray): source CV_8U(S) or CV_16U(S), 1-channel or 3-channel image.
- dst (OutputArray): resulting CV_32F image with the same number of channels as op.
- alpha (double): see the paper.
- omega (double): see the paper.
GuidedFilter(InputArray, InputArray, OutputArray, int, double, int)
Simple one-line Guided Filter call.
If you have multiple images to filter with the same guided image then use GuidedFilter interface to avoid extra computations on initialization stage.
public static void GuidedFilter(InputArray guide, InputArray src, OutputArray dst, int radius, double eps, int dDepth = -1)
Parameters
- guide (InputArray): guide image (or array of images) with up to 3 channels; if it has more than 3 channels, only the first 3 channels will be used.
- src (InputArray): filtering image with any number of channels.
- dst (OutputArray): output image.
- radius (int): radius of the Guided Filter.
- eps (double): regularization term of the Guided Filter; eps^2 is similar to the color-space sigma in bilateralFilter.
- dDepth (int): optional depth of the output image.
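A one-line smoothing sketch where the image guides itself; the file name and the radius/eps values are illustrative:

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Self-guided edge-preserving smoothing; "photo.png" is a placeholder path.
using var src = Cv2.ImRead("photo.png", ImreadModes.Color);
using var dst = new Mat();
// eps is on a squared-intensity scale: roughly (0.1 * 255)^2 here, purely illustrative.
CvXImgProc.GuidedFilter(src, src, dst, radius: 8, eps: 650);
Cv2.ImWrite("guided.png", dst);
```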
HoughPoint2Line(Point, InputArray, AngleRangeOption, HoughDeskewOption, RulesOption)
Calculates the coordinates of the line segment corresponding to a point in Hough space.
public static Vec4i HoughPoint2Line(Point houghPoint, InputArray srcImgInfo, AngleRangeOption angleRange = AngleRangeOption.ARO_315_135, HoughDeskewOption makeSkew = HoughDeskewOption.DESKEW, RulesOption rules = RulesOption.IGNORE_BORDERS)
Parameters
- houghPoint (Point): point in Hough space.
- srcImgInfo (InputArray): the source (input) image of the Hough transform.
- angleRange (AngleRangeOption): the part of the Hough space where the point is situated, see cv::AngleRangeOption.
- makeSkew (HoughDeskewOption): specifies whether to deskew the image, see cv::HoughDeskewOption.
- rules (RulesOption): specifies the strictness of the line segment calculation, see cv::RulesOption.
Returns
- Vec4i
Coordinates of the line segment corresponding to the point in Hough space.
Remarks
If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. If it is set to RO_WEAK and the point belongs to an incorrect part of the Hough image, the returned line will not intersect the source image.
JointBilateralFilter(InputArray, InputArray, OutputArray, int, double, double, BorderTypes)
Applies the joint bilateral filter to an image.
public static void JointBilateralFilter(InputArray joint, InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, BorderTypes borderType = BorderTypes.Default)
Parameters
- joint (InputArray): joint 8-bit or floating-point, 1-channel or 3-channel image.
- src (InputArray): source 8-bit or floating-point, 1-channel or 3-channel image with the same depth as the joint image.
- dst (OutputArray): destination image of the same size and type as src.
- d (int): diameter of each pixel neighborhood used during filtering. If it is non-positive, it is computed from sigmaSpace.
- sigmaColor (double): filter sigma in the color space. A larger value means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.
- sigmaSpace (double): filter sigma in the coordinate space. A larger value means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace; otherwise, d is proportional to sigmaSpace.
- borderType (BorderTypes): border mode used to extrapolate pixels outside of the image, see cv::BorderTypes.
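A classic use of the joint bilateral filter is flash/no-flash denoising: filter the noisy ambient shot while taking edge information from the sharp flash shot. A sketch (file names and sigma values are illustrative):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Denoise the ambient exposure using the flash exposure as the joint image.
// "flash.png" and "ambient.png" are placeholder paths.
using var flash = Cv2.ImRead("flash.png", ImreadModes.Color);
using var ambient = Cv2.ImRead("ambient.png", ImreadModes.Color);
using var dst = new Mat();
CvXImgProc.JointBilateralFilter(flash, ambient, dst, d: 9, sigmaColor: 30, sigmaSpace: 7);
Cv2.ImWrite("denoised.png", dst);
```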
L0Smooth(InputArray, OutputArray, double, double)
Global image smoothing via L0 gradient minimization.
public static void L0Smooth(InputArray src, OutputArray dst, double lambda = 0.02, double kappa = 2)
Parameters
- src (InputArray): source image for filtering with unsigned 8-bit, signed 16-bit or floating-point depth.
- dst (OutputArray): destination image.
- lambda (double): parameter defining the smooth-term weight.
- kappa (double): parameter defining the increasing factor of the weight of the gradient data term.
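A sketch using the default-strength parameters (the file names are illustrative; a larger lambda smooths more aggressively):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Flatten fine texture while keeping strong edges; "input.png" is a placeholder.
using var src = Cv2.ImRead("input.png", ImreadModes.Color);
using var dst = new Mat();
CvXImgProc.L0Smooth(src, dst, lambda: 0.02, kappa: 2.0);
Cv2.ImWrite("l0_smoothed.png", dst);
```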
NiblackThreshold(InputArray, OutputArray, double, ThresholdTypes, int, double, LocalBinarizationMethods, double)
Applies Niblack thresholding to input image.
public static void NiblackThreshold(InputArray src, OutputArray dst, double maxValue, ThresholdTypes type, int blockSize, double k, LocalBinarizationMethods binarizationMethod = LocalBinarizationMethods.Niblack, double r = 128)
Parameters
- src (InputArray): source 8-bit single-channel image.
- dst (OutputArray): destination image of the same size and the same type as src.
- maxValue (double): non-zero value assigned to the pixels for which the condition is satisfied; used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types.
- type (ThresholdTypes): thresholding type, see cv::ThresholdTypes.
- blockSize (int): size of the pixel neighborhood used to calculate a threshold value for the pixel: 3, 5, 7, and so on.
- k (double): the user-adjustable parameter used by Niblack and inspired techniques. For Niblack, this is normally a value between 0 and 1 that is multiplied with the standard deviation and subtracted from the mean.
- binarizationMethod (LocalBinarizationMethods): binarization method to use. By default, Niblack's technique is used. Other techniques can be specified, see cv::ximgproc::LocalBinarizationMethods.
- r (double): the user-adjustable parameter used by Sauvola's technique. This is the dynamic range of standard deviation.
Remarks
The function transforms a grayscale image to a binary image according to the formulae:
- THRESH_BINARY: dst(x, y) = maxValue if src(x, y) > T(x, y), 0 otherwise.
- THRESH_BINARY_INV: dst(x, y) = 0 if src(x, y) > T(x, y), maxValue otherwise.

Here T(x, y) is a threshold calculated individually for each pixel: the mean minus k times the standard deviation of the blockSize x blockSize neighborhood of (x, y). The function cannot process the image in-place.
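A binarization sketch for a scanned page (the file name, blockSize and k values are illustrative):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Locally adaptive binarization; "page.png" is a placeholder path.
using var gray = Cv2.ImRead("page.png", ImreadModes.Grayscale);
using var bin = new Mat();
CvXImgProc.NiblackThreshold(gray, bin, maxValue: 255, type: ThresholdTypes.Binary,
    blockSize: 25, k: 0.2);
Cv2.ImWrite("binarized.png", bin);
```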
PeiLinNormalization(InputArray)
Calculates an affine transformation that normalizes the given image using Pei&Lin Normalization.
public static double[,] PeiLinNormalization(InputArray i)
Parameters
- i (InputArray): given transformed image.
Returns
- double[,]
Transformation matrix corresponding to the inverse image transformation.
PeiLinNormalization(InputArray, OutputArray)
Calculates an affine transformation that normalizes the given image using Pei&Lin Normalization.
public static void PeiLinNormalization(InputArray i, OutputArray t)
Parameters
- i (InputArray): given transformed image.
- t (OutputArray): inverse image transformation.
QConj(InputArray, OutputArray)
Calculates the conjugate of a quaternion image.
public static void QConj(InputArray qimg, OutputArray qcimg)
Parameters
- qimg (InputArray): quaternion image.
- qcimg (OutputArray): conjugate of qimg.
QDft(InputArray, OutputArray, DftFlags, bool)
Performs a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array.
public static void QDft(InputArray img, OutputArray qimg, DftFlags flags, bool sideLeft)
Parameters
- img (InputArray): quaternion image.
- qimg (OutputArray): quaternion image in dual space.
- flags (DftFlags): transformation flags; only the DFT_INVERSE flag is supported.
- sideLeft (bool): if true, the hypercomplex exponential is multiplied on the left; if false, on the right.
QMultiply(InputArray, InputArray, OutputArray)
Calculates the per-element quaternion product of two arrays
public static void QMultiply(InputArray src1, InputArray src2, OutputArray dst)
Parameters
- src1 (InputArray): quaternion image.
- src2 (InputArray): quaternion image.
- dst (OutputArray): per-element product, dst(I) = src1(I) . src2(I).
QUnitary(InputArray, OutputArray)
Divides each element by its modulus.
public static void QUnitary(InputArray qimg, OutputArray qnimg)
Parameters
- qimg (InputArray): quaternion image.
- qnimg (OutputArray): normalized (unit-modulus) quaternion image.
RollingGuidanceFilter(InputArray, OutputArray, int, double, double, int, BorderTypes)
Applies the rolling guidance filter to an image.
public static void RollingGuidanceFilter(InputArray src, OutputArray dst, int d = -1, double sigmaColor = 25, double sigmaSpace = 3, int numOfIter = 4, BorderTypes borderType = BorderTypes.Default)
Parameters
- src (InputArray): 8-bit or floating-point, 1-channel or 3-channel image.
- dst (OutputArray): destination image of the same size and type as src.
- d (int): diameter of each pixel neighborhood used during filtering. If it is non-positive, it is computed from sigmaSpace.
- sigmaColor (double): filter sigma in the color space. A larger value means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.
- sigmaSpace (double): filter sigma in the coordinate space. A larger value means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace; otherwise, d is proportional to sigmaSpace.
- numOfIter (int): number of iterations of joint edge-preserving filtering applied on the source image.
- borderType (BorderTypes): border mode used to extrapolate pixels outside of the image, see cv::BorderTypes.
Thinning(InputArray, OutputArray, ThinningTypes)
Applies a binary blob thinning operation to achieve a skeletonization of the input image. The function transforms a binary blob image into a skeletonized form using the Zhang-Suen technique.
public static void Thinning(InputArray src, OutputArray dst, ThinningTypes thinningType = ThinningTypes.ZHANGSUEN)
Parameters
- src (InputArray): source 8-bit single-channel image containing binary blobs, with blobs having 255 pixel values.
- dst (OutputArray): destination image of the same size and the same type as src. The function can work in-place.
- thinningType (ThinningTypes): value that defines which thinning algorithm should be used.
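A skeletonization sketch (the file name is illustrative; the input must be binarized to 0/255 first):

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc;

// Zhang-Suen thinning of white-on-black blobs; "blob.png" is a placeholder.
using var src = Cv2.ImRead("blob.png", ImreadModes.Grayscale);
Cv2.Threshold(src, src, 127, 255, ThresholdTypes.Binary);  // blobs must be 255
using var skeleton = new Mat();
CvXImgProc.Thinning(src, skeleton, ThinningTypes.ZHANGSUEN);
Cv2.ImWrite("skeleton.png", skeleton);
```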
WeightedMedianFilter(InputArray, InputArray, OutputArray, int, double, WMFWeightType, Mat?)
Applies weighted median filter to an image.
public static void WeightedMedianFilter(InputArray joint, InputArray src, OutputArray dst, int r, double sigma = 25.5, WMFWeightType weightType = WMFWeightType.EXP, Mat? mask = null)
Parameters
- joint (InputArray): joint 8-bit, 1-channel or 3-channel image.
- src (InputArray): source 8-bit or floating-point, 1-channel or 3-channel image.
- dst (OutputArray): destination image.
- r (int): radius of the filtering kernel; should be a positive integer.
- sigma (double): filter range standard deviation for the joint image.
- weightType (WMFWeightType): the type of weight definition, see WMFWeightType.
- mask (Mat): a 0-1 mask of the same size as src, used to ignore the effect of some pixels. If the pixel value in the mask is 0, the pixel will be ignored when maintaining the joint histogram. This is useful for applications like optical-flow occlusion handling.
Remarks
For more details about this implementation, please see @cite zhang2014100+