Class HOGDescriptor
- Namespace: OpenCvSharp
- Assembly: OpenCvSharp.dll
HOG (Histogram-of-Oriented-Gradients) Descriptor and Object Detector
public class HOGDescriptor : DisposableCvObject, IDisposable, ICvPtrHolder
- Inheritance
  - DisposableCvObject
  - HOGDescriptor
- Implements
  - IDisposable
  - ICvPtrHolder
- Inherited Members
Constructors
HOGDescriptor()
Default constructor
public HOGDescriptor()
HOGDescriptor(Size?, Size?, Size?, Size?, int, int, double, HistogramNormType, double, bool, int)
Creates the HOG descriptor and detector.
public HOGDescriptor(Size? winSize = null, Size? blockSize = null, Size? blockStride = null, Size? cellSize = null, int nbins = 9, int derivAperture = 1, double winSigma = -1, HistogramNormType histogramNormType = HistogramNormType.L2Hys, double l2HysThreshold = 0.2, bool gammaCorrection = true, int nlevels = 64)
Parameters
winSize (Size?): Detection window size. Align to block size and block stride.
blockSize (Size?): Block size in pixels. Align to cell size. Only (16, 16) is supported for now.
blockStride (Size?): Block stride. It must be a multiple of cell size.
cellSize (Size?): Cell size. Only (8, 8) is supported for now.
nbins (int): Number of bins. Only 9 bins per cell are supported for now.
derivAperture (int)
winSigma (double): Gaussian smoothing window parameter.
histogramNormType (HistogramNormType): Histogram normalization method.
l2HysThreshold (double): L2-Hys normalization method shrinkage.
gammaCorrection (bool): Flag to specify whether the gamma correction preprocessing is required or not.
nlevels (int): Maximum number of detection window increases.
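A minimal sketch of constructing a descriptor with the geometry spelled out explicitly; the values below are simply the documented defaults, and any other combination must respect the alignment rules listed above.

```csharp
using OpenCvSharp;

// Sketch: the default 64x128 people-detection geometry written out explicitly.
using var hog = new HOGDescriptor(
    winSize: new Size(64, 128),
    blockSize: new Size(16, 16),
    blockStride: new Size(8, 8),
    cellSize: new Size(8, 8),
    nbins: 9);
```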
HOGDescriptor(string)
Constructs from a file containing HOGDescriptor properties and coefficients for the linear SVM classifier.
public HOGDescriptor(string fileName)
Parameters
fileName (string): The file name containing HOGDescriptor properties and coefficients for the linear SVM classifier.
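A one-line sketch, assuming "hog.xml" is a placeholder path to a file previously written with Save:

```csharp
using OpenCvSharp;

// "hog.xml" is a placeholder; it must have been produced by Save() or an equivalent tool.
using var hog = new HOGDescriptor("hog.xml");
```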
Fields
DaimlerPeopleDetector
1981 SVM coefficients obtained from Daimler's pedestrian dataset. To use these coefficients, the detection window size should be (48, 96).
public static readonly float[] DaimlerPeopleDetector
Field Value
- float[]
DefaultNlevels
Default value of the nlevels parameter.
public const int DefaultNlevels = 64
Field Value
- int
DefaultPeopleDetector
Returns coefficients of the classifier trained for people detection (for default window size).
public static readonly float[] DefaultPeopleDetector
Field Value
- float[]
L2Hys
L2-Hys histogram normalization method (default histogramNormType).
public const int L2Hys = 0
Field Value
- int
Properties
BlockSize
Block size in pixels. Align to cell size. Default value is Size(16,16).
public Size BlockSize { get; set; }
Property Value
- Size
BlockStride
Block stride. It must be a multiple of cell size. Default value is Size(8,8).
public Size BlockStride { get; set; }
Property Value
- Size
CellSize
Cell size. Default value is Size(8,8).
public Size CellSize { get; set; }
Property Value
- Size
DerivAperture
public int DerivAperture { get; set; }
Property Value
- int
GammaCorrection
Flag to specify whether the gamma correction preprocessing is required or not.
public bool GammaCorrection { get; set; }
Property Value
- bool
HistogramNormType
Histogram normalization method. Default value is HistogramNormType.L2Hys.
public HistogramNormType HistogramNormType { get; set; }
Property Value
- HistogramNormType
L2HysThreshold
L2-Hys normalization method shrinkage.
public double L2HysThreshold { get; set; }
Property Value
- double
NLevels
Maximum number of detection window increases. Default value is 64.
public int NLevels { get; set; }
Property Value
- int
Nbins
Number of bins used in the calculation of histogram of gradients. Default value is 9.
public int Nbins { get; set; }
Property Value
- int
SignedGradient
Indicates whether signed gradients are used.
public bool SignedGradient { get; set; }
Property Value
- bool
WinSigma
Gaussian smoothing window parameter.
public double WinSigma { get; set; }
Property Value
- double
WinSize
Detection window size. Align to block size and block stride. Default value is Size(64,128).
public Size WinSize { get; set; }
Property Value
- Size
Methods
CheckDetectorSize()
Checks whether the detector size matches the descriptor size.
public bool CheckDetectorSize()
Returns
- bool
Compute(Mat, Size?, Size?, Point[]?)
Computes HOG descriptors of the given image.
public virtual float[] Compute(Mat img, Size? winStride = null, Size? padding = null, Point[]? locations = null)
Parameters
img (Mat): Matrix of the type CV_8U containing an image where HOG features will be calculated.
winStride (Size?): Window stride. It must be a multiple of block stride.
padding (Size?): Padding.
locations (Point[]?): Vector of Point.
Returns
- float[]
Matrix of the type CV_32F
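A minimal sketch of computing a descriptor vector; "patch.png" is a placeholder path and the input is assumed to be an 8-bit image.

```csharp
using System;
using OpenCvSharp;

// Sketch: compute the HOG descriptor of a single image.
using var hog = new HOGDescriptor();
using var patch = Cv2.ImRead("patch.png", ImreadModes.Grayscale);
float[] descriptor = hog.Compute(patch, winStride: new Size(8, 8));
Console.WriteLine($"Descriptor length: {descriptor.Length}"); // 3780 floats per 64x128 window with default parameters
```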
ComputeGradient(Mat, Mat, Mat, Size?, Size?)
Computes gradients and quantized gradient orientations.
public virtual void ComputeGradient(Mat img, Mat grad, Mat angleOfs, Size? paddingTL = null, Size? paddingBR = null)
Parameters
img (Mat): Matrix containing the image to be processed.
grad (Mat): Matrix of type CV_32FC2 containing the computed gradients.
angleOfs (Mat): Matrix of type CV_8UC2 containing the quantized gradient orientations.
paddingTL (Size?): Padding from the top-left.
paddingBR (Size?): Padding from the bottom-right.
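A short sketch, assuming "input.png" is a placeholder path; the two output Mats are allocated empty and filled by the call.

```csharp
using OpenCvSharp;

using var hog = new HOGDescriptor();
using var img = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var grad = new Mat();     // receives the CV_32FC2 gradients
using var angleOfs = new Mat(); // receives the CV_8UC2 quantized orientations
hog.ComputeGradient(img, grad, angleOfs);
```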
Detect(Mat, double, Size?, Size?, Point[]?)
Performs object detection without a multi-scale window.
public virtual Point[] Detect(Mat img, double hitThreshold = 0, Size? winStride = null, Size? padding = null, Point[]? searchLocations = null)
Parameters
img (Mat): Source image. CV_8UC1 and CV_8UC4 types are supported for now.
hitThreshold (double): Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here.
winStride (Size?): Window stride. It must be a multiple of block stride.
padding (Size?): Mock parameter to keep the CPU interface compatibility. It must be (0, 0).
searchLocations (Point[]?): Vector of Point that includes the set of requested locations to be evaluated.
Returns
- Point[]
Left-top corner points of detected objects boundaries.
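A sketch of single-scale detection with the built-in people detector; "people.png" is a placeholder path, and the grayscale read keeps the input within the supported CV_8UC1 type.

```csharp
using System;
using OpenCvSharp;

using var hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
using var img = Cv2.ImRead("people.png", ImreadModes.Grayscale);
Point[] hits = hog.Detect(img, hitThreshold: 0, winStride: new Size(8, 8));
foreach (var p in hits)
    Console.WriteLine($"Detection window at {p}"); // top-left corner of each hit
```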
Detect(Mat, out double[], double, Size?, Size?, Point[]?)
Performs object detection without a multi-scale window.
public virtual Point[] Detect(Mat img, out double[] weights, double hitThreshold = 0, Size? winStride = null, Size? padding = null, Point[]? searchLocations = null)
Parameters
img (Mat): Source image. CV_8UC1 and CV_8UC4 types are supported for now.
weights (double[]): Vector that will contain confidence values for each detected object.
hitThreshold (double): Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here.
winStride (Size?): Window stride. It must be a multiple of block stride.
padding (Size?): Mock parameter to keep the CPU interface compatibility. It must be (0, 0).
searchLocations (Point[]?): Vector of Point that includes the set of requested locations to be evaluated.
Returns
- Point[]
Left-top corner points of detected objects boundaries.
DetectMultiScale(Mat, double, Size?, Size?, double, int)
Performs object detection with a multi-scale window.
public virtual Rect[] DetectMultiScale(Mat img, double hitThreshold = 0, Size? winStride = null, Size? padding = null, double scale = 1.05, int groupThreshold = 2)
Parameters
img (Mat): Source image. CV_8UC1 and CV_8UC4 types are supported for now.
hitThreshold (double): Threshold for the distance between features and SVM classifying plane.
winStride (Size?): Window stride. It must be a multiple of block stride.
padding (Size?): Mock parameter to keep the CPU interface compatibility. It must be (0, 0).
scale (double): Coefficient of the detection window increase.
groupThreshold (int): Coefficient to regulate the similarity threshold. When detected, some objects can be covered by many rectangles. 0 means not to perform grouping.
Returns
- Rect[]
Detected objects boundaries.
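A sketch of the typical multi-scale pedestrian-detection loop; file names are placeholders, and the image is converted to grayscale to stay within the documented CV_8UC1 input type.

```csharp
using OpenCvSharp;

using var hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());

using var img = Cv2.ImRead("street.jpg");
using var gray = new Mat();
Cv2.CvtColor(img, gray, ColorConversionCodes.BGR2GRAY);

Rect[] found = hog.DetectMultiScale(gray, hitThreshold: 0,
    winStride: new Size(8, 8), scale: 1.05, groupThreshold: 2);

foreach (var rect in found)
    Cv2.Rectangle(img, rect, Scalar.Red, 2); // draw each detection on the color image
Cv2.ImWrite("street_detected.jpg", img);
```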
DetectMultiScale(Mat, out double[], double, Size?, Size?, double, int)
Performs object detection with a multi-scale window.
public virtual Rect[] DetectMultiScale(Mat img, out double[] foundWeights, double hitThreshold = 0, Size? winStride = null, Size? padding = null, double scale = 1.05, int groupThreshold = 2)
Parameters
img (Mat): Source image. CV_8UC1 and CV_8UC4 types are supported for now.
foundWeights (double[]): Vector that will contain confidence values for each detected object.
hitThreshold (double): Threshold for the distance between features and SVM classifying plane.
winStride (Size?): Window stride. It must be a multiple of block stride.
padding (Size?): Mock parameter to keep the CPU interface compatibility. It must be (0, 0).
scale (double): Coefficient of the detection window increase.
groupThreshold (int): Coefficient to regulate the similarity threshold. When detected, some objects can be covered by many rectangles. 0 means not to perform grouping.
Returns
- Rect[]
Detected objects boundaries.
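A sketch showing how the per-detection weights can be used to drop weak hits; the 0.5 cut-off is an arbitrary illustration value, not a recommendation, and "street.jpg" is a placeholder path.

```csharp
using System;
using OpenCvSharp;

using var hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
using var img = Cv2.ImRead("street.jpg", ImreadModes.Grayscale);

Rect[] found = hog.DetectMultiScale(img, out double[] weights);
for (int i = 0; i < found.Length; i++)
    if (weights[i] > 0.5) // keep only the more confident detections
        Console.WriteLine($"{found[i]} (weight {weights[i]:F2})");
```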
DetectMultiScaleROI(Mat, out Rect[], out DetectionROI[], double, int)
Evaluates the specified ROI and returns a confidence value for each location at multiple scales.
public void DetectMultiScaleROI(Mat img, out Rect[] foundLocations, out DetectionROI[] locations, double hitThreshold = 0, int groupThreshold = 0)
Parameters
img (Mat): Matrix of the type CV_8U or CV_8UC3 containing an image where objects are detected.
foundLocations (Rect[]): Vector of rectangles where each rectangle contains the detected object.
locations (DetectionROI[]): Vector of DetectionROI.
hitThreshold (double): Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here.
groupThreshold (int): Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.
DetectROI(Mat, Point[], out Point[], out double[], double, Size?, Size?)
Evaluates the specified ROI and returns a confidence value for each location.
public void DetectROI(Mat img, Point[] locations, out Point[] foundLocations, out double[] confidences, double hitThreshold = 0, Size? winStride = null, Size? padding = null)
Parameters
img (Mat): Matrix of the type CV_8U or CV_8UC3 containing an image where objects are detected.
locations (Point[]): Vector of Point with the requested locations to be evaluated.
foundLocations (Point[]): Vector of Point where each Point is the detected object's top-left corner.
confidences (double[]): Confidence value for each evaluated location.
hitThreshold (double): Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specified in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here.
winStride (Size?): Window stride. It must be a multiple of block stride.
padding (Size?): Padding.
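A sketch that scores a hand-picked set of window positions instead of scanning the whole image; the two candidate points and "street.jpg" are arbitrary illustration values.

```csharp
using System;
using OpenCvSharp;

using var hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
using var img = Cv2.ImRead("street.jpg", ImreadModes.Grayscale);

var candidates = new[] { new Point(0, 0), new Point(32, 16) };
hog.DetectROI(img, candidates, out Point[] found, out double[] confidences);
for (int i = 0; i < candidates.Length && i < confidences.Length; i++)
    Console.WriteLine($"{candidates[i]}: confidence {confidences[i]:F3}");
Console.WriteLine($"{found.Length} locations passed the hit threshold");
```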
DisposeUnmanaged()
Releases unmanaged resources
protected override void DisposeUnmanaged()
GetDaimlerPeopleDetector()
This method returns 1981 SVM coefficients obtained from Daimler's pedestrian dataset. To use these coefficients, the detection window size should be (48, 96).
public static float[] GetDaimlerPeopleDetector()
Returns
- float[]
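A sketch showing the pairing the summary above describes: the Daimler coefficients assume a 48x96 detection window, so the descriptor geometry is set to match before loading them.

```csharp
using OpenCvSharp;

// The 48x96 window matches what the Daimler coefficients were trained for.
using var hog = new HOGDescriptor(winSize: new Size(48, 96));
hog.SetSVMDetector(HOGDescriptor.GetDaimlerPeopleDetector());
```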
GetDefaultPeopleDetector()
Returns coefficients of the classifier trained for people detection (for default window size).
public static float[] GetDefaultPeopleDetector()
Returns
- float[]
GetDescriptorSize()
Returns the number of coefficients required for the classification.
public int GetDescriptorSize()
Returns
- int
GetWinSigma()
Returns the effective Gaussian smoothing window parameter (winSigma).
public double GetWinSigma()
Returns
- double
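A sketch of the query helpers CheckDetectorSize(), GetDescriptorSize(), and GetWinSigma() on a default-constructed descriptor; the 3780 value follows from the default 64x128 window, 16x16 blocks, 8x8 stride, 8x8 cells, and 9 bins.

```csharp
using System;
using OpenCvSharp;

using var hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
Console.WriteLine(hog.GetDescriptorSize());  // 3780 with the default parameters
Console.WriteLine(hog.GetWinSigma());        // effective Gaussian smoothing parameter
Console.WriteLine(hog.CheckDetectorSize());  // True: detector length matches descriptor size
```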
GroupRectangles(out Rect[], out double[], int, double)
Groups the object candidate rectangles.
public void GroupRectangles(out Rect[] rectList, out double[] weights, int groupThreshold, double eps)
Parameters
rectList (Rect[]): Input/output vector of rectangles. The output vector includes retained and grouped rectangles.
weights (double[]): Input/output vector of weights of rectangles. The output vector includes weights of retained and grouped rectangles.
groupThreshold (int): Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.
eps (double): Relative difference between sides of the rectangles to merge them into a group.
Load(string, string?)
Loads HOGDescriptor parameters and coefficients for the linear SVM classifier from a file.
public virtual bool Load(string fileName, string? objName = null)
Parameters
fileName (string): Path of the file to read.
objName (string?): The optional name of the node to read (if empty, the first top-level node will be used).
Returns
- bool
Save(string, string?)
Saves HOGDescriptor parameters and coefficients for the linear SVM classifier to a file.
public virtual void Save(string fileName, string? objName = null)
Parameters
fileName (string): Path of the file to write.
objName (string?): The optional name of the written node.
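A sketch of a save/load round trip using the Save and Load methods documented here; "hog.xml" is a placeholder path.

```csharp
using OpenCvSharp;

// Persist a configured detector...
using (var hog = new HOGDescriptor())
{
    hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
    hog.Save("hog.xml");
}

// ...and restore it later.
using var restored = new HOGDescriptor();
bool ok = restored.Load("hog.xml");
```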
SetSVMDetector(float[])
Sets coefficients for the linear SVM classifier.
public virtual void SetSVMDetector(float[] svmDetector)
Parameters
svmDetector (float[]): Coefficients for the linear SVM classifier.