Training constructor
Namespace: OpenCvSharp.CPlusPlus
Assembly: OpenCvSharp.CPlusPlus (in OpenCvSharp.CPlusPlus.dll) Version: 1.0.0.0 (1.0.0.0)
Syntax
public CvDTreeParams(
    int maxDepth,
    int minSampleCount,
    float regressionAccuracy,
    bool useSurrogates,
    int maxCategories,
    int cvFolds,
    bool use1seRule,
    bool truncatePrunedTree,
    float[] priors
)
Parameters
- maxDepth
- Type: System.Int32
This parameter specifies the maximum possible depth of the tree. That is, the training algorithm attempts to split a node while its depth is less than maxDepth. The actual depth may be smaller if other termination criteria are met (see the outline of the training procedure at the beginning of the section) and/or if the tree is pruned.
- minSampleCount
- Type: System.Int32
A node is not split if the number of samples directed to the node is less than the parameter value.
- regressionAccuracy
- Type: System.Single
Another stopping criterion, used only for regression trees. As soon as the estimated node value differs from the responses of the node's training samples by less than the parameter value, the node is not split further.
- useSurrogates
- Type: System.Boolean
If true, surrogate splits are built. Surrogate splits are needed to handle missing measurements and for variable importance estimation.
- maxCategories
- Type: System.Int32
If a discrete variable on which the training procedure tries to make a split takes more than maxCategories values, finding the precise best subset may take a very long time (the algorithm is exponential in the number of categories). Instead, many decision tree engines (including ML) look for a sub-optimal split in this case by clustering all the samples into maxCategories clusters (i.e. some categories are merged together). Note that this technique is used only in N(>2)-class classification problems. For regression and 2-class classification, the optimal split can be found efficiently without clustering, so the parameter is not used in these cases.
- cvFolds
- Type: System.Int32
If this parameter is greater than 1, the tree is pruned using cvFolds-fold cross-validation.
- use1seRule
- Type: System.Boolean
If true, the pruning procedure truncates the tree a bit more, producing a more compact tree that is more resistant to noise in the training data but slightly less accurate.
- truncatePrunedTree
- Type: System.Boolean
If true, cut-off nodes (those with Tn ≤ CvDTree::pruned_tree_idx) are physically removed from the tree. Otherwise they are kept, and by decreasing CvDTree::pruned_tree_idx (e.g. setting it to -1) it is still possible to get results from the original, un-pruned (or less aggressively pruned) tree.
- priors
- Type: System.Single[]
The array of a priori class probabilities, sorted by the class label value. The parameter can be used to tune the decision tree preferences toward a certain class. For example, if users want to detect some rare anomaly occurrence, the training base will likely contain many more normal cases than anomalies, so very good classification performance will be achieved just by considering every case as normal. To avoid this, the priors can be specified, with the anomaly probability artificially increased (up to 0.5 or even greater), so that the weight of misclassified anomalies becomes much larger and the tree is adjusted properly.
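Examples
The following is a minimal sketch of how this constructor might be used for a two-class problem with a rare positive class, using the priors array to weight that class more heavily. The parameter values and the mention of CvDTree.Train in the comments are illustrative assumptions, not part of this constructor's contract; consult the CvDTree documentation for the actual training overloads.

using OpenCvSharp.CPlusPlus;

class DTreeParamsExample
{
    static void Main()
    {
        // Priors are sorted by class label value; here the second (rare) class
        // is weighted ten times more heavily than the first.
        float[] priors = { 1.0f, 10.0f };

        CvDTreeParams p = new CvDTreeParams(
            maxDepth: 8,               // stop splitting nodes deeper than this
            minSampleCount: 10,        // do not split nodes with fewer samples
            regressionAccuracy: 0.01f, // stopping criterion for regression trees only
            useSurrogates: false,      // no surrogate splits (no missing measurements expected)
            maxCategories: 10,         // cluster categories only for N(>2)-class splits
            cvFolds: 10,               // prune using 10-fold cross-validation
            use1seRule: true,          // prune a bit more aggressively
            truncatePrunedTree: true,  // physically remove cut-off nodes
            priors: priors);

        // The resulting parameter object is then passed to CvDTree.Train(...)
        // together with the training data and responses.
    }
}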
See Also