Each fold experiment tests the accuracy of the model by taking a stratified sample of the data, holding out a portion of size 1/N for testing and using the rest for training. The process is repeated N times, so every record is used for testing exactly once (the default is N = 3).
A fold evaluation does not depend on the data types in the data set, so in addition to evaluating grid experiments, it can be used to evaluate any model at any time. For a classification model, for example, it appears in the Model Actions menu.
The only required setting for a fold evaluation is the number of folds (N). A threshold is also required: during a grid exhaustive search or auto tune, the threshold comes from a threshold search, while a fold evaluation on an existing model uses that model's threshold by default, which you can override. An optional seed controls the random fold selection; use the same seed to reproduce the same folds and results.
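The behavior described above can be sketched with scikit-learn's stratified K-fold splitter. This is an analogy only, assuming a generic classifier and toy data; `StratifiedKFold`, `LogisticRegression`, and the 0.5 default threshold here are illustrative assumptions, not the product's actual internals.

```python
# Sketch of a stratified N-fold evaluation (illustrative, not the
# product's implementation). The seed fixes the random fold selection,
# so rerunning with the same seed reproduces the same folds.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))              # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # toy binary labels

N = 3            # number of folds (the default mentioned above)
seed = 42        # same seed -> same folds -> same results
threshold = 0.5  # assumed default; overridable, as described above

skf = StratifiedKFold(n_splits=N, shuffle=True, random_state=seed)
scores = []
for train_idx, test_idx in skf.split(X, y):
    # Train on (N-1)/N of the data, test on the held-out 1/N.
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    probs = model.predict_proba(X[test_idx])[:, 1]
    preds = (probs >= threshold).astype(int)
    scores.append((preds == y[test_idx]).mean())

print(f"mean accuracy over {N} folds: {np.mean(scores):.3f}")
```

Because the split is stratified, each fold preserves the class proportions of the full data set, which keeps the per-fold accuracy estimates comparable.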