Cross Validation

1. What is Cross Validation?

Cross validation is an essential technique for evaluating machine learning models: it assesses how well the results of a statistical analysis generalize to an independent data set. Assuming that the machine learning data takes the form shown in the image below, let's take a closer look at how and why cross validation is used in machine learning.

As shown in the image above, labeled data is composed of a train set and a test set. If you repeatedly check the model's performance and update its parameters against only the given test set, the model will perform well only on that particular test set. In other words, the model becomes overfitted to the test set and shows limited prediction/classification performance on real-world data. This is where cross validation comes in. Instead of relying on a single test set, cross validation uses the entire dataset to validate the model, and to reduce variability, the results from multiple validation rounds are combined to estimate the model's performance. Click AI also uses cross validation to address overfitting in machine learning.
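As a concrete illustration, here is a minimal sketch of cross validation with scikit-learn. The dataset and model are placeholder choices for the example, not part of Click AI's pipeline: the full dataset is split several ways, the model is validated on each split, and the scores are combined.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)  # placeholder model for the sketch

# Validate on 5 different splits of the full dataset instead of a
# single fixed test set, then combine the results.
scores = cross_val_score(model, X, y, cv=5)
print(scores)          # one score per validation split
print(scores.mean())   # combined estimate of the model's performance
```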

2. Types of Cross Validation 

 ✓ K-fold Cross Validation
K-fold cross validation is the most commonly used method. It is a resampling procedure for evaluating machine learning models on a limited amount of data. Here, the parameter K refers to the number of groups the given dataset is divided into. More specifically, the data is split into K groups and a different group is held out as the test fold in each iteration, forming a total of K data fold sets. In the example picture below, each set consists of 4 training folds and 1 test fold, and there are 5 data fold sets in total. A total of K iterations are therefore required to train and validate the model, and the final validation result is generally calculated by averaging the validation results from each data fold set.
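A minimal sketch of K-fold splitting with scikit-learn's KFold class (the toy arrays are just for illustration):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features each

# K = 5: the data is divided into 5 groups, and each iteration holds out
# a different group as the test fold (4 training folds + 1 test fold).
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for i, (train_idx, test_idx) in enumerate(kf.split(X)):
    print(f"fold {i}: train={train_idx}, test={test_idx}")
```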

 
 ✓ Stratified K-fold Cross Validation
Stratified K-fold cross validation is a variant of K-fold cross validation that is often used for classification models. It is useful when the distribution of labels is unbalanced across classes. For example, if the data folds are built simply in index order while the label distribution is unbalanced, some folds may not represent every class, which distorts the validation results. Stratified K-fold cross validation instead looks at the label distribution across the entire data set and arranges each train and test fold to reflect that distribution. For this reason, the label values must be passed to the function that builds the data fold sets.
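For illustration, here is a minimal sketch with scikit-learn's StratifiedKFold on a deliberately unbalanced toy label array. Note that the labels y are passed to split(), which is the requirement described above:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])  # unbalanced: 80% class 0, 20% class 1

# The label array is required when building the folds, so that every
# train/test fold mirrors the 80/20 distribution of the full dataset.
skf = StratifiedKFold(n_splits=2)
for i, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    print(f"fold {i}: test labels = {y[test_idx]}")
```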


 ✓ Leave-P-Out (LPO) Cross Validation
As the name suggests, leave-p-out cross validation selects p samples from the entire dataset and uses them for model validation. If you have n data samples available, n - p samples are used to train the model and the remaining p samples serve as the test set. The number of possible test sets c, which is also the number of iterations required to train and validate, is therefore the number of ways to choose p samples out of n:

$$c = \binom{n}{p} = \frac{n!}{p!\,(n-p)!}$$

[n: number of data samples, p: number of held-out samples, c: number of possible test sets]

As with the K-fold cross validation method, the final validation result is typically averaged over the data fold sets. Because the model is trained and validated for every possible combination, a large value of p quickly inflates the amount of computation. Hence, LPO cross validation can be a computationally time-intensive method due to the large number of possible data fold sets.
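A minimal sketch with scikit-learn's LeavePOut on a tiny toy array, which makes the combinatorial growth easy to see:

```python
import numpy as np
from sklearn.model_selection import LeavePOut

X = np.arange(8).reshape(4, 2)  # n = 4 toy samples

# p = 2: every combination of 2 samples serves as the test set once,
# so there are C(4, 2) = 6 train/validate iterations.
lpo = LeavePOut(p=2)
print(lpo.get_n_splits(X))  # 6
for train_idx, test_idx in lpo.split(X):
    print(f"train={train_idx}, test={test_idx}")
```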

 ✓ Leave-One-Out (LOO) Cross Validation
Leave-one-out cross validation is the special case of LPO cross validation where p = 1. It is often a more practical method than general LPO cross validation because it produces reliable results while requiring far less computation. Since each test set contains only a single sample, every iteration can use all of the remaining n - 1 samples for training, so the model is trained on nearly the entire dataset in each round. With p = 1, the number of iterations reduces to:

$$c = \binom{n}{1} = n$$

[n: number of data samples, c: number of iterations]
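A minimal sketch with scikit-learn's LeaveOneOut, using the same toy array as in the LPO example (so n = 4 iterations):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

X = np.arange(8).reshape(4, 2)  # n = 4 toy samples

# p = 1: each sample is held out exactly once, giving n = 4 iterations;
# every iteration trains on the remaining n - 1 = 3 samples.
loo = LeaveOneOut()
for train_idx, test_idx in loo.split(X):
    print(f"train={train_idx}, test={test_idx}")
```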
