Precision and Recall

1. What are Precision and Recall?

Precision and recall are indicators used to evaluate the performance of artificial intelligence models, particularly on imbalanced datasets. They also serve as the basis for other important indicators such as the F1 score, the precision-recall curve, and AUC. Both are defined from the four possible outcomes of a binary prediction, listed below (a short counting example follows the list).

 - True Positive (TP): predicted positive, and the actual class is positive.
 - True Negative (TN): predicted negative, and the actual class is negative.
 - False Positive (FP): predicted positive, but the actual class is negative.
 - False Negative (FN): predicted negative, but the actual class is positive.
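
As a minimal sketch of how these four counts are tallied in practice, the Python snippet below counts them from two made-up label lists; the values in y_true and y_pred are purely illustrative.

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual classes (1 = positive, 0 = negative)
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # model predictions

# Count each outcome by comparing prediction and actual class pairwise.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted positive, actually positive
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # predicted negative, actually negative
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted positive, actually negative
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted negative, actually positive

print(tp, tn, fp, fn)  # 3 3 1 1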

2. Details about Precision and Recall

 - Precision
Precision is the ratio of correctly predicted positive observations to the total predicted positive observations. It is also known as the positive predictive value (PPV) and can be represented by the equation shown below.
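Precision = TP / (TP + FP)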
For example, precision would be the ratio of cases correctly predicted as 'dog' to all cases predicted as 'dog', regardless of whether those predictions are correct. The higher this value, the better the model's performance.

 - Recall
Recall is the ratio of correctly predicted positive observations to all observations that actually belong to the positive class. It is also referred to as sensitivity, true positive rate (TPR), or hit rate, and can be represented by the equation shown below.
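Recall = TP / (TP + FN)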
For example, recall would be the ratio of cases correctly predicted as 'dog' to all cases that are actually a 'dog', regardless of whether the model detected them. The higher this value, the better the model's performance.

Precision and recall are similar in that both measure the proportion of correctly predicted positive cases (TP), but they differ in what they divide by: precision is calculated with respect to the model's predictions, while recall is calculated with respect to the actual answers. Because of this, precision and recall can be used complementarily to evaluate the performance of artificial intelligence.
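
The snippet below is a minimal sketch of both calculations on the same made-up labels as in the sketch above, with scikit-learn's precision_score and recall_score used only as a cross-check.

from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual classes (1 = positive, 0 = negative)
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # model predictions

tp, fp, fn = 3, 1, 1                # counts obtained from the definitions above

precision = tp / (tp + fp)          # 3 / 4 = 0.75: share of predicted positives that are correct
recall = tp / (tp + fn)             # 3 / 4 = 0.75: share of actual positives that were found

print(precision, precision_score(y_true, y_pred))  # 0.75 0.75
print(recall, recall_score(y_true, y_pred))        # 0.75 0.75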

3. Precision-Recall Curve (PRC)

The precision-recall curve (PRC) is used to evaluate the performance of an artificial intelligence model as the confidence threshold applied to its detection results is varied. The confidence level refers to how confident the algorithm is about each result the artificial intelligence detects, and the user can set a threshold on that confidence. At each threshold, the values of precision and recall change, and expressing this change as a curve gives the precision-recall curve.
A simple example: an artificial intelligence detects a 'cat' in 5 images from a dataset that actually contains 8 images of a 'cat'. If each of the 5 detected cases has a confidence level as shown in the table above, the detections can be sorted by confidence.
If the confidence threshold is set to a very high value of 95%, only B among the sorted detection results satisfies this condition. The precision at this point is therefore 1, since the one case predicted as a 'cat' is actually a 'cat'.

However, only one of the eight 'cat' images was successfully detected by the artificial intelligence, so the recall is 0.125. Repeating this calculation as the threshold is lowered toward 0%, and computing precision and recall at each point, completes the table above.
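
Written out with the formulas from section 2, the values at this 95% threshold are:

Precision = TP / (TP + FP) = 1 / (1 + 0) = 1
Recall = TP / (TP + FN) = 1 / (1 + 7) = 0.125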

Based on the table above, you can draw a precision-recall curve with recall on the x-axis and precision on the y-axis. The curve lets you visualize how the precision value changes as the recall value changes.
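
The sketch below shows how such a curve can be computed and plotted in Python by sweeping the threshold over detections sorted by confidence. The confidence values and correctness flags in the list are illustrative assumptions; only the total of 8 actual 'cat' images comes from the example above, and the first, highest-confidence point reproduces the precision of 1 and recall of 0.125 calculated there.

import matplotlib.pyplot as plt

total_cats = 8                          # number of images that actually contain a 'cat'
# (confidence, is actually a 'cat') for each detection, sorted by confidence; values are made up.
detections = [(0.96, True), (0.90, True), (0.80, False), (0.65, True), (0.40, False)]

precisions, recalls = [], []
tp = fp = 0
for confidence, is_cat in detections:   # lowering the threshold admits one more detection each step
    if is_cat:
        tp += 1
    else:
        fp += 1
    precisions.append(tp / (tp + fp))   # precision among the detections accepted so far
    recalls.append(tp / total_cats)     # recall over all 8 actual 'cat' images

plt.plot(recalls, precisions, marker="o")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-recall curve")
plt.show()

Each point corresponds to one threshold; as the threshold is lowered, recall can only stay the same or increase, while precision drops whenever a newly admitted detection is a false positive.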

4. Precision and Recall with CLICK AI

The CLICK AI platform provides precision and recall information in each model's 'See details' tab.
