F-score

The F-score, also known as the F-measure (and, in its most common form, the F1-score), is a measure of a test's accuracy. It combines precision (the fraction of selected items that are relevant) and recall (the fraction of relevant items that are selected) into a single value, and it is particularly useful when the class distribution is skewed.

Formally, the F-score is the harmonic mean of precision and recall. It ranges from 0 to 1, where 1 is the best possible value and 0 is the worst. The general formula for the F-score is:

F = 2 * (precision * recall) / (precision + recall)
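As an illustration, here is a minimal Python sketch of this formula; the function name and the example numbers are hypothetical and chosen only for demonstration:

```python
def f1_score_from_pr(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1-score).

    Returns 0.0 when both inputs are 0 to avoid division by zero.
    """
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a model with precision 0.75 and recall 0.60 (illustrative numbers)
print(f1_score_from_pr(0.75, 0.60))  # ~0.667
```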

Precision is defined as the number of true positives (TP) divided by the total number of predicted positives (TP + FP), where TP represents correctly predicted positive cases and FP represents incorrectly predicted positive cases (false positives).

Recall is defined as the number of true positives (TP) divided by the total number of actual positives (TP + FN), where FN represents incorrectly predicted negative cases (false negatives).
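A small sketch of these two definitions, using illustrative confusion-matrix counts (the counts are hypothetical and happen to produce the precision and recall used in the example above):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # TP / predicted positives
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # TP / actual positives
    return precision, recall

# Illustrative counts: 30 true positives, 10 false positives, 20 false negatives
p, r = precision_recall(tp=30, fp=10, fn=20)
print(p, r)  # 0.75, 0.6
```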

Variations of the F-score exist that weight precision and recall differently. The most common is the Fβ-score, which uses a parameter β to control the weight assigned to recall versus precision. When β = 1, the Fβ-score reduces to the standard F1-score. A higher β value favors recall, while a lower β value favors precision. The formula for Fβ is:

Fβ = (1 + β²) * (precision * recall) / (β² * precision + recall)
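The effect of β can be seen by plugging the same precision and recall into the formula with different β values; a short sketch (the numbers continue the hypothetical example above):

```python
def fbeta(precision: float, recall: float, beta: float) -> float:
    """Weighted harmonic mean of precision and recall (the Fβ-score)."""
    denom = beta**2 * precision + recall
    if denom == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / denom

p, r = 0.75, 0.60
print(fbeta(p, r, beta=1.0))  # F1   ~0.667 (equals the standard F1-score)
print(fbeta(p, r, beta=2.0))  # F2   ~0.625 (weights recall more heavily)
print(fbeta(p, r, beta=0.5))  # F0.5 ~0.714 (weights precision more heavily)
```

Because recall (0.60) is lower than precision (0.75) in this example, weighting recall more heavily (β = 2) lowers the score, while weighting precision more heavily (β = 0.5) raises it.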

The F-score is commonly used in information retrieval, machine learning, and natural language processing to evaluate the performance of classification models, especially in scenarios where false positives and false negatives carry different costs or consequences. It provides a balanced perspective, considering both the model's ability to avoid incorrect positive predictions and its ability to identify all relevant positive instances.
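In practice these metrics are usually computed with a library rather than by hand. The following is a minimal sketch using scikit-learn (an assumed dependency, not mentioned above) with made-up labels:

```python
from sklearn.metrics import precision_score, recall_score, f1_score, fbeta_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # ground-truth labels (illustrative)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # model predictions (illustrative)

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("F0.5:     ", fbeta_score(y_true, y_pred, beta=0.5))  # favors precision
```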