F-score
Statistical measure of a test's accuracy

The F-score or F-measure is a key metric in binary classification and information retrieval that combines precision—the ratio of true positives to all predicted positives—and recall—the ratio of true positives to all actual positives. Precision is also called positive predictive value, while recall is known as sensitivity. The most common variant, the F1 score, is the harmonic mean of precision and recall, balancing both equally. More flexible Fβ scores allow weighting precision or recall differently. An F-score ranges from 0, indicating no predictive power, to 1.0, representing perfect precision and recall.


Etymology

The name F-measure is believed to derive from a different F function in Van Rijsbergen's book, when the measure was introduced at the Fourth Message Understanding Conference (MUC-4, 1992).1

Definition

The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:2

F_1 = \frac{2}{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}} = 2\cdot\frac{\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision} + \mathrm{precision}^{0}\,\mathrm{recall}} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}

With precision = TP / (TP + FP) and recall = TP / (TP + FN), it follows that the numerator of F1 is the sum of their numerators and the denominator of F1 is the sum of their denominators.

To see it as a harmonic mean, note that F_1^{-1} = \tfrac{1}{2}\left(\mathrm{recall}^{-1} + \mathrm{precision}^{-1}\right).
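
As a concrete illustration, the following Python sketch (with made-up confusion-matrix counts, purely for illustration) computes precision, recall, and F1, and checks that the harmonic-mean form and the count-based form agree.

    # Illustrative F1 computation from hypothetical confusion-matrix counts.
    tp, fp, fn = 60, 20, 40            # hypothetical counts

    precision = tp / (tp + fp)         # 0.75
    recall = tp / (tp + fn)            # 0.60

    # Harmonic mean of precision and recall ...
    f1_harmonic = 2 / (1 / recall + 1 / precision)
    # ... equals the count-based form 2*TP / (2*TP + FP + FN).
    f1_counts = 2 * tp / (2 * tp + fp + fn)

    print(precision, recall, f1_harmonic, f1_counts)   # both F1 values are 2/3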

Fβ score

A more general F-score, Fβ, uses a positive real factor β, chosen such that recall is considered β times as important as precision:

F_\beta = \frac{\beta^2 + 1}{\beta^2\cdot\mathrm{recall}^{-1} + \mathrm{precision}^{-1}} = \frac{(1 + \beta^2)\cdot\mathrm{precision}\cdot\mathrm{recall}}{\beta^2\cdot\mathrm{precision} + \mathrm{recall}}

To see that it is a weighted harmonic mean, note that F_\beta^{-1} = \frac{1}{\beta + \beta^{-1}}\left(\beta\cdot\mathrm{recall}^{-1} + \beta^{-1}\cdot\mathrm{precision}^{-1}\right).

In terms of type I and type II errors this becomes:

F_\beta = \frac{(1 + \beta^2)\cdot\mathrm{TP}}{(1 + \beta^2)\cdot\mathrm{TP} + \beta^2\cdot\mathrm{FN} + \mathrm{FP}}

Two commonly used values for β are 2, which weighs recall higher than precision, and 1/2, which weighs recall lower than precision.

The F-measure was derived so that Fβ "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision".3 It is based on Van Rijsbergen's effectiveness measure

E = 1 - \left(\frac{\alpha}{p} + \frac{1 - \alpha}{r}\right)^{-1}

Their relationship is Fβ = 1 − E, where α = 1/(1 + β²).
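
The sketch below (illustrative only, with hypothetical precision and recall values) computes Fβ for several β and verifies numerically that Fβ = 1 − E when α = 1/(1 + β²).

    # Illustrative sketch of the F-beta score and Van Rijsbergen's E measure.
    def f_beta(precision, recall, beta):
        """F-beta: recall is weighted beta times as much as precision."""
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    def e_measure(precision, recall, alpha):
        """Van Rijsbergen's effectiveness measure E."""
        return 1 - 1 / (alpha / precision + (1 - alpha) / recall)

    p, r = 0.75, 0.60                  # hypothetical precision and recall
    for beta in (1.0, 2.0, 0.5):
        alpha = 1 / (1 + beta ** 2)
        print(beta, f_beta(p, r, beta), 1 - e_measure(p, r, alpha))  # last two columns match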

Diagnostic testing

This is related to the field of binary classification where recall is often termed "sensitivity".

Confusion matrix (sources: 4 5 6 7 8 9 10 11). Total population = P + N, with P12 actual positive cases and N15 actual negative cases:

  • Actual positive, predicted positive: true positive (TP), hit13
  • Actual positive, predicted negative: false negative (FN), miss, underestimation
  • Actual negative, predicted positive: false positive (FP), false alarm, overestimation
  • Actual negative, predicted negative: true negative (TN), correct rejection16

Rates and derived measures:

  • True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power = TP/P = 1 − FNR
  • False negative rate (FNR), miss rate, type II error14 = FN/P = 1 − TPR
  • False positive rate (FPR), probability of false alarm, fall-out, type I error17 = FP/N = 1 − TNR
  • True negative rate (TNR), specificity (SPC), selectivity = TN/N = 1 − FPR
  • Prevalence = P/(P + N)
  • Positive predictive value (PPV), precision = TP/(TP + FP) = 1 − FDR
  • False discovery rate (FDR) = FP/(TP + FP) = 1 − PPV
  • Negative predictive value (NPV) = TN/(TN + FN) = 1 − FOR
  • False omission rate (FOR) = FN/(TN + FN) = 1 − NPV
  • Accuracy (ACC) = (TP + TN)/(P + N)
  • Balanced accuracy (BA) = (TPR + TNR)/2
  • F1 score = 2·PPV·TPR/(PPV + TPR) = 2·TP/(2·TP + FP + FN)
  • Informedness, bookmaker informedness (BM) = TPR + TNR − 1
  • Markedness (MK), deltaP (Δp) = PPV + NPV − 1
  • Prevalence threshold (PT) = (√(TPR × FPR) − FPR)/(TPR − FPR)
  • Positive likelihood ratio (LR+) = TPR/FPR
  • Negative likelihood ratio (LR−) = FNR/TNR
  • Diagnostic odds ratio (DOR) = LR+/LR−
  • Fowlkes–Mallows index (FM) = √(PPV × TPR)
  • phi or Matthews correlation coefficient (MCC) = √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR)
  • Threat score (TS), critical success index (CSI), Jaccard index = TP/(TP + FN + FP)
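
To make these relationships concrete, here is a small Python sketch (with hypothetical counts) that derives a few of the listed quantities, including the F1 score, from a 2×2 confusion matrix.

    # Illustrative sketch: a few of the derived metrics above, computed from
    # a hypothetical 2x2 confusion matrix.
    from math import sqrt

    tp, fn, fp, tn = 60, 40, 20, 80    # hypothetical counts
    p, n = tp + fn, fp + tn            # actual positives and negatives

    tpr = tp / p                       # recall / sensitivity
    tnr = tn / n                       # specificity
    ppv = tp / (tp + fp)               # precision
    npv = tn / (tn + fn)

    accuracy = (tp + tn) / (p + n)
    balanced_accuracy = (tpr + tnr) / 2
    f1 = 2 * ppv * tpr / (ppv + tpr)   # equals 2*tp / (2*tp + fp + fn)
    mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

    print(f1, accuracy, balanced_accuracy, mcc)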

Dependence of the F-score on class imbalance

The precision-recall curve, and thus the Fβ score, explicitly depends on the ratio r of positive to negative test cases.18 This means that comparison of the F-score across different problems with differing class ratios is problematic. One way to address this issue (see, e.g., Siblini et al., 202019) is to use a standard class ratio r0 when making such comparisons.
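
A minimal sketch of this effect, assuming a classifier whose per-class behaviour (TPR and FPR, hypothetical values) is held fixed while the ratio of positives to negatives changes: precision, and hence F1, shifts with the class ratio even though the classifier itself has not changed.

    # Illustrative sketch: F1 changes with the positive/negative ratio r even
    # when the classifier's TPR and FPR are held fixed (hypothetical values).
    tpr, fpr = 0.8, 0.1

    def f1_at_ratio(r, n_neg=10_000):
        """F1 when there are r positives per negative, for fixed TPR and FPR."""
        n_pos = r * n_neg
        tp = tpr * n_pos
        fp = fpr * n_neg
        fn = (1 - tpr) * n_pos
        return 2 * tp / (2 * tp + fp + fn)

    for r in (1.0, 0.1, 0.01):         # balanced, 1:10 and 1:100 class ratios
        print(r, round(f1_at_ratio(r), 3))   # 0.842, 0.571, 0.136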

Applications

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.20 It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class.

Earlier works focused primarily on the F1 score, but with the proliferation of large-scale search engines, performance goals shifted to place more emphasis on either precision or recall,21 and so Fβ is seen in wide application.

The F-score is also used in machine learning.22 However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier.23

The F-score has been widely used in the natural language processing literature,24 such as in the evaluation of named entity recognition and word segmentation.

Properties

The F1 score is the Dice coefficient of the set of retrieved items and the set of relevant items.25

  • The F1-score of a classifier which always predicts the positive class converges to 1 as the probability of the positive class increases.
  • The F1-score of a classifier which always predicts the positive class is equal to 2p / (1 + p), where p is the proportion of the positive class, since the recall is 1 and the precision is equal to p;26 see the sketch after this list.
  • If the scoring model is uninformative (cannot distinguish between the positive and negative class), then the optimal threshold is 0, so that the positive class is always predicted.
  • The F1 score is concave in the true positive rate.27
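
As a quick check of the always-positive baseline noted above, the following Python sketch (with hypothetical class proportions) compares the directly computed F1 of an always-positive classifier with the closed form 2p / (1 + p).

    # Illustrative sketch: F1 of a classifier that always predicts "positive".
    def f1_always_positive(p, n=10_000):
        """Directly computed F1 when every example is predicted positive."""
        tp = p * n                     # recall is 1: every positive is found
        fp = (1 - p) * n               # every negative becomes a false positive
        fn = 0
        return 2 * tp / (2 * tp + fp + fn)

    for p in (0.1, 0.5, 0.9, 0.99):    # hypothetical positive-class proportions
        print(p, round(f1_always_positive(p), 4), round(2 * p / (1 + p), 4))
        # the two columns agree, and both approach 1 as p approaches 1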

Criticism

David Hand and others criticize the widespread use of the F1 score because it gives equal importance to precision and recall. In practice, different types of misclassification incur different costs; in other words, the relative importance of precision and recall is an aspect of the problem.28

According to Davide Chicco and Giuseppe Jurman, the F1 score is less truthful and informative than the Matthews correlation coefficient (MCC) in binary classification evaluation.29

David M. W. Powers has pointed out that F1 ignores the true negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability: the classifier predicting the true class and the true class predicting the classifier prediction. He proposes separate multiclass measures, Informedness and Markedness, for the two directions, noting that their geometric mean is correlation.30

Another criticism of F1 is its lack of symmetry: its value may change when the dataset labeling is flipped, that is, when the "positive" samples are relabeled "negative" and vice versa. This criticism is addressed by the P4 metric, which is sometimes described as a symmetrical extension of F1.31
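
The asymmetry can be seen directly: in the sketch below (hypothetical counts, for illustration only), F1 computed with the original labeling differs from F1 computed after swapping which class is called "positive", whereas a symmetric metric would not change.

    # Illustrative sketch: F1 is not symmetric under relabeling of the classes.
    def f1_from_counts(tp, fp, fn):
        return 2 * tp / (2 * tp + fp + fn)

    tp, fn, fp, tn = 60, 40, 20, 80    # hypothetical confusion-matrix counts

    f1_original = f1_from_counts(tp, fp, fn)
    # After swapping the class labels, TN plays the role of TP and FP/FN swap.
    f1_flipped = f1_from_counts(tn, fn, fp)

    print(round(f1_original, 3), round(f1_flipped, 3))   # 0.667 vs 0.727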

Finally, Ferrer32 and Dyrland et al.33 argue that the expected cost (or its counterpart, the expected utility) is the only principled metric for evaluation of classification decisions, having various advantages over the F-score and the MCC. Both works show that the F-score can result in wrong conclusions about the absolute and relative quality of systems.

Difference from Fowlkes–Mallows index

While the F-measure is the harmonic mean of recall and precision, the Fowlkes–Mallows index is their geometric mean.34
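
For the same hypothetical precision and recall, the two means can be compared directly; the geometric mean (Fowlkes–Mallows) is never smaller than the harmonic mean (F1).

    # Illustrative sketch: F1 (harmonic mean) vs Fowlkes-Mallows (geometric mean)
    # for the same hypothetical precision and recall.
    from math import sqrt

    precision, recall = 0.75, 0.60
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean, ~0.6667
    fm = sqrt(precision * recall)                        # geometric mean, ~0.6708
    print(f1, fm)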

Extension to multi-class classification

The F-score is also used for evaluating classification problems with more than two classes (multiclass classification). A common method is to average the F-score over the classes, aiming at a balanced measurement of performance.35

Macro F1

Macro F1 is a macro-averaged F1 score aiming at a balanced performance measurement. Two different averaging formulas have been used to calculate it: the F1 score of the arithmetic means of class-wise precision and recall, or the arithmetic mean of class-wise F1 scores; the latter exhibits more desirable properties.36
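
A small Python sketch of the two averaging schemes, assuming hypothetical per-class counts: one averages precision and recall first and then takes their F1; the other computes per-class F1 scores and averages them. The two results generally differ.

    # Illustrative sketch: two ways of computing a "macro F1" over classes.
    per_class = {                      # hypothetical per-class (TP, FP, FN) counts
        "A": (50, 10, 5),
        "B": (30, 40, 20),
        "C": (5, 2, 60),
    }

    def f1(p, r):
        return 2 * p * r / (p + r) if p + r > 0 else 0.0

    precisions, recalls, f1s = [], [], []
    for tp, fp, fn in per_class.values():
        p = tp / (tp + fp)
        r = tp / (tp + fn)
        precisions.append(p)
        recalls.append(r)
        f1s.append(f1(p, r))

    # Scheme 1: F1 of the class-averaged precision and recall.
    macro_f1_of_means = f1(sum(precisions) / len(precisions), sum(recalls) / len(recalls))
    # Scheme 2: arithmetic mean of class-wise F1 scores (the form with nicer properties).
    macro_f1_mean_of_f1s = sum(f1s) / len(f1s)

    print(round(macro_f1_of_means, 3), round(macro_f1_mean_of_f1s, 3))   # they differ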

Micro F1

Micro F1 is the harmonic mean of micro precision and micro recall. In single-label multi-class classification, micro precision equals micro recall, thus micro F1 is equal to both. However, contrary to a common misconception, micro F1 does not generally equal accuracy, because accuracy takes true negatives into account while micro F1 does not.37
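
In the single-label multiclass setting, each misclassified example contributes one false positive (for the predicted class) and one false negative (for the true class), so the pooled FP and FN totals coincide and micro precision equals micro recall. The sketch below (hypothetical labels, for illustration only) shows this.

    # Illustrative sketch: micro-averaged precision, recall and F1 in single-label
    # multiclass classification (hypothetical true and predicted labels).
    y_true = ["A", "A", "B", "B", "C", "C", "C", "A"]
    y_pred = ["A", "B", "B", "C", "C", "C", "A", "A"]

    classes = sorted(set(y_true) | set(y_pred))
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = sum(p == c and t != c for c in classes for t, p in zip(y_true, y_pred))
    fn = sum(t == c and p != c for c in classes for t, p in zip(y_true, y_pred))

    micro_precision = tp / (tp + fp)
    micro_recall = tp / (tp + fn)
    micro_f1 = 2 * micro_precision * micro_recall / (micro_precision + micro_recall)
    print(micro_precision, micro_recall, micro_f1)   # all three are equal here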

References

  1. Sasaki, Y. (2007). "The truth of the F-measure" (PDF). Teach Tutor Mater. Vol. 1, no. 5. pp. 1–5. https://nicolasshu.com/assets/pdf/Sasaki_2007_The%20Truth%20of%20the%20F-measure.pdf

  2. Aziz Taha, Abdel (2015). "Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool". BMC Medical Imaging. 15 (29): 1–28. doi:10.1186/s12880-015-0068-x. PMC 4533825. PMID 26263899. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4533825

  3. Van Rijsbergen, C. J. (1979). Information Retrieval (2nd ed.). Butterworth-Heinemann. http://www.dcs.gla.ac.uk/Keith/Preface.html

  4. Fawcett, Tom (2006). "An Introduction to ROC Analysis" (PDF). Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010. S2CID 2027090. http://people.inf.elte.hu/kiss/11dwhdm/roc.pdf

  5. Provost, Foster; Tom Fawcett (2013-08-01). "Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking". O'Reilly Media, Inc. https://www.researchgate.net/publication/256438799

  6. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63. https://www.researchgate.net/publication/228529307

  7. Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of Machine Learning. Springer. doi:10.1007/978-0-387-30164-8. ISBN 978-0-387-30164-8.

  8. Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson, David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research". Collaboration for Australian Weather and Climate Research. World Meteorological Organisation. Retrieved 2019-07-17. https://www.cawcr.gov.au/projects/verification/

  9. Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6941312

  10. Chicco D, Toetsch N, Jurman G (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (13): 13. doi:10.1186/s13040-021-00244-z. PMC 7863449. PMID 33541410. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7863449

  11. Tharwat A. (August 2018). "Classification assessment methods". Applied Computing and Informatics. 17: 168–192. doi:10.1016/j.aci.2018.08.003. https://doi.org/10.1016%2Fj.aci.2018.08.003

  12. the number of real positive cases in the data

  13. A test result that correctly indicates the presence of a condition or characteristic

  14. Type II error: A test result which wrongly indicates that a particular condition or attribute is absent

  15. the number of real negative cases in the data

  16. A test result that correctly indicates the absence of a condition or characteristic

  17. Type I error: A test result which wrongly indicates that a particular condition or attribute is present

  18. Brabec, Jan; Komárek, Tomáš; Franc, Vojtěch; Machlica, Lukáš (2020). "On model evaluation under non-constant class imbalance". International Conference on Computational Science. Springer. pp. 74–87. arXiv:2001.05571. doi:10.1007/978-3-030-50423-6_6.

  19. Siblini, W.; Fréry, J.; He-Guelton, L.; Oblé, F.; Wang, Y. Q. (2020). "Master your metrics with calibration". In M. Berthold; A. Feelders; G. Krempl (eds.). Advances in Intelligent Data Analysis XVIII. Springer. pp. 457–469. arXiv:1909.02827. doi:10.1007/978-3-030-44584-3_36.

  20. Beitzel, Steven M. (2006). On Understanding and Classifying Web Queries (Ph.D. thesis). IIT. CiteSeerX 10.1.1.127.634.

  21. X. Li; Y.-Y. Wang; A. Acero (July 2008). Learning query intent from regularized click graphs. Proceedings of the 31st SIGIR Conference. p. 339. doi:10.1145/1390334.1390393. ISBN 9781605581644. S2CID 8482989.

  22. See, e.g., the evaluation of the [1]. https://dl.acm.org/citation.cfm?id=1119195

  23. Powers, David M. W. (2015). "What the F-measure doesn't measure". arXiv:1503.06410 [cs.IR].

  24. Derczynski, L. (2016). Complementarity, F-score, and NLP Evaluation. Proceedings of the International Conference on Language Resources and Evaluation. https://www.aclweb.org/anthology/L16-1040

  25. Manning, Christopher (April 1, 2009). An Introduction to Information Retrieval (PDF). Cambridge University Press. p. 200 (Exercise 8.7). Retrieved 18 July 2022. https://nlp.stanford.edu/IR-book/pdf/irbookonlinereading.pdf

  26. "What is the baseline of the F1 score for a binary classifier?". https://stats.stackexchange.com/q/390541

  27. Zachary Chase Lipton; Elkan, Charles; Narayanaswamy, Balakrishnan (2014). "Thresholding Classifiers to Maximize F1 Score". arXiv:1402.1892 [stat.ML].

  28. Hand, David (May 2018). "A note on using the F-measure for evaluating record linkage algorithms - Dimensions". app.dimensions.ai. 28 (3): 539–547. doi:10.1007/s11222-017-9746-6. hdl:10044/1/46235. S2CID 38782128. Retrieved 2018-12-08. https://app.dimensions.ai/details/publication/pub.1084928040

  29. Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (6): 6. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6941312

  30. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Score to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63. hdl:2328/27165.

  31. Sitarz, Mikolaj (2023). "Extending F1 Metric, Probabilistic Approach". Advances in Artificial Intelligence and Machine Learning. 3 (2): 1025–1038. arXiv:2210.11997. doi:10.54364/AAIML.2023.1161.

  32. Ferrer L (February 2025). "No Need for Ad-hoc Substitutes: The Expected Cost is a Principled All-purpose Classification Metric". Transactions on Machine Learning Research. https://openreview.net/pdf?id=5PPbvCExZs

  33. Dyrland K, Lundervold AS, Porta Mana P (May 2022). "Does the evaluation stand up to evaluation? A first-principle approach to the evaluation of classifiers". arXiv:2302.12006.

  34. Tharwat A (August 2018). "Classification assessment methods". Applied Computing and Informatics. 17: 168–192. doi:10.1016/j.aci.2018.08.003. https://doi.org/10.1016%2Fj.aci.2018.08.003

  35. Opitz, Juri (2024). "A Closer Look at Classification Evaluation Metrics and a Critical Reflection of Common Evaluation Practice". Transactions of the Association for Computational Linguistics. 12: 820–836. arXiv:2404.16958. doi:10.1162/tacl_a_00675. https://doi.org/10.1162/tacl_a_00675

  36. J. Opitz; S. Burst (2019). "Macro F1 and Macro F1". arXiv:1911.03347 [stat.ML]. /wiki/ArXiv_(identifier)

  37. Brownlee, Jason (7 September 2021). "4.3 – Micro F1 Score". Imbalanced Classification with Python: Better Metrics, Balance Skewed Classes, Cost-Sensitive Learning. Machine Learning Mastery. p. 40. ISBN 979-8468452240.