Binary classification measure defined with \(P\) as precision() and \(R\) as recall() as $$ (1 + \beta^2) \frac{P \cdot R}{(\beta^2 P) + R}. $$ It measures the effectiveness of retrieval with respect to a user who attaches \(\beta\) times as much importance to recall as to precision. For \(\beta = 1\), this measure is called the F1 score.
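The formula above can be evaluated directly from precision and recall. A minimal sketch, assuming hypothetical confusion counts (`tp`, `fp`, `fn` are made-up values for illustration, not part of this package's API):

```r
# Hypothetical confusion counts for illustration only
tp = 6; fp = 2; fn = 3

P = tp / (tp + fp)   # precision = 0.75
R = tp / (tp + fn)   # recall = 2/3

beta = 2             # attach twice as much importance to recall
f2 = (1 + beta^2) * P * R / (beta^2 * P + R)

f1 = 2 * P * R / (P + R)  # beta = 1 reduces to the harmonic mean of P and R
```

With `beta = 2`, recall influences the score more strongly than precision; with `beta = 1` the measure is symmetric in the two.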

fbeta(truth, response, positive, beta = 1, na_value = NaN, ...)

Arguments

truth

:: factor()
True (observed) labels. Must have exactly the same two levels and the same length as response.

response

:: factor()
Predicted response labels. Must have exactly the same two levels and the same length as truth.

positive

:: character(1)
Name of the positive class.

beta

:: numeric(1)
Parameter to give either precision or recall more weight. Default is 1, resulting in balanced weights.

na_value

:: numeric(1)
Value that should be returned if the measure is not defined for the input (as described in the note). Default is NaN.

...

:: any
Additional arguments. Currently ignored.

Value

Performance value as numeric(1).

Note

This measure is undefined if

  • TP = 0

  • precision or recall is undefined, i.e. TP + FP = 0 or TP + FN = 0.
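A short sketch of such a degenerate input, assuming the mlr3measures package is attached: if the positive class is never predicted, then TP = 0 and TP + FP = 0, so the measure is undefined and `na_value` is returned instead.

```r
library(mlr3measures)

lvls = c("a", "b")
truth = factor(c("a", "a", "b"), levels = lvls)
# Positive class "a" is never predicted: TP = 0 and TP + FP = 0
response = factor(c("b", "b", "b"), levels = lvls)

fbeta(truth, response, positive = "a", na_value = NA_real_)
```

Passing `na_value = NA_real_` here swaps the default NaN for NA, which can be more convenient for downstream aggregation.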

Meta Information

  • Type: "binary"

  • Range: \([0, 1]\)

  • Minimize: FALSE

  • Required prediction: response

References

Sasaki Y, others (2007). “The truth of the F-measure.” Teach Tutor mater, 1(5), 1--5. https://www.cs.odu.edu/~mukka/cs795sum10dm/Lecturenotes/Day3/F-measure-YS-26Oct07.pdf.

Rijsbergen CJV (1979). Information Retrieval, 2nd edition. Butterworth-Heinemann, Newton, MA, USA. ISBN 0-408-70929-4.

See also

Other Binary Classification Measures: auc(), bbrier(), dor(), fdr(), fnr(), fn(), fomr(), fpr(), fp(), mcc(), npv(), ppv(), tnr(), tn(), tpr(), tp()

Examples

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
fbeta(truth, response, positive = "a")
#> [1] 0.5