Binary classification measure defined with \(P\) as `precision()` and \(R\) as `recall()` as
$$
(1 + \beta^2) \frac{P \cdot R}{(\beta^2 P) + R}.
$$
It measures the effectiveness of retrieval with respect to a user who attaches \(\beta\) times
as much importance to recall as to precision.
For \(\beta = 1\), this measure is called the "F1" score.
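A minimal sketch of the formula above, transcribed directly as a standalone helper (this is an illustration of the definition, not the package implementation; `fbeta_score` is a hypothetical name):

```r
# Direct transcription of the F-beta formula: (1 + beta^2) * P * R / (beta^2 * P + R)
fbeta_score <- function(P, R, beta = 1) {
  (1 + beta^2) * P * R / (beta^2 * P + R)
}

fbeta_score(0.5, 0.75)            # beta = 1: harmonic mean of P and R -> 0.6
fbeta_score(0.5, 0.75, beta = 2)  # beta = 2: recall weighted more heavily
```

With `beta > 1` the score moves toward recall; with `beta < 1` it moves toward precision.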

fbeta(truth, response, positive, beta = 1, na_value = NaN, ...)

## Arguments

`truth` :: `factor()`
True (observed) labels.
Must have exactly the same two levels and the same length as `response`.

`response` :: `factor()`
Predicted response labels.
Must have exactly the same two levels and the same length as `truth`.

`positive` :: `character(1)`
Name of the positive class.

`beta` :: `numeric(1)`
Parameter to give either precision or recall more weight.
Default is 1, resulting in balanced weights.

`na_value` :: `numeric(1)`
Value that should be returned if the measure is not defined for the input
(as described in the note). Default is `NaN`.

`...` :: `any`
Additional arguments. Currently ignored.

## Value

Performance value as `numeric(1)`.

## Note

This measure is undefined if precision or recall is undefined, i.e. if TP + FP = 0 or TP + FN = 0.

## References

Sasaki Y, others (2007).
“The truth of the F-measure.”
*Teach Tutor mater*, **1**(5), 1--5.
https://www.cs.odu.edu/~mukka/cs795sum10dm/Lecturenotes/Day3/F-measure-YS-26Oct07.pdf.

Rijsbergen CJV (1979).
*Information Retrieval*, 2nd edition.
Butterworth-Heinemann, Newton, MA, USA.
ISBN 408709294.

## See also

Other Binary Classification Measures:
`auc()`, `bbrier()`, `dor()`, `fdr()`, `fnr()`, `fn()`, `fomr()`, `fpr()`, `fp()`, `mcc()`, `npv()`, `ppv()`, `tnr()`, `tn()`, `tpr()`, `tp()`

## Examples

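A minimal example, assuming the package providing `fbeta()` is attached. The labels are hand-constructed so that, for the positive class `"a"`, TP = 1, FP = 1, FN = 1, giving precision = recall = 0.5 and hence F1 = 0.5:

```r
# truth:    a, a, b, b
# response: a, b, a, b  -> for positive class "a": TP = 1, FP = 1, FN = 1
truth = factor(c("a", "a", "b", "b"), levels = c("a", "b"))
response = factor(c("a", "b", "a", "b"), levels = c("a", "b"))
fbeta(truth, response, positive = "a")
```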
#> [1] 0.5