Measure to compare true observed labels with predicted labels in binary classification tasks.

Usage

gpr(truth, response, positive, na_value = NaN, ...)

Arguments

truth

(factor())
True (observed) labels. Must have exactly the same two levels and the same length as response.

response

(factor())
Predicted response labels. Must have exactly the same two levels and the same length as truth.

positive

(character(1))
Name of the positive class.

na_value

(numeric(1))
Value that should be returned if the measure is not defined for the input (as described in Details). Default is NaN.

...

(any)
Additional arguments. Currently ignored.

Value

Performance value as numeric(1).

Details

Calculates the geometric mean of precision() P and recall() R as $$ \sqrt{\mathrm{P} \mathrm{R}}. $$

This measure is undefined if precision or recall is undefined, i.e. if TP + FP = 0 or if TP + FN = 0.
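For illustration, here is a minimal sketch that recomputes the formula from the confusion counts of a toy prediction (the variable names are chosen for this example only; gpr() is expected to agree):

toy_truth = factor(c("a", "a", "b", "b"), levels = c("a", "b"))
toy_response = factor(c("a", "b", "a", "b"), levels = c("a", "b"))
tp = sum(toy_truth == "a" & toy_response == "a")  # true positives: 1
fp = sum(toy_truth == "b" & toy_response == "a")  # false positives: 1
fn = sum(toy_truth == "a" & toy_response == "b")  # false negatives: 1
P = tp / (tp + fp)  # precision = 0.5
R = tp / (tp + fn)  # recall = 0.5
sqrt(P * R)         # 0.5, matching gpr(toy_truth, toy_response, positive = "a")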

Meta Information

  • Type: "binary"

  • Range: \([0, 1]\)

  • Minimize: FALSE

  • Required prediction: response

References

He H, Garcia EA (2009). “Learning from Imbalanced Data.” IEEE Transactions on Knowledge and Data Engineering, 21(9), 1263--1284. doi:10.1109/TKDE.2008.239.

See also

Other Binary Classification Measures: auc(), bbrier(), dor(), fbeta(), fdr(), fnr(), fn(), fomr(), fpr(), fp(), gmean(), mcc(), npv(), ppv(), prauc(), tnr(), tn(), tpr(), tp()

Examples

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
gpr(truth, response, positive = "a")
#> [1] 0.5
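
# A sketch of the undefined case described in Details: if no positives are
# predicted, TP + FP = 0 and precision is undefined, so the documented
# na_value is returned instead (NaN unless overridden).
response_none = factor(rep("b", 10), levels = lvls)
gpr(truth, response_none, positive = "a")                # NaN (the default na_value)
gpr(truth, response_none, positive = "a", na_value = 0)  # 0 instead of NaN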