Measure to compare true observed labels with predicted labels in binary classification tasks.

Usage

fp(truth, response, positive, ...)

Arguments

truth

(factor())
True (observed) labels. Must have exactly the same two levels and the same length as response.

response

(factor())
Predicted response labels. Must have exactly the same two levels and the same length as truth.

positive

(character(1))
Name of the positive class.

...

(any)
Additional arguments. Currently ignored.

Value

Performance value as numeric(1).

Details

This measure counts the false positives (type I error), i.e. the number of predictions of the positive class label for observations whose true label is negative. This is sometimes also called a "false alarm".
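The count described above can be sketched directly from the two factors: a false positive is a response equal to the positive class where the truth is not. This is an illustrative equivalent, not the package's internal implementation; the factor values here are made up.

```r
truth = factor(c("a", "b", "a", "b"), levels = c("a", "b"))
response = factor(c("a", "a", "b", "b"), levels = c("a", "b"))
# false positives with positive class "a": predicted "a", truly "b"
sum(response == "a" & truth != "a")
#> [1] 1
```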

Meta Information

  • Type: "binary"

  • Range: \([0, \infty)\)

  • Minimize: TRUE

  • Required prediction: response

See also

Other Binary Classification Measures: auc(), bbrier(), dor(), fbeta(), fdr(), fn(), fnr(), fomr(), fpr(), gmean(), gpr(), npv(), ppv(), prauc(), tn(), tnr(), tp(), tpr()

Examples

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
fp(truth, response, positive = "a")
#> [1] 3