Classification measure counting the false positives (type 1 error), i.e. the number of predictions indicating the positive class label while the true label is in fact negative.

`fp(truth, response, positive, ...)`

| Argument | Description |
|---|---|
| `truth` | (`factor()`) True (observed) class labels. Must have the same two levels and the same length as `response`. |
| `response` | (`factor()`) Predicted class labels. Must have the same two levels and the same length as `truth`. |
| `positive` | (`character(1)`) Name of the positive class. |
| `...` | (`any`) Additional arguments, currently ignored. |

Returns the performance value as a `numeric(1)`.

Type: `"binary"`

Range: \([0, \infty)\)

Minimize: `TRUE`

Required prediction: `response`

https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram
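The false positive count is simply the number of cases where the prediction is the positive class but the true label is not. A minimal sketch of that computation (written in Python purely for illustration; the measure itself is an R function, and `false_positives` here is a hypothetical helper, not part of the package):

```python
def false_positives(truth, response, positive):
    # Count predictions labelled as the positive class whose
    # true label is a different (i.e. negative) class.
    return sum(1 for t, r in zip(truth, response)
               if r == positive and t != positive)

truth    = ["a", "b", "b", "a", "b"]
response = ["a", "a", "b", "b", "a"]
print(false_positives(truth, response, positive="a"))  # prints 2
```

Here two of the three `"a"` predictions fall on observations whose true label is `"b"`, so the false positive count is 2.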

Other Binary Classification Measures:
`auc()`, `bbrier()`, `dor()`, `fbeta()`, `fdr()`, `fnr()`, `fn()`, `fomr()`, `fpr()`, `mcc()`, `npv()`, `ppv()`, `tnr()`, `tn()`, `tpr()`, `tp()`

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
fp(truth, response, positive = "a")
#> [1] 3