
Measure to compare true observed labels with predicted labels in multiclass classification tasks.

Usage

acc(truth, response, sample_weights = NULL, ...)

Arguments

truth

(factor())
True (observed) labels. Must have the same levels and length as response.

response

(factor())
Predicted response labels. Must have the same levels and length as truth.

sample_weights

(numeric())
Vector of non-negative and finite sample weights. Must have the same length as truth. The vector is automatically normalized to sum to one. Defaults to equal sample weights.

...

(any)
Additional arguments. Currently ignored.

Value

Performance value as numeric(1).

Details

The Classification Accuracy is defined as $$ \sum_{i=1}^n w_i \mathbf{1} \left( t_i = r_i \right), $$ where \(w_i\) are the sample weights, normalized to sum to one, for all observations \(x_i\).
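
In other words, the measure is a weighted mean of the element-wise equality indicator. The following is a minimal sketch of this computation in base R, assuming truth and response are factors as described above; w is a hypothetical local weight vector, not an argument of acc():

w = rep(1, length(truth))        # equal weights by default
w = w / sum(w)                   # normalized to sum to one
sum(w * (truth == response))     # weighted classification accuracy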

Meta Information

  • Type: "classif"

  • Range: \([0, 1]\)

  • Minimize: FALSE

  • Required prediction: response

See also

Other Classification Measures: bacc(), ce(), logloss(), mauc_aunu(), mbrier(), mcc(), zero_one()

Examples

set.seed(1)
lvls = c("a", "b", "c")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
acc(truth, response)
#> [1] 0.2
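
# Illustrative use of sample_weights: weights are normalized to sum to one,
# so with weights c(2, 1, 1) the first observation contributes twice as much.
truth = factor(c("a", "a", "b"), levels = lvls)
response = factor(c("a", "b", "b"), levels = lvls)
acc(truth, response)                              # unweighted: 2/3
acc(truth, response, sample_weights = c(2, 1, 1)) # weighted: 0.5 + 0.25 = 0.75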