Measure to compare true observed labels with predicted probabilities in binary classification tasks.

Usage

bbrier(truth, prob, positive, sample_weights = NULL, ...)

Arguments

truth

(factor())
True (observed) labels. Must be a factor with exactly two levels and the same length as prob.

prob

(numeric())
Predicted probability for the positive class. Must have the same length as truth.

positive

(character(1))
Name of the positive class.

sample_weights

(numeric())
Vector of non-negative and finite sample weights. Must have the same length as truth. The vector is automatically normalized to sum to one. Defaults to equal sample weights. A weighted call is sketched under Examples below.

...

(any)
Additional arguments. Currently ignored.

Value

Performance value as numeric(1).

Details

The Binary Brier Score is defined as $$ \frac{1}{n} \sum_{i=1}^n w_i (I_i - p_i)^2, $$ where \(w_i\) are the sample weights, \(I_i\) is 1 if observation \(i\) belongs to the positive class and 0 otherwise, and \(p_i\) is the predicted probability that observation \(i\) belongs to the positive class.

Note that this (more common) definition of the Brier score is equivalent to the original definition of the multi-class Brier score (see mbrier()) divided by 2.
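
A minimal sketch of this definition for the equal-weight case, computed by hand (the variable names ind and the toy data are illustrative only):

truth = factor(c("a", "b", "a"), levels = c("a", "b"))
prob = c(0.9, 0.2, 0.4)
ind = as.numeric(truth == "a")  # 1 for the positive class "a", 0 otherwise
mean((ind - prob)^2)            # should match bbrier(truth, prob, positive = "a")
#> [1] 0.1366667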

Meta Information

  • Type: "binary"

  • Range: \([0, 1]\)

  • Minimize: TRUE

  • Required prediction: prob

References

https://en.wikipedia.org/wiki/Brier_score

Brier GW (1950). “Verification of forecasts expressed in terms of probability.” Monthly Weather Review, 78(1), 1-3. doi:10.1175/1520-0493(1950)078<0001:vofeit>2.0.co;2.

See also

Other Binary Classification Measures: auc(), dor(), fbeta(), fdr(), fnr(), fn(), fomr(), fpr(), fp(), mcc(), npv(), ppv(), prauc(), tnr(), tn(), tpr(), tp()

Examples

set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
prob = runif(10)
bbrier(truth, prob, positive = "a")
#> [1] 0.2812546
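
# A hedged sketch of a weighted call, reusing truth and prob from above.
# The weights below are illustrative values; because weights are normalized
# internally, equal weights should reproduce the unweighted score, while
# up-weighting observations shifts the score toward their errors.
w = c(rep(2, 5), rep(1, 5))
bbrier(truth, prob, positive = "a", sample_weights = w)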