Measure to compare true observed labels with predicted labels in binary classification tasks.
Usage
tpr(truth, response, positive, na_value = NaN, ...)
recall(truth, response, positive, na_value = NaN, ...)
sensitivity(truth, response, positive, na_value = NaN, ...)
Arguments
- truth (factor())
  True (observed) labels. Must have exactly the same two levels and the same length as response.
- response (factor())
  Predicted response labels. Must have exactly the same two levels and the same length as truth.
- positive (character(1))
  Name of the positive class.
- na_value (numeric(1))
  Value that should be returned if the measure is not defined for the input (see Details). Default is NaN.
- ... (any)
  Additional arguments. Currently ignored.
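A minimal usage sketch following the signature above; the providing package (assumed here to be mlr3measures) and the class labels are illustrative assumptions:

library(mlr3measures)  # assumed providing package

truth    <- factor(c("pos", "pos", "neg", "pos", "neg"), levels = c("pos", "neg"))
response <- factor(c("pos", "neg", "neg", "pos", "pos"), levels = c("pos", "neg"))

tpr(truth, response, positive = "pos")          # 2 / (2 + 1) = 0.667
recall(truth, response, positive = "pos")       # identical value
sensitivity(truth, response, positive = "pos")  # identical value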
Details
The True Positive Rate is defined as $$ \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}. $$ This is also known as "recall", "sensitivity", or "probability of detection".
This measure is undefined if TP + FN = 0.
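A sketch of the definition and of the undefined case, under the same assumptions as above (mlr3measures as the providing package, illustrative labels and data):

library(mlr3measures)  # assumed providing package

truth    <- factor(c("pos", "pos", "neg", "pos"), levels = c("pos", "neg"))
response <- factor(c("pos", "neg", "neg", "pos"), levels = c("pos", "neg"))

# TP / (TP + FN) computed directly from the definition
tp <- sum(truth == "pos" & response == "pos")  # 2
fn <- sum(truth == "pos" & response != "pos")  # 1
tp / (tp + fn)                                 # 0.6666667
tpr(truth, response, positive = "pos")         # same value

# No observed positives: TP + FN = 0, so na_value is returned
truth0    <- factor(c("neg", "neg"), levels = c("pos", "neg"))
response0 <- factor(c("neg", "pos"), levels = c("pos", "neg"))
tpr(truth0, response0, positive = "pos")                # NaN (the default)
tpr(truth0, response0, positive = "pos", na_value = 0)  # 0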
References
https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram
Goutte C, Gaussier E (2005). “A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation.” In Lecture Notes in Computer Science, 345–359. doi:10.1007/978-3-540-31865-1_25.