Calculates various performance measures based on the confusion matrix of a binary classification problem. The following measures, based on https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram, are implemented:

  • "tp": True Positives.

  • "fn": False Negatives.

  • "fp": False Positives.

  • "tn": True Negatives.

  • "tpr": True Positive Rate.

  • "fnr": False Negative Rate.

  • "fpr": False Positive Rate.

  • "tnr": True Negative Rate.

  • "ppv": Positive Predictive Value.

  • "fdr": False Discovery Rate.

  • "for": False Omission Rate.

  • "npv": Negative Predictive Value.

  • "precision": Alias for "ppv".

  • "recall": Alias for "tpr".

  • "sensitivity": Alias for "tpr".

  • "specificity": Alias for "tnr".

If the denominator is 0, the score is returned as NA.
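For orientation, the relation between the four cell counts and the derived rates can be sketched in plain R. This is an illustrative re-implementation only, not the code used by the package; the helper name confusion_rates is made up here, and it assumes a 2x2 matrix with the positive class first, truth in columns and predicted response in rows.

# Illustrative sketch only: derive the rates from a 2x2 confusion matrix
# (positive class first, truth in columns, predicted response in rows).
confusion_rates = function(m) {
  tp = m[1, 1]; fp = m[1, 2]
  fn = m[2, 1]; tn = m[2, 2]
  div = function(num, den) if (den == 0) NA_real_ else num / den  # NA if denominator is 0
  c(
    tpr   = div(tp, tp + fn),  # True Positive Rate (recall, sensitivity)
    fnr   = div(fn, tp + fn),  # False Negative Rate
    fpr   = div(fp, fp + tn),  # False Positive Rate
    tnr   = div(tn, fp + tn),  # True Negative Rate (specificity)
    ppv   = div(tp, tp + fp),  # Positive Predictive Value (precision)
    fdr   = div(fp, tp + fp),  # False Discovery Rate
    "for" = div(fn, fn + tn),  # False Omission Rate
    npv   = div(tn, fn + tn)   # Negative Predictive Value
  )
}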

Usage

MeasureClassifConfusion

confusion_measures(m, type = NULL)

Arguments

m

(matrix())
Confusion matrix, e.g. as returned by the confusion field of PredictionClassif. Truth is in columns, predicted response is in rows; see the construction sketch after the argument list.

type

(character())
Selects the measure(s) to compute. See description.
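As referenced above, a matrix with the expected layout can be built from two factor vectors with base R's table(); the labels and values below are invented for the example.

# Hypothetical truth and predicted response; table() puts its first
# argument (response) in rows and its second (truth) in columns.
truth    = factor(c("pos", "pos", "neg", "neg", "pos"), levels = c("pos", "neg"))
response = factor(c("pos", "neg", "neg", "neg", "pos"), levels = c("pos", "neg"))
m = table(response, truth)
m
#>         truth
#> response pos neg
#>      pos   2   0
#>      neg   1   2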

Format

R6::R6Class() inheriting from MeasureClassif.

Examples

task = mlr_tasks$get("wine") learner = mlr_learners$get("classif.rpart") e = Experiment$new(task, learner)$train()$predict()
#> INFO [mlr3] Training learner 'classif.rpart' on task 'wine' ... #> INFO [mlr3] Predicting with model of learner 'classif.rpart' on task 'wine' ...
m = e$prediction$confusion confusion_measures(m, type = c("precision", "recall"))
#> precision recall #> 0.9661017 0.9661017