Calculates various performance measures based on a confusion matrix for binary classification problems. The following measures are implemented:

  • "tp": True Positives.

  • "fn": False Negatives.

  • "fp": False Positives.

  • "tn": True Negatives.

  • "tpr": True Positive Rate.

  • "fnr": False Negative Rate.

  • "fpr": False Positive Rate.

  • "tnr": True Negative Rate.

  • "ppv": Positive Predictive Value.

  • "fdr": False Discovery Rate.

  • "for": False Omission Rate.

  • "npv": Negative Predictive Value.

  • "precision": Alias for "ppv".

  • "recall": Alias for "tpr".

  • "sensitivity": Alias for "tpr".

  • "specificity": Alias for "tnr".

If the denominator is 0, the score is returned as NA.
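Each rate-style measure is a simple ratio of confusion matrix cells. A minimal sketch in base R, using a hypothetical 2x2 matrix laid out as described below (truth in columns, predicted response in rows, positive class first -- the matrix values here are made up for illustration):

```r
# Hypothetical confusion matrix: truth in columns, response in rows,
# positive class in the first row/column (layout assumed for this sketch)
m = matrix(c(57, 2, 3, 56), nrow = 2,
           dimnames = list(response = c("pos", "neg"),
                           truth    = c("pos", "neg")))

tp = m["pos", "pos"]; fn = m["neg", "pos"]  # counts of the positive truth column
fp = m["pos", "neg"]; tn = m["neg", "neg"]  # counts of the negative truth column

tpr = tp / (tp + fn)  # True Positive Rate (aliases: recall, sensitivity)
tnr = tn / (tn + fp)  # True Negative Rate (alias: specificity)
ppv = tp / (tp + fp)  # Positive Predictive Value (alias: precision)
npv = tn / (tn + fn)  # Negative Predictive Value
fdr = fp / (tp + fp)  # False Discovery Rate, i.e. 1 - ppv
```

A division by a zero denominator yields NaN in plain R; the function instead reports such scores as NA.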


confusion_measures(m, type = NULL)



m: Confusion matrix, e.g. as returned by field confusion of PredictionClassif. Truth is in columns, predicted response is in rows.


type: Selects the measure(s) to calculate. See description.


R6::R6Class() inheriting from MeasureClassif.


task = mlr_tasks$get("wine")
e = Experiment$new("wine", "classif.rpart")$train()$predict()
#> INFO [mlr3] Training learner 'classif.rpart' on task 'wine' ...
#> INFO [mlr3] Predicting with model of learner 'classif.rpart' on task 'wine' ...
m = e$prediction$confusion
confusion_measures(m, type = c("precision", "recall"))
#> precision    recall
#> 0.9661017 0.9661017