Based on a confusion matrix for binary classification problems, this measure allows the calculation of various performance measures. The following measures, based on https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram, are implemented:

  • "tp": True Positives.

  • "fn": False Negatives.

  • "fp": False Positives.

  • "tn": True Negatives.

  • "tpr": True Positive Rate.

  • "fnr": False Negative Rate.

  • "fpr": False Positive Rate.

  • "tnr": True Negative Rate.

  • "ppv": Positive Predictive Value.

  • "fdr": False Discovery Rate.

  • "for": False Omission Rate.

  • "npv": Negative Predictive Value.

  • "precision": Alias for "ppv".

  • "recall": Alias for "tpr".

  • "sensitivity": Alias for "tpr".

  • "specificity": Alias for "tnr".

If the denominator is 0, the score is returned as NA.
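To illustrate this rule, here is a minimal sketch of how a rate measure with a zero denominator yields NA. The helper `rate()` is hypothetical and only mirrors the documented behavior; it is not part of the package:

```r
# Hypothetical helper mirroring the documented behavior:
# a rate whose denominator is 0 returns NA instead of NaN.
rate = function(num, denom) {
  if (denom == 0) return(NA_real_)
  num / denom
}

# Confusion counts with no positive ground-truth cases
# (so tp + fn == 0 and the true positive rate is undefined):
tp = 0; fn = 0; fp = 5; tn = 95

rate(tp, tp + fn)  # tpr: denominator is 0, so NA
rate(tn, tn + fp)  # tnr: 95 / 100 = 0.95
```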

MeasureClassifConfusion

confusion_measures(m, type = NULL)

Arguments

m

:: matrix()
Confusion matrix, e.g. as returned by field confusion of PredictionClassif. Truth is in columns, predicted response is in rows.

type

:: character()
Selects the measure to compute. See the description for possible values.

Format

R6::R6Class() inheriting from MeasureClassif.

Examples

task = mlr_tasks$get("german_credit")
learner = mlr_learners$get("classif.rpart")
p = learner$train(task)$predict(task)
p$confusion
#>         truth
#> response good bad
#>     good  627 130
#>     bad    73 170
round(confusion_measures(p$confusion), 2)
#>     tp     fn     fp     tn    tpr    fnr    fpr    tnr    ppv    fdr    for
#> 627.00  73.00 130.00 170.00   0.90   0.10   0.43   0.57   0.83   0.17   0.30
#>    npv
#>   0.70