Calculates various performance measures based on a confusion matrix for binary classification problems.
The following measures, as defined in https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram, are implemented:

`"tp"`

: True Positives.

`"fn"`

: False Negatives.

`"fp"`

: False Positives.

`"tn"`

: True Negatives.

`"tpr"`

: True Positive Rate.

`"fnr"`

: False Negative Rate.

`"fpr"`

: False Positive Rate.

`"tnr"`

: True Negative Rate.

`"ppv"`

: Positive Predictive Value.

`"fdr"`

: False Discovery Rate.

`"for"`

: False Omission Rate.

`"npv"`

: Negative Predictive Value.

`"precision"`

: Alias for `"ppv"`

.

`"recall"`

: Alias for `"tpr"`

.

`"sensitivity"`

: Alias for `"tpr"`

.

`"specificity"`

: Alias for `"tnr"`

.

If the denominator is 0, the score is returned as `NA`.
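
For illustration, here is a minimal base R sketch that computes two of these measures by hand from a 2x2 confusion matrix (the matrix values are made up; only the conventions stated on this page are assumed):

``` r
# Confusion matrix: truth in columns, predicted response in rows.
m = matrix(c(20, 5,
              3, 72),
           nrow = 2, byrow = TRUE,
           dimnames = list(response = c("pos", "neg"),
                           truth    = c("pos", "neg")))

tp = m[1, 1]; fp = m[1, 2]
fn = m[2, 1]; tn = m[2, 2]

tpr = tp / (tp + fn)  # true positive rate (recall / sensitivity): ~0.8696
ppv = tp / (tp + fp)  # positive predictive value (precision): 0.8

# A measure is NA when its denominator is 0, e.g. ppv for a
# classifier that never predicts the positive class:
ratio = function(num, denom) if (denom == 0) NA_real_ else num / denom
```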

## Usage

``` r
confusion_measures(m, type = NULL)
```

## Arguments

`m`

: (`matrix()`)
Confusion matrix, e.g. as returned by field `confusion` of PredictionClassif.
Truth is in columns, predicted response is in rows.

`type`

: (`character()`)
Selects the measure to use. See description.
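
A short usage sketch, assuming `confusion_measures()` is exported by mlr3 as the usage above suggests (the matrix values are made up, and the behaviour of the default `type = NULL` is an assumption):

``` r
library(mlr3)

# Hand-built confusion matrix: truth in columns, response in rows.
m = matrix(c(10, 2,
              1, 17),
           nrow = 2, byrow = TRUE,
           dimnames = list(response = c("pos", "neg"),
                           truth    = c("pos", "neg")))

confusion_measures(m, type = c("tpr", "ppv"))  # selected measures
confusion_measures(m)  # type = NULL: assumed to compute all measures
```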

## Format

MeasureClassifConfusion: `R6::R6Class()` inheriting from MeasureClassif.

## Examples
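
The code for this example was not preserved; the following is a plausible reconstruction, not the original. The task `"wine"` and learner `"classif.rpart"` are taken from the log lines below; the `tsk()`/`lrn()` sugar functions and the train/predict flow are assumptions, so the exact numbers may differ.

``` r
library(mlr3)

# Assumed reconstruction: train a decision tree on the 'wine' task,
# predict on the same task, and compute measures from its confusion matrix.
task = tsk("wine")
learner = lrn("classif.rpart")
learner$train(task)
prediction = learner$predict(task)

confusion_measures(prediction$confusion, type = c("precision", "recall"))
```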

#> INFO [mlr3] Training learner 'classif.rpart' on task 'wine' ...
#> INFO [mlr3] Predicting with model of learner 'classif.rpart' on task 'wine' ...

#> precision recall
#> 0.9661017 0.9661017