Measures are classes tailored around two functions:
score, which quantifies the performance by comparing the true and predicted response.
aggregator, which combines multiple performance values returned by
score into a single numeric value.
In addition to these two functions, meta-information about the performance measure is stored.
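The first of the two functions can be sketched in base R; the helper name and data below are illustrative only and not part of the package. A score function compares true and predicted responses and returns a single numeric value:

```r
# Illustrative score function: classification error.
# Compares true and predicted response vectors and returns
# the fraction of mismatches as a single numeric value.
score_ce = function(truth, response) {
  mean(truth != response)
}

truth    = c("a", "b", "a", "b")
response = c("a", "b", "b", "b")
score_ce(truth, response)  # 0.25 (one of four predictions is wrong)
```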
m = Measure$new(id, task_type, range, minimize, predict_type = "response", task_properties = character(), na_score = FALSE, packages = character())
id: Identifier for the measure.
task_type: Type of the task the measure can operate on, e.g. "classif" or "regr".
minimize: TRUE if good predictions correspond to small values,
FALSE if good predictions correspond to large values. If set to
NA, tuning with this measure is not possible.
aggregator: Function to aggregate individual performance values
x, where x is a numeric vector. If
NULL, defaults to mean().
na_score: Is the measure expected to return
NA in some cases? Default is FALSE.
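The semantics of the minimize argument, including its NA case, can be sketched with a small helper; best_score below is purely illustrative and not part of the package:

```r
# Illustrative helper: select the best of several performance values
# according to the documented semantics of 'minimize'.
best_score = function(scores, minimize) {
  if (is.na(minimize)) {
    stop("minimize is NA: tuning with this measure is not possible")
  }
  if (minimize) min(scores) else max(scores)
}

best_score(c(0.12, 0.08, 0.10), minimize = TRUE)   # 0.08 (smaller is better)
best_score(c(0.70, 0.85, 0.80), minimize = FALSE)  # 0.85 (larger is better)
```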
id: Identifier of the measure.
minimize: TRUE if the best value is reached via minimization and
FALSE by maximization.
packages: Stores the names of required packages.
range: Stores the feasible range of the measure.
Aggregates multiple performance scores into a single score using the
aggregator function of the measure.
Operates on a ResampleResult as returned by resample.
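The aggregation step itself can be sketched in base R: given per-iteration performance values (made-up numbers below), any function mapping a numeric vector to a single number can serve as aggregator, for example mean:

```r
# Performance values from, e.g., three resampling iterations
# (values are made up for illustration).
iteration_scores = c(0.12, 0.08, 0.10)

# An aggregator maps a numeric vector to a single numeric value.
aggregator = mean
aggregator(iteration_scores)  # 0.1
```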
score(prediction, task = NULL, learner = NULL)
(Prediction, Task, Learner) -> numeric(1)
Takes a Prediction and calculates a numeric score. If the measure is flagged with the property
"requires_task" or "requires_learner", you must additionally
pass the respective Task or Learner for the measure to extract information from these objects.
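A hedged usage sketch, assuming the mlr3 package is attached and its sugar functions tsk(), lrn(), and msr() are available; the task, learner, and measure ids below are examples, not prescribed by this page:

```r
library(mlr3)  # assumed available

task = tsk("iris")                    # example classification task
learner = lrn("classif.featureless")  # baseline learner shipped with mlr3
learner$train(task)
prediction = learner$predict(task)

m = msr("classif.ce")  # classification error measure
m$score(prediction)    # returns a single numeric score
```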