Measures are classes tailored around two functions:

1. score(), which quantifies the performance by comparing the true and the predicted response, and
2. aggregator(), which combines multiple performance scores returned by score() into a single numeric value.
In addition to these two functions, meta-information about the performance measure is stored.
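As a minimal sketch of these two roles, the following example uses mlr3's msr(), tsk() and lrn() helpers (assuming the mlr3 package and the rpart-based learner are installed) to score a single Prediction:

```r
library(mlr3)

# Retrieve a predefined measure; classification accuracy carries
# meta-information such as range = c(0, 1) and minimize = FALSE.
measure = msr("classif.acc")

task = tsk("sonar")
learner = lrn("classif.rpart")
learner$train(task)
prediction = learner$predict(task)

# score() compares true and predicted response for one Prediction
prediction$score(measure)
```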
m = Measure$new(id, task_type = NA, range = c(-Inf, Inf), minimize = NA,
  average = "macro", aggregator = NULL, properties = character(),
  predict_type = "response", predict_sets = "test",
  task_properties = character(), packages = character(),
  man = NA_character_)
id: Identifier for the measure.
task_type: Type of the task the measure can operate on, e.g. "classif" or "regr".

range: Feasible range of scores for this measure, given as c(lower, upper).
minimize: TRUE if good predictions correspond to small values, FALSE if good predictions correspond to large values. If set to NA (default), tuning this measure is not possible.
average: If set to "macro" (default), calculates the individual performance scores for each Prediction and then uses the function defined in aggregator to average them into a single number. If set to "micro", the individual Prediction objects are first combined into a single new Prediction object which is then used to assess the performance. aggregator is not used in this case.
aggregator: Function to aggregate individual performance scores, where x is a numeric vector of scores. If set to NULL (default), mean() is used.
predict_sets: Prediction sets to operate on, used in aggregate() to extract the matching predict_sets from the ResampleResult. Multiple predict sets are calculated by the respective Learner during resample() and benchmark(). Must be a non-empty subset of c("train", "test"). If multiple sets are provided, these are first combined into a single prediction object.
man: String in the format [pkg]::[topic] pointing to a manual page for this object.
All variables passed to the constructor are also available as fields.
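To make the constructor arguments concrete, here is a sketch of a custom regression measure (median absolute error; the id "regr.medae" and the class name are made up for this illustration). It follows the mlr3 pattern of subclassing a measure class and implementing the scoring logic in a private .score() method:

```r
library(mlr3)
library(R6)

MeasureMedianAE = R6Class("MeasureMedianAE",
  inherit = mlr3::MeasureRegr,
  public = list(
    initialize = function() {
      super$initialize(
        id = "regr.medae",         # identifier, made up for this sketch
        range = c(0, Inf),         # feasible range of the score
        minimize = TRUE,           # smaller values are better
        predict_type = "response"  # requires numeric response predictions
      )
    }
  ),
  private = list(
    # called by score() with a single Prediction object
    .score = function(prediction, ...) {
      median(abs(prediction$truth - prediction$response))
    }
  )
)
```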
Aggregates multiple performance scores into a single score using the
aggregator function of the measure.
Operates on the Predictions of the ResampleResult with matching predict_sets.
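The interplay of predict_sets, average, and aggregate() can be sketched as follows (assuming mlr3 and rpart are available; note the learner itself must be configured to produce predictions for the requested sets):

```r
library(mlr3)

task = tsk("sonar")
learner = lrn("classif.rpart", predict_sets = c("train", "test"))
rr = resample(task, learner, rsmp("cv", folds = 3))

# macro (default): score each fold's Prediction, then aggregate the scores
rr$aggregate(msr("classif.ce"))

# micro: pool all test predictions into one Prediction, then score once
rr$aggregate(msr("classif.ce", average = "micro"))

# score the train predictions instead of the test predictions
rr$aggregate(msr("classif.ce", predict_sets = "train"))
```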
score(prediction, task = NULL, learner = NULL, train_set = NULL)
((named list of) Prediction, Task, Learner, integer()) -> numeric(1)
Takes a Prediction (or a list of Prediction objects named with valid predict sets) and calculates a numeric score.
If the measure is flagged with the properties "requires_task", "requires_learner" or "requires_train_set", you must additionally pass the respective Task, the trained Learner, or the training set indices. This is handled internally during resample() and benchmark().
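For example, the predefined measure "time_train" carries the "requires_learner" property, so when scoring a standalone Prediction the trained Learner must be passed explicitly (a sketch, assuming mlr3 and rpart are installed):

```r
library(mlr3)

task = tsk("sonar")
learner = lrn("classif.rpart")
learner$train(task)
prediction = learner$predict(task)

msr("time_train")$properties   # contains "requires_learner"

# the trained learner must be passed alongside the prediction:
prediction$score(msr("time_train"), learner = learner)
```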
Opens the corresponding help page referenced by field $man.