Runs a benchmark on arbitrary combinations of tasks (Task), learners (Learner), and resampling strategies (Resampling), possibly in parallel.

benchmark(design, store_models = FALSE)

Arguments

design

(data.frame())
Data frame (or data.table::data.table()) with three columns: "task", "learner", and "resampling". Each row defines a resampling experiment by providing a Task, a Learner, and an instantiated Resampling strategy. The helper function benchmark_grid() can assist in generating an exhaustive design (see examples) and in instantiating the Resamplings per Task.

store_models

(logical(1))
Keep the fitted model after the test set has been predicted? Set to TRUE if you want to further analyse the models or extract information such as variable importance.
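A minimal sketch of how stored models can be retrieved afterwards (the tasks and learners here are just illustrative choices; the model is accessed via the learners of a ResampleResult):

```r
library(mlr3)

design = benchmark_grid(tsk("iris"), lrn("classif.rpart"), rsmp("cv", folds = 3))

# store_models = TRUE keeps the fitted rpart objects in the result
bmr = benchmark(design, store_models = TRUE)

# access the model fitted in the first iteration of the first resample result
rr = bmr$aggregate()$resample_result[[1]]
rr$learners[[1]]$model
```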

Value

BenchmarkResult.

Note

By default, the fitted models are discarded after the predictions have been scored in order to reduce memory consumption. If you need access to the models for later analysis, set store_models to TRUE.

Parallelization

This function can be parallelized with the future package. One job is one resampling iteration, and all jobs are sent to an apply function from future.apply in a single batch. To select a parallel backend, use future::plan().
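A minimal sketch, assuming the "multisession" backend and an illustrative design (task, learner, and fold count are arbitrary choices here):

```r
library(mlr3)

# run resampling iterations in parallel on local R processes
future::plan("multisession")

design = benchmark_grid(tsk("iris"), lrn("classif.rpart"), rsmp("cv", folds = 3))
bmr = benchmark(design)  # the 3 iterations are dispatched as parallel jobs

# restore sequential execution
future::plan("sequential")
```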

Progress Bars

This function supports progress bars via the progressr package. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the progress package as the backend; enable it with progressr::handlers("progress").
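Putting both steps together (a sketch; the design is an arbitrary example):

```r
library(mlr3)

# use the progress package to render the progress bar
progressr::handlers("progress")

design = benchmark_grid(tsk("iris"), lrn("classif.rpart"), rsmp("cv", folds = 3))

# a progress bar is shown while the resampling iterations run
progressr::with_progress({
  bmr = benchmark(design)
})
```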

Logging

mlr3 uses the lgr package for logging. lgr supports multiple log levels, which can be queried with getOption("lgr.log_levels").

To suppress output and reduce verbosity, you can lower the log level from the default "info" to "warn":

lgr::get_logger("mlr3")$set_threshold("warn")

To get additional log output for debugging, increase the log level to "debug" or "trace":

lgr::get_logger("mlr3")$set_threshold("debug")

To log to a file or a database, see the documentation of lgr::lgr-package.

Examples

# benchmarking with benchmark_grid()
tasks = lapply(c("iris", "sonar"), tsk)
learners = lapply(c("classif.featureless", "classif.rpart"), lrn)
resamplings = rsmp("cv", folds = 3)

design = benchmark_grid(tasks, learners, resamplings)
print(design)
#>                 task                         learner         resampling
#> 1: <TaskClassif[45]> <LearnerClassifFeatureless[33]> <ResamplingCV[19]>
#> 2: <TaskClassif[45]>       <LearnerClassifRpart[33]> <ResamplingCV[19]>
#> 3: <TaskClassif[45]> <LearnerClassifFeatureless[33]> <ResamplingCV[19]>
#> 4: <TaskClassif[45]>       <LearnerClassifRpart[33]> <ResamplingCV[19]>

set.seed(123)
bmr = benchmark(design)

## Data of all resamplings
head(as.data.table(bmr))
#>                                   uhash              task
#> 1: 68e7fee8-dfba-43ba-af01-87382fe3fab1 <TaskClassif[45]>
#> 2: 68e7fee8-dfba-43ba-af01-87382fe3fab1 <TaskClassif[45]>
#> 3: 68e7fee8-dfba-43ba-af01-87382fe3fab1 <TaskClassif[45]>
#> 4: 83ab9c67-7c91-42b5-977e-8916833a5e03 <TaskClassif[45]>
#> 5: 83ab9c67-7c91-42b5-977e-8916833a5e03 <TaskClassif[45]>
#> 6: 83ab9c67-7c91-42b5-977e-8916833a5e03 <TaskClassif[45]>
#>                            learner         resampling iteration
#> 1: <LearnerClassifFeatureless[33]> <ResamplingCV[19]>         1
#> 2: <LearnerClassifFeatureless[33]> <ResamplingCV[19]>         2
#> 3: <LearnerClassifFeatureless[33]> <ResamplingCV[19]>         3
#> 4:       <LearnerClassifRpart[33]> <ResamplingCV[19]>         1
#> 5:       <LearnerClassifRpart[33]> <ResamplingCV[19]>         2
#> 6:       <LearnerClassifRpart[33]> <ResamplingCV[19]>         3
#>                 prediction
#> 1: <PredictionClassif[19]>
#> 2: <PredictionClassif[19]>
#> 3: <PredictionClassif[19]>
#> 4: <PredictionClassif[19]>
#> 5: <PredictionClassif[19]>
#> 6: <PredictionClassif[19]>

## Aggregated performance values
aggr = bmr$aggregate()
print(aggr)
#>    nr      resample_result task_id          learner_id resampling_id iters
#> 1:  1 <ResampleResult[21]>    iris classif.featureless            cv     3
#> 2:  2 <ResampleResult[21]>    iris       classif.rpart            cv     3
#> 3:  3 <ResampleResult[21]>   sonar classif.featureless            cv     3
#> 4:  4 <ResampleResult[21]>   sonar       classif.rpart            cv     3
#>    classif.ce
#> 1: 0.72000000
#> 2: 0.05333333
#> 3: 0.53878537
#> 4: 0.30296756

## Extract predictions of first resampling result
rr = aggr$resample_result[[1]]
as.data.table(rr$prediction())
#>      row_id     truth   response
#>   1:      1    setosa     setosa
#>   2:      2    setosa     setosa
#>   3:      3    setosa     setosa
#>   4:     17    setosa     setosa
#>   5:     24    setosa     setosa
#>  ---
#> 146:    132 virginica versicolor
#> 147:    144 virginica versicolor
#> 148:    145 virginica versicolor
#> 149:    146 virginica versicolor
#> 150:    148 virginica versicolor

# Benchmarking with a custom design:
# - fit classif.featureless on iris with a 3-fold CV
# - fit classif.rpart on sonar using a holdout
tasks = list(tsk("iris"), tsk("sonar"))
learners = list(lrn("classif.featureless"), lrn("classif.rpart"))
resamplings = list(rsmp("cv", folds = 3), rsmp("holdout"))

design = data.table::data.table(
  task = tasks,
  learner = learners,
  resampling = resamplings
)

## Instantiate resamplings
design$resampling = Map(
  function(task, resampling) resampling$clone()$instantiate(task),
  task = design$task, resampling = design$resampling
)

## Run benchmark
bmr = benchmark(design)
print(bmr)
#> <BenchmarkResult> of 4 rows with 2 resampling runs
#>  nr task_id          learner_id resampling_id iters warnings errors
#>   1    iris classif.featureless            cv     3        0      0
#>   2   sonar       classif.rpart       holdout     1        0      0

## Get the training set of the 2nd iteration of the featureless learner on iris
rr = bmr$aggregate()[learner_id == "classif.featureless"]$resample_result[[1]]
rr$resampling$train_set(2)
#>   [1]   3   4   6   8  15  19  21  23  24  25  35  39  43  45  46  47  48  49
#>  [19]  52  53  57  59  63  66  67  71  72  76  79  81  83  85  88  89  92  94
#>  [37]  95  97 107 108 119 120 121 127 131 137 138 140 143 149   5   7  12  13
#>  [55]  16  17  18  20  26  30  31  33  36  40  41  42  44  50  51  55  56  58
#>  [73]  61  62  70  73  74  77  78  80  87  90  99 100 101 104 105 111 114 118
#>  [91] 124 126 128 129 133 135 144 145 146 150