Given we detected some form of bias during bias auditing, we are often interested in obtaining fair(er) models. There are several ways to achieve this, such as collecting additional data or finding and fixing errors in the data. Assuming there are no biases in the labelling process itself, another option is to debias models using preprocessing, inprocessing, or postprocessing methods. mlr3fairness provides several such operators as PipeOps for mlr3pipelines. If you are not familiar with mlr3pipelines, the mlr3 book provides an introduction.
We again showcase debiasing using the adult_train task:
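The chunk below is a minimal sketch of loading this task; the variable name task is only illustrative.

library(mlr3)
library(mlr3fairness)

# The adult_train task ships with mlr3fairness
task = tsk("adult_train")
task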
mlr3fairness implements two reweighing-based algorithms: reweighing_wts and reweighing_os. reweighing_wts adds observation weights to a Task that can counteract imbalances between the conditional probabilities \(P(Y | pta)\), while reweighing_os instead resamples (oversamples) observations to achieve a similar effect.
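As a rough sketch of the underlying idea (following the standard reweighing scheme of Kamiran and Calders; the exact implementation in the package may differ in details), an observation with protected-attribute value \(a\) and label \(y\) receives the weight

\[
w(a, y) = \frac{P(pta = a)\, P(Y = y)}{P(pta = a, Y = y)},
\]

so group-label combinations that are under-represented relative to independence of \(Y\) and \(pta\) are weighted up, and over-represented ones are weighted down.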
The table below lists the debiasing PipeOps together with their input and output types:

| key | input.type.train | input.type.predict | output.type.train | output.type.predict |
|---|---|---|---|---|
| EOd | TaskClassif | TaskClassif | NULL | PredictionClassif |
| reweighing_os | TaskClassif | TaskClassif | TaskClassif | TaskClassif |
| reweighing_wts | TaskClassif | TaskClassif | TaskClassif | TaskClassif |
We first instantiate the PipeOp:
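A minimal sketch (the object name po_rw is only illustrative):

library(mlr3pipelines)

# Construct the reweighing PipeOp
po_rw = po("reweighing_wts")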
and directly add the weights:
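Training the PipeOp on the task returns a list containing the transformed task; again only a sketch:

# $train() takes and returns a list; the resulting task carries observation weights
task_rw = po_rw$train(list(task))[[1]]
task_rw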
Often we directly combine the PipeOp with a Learner to automate the preprocessing (see learner_rw below). We use this to instantiate a small benchmark comparing a classification tree with and without reweighing:
set.seed(4321)
learner = lrn("classif.rpart", cp = 0.005)  # baseline classification tree
learner_rw = as_learner(po("reweighing_wts") %>>% learner)  # reweighing + tree pipeline
grd = benchmark_grid(list(task), list(learner, learner_rw), rsmp("cv", folds = 3))
bmr = benchmark(grd)
#> INFO [14:29:31.930] [mlr3] Running benchmark with 6 resampling iterations
#> INFO [14:29:31.994] [mlr3] Applying learner 'classif.rpart' on task 'adult_train' (iter 1/3)
#> INFO [14:29:32.185] [mlr3] Applying learner 'classif.rpart' on task 'adult_train' (iter 2/3)
#> INFO [14:29:32.360] [mlr3] Applying learner 'classif.rpart' on task 'adult_train' (iter 3/3)
#> INFO [14:29:32.531] [mlr3] Applying learner 'reweighing_wts.classif.rpart' on task 'adult_train' (iter 1/3)
#> INFO [14:29:32.789] [mlr3] Applying learner 'reweighing_wts.classif.rpart' on task 'adult_train' (iter 2/3)
#> INFO [14:29:33.049] [mlr3] Applying learner 'reweighing_wts.classif.rpart' on task 'adult_train' (iter 3/3)
#> INFO [14:29:33.311] [mlr3] Finished benchmark
We can now compute fairness metrics for our benchmark and see whether reweighing actually improved fairness, measured here as the between-group difference in True Positive Rate (TPR) and classification accuracy (ACC):
bmr$aggregate(msrs(c("fairness.tpr", "fairness.acc")))
#> nr resample_result task_id learner_id
#> 1: 1 <ResampleResult[21]> adult_train classif.rpart
#> 2: 2 <ResampleResult[21]> adult_train reweighing_wts.classif.rpart
#> resampling_id iters fairness.tpr fairness.acc
#> 1: cv 3 0.07494903 0.1162688
#> 2: cv 3 0.01151982 0.1054431
Our model became considerably fairer with respect to TPR, while fairness with respect to accuracy changed only minimally!
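Since debiasing can trade off against raw predictive performance, it can be worth aggregating a standard performance measure as well; a sketch:

# Compare the plain predictive accuracy of both learners
bmr$aggregate(msr("classif.acc"))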