An Empirical Study of Modular Bias Mitigators and Ensembles



Bias mitigators can reduce algorithmic bias in machine learning models, but their effect on fairness is often unstable across different data splits. A popular approach for training more stable models is ensemble learning. We built an open-source library enabling the modular composition of 10 mitigators, 4 ensembles, and their corresponding hyperparameters. We empirically explored the space of combinations on 13 datasets and distilled the results into a guidance diagram for practitioners.
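To make the motivation concrete, the sketch below illustrates the kind of experiment the abstract describes: measuring how much a group-fairness metric (disparate impact) varies across data splits for a single classifier versus a bagging ensemble of the same classifier. This is a hypothetical, self-contained example using scikit-learn and synthetic data, not the paper's library or its actual mitigators; the `disparate_impact` helper and the data-generating process are assumptions made for illustration only.

```python
# Hypothetical sketch: fairness-metric variability across splits,
# single model vs. a bagging ensemble (not the paper's library).
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates: unprivileged over privileged group."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv if rate_priv > 0 else 0.0

# Synthetic data with a binary protected attribute correlated with the label.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)
x = rng.normal(size=(n, 5)) + group[:, None] * 0.5
y = (x[:, 0] + 0.3 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
X = np.column_stack([x, group])  # last column is the protected attribute

def di_std_across_splits(model, n_splits=5):
    """Std. dev. of disparate impact over several random train/test splits."""
    dis = []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, _ = train_test_split(X, y, random_state=seed)
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        dis.append(disparate_impact(pred, X_te[:, -1]))
    return float(np.std(dis))

single = LogisticRegression(max_iter=1000)
bagged = BaggingClassifier(LogisticRegression(max_iter=1000),
                           n_estimators=20, random_state=0)
print(f"DI std, single model: {di_std_across_splits(single):.4f}")
print(f"DI std, bagging:      {di_std_across_splits(bagged):.4f}")
```

In the paper's setting, a mitigator (pre-, in-, or post-processing) would replace or wrap the base estimator inside the ensemble; this sketch only captures the stability question that motivates that composition.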