ISEF | Projects Database | Finalist Abstract

Advancing Bias Mitigation in Machine Learning Models: The Use of Feature-Wise Mixing Across Diverse Classifiers for Contextual Bias Mitigation

Booth Id:

Category:
Systems Software

Finalist Names:
Tomar, Yash (School: West Lafayette Junior/Senior High School)

Biased machine learning models, often stemming from inadequate representation in training datasets, pose a significant challenge to real-world applications. Although a range of bias mitigation techniques exists, such as data pre-processing, their effectiveness is often hindered by limited dataset diversity, which can exacerbate existing biases. This study introduces feature-wise mixing, a novel pre-processing approach that diversifies the contextual factors represented in training data. Economic datasets from three locations, spanning records since 1960 and featuring variables such as tea prices, GDP per capita, and inflation rates, were compiled and augmented using this technique. Five regression models were evaluated: Support Vector Regression (SVR), K-Nearest Neighbors (KNN), Decision Trees, Neural Networks, and Random Forests, with mean squared error (MSE) as the primary metric. Models trained on the mixed datasets generally achieved lower MSE, with the exception of Neural Networks. Validation through 10-fold cross-validation supports these findings while confirming the performance discrepancy for Neural Networks. This study underscores the value of feature-wise mixing as a bias-aware pre-processing strategy for mitigating bias in machine learning models and advocates continued exploration of its impact on model performance and interpretability.
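The abstract does not specify the exact mixing procedure, so the sketch below is only one plausible reading of "feature-wise mixing": for each synthetic sample, every feature value is drawn from a randomly chosen source (regional) dataset, so no single context dominates any feature. The region means, feature count, and function name are all illustrative assumptions, not the author's implementation.

```python
import numpy as np

def feature_wise_mix(datasets, rng=None):
    """Mix feature columns across context-specific datasets.

    `datasets` is a list of equally shaped (n_samples, n_features)
    arrays, one per context (e.g. per location). For each cell of the
    output, the value is taken from a randomly selected source dataset,
    diversifying the contextual factors seen during training.
    """
    rng = np.random.default_rng(rng)
    stacked = np.stack(datasets)  # (n_sources, n_samples, n_features)
    n_sources, n_samples, n_features = stacked.shape
    # Pick a source dataset independently for every (sample, feature) cell.
    choice = rng.integers(n_sources, size=(n_samples, n_features))
    rows = np.arange(n_samples)[:, None]
    cols = np.arange(n_features)[None, :]
    return stacked[choice, rows, cols]

# Three hypothetical regional datasets with the same three features
# (e.g. tea price, GDP per capita, inflation rate), 100 samples each.
rng = np.random.default_rng(0)
regions = [rng.normal(loc=mu, size=(100, 3)) for mu in (0.0, 5.0, 10.0)]
mixed = feature_wise_mix(regions, rng=1)
print(mixed.shape)  # (100, 3)
```

The mixed array could then be fed to any of the five regressors named above (e.g. scikit-learn's `SVR` or `RandomForestRegressor`) and scored with MSE under 10-fold cross-validation, mirroring the evaluation the abstract describes.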