PREVENTING BIAS IN AI-BASED DECISION-MAKING: ANALYZING TECHNIQUES TO REMOVE UNFAIR PREJUDICE IN ALGORITHMIC OUTCOMES

Authors

  • Kaleshwar Aryasomayajula

DOI:

https://doi.org/10.46121/pspc.50.2.6

Keywords:

Algorithmic bias, fairness in AI, bias mitigation, discriminatory AI, ethical artificial intelligence, algorithmic fairness, machine learning bias

Abstract

Artificial intelligence systems increasingly influence critical decisions affecting human lives, from loan approvals to criminal sentencing. However, these systems often perpetuate and amplify existing societal biases, leading to discriminatory outcomes against marginalized groups. This research examines the sources of bias in AI decision-making systems and evaluates techniques for detecting and mitigating unfair prejudice in algorithmic outcomes. We analyze both pre-processing methods that address biased training data and post-processing techniques that adjust model outputs to ensure fairness. Through experimental evaluation of multiple bias mitigation strategies across different application domains, we demonstrate that hybrid approaches combining data correction with algorithmic fairness constraints achieve superior results compared to single-method interventions. Our findings reveal that while technical solutions can significantly reduce measurable bias, they require careful calibration to balance fairness with accuracy. The research provides practical guidance for organizations deploying AI systems in high-stakes contexts where fairness is essential for ethical and legal compliance.
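The abstract contrasts pre-processing methods that correct biased training data with post-processing adjustments to model outputs. As a minimal illustrative sketch (not the paper's own method or dataset), the following shows reweighing, a common pre-processing technique: each training example receives a weight so that group membership and the label become statistically independent in the weighted data. The `groups` and `labels` arrays are synthetic placeholders.

```python
# Illustrative sketch of reweighing, a pre-processing bias-mitigation
# technique: weight each example by w = P(g) * P(y) / P(g, y) so that
# the sensitive group g and the label y are independent under the weights.
from collections import Counter

def reweighing_weights(groups, labels):
    """Return one weight per example: w = P(g) * P(y) / P(g, y)."""
    n = len(groups)
    count_g = Counter(groups)                 # marginal counts per group
    count_y = Counter(labels)                 # marginal counts per label
    count_gy = Counter(zip(groups, labels))   # joint counts
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Synthetic data: group "a" receives positive labels more often than "b".
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing_weights(groups, labels)
# After reweighing, the weighted positive rate is equal across groups.
```

Training a downstream classifier with these sample weights is one way to reduce measurable disparity before modeling; post-processing methods, by contrast, would leave training untouched and adjust decision thresholds per group afterward.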

Published

2022-05-30