A NATIONAL FRAMEWORK FOR EXPLAINABLE AND BIAS-RESISTANT AI IN U.S. HEALTHCARE DECISION SYSTEMS

Authors

  • Sonia Nashid, Ispita Jahan, Rifat Chowdhury, Tahmina Akter Bhuya Mita

DOI:

https://doi.org/10.46121/pspc.54.1.45

Keywords:

Artificial intelligence, healthcare systems, algorithmic bias, explainable AI, health equity, clinical decision support, regulatory framework, algorithmic accountability

Abstract

Artificial intelligence has rapidly transformed healthcare decision-making in the United States, yet concerns about algorithmic bias, transparency, and accountability remain inadequately addressed. This research proposes a comprehensive national framework for implementing explainable and bias-resistant AI systems across U.S. healthcare institutions. The study examines current AI deployment practices, identifies critical vulnerabilities in existing systems, and develops policy recommendations grounded in ethical AI principles. Through analysis of healthcare AI implementations from 2019 to 2024 and stakeholder surveys involving 280 healthcare administrators, clinicians, and patients, the research reveals that 67% of deployed AI systems lack adequate explainability mechanisms, while 58% show evidence of demographic bias in decision outputs. The proposed framework integrates technical standards for algorithmic transparency, continuous bias monitoring protocols, regulatory oversight mechanisms, and patient rights protections. Findings indicate that structured governance combining federal regulatory standards with institutional accountability measures can substantially improve AI fairness and trustworthiness. This framework addresses the urgent need for systematic approaches to ensure AI-driven healthcare decisions serve all populations equitably while maintaining clinical effectiveness.
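The abstract's "continuous bias monitoring protocols" can be grounded with a standard fairness metric. The sketch below is illustrative only and is not drawn from the paper itself: it computes the demographic parity gap (the spread in positive-decision rates across demographic groups), one common way an institution might flag the kind of demographic bias in decision outputs that the study reports. All function names here are hypothetical.

```python
from collections import defaultdict

def selection_rates(preds, groups):
    """Positive-decision rate per demographic group.

    preds  : iterable of 0/1 model decisions (e.g., approve/deny care)
    groups : iterable of group labels aligned with preds
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for p, g in zip(preds, groups):
        totals[g] += 1
        positives[g] += p
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(preds, groups):
    """Largest difference in selection rates across groups.

    0.0 means every group receives positive decisions at the same
    rate; larger values indicate stronger disparity and could
    trigger a bias-monitoring alert under a governance framework.
    """
    rates = selection_rates(preds, groups)
    return max(rates.values()) - min(rates.values())
```

In a monitoring pipeline, this gap would be recomputed on a rolling window of decisions and compared against an institutionally chosen threshold; which metric and threshold to use is a policy choice the framework leaves to regulators and institutions.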

Published

2026-01-30