A VERIFIABLE ARCHITECTURE FOR TRUSTWORTHY AI IN AUTOMATED CLINICAL AND HEALTHCARE ADMINISTRATIVE DECISION SYSTEMS
DOI: https://doi.org/10.46121/pspc.51.1.3

Keywords: Trustworthy AI, Healthcare AI, Clinical Decision Support, AI Verification, Explainable AI, Medical AI Safety, Healthcare Automation

Abstract
Healthcare systems increasingly deploy artificial intelligence for clinical decision support and administrative automation, yet the opacity of AI models raises critical concerns about patient safety, regulatory compliance, and accountability. This research develops a comprehensive verifiable architecture that ensures trustworthiness, transparency, and auditability in AI-driven healthcare decision systems. Through systematic analysis of regulatory requirements, clinical workflows, and existing AI deployment challenges, we identify fundamental gaps in current approaches that treat AI models as black boxes without verification mechanisms. Our architecture introduces layered verification including pre-deployment validation, runtime monitoring, decision provenance tracking, and continuous compliance checking. The framework implements explainability requirements specific to healthcare contexts, ensuring clinicians understand AI reasoning in terms meaningful for patient care rather than abstract technical metrics. Validation across three healthcare organizations deploying AI for diagnosis support, treatment recommendations, and claims processing demonstrates that our architecture enables 94% decision traceability, reduces unsafe AI recommendations by 78%, and achieves full regulatory compliance verification. The research contributes both theoretical foundations for verifiable healthcare AI and practical implementation patterns enabling safe, trustworthy deployment of AI systems affecting patient health and healthcare operations.
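
As a concrete illustration of the decision provenance tracking layer named in the abstract, the sketch below shows one way such a component might be realized: each AI recommendation is logged with its model identity, a hash of de-identified inputs, and a clinician-facing explanation, chained together with SHA-256 hashes so the audit trail is tamper-evident. This is a minimal sketch under stated assumptions, not the paper's implementation; the names (DecisionRecord, ProvenanceLedger, the sepsis-risk example) are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable AI decision: model identity, inputs, output, and rationale."""
    model_id: str
    model_version: str
    patient_context_hash: str   # hash of de-identified inputs, never raw PHI
    recommendation: str
    explanation: str            # clinician-facing rationale, not a technical metric
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ProvenanceLedger:
    """Append-only, hash-chained log: every decision is traceable and tamper-evident."""
    def __init__(self) -> None:
        self._records: list[dict] = []
        self._last_hash = "0" * 64  # genesis hash

    def append(self, record: DecisionRecord) -> str:
        entry = asdict(record)
        entry["prev_hash"] = self._last_hash
        # Hash the canonical JSON form so any later edit breaks the chain.
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["entry_hash"] = digest
        self._records.append(entry)
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; False means a record was altered or removed."""
        prev = "0" * 64
        for entry in self._records:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

if __name__ == "__main__":
    ledger = ProvenanceLedger()
    ledger.append(DecisionRecord(
        model_id="sepsis-risk",        # hypothetical model name
        model_version="2.3.1",
        patient_context_hash=hashlib.sha256(b"deidentified-features").hexdigest(),
        recommendation="flag for clinician review",
        explanation="elevated lactate and heart-rate trend over 6h window",
    ))
    assert ledger.verify()  # chain intact -> decision is fully traceable
```

A hash-chained ledger of this kind is one plausible substrate for the traceability and compliance-checking properties the abstract reports, since any retroactive modification of a logged decision invalidates every subsequent entry.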

