DATA POISONING AND MODEL INTEGRITY THREATS IN AI SYSTEMS
DOI: https://doi.org/10.46121/pspc.54.2.07

Keywords: Artificial Intelligence (AI), Data Poisoning, Model Integrity, Trojan attack

Abstract
The rapid expansion of Artificial Intelligence (AI) systems into critical infrastructure, medical facilities, financial institutions, and autonomous systems has introduced security threats that differ fundamentally from conventional software security issues. Data poisoning and model integrity attacks are among the most dangerous of these threats because they target the machine learning (ML) pipeline at its most essential components. Data poisoning attacks corrupt training data to implant hidden malicious behaviors that can remain undetected for years. Model integrity threats, including backdoor insertion, model stealing, and Trojan attacks, further compromise the security of deployed AI systems. This paper examines the taxonomy, operational mechanisms, real-world impact, and mitigation of data poisoning and model integrity threats. Organizations deploying AI must adopt a comprehensive security framework that protects every stage of the ML pipeline, from data handling to model deployment. The paper concludes with open research directions and a roadmap for developing reliable, secure AI systems for operation in adversarial environments.
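To make the data-poisoning threat described above concrete, the following is a minimal illustrative sketch (not from the paper) of one of the simplest poisoning techniques, label flipping: an attacker with write access to the training set silently rewrites the labels of a small fraction of examples so that a model trained on the data learns the attacker's chosen association. The function name, dataset layout, and parameters are assumptions for illustration only.

```python
import random

def poison_labels(dataset, flip_fraction=0.1, target_label=1, seed=0):
    """Label-flipping data poisoning (illustrative sketch).

    dataset: list of (features, label) pairs.
    Returns a copy in which a random flip_fraction of the
    examples have had their label rewritten to target_label.
    """
    rng = random.Random(seed)
    poisoned = list(dataset)  # copy; the clean dataset is left untouched
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        features, _ = poisoned[i]
        poisoned[i] = (features, target_label)
    return poisoned

# Example: 100 benign samples, all labeled 0; poison 10% to label 1.
clean = [((x,), 0) for x in range(100)]
dirty = poison_labels(clean, flip_fraction=0.1, target_label=1)
print(sum(label for _, label in dirty))  # 10 flipped labels
```

Because only 10% of labels change, aggregate accuracy metrics on a held-out clean test set may barely move, which is precisely why such attacks can remain undetected for years, as the abstract notes.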

