Securing an AI Ecosystem: The Next Big Cybersecurity Challenge

Author: Varun Prasad, CISA, CISM, CCSK, CIPM, PMP
Date Published: 23 May 2023

This year has rightly been touted as the year of AI. Ever since ChatGPT began permeating every aspect of our lives, from kitchen tables to boardrooms and even classrooms, there has been a surge of interest in the larger promise of AI and its related technologies. As enterprises look for ways to integrate AI into their business processes and products, the security and integrity of AI systems become paramount.

Security in this context refers to the information security risks around developing, operating and maintaining an AI system, not the ethics and privacy concerns related to AI (topics such as data quality, bias in decision-making, and data usage and sharing warrant a separate and larger discussion). As with any other application, tampering with the learning models or compromising the decision-making logic will directly corrupt the final result, rendering these systems unreliable. The unique attributes of AI systems require special consideration during the risk management process to identify distinct threats and formulate appropriate controls to mitigate them.

AI security typically begins with a discussion of the security of training data—risks around data contamination, the integrity of the data preparation process and access to input sources. However, these risk areas also apply to big data warehouse applications and are not unique to AI. Challenges such as access aggregation and data commingling are known risks and need to be addressed in those systems as well. Let us look at a few risks and key controls that are specific to AI systems and need special attention.

Learning models at the core of AI

The learning models are at the core of an AI system, making their storage and the security around them very important. Learning models are typically stored in AI registries, in object stores on the cloud or as code in repositories. Access to registries must be tightly controlled using the principle of least privilege and integrated with existing identity providers. Customized IAM or bucket policies must be created to assign granular permissions to the object stores housing learning models and parameters. These policies can be assigned dynamically so that access to the object stores is granted only when needed.
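As a minimal sketch of such a least-privilege policy, assuming an AWS deployment managed with boto3, the following scopes read access on a model bucket to a single engineering role and rejects unencrypted transport. The bucket name and role ARN are hypothetical placeholders:

```python
import json

import boto3  # assumes AWS; other clouds have equivalent policy mechanisms

# Hypothetical names for illustration only.
MODEL_BUCKET = "ml-model-registry-prod"
ML_ENGINEER_ROLE_ARN = "arn:aws:iam::123456789012:role/ml-engineer"

# Least-privilege bucket policy: only the ML engineer role may read model
# artifacts, and all access must use TLS. Other principals are implicitly denied.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowModelReadForMLEngineers",
            "Effect": "Allow",
            "Principal": {"AWS": ML_ENGINEER_ROLE_ARN},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{MODEL_BUCKET}",
                f"arn:aws:s3:::{MODEL_BUCKET}/*",
            ],
        },
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{MODEL_BUCKET}",
                f"arn:aws:s3:::{MODEL_BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket=MODEL_BUCKET, Policy=json.dumps(bucket_policy)
)
```

Keeping the policy document in code like this also makes it reviewable and versionable alongside the rest of the infrastructure configuration.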

Repository security is an area that generally receives too little attention and requires stronger controls. A repo-authorization matrix should be created to define the access level each engineer and operations role requires, based on the sensitivity of the code held in each repo; this matrix then serves as the basis for provisioning access, as sketched below. Lastly, access to learning models must be included as an item in periodic user access reviews to ensure access is restricted to appropriate personnel only.
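A minimal sketch of such a matrix, with hypothetical roles, repositories and sensitivity tiers, might look like this in Python:

```python
# Hypothetical sensitivity tiers per repository.
REPO_SENSITIVITY = {
    "model-training": "restricted",   # learning models and parameters
    "feature-pipeline": "internal",
    "docs-site": "public",
}

# Hypothetical matrix: the maximum access level each role gets per tier.
ACCESS_MATRIX = {
    "ml-engineer": {"restricted": "write", "internal": "write", "public": "write"},
    "ops":         {"restricted": "read",  "internal": "write", "public": "write"},
    "contractor":  {"restricted": None,    "internal": "read",  "public": "read"},
}

LEVELS = {None: 0, "read": 1, "write": 2}

def is_access_allowed(role: str, repo: str, requested: str) -> bool:
    """Return True if the role's entitlement covers the requested access."""
    tier = REPO_SENSITIVITY[repo]
    granted = ACCESS_MATRIX.get(role, {}).get(tier)
    return LEVELS[granted] >= LEVELS[requested]

# A contractor asking for write access to the model repo is denied.
assert not is_access_allowed("contractor", "model-training", "write")
assert is_access_allowed("ml-engineer", "model-training", "write")
```

Provisioning requests can then be validated against the matrix automatically rather than approved ad hoc.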

Change management in an AI context

Feature velocity is a key metric that product managers track closely and continually seek to improve in today's SaaS world. Although rolling out new features quickly is important for maintaining a competitive edge, in an AI application, accuracy, precision and repeatability metrics are more relevant and meaningful. Product development methodologies have to be tweaked to include extensive design reviews, threat model reviews and testing procedures that provide a high degree of confidence that the system output is reliable and repeatable.

Change management processes must be enhanced with detailed review and testing requirements that depend on the type and nature of the change. For instance, changes to learning models or decision-making algorithms need more scrutiny and validation than other types of changes or bug fixes. Most software development processes for cloud-native applications today involve a code review by a peer or senior team member and a combination of automated smoke and sanity tests. However, given the criticality of changes to AI systems, a more thorough evaluation and a series of detailed functional and regression tests must be performed before deploying to production, as sketched below.
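One way to encode this tiered scrutiny is to route each change type to a different set of pre-deployment gates. The change types and gate names below are hypothetical illustrations, not a prescribed taxonomy:

```python
from enum import Enum, auto

class ChangeType(Enum):
    BUG_FIX = auto()
    UI_TWEAK = auto()
    MODEL_UPDATE = auto()        # learning model or its parameters
    ALGORITHM_CHANGE = auto()    # decision-making logic

# Hypothetical gates: checks that must pass before deployment.
BASELINE_GATES = ["peer_review", "smoke_tests", "sanity_tests"]
HIGH_SCRUTINY_GATES = BASELINE_GATES + [
    "senior_review",
    "threat_model_review",
    "functional_test_suite",
    "regression_test_suite",
    "accuracy_and_precision_validation",
]

def required_gates(change: ChangeType) -> list[str]:
    """Model and algorithm changes take the extended validation path."""
    if change in (ChangeType.MODEL_UPDATE, ChangeType.ALGORITHM_CHANGE):
        return HIGH_SCRUTINY_GATES
    return BASELINE_GATES

print(required_gates(ChangeType.MODEL_UPDATE))
```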

Additionally, similar to traditional software supply chain attacks, AI systems are subject to model supply chain attacks. To mitigate these, static application security testing (SAST) and software composition analysis (SCA) must be integrated into CI/CD pipelines to identify vulnerabilities and release clean, secure applications.
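As an illustration, assuming a Python codebase and using bandit (a SAST scanner) and pip-audit (an SCA tool) as stand-ins for whatever scanners a team has adopted, a simple pipeline gate might look like this sketch; the src directory path is a placeholder:

```python
import subprocess
import sys

def run_security_gate() -> None:
    """Fail the pipeline if static analysis or the dependency audit finds issues."""
    checks = [
        # Static application security testing over the source tree
        # (-lll limits output to high-severity findings).
        ["bandit", "-r", "src", "-lll"],
        # Software composition analysis of installed dependencies.
        ["pip-audit"],
    ]
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Both tools exit non-zero when findings are present.
            sys.exit(f"Security gate failed: {' '.join(cmd)}")

if __name__ == "__main__":
    run_security_gate()
```

Running this as a required CI step ensures a build cannot reach production with known vulnerable dependencies or flagged code patterns.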

AI monitoring: going beyond routine security

AI systems are complex and have to be constantly monitored to ensure they operate as intended, to identify anomalies, to detect security issues and to address problems proactively before they adversely impact business outcomes. AI monitoring goes beyond routine security and operational monitoring to examine the performance of the machine learning models and the workings of the algorithms underlying the decision-making process. It tracks metrics such as (but not limited to) data precision, output bias, data drift and outliers, facilitating early detection of problems in the system. I believe AI monitoring must be used in tandem with traditional SIEM tools to obtain a complete view of the performance and security posture.
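As one minimal sketch of the data drift portion of such monitoring, assuming Python with NumPy and SciPy, a two-sample Kolmogorov-Smirnov test can compare a feature's live distribution against its training baseline. The threshold and data below are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(baseline: np.ndarray, live: np.ndarray,
                 alpha: float = 0.01) -> bool:
    """Flag drift when the KS test rejects the hypothesis that live
    inputs follow the same distribution as the training baseline."""
    result = ks_2samp(baseline, live)
    return result.pvalue < alpha

# Example: a feature whose live distribution has shifted upward.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.5, scale=1.0, size=5000)
if detect_drift(baseline, live):
    print("Drift detected: raise an alert for investigation")
```

In practice, such drift signals would feed the same alerting pipeline as the SIEM so that model health and security events are triaged together.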

Almost all AI frameworks and principles published by organizations and think tanks cite the safety, security and reliability of AI systems as an important prerequisite. Hence, the cybersecurity posture of these systems is expected to be stronger and more mature than that of other applications. As AI continues to evolve as a technology, more risks and threats will emerge, and we must keep improving the control environment to address them. In the months to come, I foresee that control frameworks tailored to AI systems will become available, providing us with more guidance and tools to secure an AI ecosystem.

Editor’s note: Learn more about AI with ISACA’s AI Fundamentals Certificate.