Artificial Intelligence: Privacy and Information Security Risk Considerations

Author: Esanju Maseka, CISA
Date Published: 11 October 2023

As organizations strive to enhance operational efficiency through artificial intelligence and other cutting-edge technologies, it is important to include risk management as an area of strategic focus. With ever-evolving business and operational risks, organizations need dedicated resources focused on monitoring and realigning their risk posture, particularly as it pertains to privacy and security risk.

Organizations seeking to benefit from artificial intelligence while mitigating the risks associated with its use need to develop an inclusive, multidisciplinary approach that draws on the expertise of different professionals, from IT staff to legal and data analytics teams. This encourages all teams to take ownership and use AI tools responsibly.

Privacy Risk

Data privacy and security have been raised as key concerns that organizations need to consider on an ongoing basis as they make use of AI. For artificial intelligence to build its learning capacity, it requires large volumes of data to analyze and recognize patterns and processes. Unfortunately, much of the data that organizations hold contains personal information, increasing the risk that personal information leaks should an AI process be compromised.

To mitigate this risk, some organizations pseudonymize the data, replacing identifiable attributes with artificial identifiers. However, malicious actors using AI may still be able to re-identify individuals from pseudonymized data. Alternatively, organizations can use anonymized data, which involves encrypting or completely removing personally identifiable information, making re-identification significantly harder.
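
To make the distinction concrete, the following is a minimal sketch of pseudonymization using keyed hashing. The record fields and key handling are illustrative assumptions, not a prescribed implementation; a production system would manage the key in a secrets vault and cover every identifying attribute.

```python
import hashlib
import hmac

# Illustrative key only; a real key would be stored in a managed vault,
# never in source code.
SECRET_KEY = b"replace-with-managed-key"

def pseudonymize(value: str) -> str:
    """Replace an identifying value with a keyed hash (a pseudonym)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "email": "jane@example.com", "purchase": 42.50}

# Pseudonymize the direct identifiers; keep the analytical fields intact.
safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "purchase": record["purchase"],
}
print(safe_record)
```

Note that the mapping is deterministic, so records belonging to the same person remain linkable, which is precisely why pseudonymized data can sometimes be re-identified when combined with auxiliary data; anonymization removes the identifying fields altogether.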

At an individual level, while the use of AI is encouraged to “make life easier,” it is important for end users to realize that information fed to AI models may be retained and used to improve those models. Many of these AI models are widely used by the public. This means that any private or sensitive information shared with them is at risk of exposure, as the model may draw on it when generating a result or solution for another person.

Cybersecurity Risk

Traditional security information and event management (SIEM) systems focus on managing and analyzing security event data based on predefined rules. Generative AI is a game changer, using data-driven algorithms and self-learning capabilities to respond to emerging threats as well.
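
The contrast can be illustrated with a simple sketch: a static rule misses activity that stays under its fixed threshold, while a baseline learned from historical data flags it as anomalous. The event counts and thresholds below are invented for illustration and stand in for the far richer models real tooling would use.

```python
from statistics import mean, stdev

# Hourly failed-login counts for one account (illustrative history).
history = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
current = 14

# Traditional SIEM: alert only when a predefined rule is violated.
STATIC_THRESHOLD = 20  # assumed rule: alert above 20 failures per hour
rule_alert = current > STATIC_THRESHOLD

# Data-driven approach: learn a baseline from history and flag
# statistically unusual behavior (a z-score as a simple stand-in
# for self-learning detection models).
z_score = (current - mean(history)) / stdev(history)
learned_alert = z_score > 3.0

print(f"static rule fired: {rule_alert}")          # False: under the fixed bar
print(f"learned baseline fired: {learned_alert}")  # True: far above normal
```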

An often-discussed challenge with SIEM systems is analyst burnout and staffing shortages, and the knowledge gap that results. AI is increasingly being used to fill some of these staffing gaps by performing scheduled tasks on an ongoing basis. Oversight should be maintained over AI-performed activities to ensure alignment. Examples include following human-centric principles at every stage of the AI lifecycle, regularly checking the AI models’ decision-making output and creating negative-scenario alerts from AI models so that staff are notified of adverse results in a timely manner.
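
One way to operationalize such oversight is a routing gate that escalates uncertain or high-impact AI decisions to a human queue instead of executing them automatically. This is a minimal sketch; the confidence floor, action names and policy values are assumptions standing in for an organization's own governance rules.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    event_id: str
    action: str        # e.g., "block_ip", "quarantine_host"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

# Assumed policy values for illustration; real thresholds would come
# from the organization's risk appetite and governance process.
CONFIDENCE_FLOOR = 0.85
HIGH_IMPACT_ACTIONS = {"quarantine_host", "disable_account"}

def route(decision: AIDecision) -> str:
    """Negative-scenario gate: escalate risky or uncertain AI output."""
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human_review"  # uncertain: a person decides
    if decision.action in HIGH_IMPACT_ACTIONS:
        return "human_review"  # high impact: always reviewed
    return "auto_execute"      # routine and confident: proceed

for d in [AIDecision("e1", "block_ip", 0.97),
          AIDecision("e2", "quarantine_host", 0.99),
          AIDecision("e3", "block_ip", 0.60)]:
    print(d.event_id, "->", route(d))
```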

Unfortunately, generative AI also provides malicious actors with the tools to develop more sophisticated cyberattacks and exploit system vulnerabilities. Many malicious actors will focus on corrupting the learning model by feeding the AI poisoned data to affect its decision-making process. Others may use generative AI to create phishing attacks that resemble legitimate emails, making it harder for built-in security monitoring tools to flag them. Left unchecked, this could disrupt business operations, impact revenue-generating processes and affect overall business continuity.
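
A basic defensive measure against this kind of data poisoning is to screen training samples before they are ingested. The schema, bounds and label set below are illustrative assumptions and would not stop a determined attacker on their own, but they show the shape of an ingestion guardrail.

```python
# Screen new training samples before they reach the model.
VALID_LABELS = {"benign", "malicious"}

def screen_sample(sample: dict) -> bool:
    """Reject samples that violate basic integrity expectations."""
    if sample.get("label") not in VALID_LABELS:
        return False  # unexpected label: possible tampering
    size = sample.get("bytes_transferred", -1)
    if not 0 <= size <= 10**9:
        return False  # feature value outside plausible range
    if not sample.get("source_verified", False):
        return False  # only ingest data from trusted, verified feeds
    return True

incoming = [
    {"label": "benign", "bytes_transferred": 1200, "source_verified": True},
    {"label": "benign", "bytes_transferred": -5, "source_verified": True},
    {"label": "admin", "bytes_transferred": 100, "source_verified": False},
]
clean = [s for s in incoming if screen_sample(s)]
print(f"accepted {len(clean)} of {len(incoming)} samples")
```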

Misinformation and Discrimination

AI models depend on the information fed into them to learn the required processes and outcomes. In a world constantly battling racial, gender, sexual and religious discrimination, to name a few, it is imperative that those who supply this data ensure not only its integrity but also that biases are not embedded in its data elements. A greater burden is placed on organizations to ensure they do not teach AI models to reinforce existing biases or generate new ones.

An example of this is an HR team introducing an AI model to reduce the time spent processing thousands of applications. However, if the information fed to the model consisted of past successful applicants for an IT role who were male and from a narrow age group, the model may learn to select only that category of people, omitting other qualified candidates. An organization that fails to consider the impact of such discriminatory practices may face reputational and legal damage.
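
Skew of this kind can be surfaced with a simple fairness check such as the four-fifths (adverse impact) rule used in US employment-selection guidance, which flags any group whose selection rate falls below 80 percent of the highest group's rate. The applicant counts below are invented for illustration.

```python
# Illustrative adverse-impact check on AI screening outcomes.
# The counts are made up; the 0.8 threshold is the four-fifths rule
# from US employment-selection guidance.
outcomes = {
    "male":   {"applied": 400, "selected": 80},  # 20% selection rate
    "female": {"applied": 300, "selected": 24},  # 8% selection rate
}

rates = {group: c["selected"] / c["applied"] for group, c in outcomes.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f} -> {flag}")
```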

Equipping Your Team for Success

Overall, while AI could be the cutting-edge game changer a business has been looking for to enhance operations, it could also result in an expensive lawsuit or a fine from statutory regulators if used incorrectly. Organizations need to ensure they have equipped their teams with the knowledge and skill sets to work with AI models and reap the intended benefits. A key focus of these teams should be closely monitoring the organization's risk profile and continuously strengthening governance structures to respond to emerging risks.