Artificial intelligence (AI) is no longer a new concept. Much has been written about this emerging technology, and new articles and research are published online every day. Organizations have begun to understand its full potential.
While AI can drive operational and cost efficiencies, strategic business transformation, and better and more tailored customer engagement, it is important not to fall into the trap of implementing AI solutions that look impressive and have an initial “wow” factor while giving little thought to how the new technology will add business value. Limited availability of the right quality and quantity of data, insufficient understanding of AI’s inherent risk, enterprise culture and regulations can all act as real, and in some cases perceived, barriers to widespread adoption of AI in organizations.
The European Union and international regulators have also taken an active interest in AI. In fact, the European Union proposed the world’s first Artificial Intelligence Act in April 2021 to regulate AI, addressing issues such as data-driven or algorithmic social scoring, remote biometric identification, and the use of AI systems in law enforcement, education and employment.
It is, therefore, important for an organization to understand the business risk of AI and whether the benefits of AI outweigh that risk. Prior to this writing, I had multiple discussions with industry experts and peers, and the following are what they cited as the most common sources of risk associated with AI:
- Transparency and accountability—Unlike humans, AI systems cannot exercise judgment or understand the context of the environment in which they are deployed. Moreover, they are only as effective as the data used to train them, and the scenarios in which systems are trained and deployed are limited. This lack of context and judgment, together with overall learning limitations, plays a key role in informing risk-based reviews and strategic deployment discussions. Another important issue with AI systems is trust. Because AI is an emerging technology, inadequate understanding of it can give rise to trust and accountability issues. It can be challenging for organizations to maintain the necessary level of understanding and control over AI-based decisions, including their appropriateness, fairness and alignment with the organization’s values and risk appetite.
- Data privacy—The opacity of some AI solutions poses practical challenges under certain regulations, such as the EU General Data Protection Regulation (GDPR), which requires organizations to explain to customers how their personal data are used and what assumptions and drivers sit behind a fully automated decision that significantly affects them. A major data privacy risk is the ability of attackers to infer whether particular records were part of the data set used to train a model (a membership inference attack), thereby compromising the privacy of the individuals in that data set.
- Data quality and availability—The quality of any decision made by an AI solution depends heavily on the quality and quantity of the data used. The absence of large sets of high-quality data is a major obstacle to the application of AI solutions. Poor data quality limits the system’s ability to learn and can negatively affect the decisions it makes in the future. Poor and incomplete data can result in erroneous or weak predictions or a failure to achieve the intended objectives.
- Governance and compliance—A major risk that AI systems face is a lack of governance, compliance and regulatory requirements. There are no established International Organization for Standardization (ISO) standards that define a set of mandatory requirements and help organizations implement them. Though some entities and authorities across the globe have established working groups to discuss the business risk and challenges posed by AI and other emerging technologies, AI is still in its early stages and, until specific governance and compliance guidelines are established, the usage, monitoring and potential applicability of AI will remain limited.
- Unclear legal responsibility—Another potential risk of AI is the question of legal responsibility. If AI systems are designed with ambiguous algorithms, who is legally responsible for the system’s outcome—the organization, the programmer or the system? This risk is not theoretical. In 2018, a self-driving car struck and killed a pedestrian; the car’s human backup driver, who was not paying attention when the AI system failed, was held responsible.
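The data privacy risk described above can be made concrete with a small sketch. The following Python example illustrates a membership inference attack against a toy “model” that has memorized its training points: because the overfitted model is far more confident on records it has seen, an attacker can use a confidence threshold to guess which records were in the training set. The model, threshold and data here are hypothetical stand-ins for illustration, not any production system.

```python
# Illustrative membership inference attack on a toy, overfitted model.
# All names, data and thresholds are hypothetical.
import random

random.seed(0)

# Toy "training set" the model has memorized, plus unseen points.
train = [(random.random(), random.random()) for _ in range(50)]
others = [(random.random(), random.random()) for _ in range(50)]

def confidence(model_train, x):
    """An overfitted model's confidence: 1.0 for memorized points,
    decaying with distance to the nearest training point."""
    d = min(((x[0] - a) ** 2 + (x[1] - b) ** 2) ** 0.5 for a, b in model_train)
    return 1.0 / (1.0 + d)

def infer_membership(model_train, x, threshold=0.99):
    """Attacker's rule: very high confidence suggests x was in the training data."""
    return confidence(model_train, x) >= threshold

members_flagged = sum(infer_membership(train, x) for x in train)
nonmembers_flagged = sum(infer_membership(train, x) for x in others)
print(f"Flagged as members: {members_flagged}/50 training points, "
      f"{nonmembers_flagged}/50 unseen points")
```

The gap between the two counts is what leaks: every memorized point is flagged, while almost no unseen points are. Defenses such as regularization or differentially private training work by shrinking exactly this confidence gap.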
Conclusion
The adoption of AI, and of innovation in general, requires organizations to undergo a learning process. This journey is not about avoiding AI risk, but about developing processes and tools that give enterprises confidence that such risk can be effectively identified and managed within the limits set by the organization’s risk culture and appetite. All major players in the organization, including stakeholders and decision makers, should understand that all that glitters is not gold, and a detailed risk vs. benefit realization exercise must be carried out throughout the innovation process. We all have a responsibility to learn more about the risk of AI to help control it. AI is not going away, and its risk will continue to grow and change as the technology becomes more advanced and pervasive. Most important, decision makers should be able to answer the question “Can we adequately protect the privacy and security of data used in AI?” taking into account the growing number of data privacy laws and regulations across the globe.
Hafiz Sheikh Adnan Ahmed, CGEIT, CDPSE
Is a certified data privacy officer with hands-on experience in information security, data privacy, business continuity, risk management, information and communication technology (ICT) governance, cloud compliance and digital transformation strategy. He is a certified assessor, auditor, implementer and trainer. He is associated with different ISACA® working groups as a volunteer. He can be reached via email at adnan.gcu@gmail.com and LinkedIn (http://ae.linkedin.com/in/adnanahmed16).