Artificial Intelligence Regulations Gaining Traction

Author: Niral Sutaria, CISA, ACA
Date Published: 29 December 2020

Not long ago, complex artificial intelligence (AI) systems involving deep learning were largely theoretical, with very few practical use cases. Today, many organizations use AI to make their processes more efficient, and AI has gained traction much faster than other emerging technologies.

Stanford’s Artificial Intelligence Index 2019 Annual Report states that prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years. Since 2012, the amount of computation used in AI training runs has doubled every 3.4 months, meaning that AI computation capacity has grown far faster than Moore’s Law would predict. Funding for AI startups has also increased, at an average annual growth rate of more than 48 percent between 2010 and 2018. Further, according to a 2019 global AI survey, 58 percent of respondents report that they have embedded at least one AI capability into a process or product in at least one function or business unit. Examples include self-driving cars and automated bank loan approval systems.
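
To put those doubling periods in perspective, here is a quick back-of-the-envelope calculation (my own illustration, not a figure from the report) comparing the annualized growth each implies:

```python
# Annualized compute growth implied by each doubling period.
# Moore's Law: doubling roughly every 2 years (24 months).
# Post-2012 AI training runs (per the Stanford AI Index 2019): doubling every 3.4 months.
moore_per_year = 2 ** (12 / 24)   # ~1.4x per year
ai_per_year = 2 ** (12 / 3.4)     # ~11.5x per year

print(f"Moore's Law pace:            ~{moore_per_year:.2f}x per year")
print(f"AI training compute (2012+): ~{ai_per_year:.1f}x per year")
```

In other words, a 3.4-month doubling period implies roughly an elevenfold increase in compute each year, compared with about 1.4 times per year under Moore’s Law.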

Such accelerated growth does not go unnoticed by regulators. Government bodies around the world are taking note of this AI revolution and are drafting initial strategies and guidelines to regulate AI. This matters because AI systems, unlike other technologies, are used to mimic human behavior and decision making.

Recent examples of these guidelines and regulations include India, where the Ministry of Electronics and Information Technology organized the Responsible AI for Social Empowerment (RAISE) 2020 Summit to discuss the potential of AI and its responsible use. In another example, the European Parliament adopted proposals on how the European Union (EU) can best regulate AI. Early in 2020, the United States government issued draft guidance for regulating AI applications. In Africa, Mauritius implemented a national AI strategy. Many other countries are working on similar measures or already have guidelines in place.

Because AI technology has grown so quickly, many organizations adopted AI systems before regulations or strict oversight specific to the technology existed. It is therefore worth examining whether these organizations are appropriately addressing the risk arising from AI systems. A key risk in an AI model is “explainability,” the ability to explain how the model arrives at its decisions. According to the aforementioned survey, only 19 percent of the large organizations surveyed indicated that they are mitigating the risk associated with the explainability of AI models, and only 13 percent are mitigating risk related to equity and fairness, such as algorithmic bias and discrimination.
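
As an illustration only (not drawn from the survey or any specific regulation), explainability analysis often begins with a simple question: which inputs actually drive the model's decisions? The sketch below uses scikit-learn's permutation importance on a hypothetical classifier standing in for, say, a loan approval model; the data and model here are assumptions for demonstration purposes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a production data set (e.g., loan applications).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score degrades. Large drops flag the inputs the model relies on,
# which is a starting point for explaining (and auditing) its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```

Techniques such as this do not make a complex model fully transparent, but they give auditors a concrete artifact to review when assessing explainability risk.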

The regulations set out by some countries are still in their early stages, and their effectiveness in governing AI technology remains to be tested. Therefore, as audit professionals and IT experts, we need to actively upgrade our skill sets to identify and mitigate risk arising from AI systems.

Editor’s note: For further insights on this topic, read Niral Sutaria’s recent Journal article, “Artificial Intelligence’s Impact on Auditing Emerging Technologies,” ISACA Journal, volume 6, 2020.
