Explainable Artificial Intelligence (XAI): Useful But Not Uncontested

Author: Guy Pearce, CGEIT, CDPSE
Date Published: 6 April 2022

Artificial intelligence (AI) has been referred to as a “black box” technology given the difficulty that organizational management, and even specialists, have in explaining what the technology does with its data inputs. This is especially true of the branch of AI known as statistical artificial intelligence, an outcome of the convergence of statistics with computer science, which encompasses techniques such as decision trees and neural networks.
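The contrast is easy to demonstrate. The following minimal sketch, which assumes scikit-learn and its bundled iris dataset purely for illustration, fits both a decision tree, whose learned rules can be printed and read as if/then statements, and a small neural network, whose learned weights offer no comparably human-readable account of its decisions.

```python
# Illustrative sketch only: contrasts a white box model (decision tree)
# with a black box model (neural network) on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# White box: the fitted tree can be dumped as explicit if/then rules.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(load_iris().feature_names)))

# Black box: the fitted network is just matrices of weights; inspecting
# them says little about why any individual prediction was made.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
print([w.shape for w in net.coefs_])  # weight matrix shapes, not reasons
```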

Data scientists and the developers of the technology may be among the few who understand the algorithms deployed. In general, only the data scientists designing an algorithm know the content of the black box: developers merely codify the algorithms provided to them by the data scientists, and testers merely check outputs against test cases that the data scientists have defined.

Ensuring that artificial intelligence does what it is designed to do, and that it does so safely and without causing harm (e.g., the harm that follows when human decisions or automated actions are taken on false-positive AI outcomes), is a good reason for explainable AI (XAI). There is also a regional regulatory requirement that organizations performing data processing be able to explain, to any customer who asks, what the processing does with their data and what automated decisions are being made.

Together with robustness, efficiency and ethics, explainability makes up part of the paradigm of responsible AI. XAI has been the subject of research for many years, with the aim of making AI more explainable to humans; related goals include achieving trustworthiness, confidence, fairness and privacy awareness in AI.

XAI is not without its challenges, though.

One of them is that the goal of XAI is in the eye of the beholder. For example, engineering teams might express a goal of XAI as providing visibility into vulnerabilities and flaws, deployment teams might express it as explaining the circumstances under which AI accomplishes its goal, and governance teams might express it as demonstrating compliance. In other words, fulfilling XAI generically would be very complex given the spectrum of goals for which XAI would need to produce successful outcomes.

There is also the concern, common across today’s information technology, that too many regulations and controls such as XAI could stifle AI innovation. Furthermore, in healthcare, it has even been proposed that a black box does not need explanation where the AI outcomes can be clinically validated.

The rapid growth in AI has resulted in white box (fully describable, explainable and observable) AI becoming more challenging to create. The arguments that white box AI does not perform as effectively as black box AI, or that it is easy to use black box AI in a manner that is not optimal, have raised the question, in this case in healthcare, of whether it is XAI or AI efficiency that should be the priority in AI development and deployment.
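One widely used compromise between these priorities is a post-hoc, global surrogate: keep the higher-performing black box, but fit a shallow interpretable model to its predictions and report how faithfully the surrogate mimics it. The sketch below is illustrative only; it assumes scikit-learn, uses its bundled breast cancer dataset, and stands in for any black box and surrogate pairing.

```python
# Illustrative sketch of a global surrogate explanation: a shallow tree
# is trained to reproduce a black box's outputs, not the ground truth,
# so its readable rules approximate the black box's behavior.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black box: an ensemble that typically outperforms a single small tree.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Surrogate: fit the interpretable tree to the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on new data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate,
                  feature_names=list(load_breast_cancer().feature_names)))
```

The fidelity score makes the trade-off explicit: a high score means the readable rules are a reasonable proxy for the black box, while a low score signals that the explanation cannot be trusted as an account of what the black box is actually doing.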

Importantly, the scope of XAI need not be the entire artificial intelligence domain. For example, XAI may be critical in public safety, legal, educational and geopolitical applications where automated or human decisions based on AI are made, whereas it may be deemed less important in informational contexts such as games and email filters.

Yes, the black box is a cause of unease to organizations deploying artificial intelligence from the perspectives of reputational, operational and legal risk. It is also unsettling for society at large, given recent high-profile cases of data-rich organizations abusing the public’s data in AI for profit at the cost of significant, even irreparable, tears in our social fabric. On the other hand, the large data contexts typical of statistical AI and the increasing need for real-time decision-making and action may drive AI efficiency requirements above XAI requirements. There are also cases in which outcomes validation may supersede the need to explain black box processing.

Ultimately, XAI provides a useful moderating effect on AI exuberance when balanced with appropriate support for AI innovation and performance, in the interests of socially responsible economic growth.