AI Practitioners: Our Future Is in Your Hands

Author: Guy Pearce, CGEIT, CDPSE
Date Published: 9 December 2019

Imagine it is sometime in the 22nd century and that the future you is preparing for a complex surgical procedure at the local robot-run hospital, where it has become commonplace for robots to perform sophisticated, repeatable tasks, such as heart surgery, on human patients. This is the first time a robot will attempt a septal myotomy on a human, on you, no less. Almost 160 years after it was first performed, it remains one of the most complicated medical procedures in the world, and it still takes a human doctor up to 6 grueling hours to complete, all the while nothing but a machine keeps you alive.

In the days leading up to the procedure, the chief robot doctor of the facility, Dr. Ava, named after a character in a cult classic film made more than a century before and all but indistinguishable from a human except for the odd irregular whirring sound that occurred whenever she looked up toward the sky, sat you down to explain some of the quite considerable risk factors involved in the procedure. At one point, your eyes wandered to a few framed diplomas hanging on the wall, including one from the renowned C-3PO Institute, where Dr. Ava must have learned her diplomacy and her disarmingly reassuring bedside manner.

Your eyes are then drawn to one from the Isaac Asimov Institute, named after one of the most famous 20th century scientists and the author of the evergreen 1950 classic I, Robot. Recalling his works, you become distracted by thoughts of the Three Laws of Robotics, how robots learn and whether they are sufficiently equipped to handle the variability that all too often arises in complex medical procedures.

It is then that you begin to think about the quality of data required for a robot to learn, especially one performing something as delicate as heart surgery. Quite simply, even a small amount of bad data could mean death on the operating table under a robot; 1 micrometer too far to the right could be all it takes. You then become lost in flashbacks of a century before, drawn from those holographic history “books,” or holobooks, you so enjoy interacting with: a time when AI practitioners were barely aware (some actively choosing to remain ignorant) of the fact that data could be a kind of evil beyond their wildest dreams, a state of affairs that caused the nightmare on Earth otherwise known as the Blackening of the late 2030s.

The Blackening was a downstream outcome of the big data hype near the start of the 21st century. It was a time of almost unconstrained data fusion, analytics, machine learning (ML) and robotics by many self-proclaimed “experts” using the primitive technology of the era to increase efficiencies and, supposedly, to better serve humankind. Little did they want to know that dirty data do to an algorithm what poison does to a person: They kill, sometimes slowly.

Furthermore, those holobooks taught you about a time around the mid-2010s when many humans raised concerns about the future of human work and how robots would take over the world. Oh, how that crowd would chant “I told you so” if they were alive today. Warnings were sounded over the need to assess the quality of data for artificial intelligence (AI), including by that budding author Pearce, but the dirty data poison from decades of negligence, ignorance and technological debt leached into our robot helpers, ultimately leading them to run amok against us in scenes akin to the classic fiction Westworld. But alas, the siren call of power and profit was too strong. As a species, we did not actually think we would survive much beyond the middle of the 21st century. We were doomed, but there was a kind of h…

A faint whirring sound from Dr. Ava gently brings you back, and you ponder the Global Artificial Intelligence Act (GAIA) of 2078 and how it gave new impetus to human life on our pale blue dot. In particular, it required that all production AI instances be able to demonstrate the quality of the data used. Not only that, it required strict evidence of where the data came from, how they were transported and how they were transformed. It required that the data used in AI be described in unambiguous human terms to ensure that they would only be used as intended. In essence, it required that data be tested and that controls be put in place to prevent poor data from contaminating the combined consciousness of humans and machines. After this demanding mental journey, you find yourself easing into a greater sense of peace and relaxation, a state vital for the success of the medical procedure to come. By the way, the cost of noncompliance with GAIA? Exile to that cold, barren Martian moon Deimos.
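Back in your own century, the spirit of GAIA can already be approximated with simple controls. What follows is a minimal, purely illustrative sketch in Python; names such as LineageRecord and audit_for_ai are inventions for this example, not part of any standard or library. It shows data carrying evidence of their source, transport and transformations, and being tested against quality rules before they are ever released to a model.

# Illustrative only: a hypothetical, minimal "GAIA-style" data audit gate.
# LineageRecord and audit_for_ai are invented names for this sketch.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class LineageRecord:
    """Evidence of where data came from and how they were handled."""
    source: str                 # where the data came from
    transport: str              # how they were transported
    transformations: List[str] = field(default_factory=list)  # how they were transformed
    description: str = ""       # unambiguous human terms for intended use

def audit_for_ai(rows: List[Dict], lineage: LineageRecord,
                 checks: List[Callable[[Dict], bool]]) -> List[Dict]:
    """Release data to a model only if lineage is documented and
    every row passes every quality check; otherwise fail closed."""
    if not (lineage.source and lineage.transport and lineage.description):
        raise ValueError("Incomplete lineage: data may not enter a production AI.")
    failures = [row for row in rows if not all(check(row) for check in checks)]
    if failures:
        raise ValueError(f"{len(failures)} row(s) failed quality checks; data quarantined.")
    return rows  # only clean, documented data reach the algorithm

# Example: a cardiac data set must have plausible, non-missing vitals.
checks = [
    lambda row: row.get("heart_rate") is not None,   # completeness
    lambda row: 20 <= row["heart_rate"] <= 250,      # physiological plausibility
]
lineage = LineageRecord(
    source="hospital_ehr_v2",
    transport="encrypted batch export",
    transformations=["de-identified", "units normalized to bpm"],
    description="Resting heart rate (bpm), for surgical-risk models only",
)
clean = audit_for_ai([{"heart_rate": 72}, {"heart_rate": 88}], lineage, checks)
print(f"Released {len(clean)} audited rows to the model.")

The design choice is the point: The gate fails closed, so undocumented or dirty data never reach the algorithm, which is precisely the control the Blackening lacked.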

So to all of you AI practitioners living back there in 2019, please make sure you read my recent Journal article to understand why data intended for AI should be the subject of critical assessment and data audits. Preventing the Blackening of the late 2030s is all in your hands.

Read Guy Pearce's recent Journal article:

"Data Auditing: Building Trust in Artificial Intelligence," ISACA Journal, volume 6, 2019.