What is Explainable AI?

These excerpts appeared in an article on CMSWire written by Erika Morphy.

Artificial intelligence appears to be creeping into every corner of our lives. And it’s
making some pretty big decisions for us, which raises the question: how is it making these
decisions? The answer, organizations often find, is very complex and not well
understood. Whether for compliance reasons or to eliminate bias, there is a need to make
these decision-making capabilities understandable. This is where explainable AI (XAI), or
transparent AI, comes in.

Let’s begin by looking at this short list of decisions that AI is making for us in the here and
now.

Whether or not a tumor has become cancerous.
Whether or not an insurance claim should be processed or denied.
Whether or not a traveler is approved to go through airport security.
Whether or not a loan should be made.
Whether or not a missile launch is authorized.
Whether or not a self-driving vehicle brakes.

These are complex matters that are well suited to AI’s strength — its ability to process
vastly more data than a human can, said Mike Abramsky, CEO of RedTeam Global.
But the decisions AI makes also reflect the technology’s weakness, the so-called
“Black Box” problem, Abramsky said. Because deep learning is not transparent,
the system simply can’t explain how it arrived at its decision. No matter how much you
respect AI’s advances, though, most of us would also like to know how AI came to the
conclusions that it did, if only for curiosity’s sake. So do the proponents of a movement
called explainable AI, and their reasons for wanting to know go far beyond mere
curiosity.

It is possible there is no perfect solution, and we may end up taking these decisions from AI
systems on faith, RedTeam Global’s Abramsky said. “Today, we drive cars at high speed,
take medicines that have been engineered to cure us, fly through the air on planes, all on
faith in the technology that was used to develop them.”

Read the article >