Elena's AI Blog

Explainable AI is possible

21 Feb 2024 / 14 minutes to read

Elena Daehnhardt


a cyborg holds a black box, Midjourney 6.0 art, June 2024


Introduction

The complexity of AI, particularly deep learning models, has led to the “black box” criticism, highlighting the lack of understanding about how deep learning models arrive at their decisions. While there’s truth to this concern, having a nuanced view is important.

I think it is also important to mention the ongoing debate about AI explainability, AI computational effectiveness, and the related regulations, which are succinctly described in the Right to explanation and Explainable artificial intelligence articles, both great starting points if you would like to study the topic.

This post was inspired by our podcast conversation with Cláudia Lima Costa, a lawyer specialised in AI and data protection. Cláudia asked me an important question about the explainability of AI.

HOW CAN WE BUILD TRUST AND SAFETY AROUND AI?

I had a very affirmative answer. Do you know why?

We will further clarify the explainability problem and the related research. I will also share my view on AI explainability, which is complex but possible.

Explainable AI

I like the Explainable AI definition at IBM.com:

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.

Explainable AI helps in understanding an AI model’s impact, potential biases, accuracy, fairness, transparency and outcomes [3]. It’s crucial for building trust and adopting a responsible approach to AI development. The black box models created directly from data can be challenging to understand, and explainability can help ensure the system is working as expected while meeting regulatory standards [3].

Why is Explainable AI important?

As noted in the introduction, the complexity of deep learning models is behind the “black box” criticism: we often lack an understanding of how they arrive at their decisions. While there is truth to this concern, a nuanced view is important, and in this post I argue that explainability, though complex, is possible.

Indeed, it is essential to have explainable AI to create a safe and reliable user experience. Especially in high-risk AI applications, such as decision support in the medical context, we have to explain the logic behind particular decisions.

Another example is the high-risk application of distinguishing poisonous from edible mushrooms; we must explain to the end user why the AI model “thinks” a mushroom is safe to eat. Otherwise, we would have our concerns, right?

What the AI Act says about it

Another question is why serious AI companies might need to adapt their processes to create explainable AI. The answer lies in regulation: the AI Act was created to ensure AI is used responsibly and safely in the EU, and it is, in fact, the first step in AI regulation.

You can see the AI Act proposal at artificialintelligenceact.eu. The act says some AI is too risky and bans things like social scoring (ranking people based on behaviour) and facial recognition for law enforcement. But for “good AI,” it sets guidelines for safe use, for instance:

  1. Thorough testing and evaluation of AI models are absolutely necessary to create production-ready AI applications. Especially for high-risk AI, you must test and check it carefully before releasing it.
  2. The data used in creating AI must be considered: is it fair and unbiased?
  3. It is also essential to have human oversight and feedback on the functioning of AI systems.
  4. Transparency about the AI model architecture, achieved by sharing information about the internal workings of the model. However, transparency might be challenging when businesses worry about maintaining their competitive advantage in the market.
  5. Explainable AI, which is complex but possible, as we discuss in this post :) If an AI makes a decision about you, you have the right to understand why.

The EU Parliament approved the AI Act on 13 March 2024; the new rules for AI usage still require further steps before taking full effect. As we read in the AI Act proposal:

“Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.”

To recap, AI systems should be explainable, and so should the related AI models. This is the right approach when we want to create something reasonable and well-implemented, even though it may seem demanding.

The AI Act proposal is extensive, and I am still reading it. However, the main points are above, and you can agree or disagree with the proposed approach.

We hope that AI regulation will support AI development without hindering novel solutions and the benefits AI brings.

Deep Learning and the “Black-box”

Why is AI explainability under question now? Many people argue that it is impossible to create explainable AI.

Reasons for “black box” perception:

  • Internal complexity: Deep learning models often have millions or even billions of parameters, making their internal workings intricate and complex to interpret.
  • Non-linear relationships: Unlike simpler rules-based systems, AI models learn complex relationships between data points, making it challenging to pinpoint the exact reasoning behind each output.

“A picture is worth a thousand words.” This saying applies when analysing a popular machine learning algorithm - Decision Tree. Despite the algorithm being complex and extensive, we can still understand and explain the logic behind its decisions or outputs.

Consider a decision tree for predicting the survival of the Titanic passengers. As I have explained in my post Machine Learning Tests using the Titanic dataset, “Decision trees are helpful to visualise features, and the top features in a tree are usually the most important features.”

Decision Tree trained on the Titanic data
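As a small illustration, here is a minimal sketch of training such a tree with scikit-learn and printing its decision rules as plain if-then text. The file name titanic.csv and the selected columns are assumptions; adapt them to your copy of the dataset.

```python
# A sketch of an interpretable model: a shallow decision tree whose rules a
# human can read. Assumes a Kaggle-style titanic.csv with the usual columns.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("titanic.csv")  # hypothetical file path
df = df[["Survived", "Pclass", "Sex", "Age", "Fare"]].dropna()
df["Sex"] = (df["Sex"] == "female").astype(int)  # encode sex as 0/1

X, y = df.drop(columns="Survived"), df["Survived"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A shallow tree stays human-readable: every root-to-leaf path is a rule.
tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))
print("Test accuracy:", round(tree.score(X_test, y_test), 3))
```

Limiting the depth keeps the tree small enough for a human to read every rule, which is exactly the trade-off between accuracy and interpretability discussed later in this post.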

However, when we talk about deep learning, we consider deep neural networks, which are really large! See a schematic example of a deep neural network at Wikimedia:

Example of a deep neural network

Does it really mean that deep neural networks are unexplainable? Surely not! I agree it is a complex task. However, I totally disagree that it is impossible.

Everything is technically possible. Sometimes, we must do something additionally, but we CAN explain everything. We should learn how to do it since it is ABSOLUTELY necessary for creating reliable and trustful applications for our safety.

Is Explainability in AI even possible?

How can we implement explainability in AI systems?

Firstly, using a mix of technologies, datasets, and several algorithms working together can help create an explainable AI system. Think about new AI techniques such as RAG.

RAG refers to retrieval-augmented generation, which combines deep learning with information retrieval: retrieved documents provide the context that grounds the generated output. This combined approach helps to enhance the explainability of the AI model, because we can point to the sources behind an answer.
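To make the idea concrete, here is a toy sketch of the retrieval step, assuming a simple TF-IDF retriever; the downstream language model call is left out, and the document snippets are made up for illustration. The point is that the retrieved passage can be shown to the user as the evidence grounding the answer.

```python
# A toy retrieval-augmented generation (RAG) sketch: TF-IDF retrieval picks
# the most relevant document, which is placed in the prompt and can also be
# shown to the user as the evidence behind the generated answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Amanita phalloides, the death cap, is highly poisonous.",
    "Boletus edulis, the porcini, is a prized edible mushroom.",
    "Decision trees expose their reasoning as readable if-then rules.",
]
question = "Is the death cap mushroom safe to eat?"

vectorizer = TfidfVectorizer().fit(documents + [question])
doc_vectors = vectorizer.transform(documents)
query_vector = vectorizer.transform([question])

# Rank documents by similarity to the question and keep the best match.
scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()

prompt = f"Answer using this context:\n{documents[best]}\n\nQuestion: {question}"
print(prompt)  # the retrieved context explains what grounded the answer
```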

Secondly, combining AI with human expertise can enhance transparency and address ethical concerns. It is called the “human-in-the-loop” approach.
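A minimal sketch of the human-in-the-loop idea, with an illustrative confidence threshold: predictions the model is unsure about are routed to a human reviewer instead of being acted on automatically.

```python
# Human-in-the-loop sketch: low-confidence predictions go to a review queue.
# The threshold value and the dictionary format are illustrative choices.
def route_prediction(label: str, confidence: float, threshold: float = 0.9) -> dict:
    """Return the decision, or mark it for human review if uncertain."""
    if confidence >= threshold:
        return {"decision": label, "source": "model"}
    return {"decision": "needs_review", "source": "human_queue", "model_guess": label}

print(route_prediction("edible", 0.97))  # acted on automatically
print(route_prediction("edible", 0.62))  # sent to a human expert
```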

Indeed, AI explainability is a HOT topic. We will now dig deeper into the related research.

Explainable AI (XAI) field

The explainable AI (XAI) field focuses on developing techniques to understand and explain how AI models work. Various methods exist to illuminate the factors influencing model outputs, such as saliency maps (see the research paper Efficient Saliency Maps for Explainable AI) and feature attribution (discussed in Gradient backpropagation based feature attribution to enable explainable-ai on the edge).
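As a rough sketch of the gradient-based flavour of these methods (not the specific technique of either paper), here is a few-line PyTorch example: the absolute gradient of the predicted score with respect to the input indicates which features most influenced the output. The tiny model and random input are placeholders; the same idea yields per-pixel saliency maps for images.

```python
# Bare-bones gradient saliency: how sensitive is the prediction to each input?
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one sample with 4 features
score = model(x)[0].max()                  # score of the predicted class
score.backward()                           # gradients flow back to the input

saliency = x.grad.abs().squeeze()
print("feature saliency:", saliency)       # larger values = more influential
```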

Moreover, researchers are designing simpler “interpretable models” that prioritise explainability while maintaining acceptable performance. Remember the decision tree? It is one such model.

XAI research is advancing, and explainability tools are becoming more sophisticated. Complete transparency, especially for high-complexity models, remains a challenge. But it is possible!

However, achieving high performance often requires complex models, which makes perfect explainability less feasible. We should balance the focus on explainable models with creating top-notch, high-performance AI, especially as computing resources become more available.

Moreover, the need for explainability varies depending on the application. For critical domains like healthcare, high levels of transparency are crucial. For creative domains such as image generation, explainability can be an exciting puzzle for training our own intelligence :)

Just remember, everything is possible when we want it! The black box is just a metaphor, and it should not stop us from developing new models that are complex, efficient, and explainable!

Related research

I’ve selected a few important and recent publications in XAI (besides the aforementioned), with summaries of their main findings in simple terms.

  1. Toward explainable artificial intelligence for precision pathology discusses the limitations of conventional AI and presents solutions using explainable AI to make machine learning decisions more transparent. The authors provide an overview of the relevant foundations in pathology and machine learning and present practical examples to help understand what AI can achieve and how it should be done.

  2. Towards explainable artificial intelligence emphasises the importance of transparent decision-making in artificial intelligence and explains recent developments in explainable AI. The authors discuss how, with the explainable AI, we can identify novel patterns and strategies in domains like health and material sciences and understand the reasoning behind the system’s decisions.

  3. Interpreting black-box models: a review on explainable artificial intelligence reviews the current state-of-the-art XAI research, evaluates XAI frameworks, and highlights emerging issues for better explanation, transparency, and prediction accuracy.

  4. What Are We Optimizing For? A Human-centric Evaluation Of Deep Learning-based Recommender Systems evaluates top-performing deep learning-based recommendation algorithms (with exceptional performance on MovieLens-1M dataset) using human-centric measures: Novelty, Diversity, Serendipity, Accuracy, Transparency, Trustworthiness, and Satisfaction.

  5. How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods analyses existing approaches for explaining Deep Neural Networks across different domains and applications. The authors present the results of a Mechanical Turk survey identifying end-users’ preferred explanation styles and provide a readily available and widely applicable implementation of explanation-by-example through their open-source library ExMatchina.

I recommend keeping up with research conferences like NeurIPS and AAAI, which provide insights into the latest advancements in XAI. In particular, you can also check out W34: XAI4DRL: eXplainable Artificial Intelligence for Deep Reinforcement Learning.

Discussion

AI systems are intricate, and human cognitive abilities may struggle to comprehend all components and interactions. This makes it challenging to form a comprehensive understanding.

However, we don’t always need to fully understand things to find valuable explanations. Different levels of detail can be helpful depending on the context and audience.

For example, focusing on key decision points or high-level trends might be more actionable than striving for an exhaustive understanding of every intricate detail.

Ultimately, whether highly complex systems are inherently unexplainable depends on several factors, including the specific system, the desired level of explanation, and the capabilities of the explanation methods.

Ongoing research in XAI holds promise for pushing the boundaries of what we can explain, even in highly intricate systems.

Conclusion

While the “black box” concern has merit, AI isn’t entirely unexplainable. Ongoing research and development are moving toward more transparent and interpretable models. It’s essential to consider the application context and specific model characteristics when evaluating AI’s “black box” nature.

Discussing the nuances and ongoing efforts can lead to a more informed conversation about the explainability of AI and its responsible development and application. Explainable AI research is currently a hot topic.

Additionally, the AI Act and similar initiatives can potentially aid in creating transparent and explainable AI systems and, I hope, ensure safety and reliability in high-risk applications without stifling progress.

References

1. Right to explanation

2. Explainable artificial intelligence

3. IBM.com

4. The AI act proposal

5. AI Act

6. Machine Learning Tests using the Titanic dataset

7. Efficient Saliency Maps for Explainable AI

8. Gradient backpropagation based feature attribution to enable explainable-ai on the edge

9. Toward explainable artificial intelligence for precision pathology

10. Towards explainable artificial intelligence

11. Interpreting black-box models: a review on explainable artificial intelligence

12. What Are We Optimizing For? A Human-centric Evaluation Of Deep Learning-based Recommender Systems

13. How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods

14. NeurIPS

15. AAAI

16. W34: XAI4DRL: eXplainable Artificial Intelligence for Deep Reinforcement Learning


About Elena

Elena, a PhD in Computer Science, simplifies AI concepts and helps you use machine learning.





Citation
Elena Daehnhardt. (2024) 'Explainable AI is possible', daehnhardt.com, 21 February 2024. Available at: https://daehnhardt.com/blog/2024/02/21/explainable-ai-possible/