Elena's AI Blog

Podcast: How can we build trust and safety around AI?

16 Mar 2024

Elena Daehnhardt

A robot gardener, Midjourney 6.0 art, February 2024

Cláudia Lima Costa, an AI lawyer and data protection expert, has produced an exceptional podcast that addresses critical issues of trust and safety in AI systems. I highly recommend checking out Cláudia’s podcasts, featuring fascinating talks on AI in both Portuguese and English.

I was fortunate enough to be invited to a relaxed discussion, during which I shared my views on various topics related to AI, such as AI evolution, AI applications, data sources for training models, copyright, data protection, privacy-preserving techniques, and achieving reliable, explainable, safe, and helpful AI.

HOW CAN WE BUILD TRUST AND SAFETY AROUND AI?

Overall, I am happy with what we have achieved. We kept it light and easy-going, yet quite technical, all in simple words :) Besides, it was my first podcast as a guest, and it was fun!

One of the most thoughtful questions that Cláudia asked me was whether explainable AI is possible, considering the widely accepted idea that such models are black boxes.

My answer was firmly affirmative: yes, in simple words, we can create explainable AI models, even though it takes additional effort, at least with the current state of AI, and preferably with human feedback.

I wanted to reiterate that the statement "complex systems such as deep learning AI are inherently unexplainable" is not necessarily true and can be debated. As a result, I have written a blog post, Explainable AI is possible, demonstrating this while referring to current research in the area.
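To give a small taste of what "explainability with additional effort" can look like in practice, here is a minimal sketch using permutation importance, one common model-agnostic technique. It assumes scikit-learn is installed; the Iris dataset and random forest are purely illustrative, not from the podcast itself.

```python
# A minimal sketch of model explainability: permutation importance
# reveals which input features a "black box" model actually relies on.
# The dataset and model below are illustrative examples only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(X.columns, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Techniques like this (alongside SHAP, LIME, and human-in-the-loop evaluation) are part of why I argue the black-box view is not the end of the story.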

Please write to us with your thoughts, any questions about the podcast, or topic suggestions. What do you think about AI explainability and the black-box problem? You are also welcome to watch other episodes.

Thank you very much for reading.

Have a great weekend.

Best regards, Elena.

All Posts