Elena's AI Blog


Elena Daehnhardt

Image credit: Illustration created with Midjourney, prompt by the author.
Image prompt: "An illustration representing cloud computing"

AI Challenges, Questions, Law and Ethics

A structured sequence on how AI changes responsibility, trust, law, and human decision-making.

Posts in This Series

Part 1: Why AI will never void humanity?

Why will AI never void humanity? What does AI want so badly? I was thinking about these questions while travelling, and I will share my initial thoughts with you, my dear reader.

Part 2: Living with AI in Pursuit of Happiness

This post is not about coding or AI as such; it is about living with AI in human society, striving for happiness while building on technological advances.

Part 3: Explainable AI is possible

The complexity of AI, particularly deep learning models, has led to the "black box" criticism, highlighting the lack of understanding of how deep learning models arrive at their decisions. While there is truth to this concern, a nuanced view is important. In this post, I share my view that AI explainability is complex but possible.

Part 4: Podcast: How can we build trust and safety around AI?

Lawyer Cláudia Lima Costa is an expert in Artificial Intelligence and has created an amazing podcast that raises pertinent questions about trust and safety in AI systems. I was fortunate enough to be invited to a relaxed discussion where I shared my views on various topics related to AI, such as AI evolution, AI applications, data sources for training models, copyright, data protection, privacy-preserving techniques, and achieving reliable, explainable, safe, and helpful AI.

Part 5: Robots and True Love

In this post, I write about robots: the challenges of building them for real-life tasks, key research areas, safety and ethical considerations, and future aspirations. I also briefly mention a few starting points for building robots with Raspberry Pi and Python.

Part 6: ARC-AGI benchmark and a hefty prize

I share information about the recently launched ARC-AGI Kaggle competition, which focuses on advancing general intelligence.

Part 7: Narrow AI, General AI, Superintelligence, and The Real Intelligence

In this post, I discuss the main AI types and share my understanding of whether general intelligence is possible in the future.

Part 8: Regulation on artificial intelligence has already been published

The AI Regulation has now been published; it requires compliance with several obligations, such as transparency and human oversight, when an AI system is deemed high-risk. It is important to stay up to date and understand how this regulation will be applied.

Part 9: Generative AI vs. Large Language Models

Generative AI and Large Language Models (LLMs) are both important concepts in artificial intelligence, but they are not the same. Generative AI refers to different models that can create various types of content, such as text, images, and music. LLMs are a specific type of generative AI that focuses on understanding and producing human language. This post explains their differences, highlights key techniques like Transformers and GANs, and mentions important open-source projects.

Part 10: Multimodal AI

Multimodal AI is rapidly evolving, pushing the boundaries of what machines can understand and achieve by combining information from multiple modalities like text, images, audio, and video. This post explores the core techniques of realising multimodal AI, existing systems and related research.

Part 11: Is DeepSeek R1 Secure?

There is a big question about DeepSeek's security (and, in fact, the security of any software product), safety, and legal usage outside of China. I share my opinion and some relevant links on this topic.

Part 12: Can AI hallucinate?

AI hallucinations are a critical phenomenon in AI, referring to instances where AI systems generate inaccurate or nonsensical information. This post explores the main causes of AI hallucinations, their implications, possible benefits, and existing solutions.

Part 13: How CustomGPT Mitigates AI Hallucinations

CustomGPT reduces AI errors by using specialised knowledge, quality data, and user feedback. Combined with retrieval-augmented generation (RAG), it provides accurate and reliable content for various applications.

Part 14: Self-critical AI

Can Large Language Models achieve meta-cognition regarding their own stylistic patterns? In this primary research experiment, I tested Gemini, ChatGPT, and Claude to see if they could not only replicate my human writing style but also actively self-correct when confronted with AI-detection tools such as Grammarly.

Part 15: Who Did the AI Learn From?

Large Language Models learn similarly to Rembrandt's apprentices — by endlessly studying the masters. Yet, modern AI models hide their sources. We explore the legal and ethical necessity of a structured transparency framework for AI training data.

Part 16: Cursor Made Me Do It

AI makes software development feel frictionless, leading to unprecedented feature bloat. We explore the symptoms of AI scope creep and define a strict architectural framework to maintain control.

Part 17: Could AI Become a New Religion?

A gentle exploration of how institutions move from resisting scientific novelty to shaping AI ethics. We examine the theological limits of artificial intelligence and why kindness and human dignity must guide the future we build.

Part 18: AI is the New Literacy

AI literacy is becoming as fundamental as reading and writing — a set of skills that shapes who can fully participate in modern work and life. Here is what it looks like in practice, for everyone.
