Elena's AI Blog

AI Safety Series

Elena Daehnhardt

Image credit: Illustration created with Midjourney, prompt by the author.
Image prompt: “An illustration representing cloud computing”

AI Safety

Focused posts on defensive architecture, privacy, and risk management for real AI use.

Posts in This Series

Part 2: OpenClaw Isn't a Chatbot Anymore. It's Infrastructure.

AI agents like OpenClaw are wonderful tools, but without strict access rules and proper supervision they can turn an ordinary Tuesday into something you will be explaining to HR for weeks. This post explores the very real risks of deploying AI assistants carelessly, from leaked credentials to messages you absolutely did not mean to send. Most importantly, it shows how to use OpenClaw the right way: deployed thoughtfully, it is one of the most exciting and capable tools we are only just beginning to understand.

Part 3: Using AI Code Assistants Safely

A practical, human guide to using generative code assistants safely — without leaking secrets, breaking trust, or losing control of your work.

Part 4: The Digital Butler or Trojan Horse? A Privacy Playbook for Persistent AI Agents

Persistent AI agents can save hours each week, but they also turn hidden prompt injections into real-world actions unless you design strict controls. This guide shows how to harden agent workflows with policy gates, isolation, scoped permissions, and safe auditing.
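As a taste of the approach, the controls named above (policy gates, scoped permissions, auditing) can be sketched as a default-deny check in front of every agent action. This is a minimal illustration, not code from the post; the action names, blocked patterns, and function names are hypothetical:

```python
# Minimal sketch of a policy gate for a persistent AI agent.
# Hypothetical action names and patterns -- adapt to your own workflow.

ALLOWED_ACTIONS = {"read_calendar", "draft_email"}  # scoped permissions
BLOCKED_PATTERNS = ("credential", "password", "api_key")  # crude exfiltration guard

def policy_gate(action: str, payload: str) -> bool:
    """Return True only if the requested action passes every policy check."""
    if action not in ALLOWED_ACTIONS:
        return False  # default-deny: unknown actions never run
    if any(p in payload.lower() for p in BLOCKED_PATTERNS):
        return False  # payload mentions secret-like material
    return True

def gated_execute(action: str, payload: str, audit_log: list) -> str:
    """Run the action through the gate and record every decision for review."""
    allowed = policy_gate(action, payload)
    audit_log.append({"action": action, "allowed": allowed})
    return "executed" if allowed else "blocked"
```

The key design choice is that the gate is default-deny and every decision, allowed or not, lands in the audit log, so a hidden prompt injection that requests an unlisted action is both stopped and visible afterwards.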
