Introduction
There was a time when reading and writing separated those who could participate fully in public life from those who could not. It was not just about decoding words on a page — literacy unlocked access to contracts, news, correspondence, knowledge, and economic opportunity. Those without it were not less intelligent; they were simply excluded from systems built around the assumption that you had it.
I think we are living through a similar shift right now, and AI is at the centre of it.
This is not a dramatic claim. I am not saying AI will replace everyone or that the world ends for those who do not adapt. What I am saying is more practical: AI tools are being woven into work, communication, healthcare, education, and daily decision-making faster than most people realise, and the gap between those who know how to use them well and those who do not is widening. Knowing how to work with AI — critically, deliberately, and effectively — is becoming a foundational skill, much like reading once was.
In this post I want to explore what AI literacy actually means in practice: what skills it involves, why they matter for everyone, and what it means specifically for those of us working in technology.
What AI Literacy Is (and Is Not)
It helps to start by saying what AI literacy is not. It is not about understanding the mathematics of neural networks (though that is interesting if you enjoy it). It is not about being a data scientist or an ML engineer. And it is definitely not about blindly using every AI tool that comes along.
AI literacy is the ability to work with AI systems thoughtfully — to get useful things out of them, to recognise when their output is wrong or misleading, to know when not to use them, and to understand the broader implications of doing so. It is a set of practical, transferable skills, and they apply whether you are a software developer, a nurse, a journalist, a teacher, or a project manager.
There are several skills I would group under this umbrella.
Communicating Intent Clearly
The most immediately practical AI skill is learning how to express what you actually want. This sounds trivial but is genuinely hard. AI language models do not read minds — they respond to what you write, including all of its ambiguities. A vague prompt produces a vague answer. A well-constructed one, with context, constraints, and an example of the kind of output you are after, produces something much more useful.
This is sometimes called prompt engineering, which sounds more technical than it is. In practice it is closer to clear writing: be specific, provide background, say what you do not want as well as what you do, and iterate. Anyone who has ever written a good requirements document, a clear email, or a well-structured question on Stack Overflow already has most of the instincts for it.
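To make that concrete, here is a rough sketch of the difference between a vague request and one that carries context, constraints, and an example of the desired output. The scenario (summarising an incident report) and the exact wording are invented for illustration; there is no magic formula here, only specificity.

```python
# A minimal illustration: the same request phrased vaguely, then with context,
# constraints, and an example of the desired output. The wording and structure
# are just one reasonable way to do it, not a recipe.

VAGUE_PROMPT = "Write a summary of this incident report."

SPECIFIC_PROMPT = """
You are summarising an incident report for a non-technical audience.

Context: the report below describes a 40-minute outage of our checkout service.
Task: write a summary of at most five sentences.
Constraints:
- Plain language, no internal jargon or service names.
- State the customer impact first, then the cause, then the fix.
- Do not speculate beyond what the report says.

Example of the tone I want:
"Customers could not complete purchases for about 40 minutes on Tuesday..."

Report:
{report_text}
"""

def build_prompt(report_text: str) -> str:
    """Fill the structured template with the actual report."""
    return SPECIFIC_PROMPT.format(report_text=report_text)

if __name__ == "__main__":
    print(build_prompt("At 14:02 the checkout service began returning errors..."))
```

The second version is longer to write, but it is doing exactly what a clear email or a good requirements document does: removing the ambiguity before the other party has to guess.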
Evaluating Output Critically
The confident hallucination. A colleague once spent an afternoon integrating a Python library an AI had recommended — complete with import statements, method calls, and a cheerful explanation of how it worked. The library did not exist. The AI had simply invented it, named it something plausible, and described its API with the quiet confidence of someone who has used it for years. My colleague found out when pip failed to find it. The worst part? The explanation had been so well-structured that neither of us thought to check first.
This is, in my opinion, the most important skill of all, and also the one most commonly skipped.
AI systems — even very capable ones — produce confident-sounding text that is sometimes simply wrong. They can hallucinate facts, cite sources that do not exist, give outdated information, reflect the biases of their training data, or miss the nuance that would be obvious to an expert in the field. The danger is not output that obviously looks wrong; it is output that looks completely plausible and is not.
Critical evaluation means never treating AI output as ground truth without checking. It means asking: does this match what I know? Can I verify this claim? Is this source real? Does the code actually do what the explanation says it does? It means being more sceptical the more confident the output sounds, because how confident these systems sound tells you very little about whether they are right.
This skill will serve you well regardless of how AI technology evolves, because it is really just applied critical thinking.
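Some of this checking can even be partly automated. With the invented library from the earlier anecdote in mind, here is a minimal sketch that asks PyPI whether a recommended package exists at all. It uses PyPI's public JSON endpoint, where a 404 means no such project; it deliberately stops short of judging whether an existing package is actually trustworthy.

```python
# A small sanity check before trusting a library recommendation: ask PyPI
# whether the package exists. This catches outright hallucinations, though not
# typosquatted or abandoned packages, so it is a first filter, not a substitute
# for reading the project's documentation and source.
import urllib.request
import urllib.error

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows about a project with this exact name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other failures are network trouble, not evidence either way

if __name__ == "__main__":
    print(package_exists_on_pypi("requests"))                      # True
    print(package_exists_on_pypi("definitely-not-a-real-library"))  # very likely False
```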
Understanding Limitations
Every AI tool has a set of things it does badly, or should not be used for at all. Knowing these limitations is part of literacy.
Language models have training cutoffs, so they do not know about recent events without access to search. They can reason poorly over long chains of logic. They struggle with precise arithmetic. They do not truly understand context the way a human reader does, and they can be confidently wrong about specialised domains where their training data was thin. They also have no memory between sessions unless explicitly provided with one.
Understanding this does not mean avoiding AI tools — it means using them for the things they are genuinely good at and reaching for a different tool (or your own brain) when they are not.
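For instance, when the job is precise arithmetic, the literate move is usually a few lines of ordinary code rather than trusting the model's mental maths. A trivial sketch, with made-up numbers:

```python
# Precise arithmetic is a job for code, not for a language model's answer.
# The decimal module avoids both model guesswork and binary floating-point
# surprises when exact monetary values matter.
from decimal import Decimal

line_items = [Decimal("19.99"), Decimal("4.75"), Decimal("120.00")]
vat_rate = Decimal("0.20")

subtotal = sum(line_items)
total = subtotal * (1 + vat_rate)
print(subtotal, total.quantize(Decimal("0.01")))  # 144.74 173.69
```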
Knowing When Not to Use AI
The privacy oops. I once watched someone debug a production issue by pasting the full contents of their application’s configuration file into a public AI chat. Database credentials, API keys, internal hostnames — all of it, in one go, because they just needed a quick answer. They got their answer. They also, in all likelihood, handed sensitive infrastructure details to a service whose data retention policy they had never read. Nobody said anything at the time, which is somehow the most alarming part of the story.
Related to limitations, but worth saying explicitly: AI is not always the right choice, and part of literacy is recognising when to set it aside.
There are tasks where human judgement, accountability, or a human relationship is irreplaceable. A doctor explaining a difficult diagnosis to a patient. A manager having a hard conversation with a team member. A writer finding their own voice. A researcher making a creative leap. Using AI to automate or shortcut these is not just technically limited — it changes the nature of the activity itself, often in ways that matter.
There are also real concerns about privacy, data security, and intellectual property that should inform which tools you use and what you feed into them. Pasting sensitive customer data into a public AI service is not a productivity win.
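If you do end up sharing logs or configuration with an external tool, it is worth scrubbing the obvious secrets first. The sketch below is a heuristic illustration, not a complete solution: the patterns are ones I invented for the example, and anything they miss still leaks, so the real rule remains not pasting secrets at all.

```python
# A rough redaction pass to run over text before pasting it anywhere external.
# These patterns are illustrative heuristics only; they will miss plenty.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|secret|api[_-]?key|token)\s*[:=]\s*\S+"),
    re.compile(r"(?i)postgres(ql)?://\S+|mysql://\S+"),  # connection strings
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
]

def redact(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

config_snippet = "db_password = hunter2\nhost = internal-db-01.example.net"
print(redact(config_snippet))
```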
Ethical and Social Awareness
The infinite confidence trap. A friend — a smart, experienced professional, not someone you would call naïve — once sent me a research summary an AI had produced for a report she was writing. It was beautifully formatted: clear headings, bullet points, a tidy conclusion. Three of the five statistics cited had no traceable source. One figure was subtly wrong in a way that reversed the actual trend in the data. She had not noticed, because the formatting had done the persuading. “It looked so finished,” she said. It really did. That is precisely the problem.
AI systems are not neutral. They encode the values, assumptions, and gaps of the data they were trained on. They can perpetuate biases in hiring, lending, medical diagnosis, and criminal justice. They raise real questions about authorship, attribution, and what work means. They concentrate power in the organisations that build them.
AI literacy includes being aware of these dynamics — not to become paralysed by them, but to make more conscious choices about when and how you use these tools, to advocate for better practices in your organisation, and to stay engaged with a conversation that affects everyone.
AI Literacy for Everyone
I want to be clear that AI literacy is not a skill reserved for people in tech. The tools are already everywhere.
A nurse who knows how to query a clinical decision support AI critically, and when to override it, is a better nurse. A journalist who understands what it means to use an AI writing assistant and how to verify its claims is a more responsible journalist. A teacher who can identify AI-generated student work and design assessments accordingly is a better educator. A small business owner who can use AI tools to draft contracts, analyse customer feedback, or generate marketing copy without being misled by them is running a better business.
The literacy gap here is not primarily technical. It is about developing habits of mind: curiosity, scepticism, a willingness to experiment and to check your work. These are learnable, and they do not require a computer science degree.
What does require attention is access and support. Not everyone has equal exposure to these tools or the time to develop fluency with them. That is a structural problem worth caring about — just as basic literacy programmes mattered for participation in the 20th century, AI literacy support will matter for participation in this one.
A Note for Tech Professionals and Developers
For those of us working in software, data science, or adjacent fields, AI literacy takes on some additional dimensions. We are not just users of these tools — we are often the ones building systems that others rely on, and that raises the stakes considerably.
Code Generation Is Not the Whole Story
It is easy to focus on the most visible AI capability in our world: code generation. Tools like Cursor, GitHub Copilot, and others can write large amounts of plausible-looking code quickly. This is genuinely useful for boilerplate, for getting unstuck, for prototyping ideas fast.
But treating a code generator as an oracle is a different matter. AI-generated code can be subtly wrong, insecure, outdated, or mismatched to your actual architecture. The ability to read the output critically — to spot the off-by-one error, the SQL injection waiting to happen, the deprecated API call — requires understanding the domain. In other words, the value of AI code generation scales with your own expertise. It amplifies good engineers; it gives bad ones a faster route to problems they cannot debug.
This does not mean juniors should not use it — they absolutely should. But they should use it as a learning tool, not a replacement for learning. Reading AI-generated code carefully and asking “why does this work?” is a legitimate way to grow.
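To make the SQL injection example above concrete, here is the kind of contrast worth learning to spot in review. The schema and queries are invented for the example; the point is the shape of the bug, not the specific table.

```python
# A generated query built by string formatting versus the parameterised
# version. Both look reasonable; only the second treats user input as data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

def find_user_unsafe(name: str):
    # Plausible generated code: works in the demo, injectable in production.
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver handles quoting, input stays data.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row: the injection worked
print(find_user_safe("' OR '1'='1"))    # returns nothing: treated as a literal name
```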
Understanding What You Are Integrating
As AI components become part of production systems — recommendations, classification, content moderation, search ranking — software engineers need enough understanding of how these systems behave to integrate them responsibly. This means understanding concepts like confidence scores, error rates, distributional shift, and feedback loops, even without deep ML expertise.
It also means understanding that a model which performs well in testing can behave very differently in production when the input distribution changes, or when adversarial users probe it deliberately. Robust integration requires testing, monitoring, and a plan for when things go wrong — not just an API call and a prayer.
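A minimal sketch of what that can look like in code, assuming a classification model behind a placeholder predict() function and an invented confidence threshold: gate on confidence, fall back to human review below it, and log enough to notice drift later.

```python
# Integrating a classifier with a fallback path and basic observability.
# The predict() function and the threshold value are placeholders.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("moderation")

CONFIDENCE_THRESHOLD = 0.85  # tuned on held-out data, revisited after deployment

@dataclass
class Prediction:
    label: str
    confidence: float

def predict(text: str) -> Prediction:
    """Stand-in for the real model call."""
    return Prediction(label="ok", confidence=0.62)

def moderate(text: str) -> str:
    pred = predict(text)
    # Log enough to monitor the score distribution over time.
    log.info("label=%s confidence=%.2f length=%d", pred.label, pred.confidence, len(text))
    if pred.confidence < CONFIDENCE_THRESHOLD:
        return "needs_human_review"  # explicit fallback, not a silent guess
    return pred.label

print(moderate("some borderline user-generated content"))
```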
Agentic AI and the Expanding Boundary
Increasingly, AI is not just answering questions but taking actions — running code, querying databases, sending messages, calling APIs, browsing the web. The MCP ecosystem I wrote about in my previous post is one example of this pattern. These agentic systems are powerful and also genuinely risky if not designed carefully.
AI literacy for developers now includes thinking through the security model of AI-integrated systems: what tools the agent can reach, what damage a malicious prompt could cause, how to keep a human in the loop for consequential actions, and how to audit what the system actually did. These are not purely AI questions — they are software engineering questions applied to a new kind of system.
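A minimal sketch of two of those controls, with tool names and the confirmation mechanism invented for illustration: an explicit allowlist of what the agent may call, a human confirmation gate for anything with external side effects, and an audit trail of every attempted call.

```python
# Guardrails around agent tool calls: allowlist, confirmation gate, audit log.
# Tool names and the confirm callback are hypothetical placeholders.
ALLOWED_TOOLS = {"search_docs", "read_ticket", "draft_reply", "send_reply"}
REQUIRES_CONFIRMATION = {"send_reply"}  # anything with external side effects

def execute_tool_call(tool: str, args: dict, confirm) -> str:
    if tool not in ALLOWED_TOOLS:
        return f"refused: '{tool}' is not an allowed tool"
    if tool in REQUIRES_CONFIRMATION and not confirm(tool, args):
        return f"refused: human declined '{tool}'"
    # ... dispatch to the real tool implementation here ...
    return f"executed {tool} with {args}"

audit_log = []  # record every attempt, whether it was allowed or not

def audited(tool: str, args: dict, confirm) -> str:
    result = execute_tool_call(tool, args, confirm)
    audit_log.append((tool, args, result))
    return result

print(audited("send_reply", {"to": "customer"}, confirm=lambda t, a: False))
print(audit_log)
```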
The Senior Engineer Trap
One thing I have noticed is that experienced engineers are sometimes the most resistant to AI tools. There is a tendency to see them as shortcuts for people who do not really understand the craft, or as a threat to expertise that was hard-won. I understand this instinct, but I think it is worth examining carefully.
The people best equipped to use AI tools well are precisely those with deep experience — because they can evaluate the output, catch the errors, and know which shortcuts are actually safe to take. If senior engineers opt out entirely, the tools get used without the judgement that would make them safer and more effective. That seems like a poor trade.
At the same time, it is worth being honest about what is changing. Some tasks that once required significant experience are becoming more accessible. This is not entirely different from what happened when high-level programming languages replaced assembly, or when IDEs replaced text editors for many workflows. The craft shifts; it does not disappear. The interesting question is not “will AI replace me?” but “what do I want to become good at in this new landscape?”
Conclusion
I started learning computer science feeling deeply overwhelmed, as I wrote about in my post on learning new things. There was too much to know, and the field kept moving. The only strategy that worked was accepting that you cannot know everything, finding what matters most, and building the habit of learning continuously.
AI literacy feels similar to me now. You do not need to understand everything about how these systems work. You need to build the habits — communicate clearly, evaluate critically, understand limitations, act responsibly — and then keep updating them as the tools evolve. That is achievable for everyone, not just for engineers.
The question is not whether AI will be part of how we work. For most of us, it already is. The question is whether we will engage with it thoughtfully, or just let it happen to us.
I hope for the former. Good luck, and as always — feel free to let me know what you think!