Dear reader,
This week, AI quietly strengthened its foundations.
At one end, NVIDIA’s new supercomputers are pushing science to exaflop speeds.
At the other, IBM released small open models that fit right on our laptops.
And somewhere in between, GitHub taught coding assistants to work as a team.
Three stories, one theme: AI is becoming more balanced — powerful where it needs to be, and personal where it matters most.
AI Infrastructure
NVIDIA and U.S. national labs are building AI supercomputing hubs for science, climate research, and training massive models.
These machines operate at exaflop scale: one quintillion (a 1 followed by 18 zeros) calculations every second. Unimaginable speed!
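To make that number concrete, here's a quick back-of-the-envelope comparison. The ~1 teraflop laptop figure is my own rough assumption, not from the story:

```python
# Back-of-the-envelope: an exaflop machine vs. a (generously) 1-teraflop laptop.
EXAFLOP = 10**18       # operations per second, as in the story
LAPTOP_FLOPS = 10**12  # ~1 teraflop, an assumed ballpark for a laptop

speedup = EXAFLOP // LAPTOP_FLOPS
print(f"An exaflop machine is ~{speedup:,}x faster.")  # ~1,000,000x

# One second of exaflop work would keep that laptop busy for:
days = speedup / (60 * 60 * 24)
print(f"One exaflop-second of compute = ~{days:.1f} laptop-days.")  # ~11.6 days
```

In other words, every single second on one of these machines is about a week and a half of my laptop running flat out.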
When I read this, I imagined endless GPU racks somewhere, glowing and humming through the night, while I fine-tune tiny models on my laptop. It’s humbling — the scale of it all — and exciting that both ends of this world now talk to each other.
Small Models, Big Steps
This week, IBM introduced the Granite 4.0 Nano series — open models from 350 million to 1.5 billion parameters, small enough to run on laptops or even in browsers.
They’re Apache-licensed, efficient, and ready to fine-tune for your own experiments.
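As a rough sanity check on why these sizes fit on a laptop, here's the usual parameters-times-bytes estimate. The 2 bytes per parameter assumes fp16/bf16 weights; real memory use adds activations and runtime overhead on top:

```python
# Rough weight-memory estimate: parameter count x bytes per parameter.
def weight_gb(params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GB, assuming fp16/bf16 (2 bytes/param)."""
    return params * bytes_per_param / 1e9

for name, params in [("350M", 350e6), ("1.5B", 1.5e9)]:
    print(f"{name} parameters -> ~{weight_gb(params):.1f} GB of weights")
# 350M parameters -> ~0.7 GB of weights
# 1.5B parameters -> ~3.0 GB of weights
```

Under a gigabyte at the small end, a few gigabytes at the large end: comfortably within an ordinary laptop's RAM.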
I love these small models. They feel personal — like pocket-sized labs I can play with anywhere. My Mac stays quiet, my ideas move faster, and I don’t have to think about power bills or GPUs in distant rooms.
Zoom also updated its AI Companion with NVIDIA’s runtime stack: faster replies, lower energy use, smoother collaboration.
👉 NoJitter — Zoom Rolls Out Upgrade for AI Companion
I like the pattern here: AI that fits in our tools instead of taking them over. That’s progress you can actually feel.
Multi-Agent Coding
GitHub’s new Agent HQ lets multiple AI assistants work together inside VS Code.
Not just one Copilot — a team. One plans, another codes, another reviews.
👉 The Verge — GitHub is launching a hub for multiple AI coding agents
Sometimes I picture them like colleagues debating pull requests. One argues for recursion, the other for a rewrite. I just sit there with tea, listening — equal parts amused and impressed.
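GitHub hasn't published the internals, so here's just a toy sketch of the plan → code → review hand-off pattern, with stub "agents" (plain functions of my own invention) standing in for real model-backed assistants:

```python
# Toy sketch of a plan -> code -> review pipeline. The three "agents" are
# plain stub functions standing in for real model-backed assistants.
def planner(task: str) -> list[str]:
    # A real planner would break the task into steps; we fake two.
    return [f"write a function for: {task}", "add a docstring"]

def coder(steps: list[str]) -> str:
    # A real coder agent would generate code from the plan; we return a canned answer.
    return 'def add(a, b):\n    """Return a + b."""\n    return a + b\n'

def reviewer(code: str) -> dict:
    # A trivial "review": just check the generated code carries a docstring.
    ok = '"""' in code
    return {"approved": ok, "notes": "looks fine" if ok else "missing docstring"}

def run_pipeline(task: str) -> dict:
    steps = planner(task)            # agent 1 plans
    code = coder(steps)              # agent 2 writes
    review = reviewer(code)          # agent 3 signs off
    return {"steps": steps, "code": code, "review": review}

result = run_pipeline("add two numbers")
print(result["review"])  # nothing ships until the reviewer approves
```

The interesting design choice is the hand-off itself: each agent sees only the previous agent's output, which is exactly what makes the "colleagues debating a pull request" picture work.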
If you had your own AI team, what would you hand them first — debugging, documentation, or the part you’ve been avoiding all week?
As multiple agents begin collaborating inside our editors, new questions surface — about trust, responsibility, and transparency. For a broader look at how these challenges might reshape accountability in the next wave of AI, Bernard Marr’s recent Forbes article, 8 AI Ethics Trends That Will Redefine Trust & Accountability in 2026, offers thoughtful insights into the responsibilities of autonomous agents and the growing role of regulation. (Some readers may encounter a paywall.)
Pattern Recognition :)
AI is stretching in both directions right now — wider and smaller, faster and closer.
Massive infrastructure powers discovery.
Tiny models make creation accessible.
And our tools are learning to collaborate right beside us.
The pattern is balance: not more or less AI, but better-placed AI, ideally explainable and trustworthy.
Have a lovely weekend,
Elena