Hello, Dear Reader — how are you doing today?
This week in AI, I wanted to focus on what actually matters for us developers. You know, the things that will make our lives easier (or at least more interesting) rather than just another hype cycle.
So grab your favourite beverage, and let’s dive into five developments that might actually change how we work.
1. OpenAI signs a $38B, 7-year cloud deal with AWS (yes, that’s billion with a B)
So OpenAI is moving serious workloads to AWS, bringing hundreds of thousands of NVIDIA GPUs online [1, Reuters], [2, The Guardian]. They expect full capacity by the end of 2026 [3, OpenAI].
What does this actually mean for you? More computational headroom for training models and lower-latency inference as clusters come online. Also, this is OpenAI saying, “We’re not married to just Azure anymore”—they’re going multi-cloud.
Why you should care: Think of compute capacity like oxygen for AI. More capacity means faster model rollouts and better price-to-performance ratios throughout 2026. If you’re building with LLMs, this translates to real improvements you’ll actually feel.
References: [1, Reuters], [2, The Guardian], [3, OpenAI]
One tiny next step: If you’re already abstracting your LLM calls behind an interface (and you should be!), add AWS Bedrock or EC2 endpoints as provider options. This way, when capacity and prices shift — and they will — you can adapt quickly without rewriting everything.
Multi-cloud strategies are like having multiple coffee shops on your route to work. When one is packed, you've got options!
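To make that provider abstraction concrete, here’s a minimal sketch of what the seam might look like in Python. The class names, provider keys, and stub methods are placeholders I made up for illustration, not any specific SDK’s API; wire in whichever clients you actually use.

```python
# Minimal sketch of a provider-agnostic LLM interface (illustrative only).
# Provider names and the stub bodies below are placeholders, not real SDK calls.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """The only interface the rest of the codebase talks to."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's text completion for a prompt."""


class AzureOpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Call your existing Azure OpenAI client here.
        raise NotImplementedError


class BedrockProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Call AWS Bedrock here (e.g. via boto3's "bedrock-runtime" client).
        raise NotImplementedError


# Switching vendors becomes a config change, not a rewrite.
PROVIDERS: dict[str, LLMProvider] = {
    "azure": AzureOpenAIProvider(),
    "bedrock": BedrockProvider(),
}


def get_provider(name: str) -> LLMProvider:
    return PROVIDERS[name]
```

The point is the seam: as long as the rest of your application only talks to LLMProvider, shifting traffic between vendors is a configuration decision rather than a rewrite.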
2. Google Cloud ships Vertex AI Agent Builder upgrades (production-ready tools!)
On November 6, Google dropped fresh observability dashboards (tokens, latency, errors), evaluation tools for simulated runs, and tighter governance controls [4, Google Cloud Blog], [5, InfoWorld]. They’ve also cleaned up some naming in their agent product family [6, Google Cloud Docs].
Why you should care: Remember those painful “it worked perfectly in dev but exploded in production” moments? These new tools help you avoid that. You can now monitor your AI agents like actual production services, which is how it should have worked from the start.
References: [4, Google Cloud Blog], [5, InfoWorld], [6, Google Cloud Docs]
One tiny next step: Spin up a canary agent using the Agent Development Kit (ADK) or Agent Engine. Set success criteria in the new dashboard — such as step count, guardrail violations, and cost per run. Watch it for a week and see what you learn.
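If it helps to see what “success criteria” could mean in code, here’s a tiny, hypothetical sketch of the kind of per-run checks I have in mind. This is not the Vertex AI or ADK API, just an illustration of budgets for step count, guardrail violations, and cost per run; pick thresholds that make sense for your own canary.

```python
# Hypothetical success criteria for a canary agent run (not a real Vertex AI API).
from dataclasses import dataclass


@dataclass
class AgentRun:
    steps: int
    guardrail_violations: int
    cost_usd: float


# Thresholds to agree on with your team before the experiment starts.
MAX_STEPS = 15
MAX_VIOLATIONS = 0
MAX_COST_USD = 0.25


def run_passes(run: AgentRun) -> bool:
    """A run counts as a success only if it stays within all three budgets."""
    return (
        run.steps <= MAX_STEPS
        and run.guardrail_violations <= MAX_VIOLATIONS
        and run.cost_usd <= MAX_COST_USD
    )


# Example: evaluate a week's worth of canary runs (values are made up).
runs = [
    AgentRun(steps=9, guardrail_violations=0, cost_usd=0.12),
    AgentRun(steps=22, guardrail_violations=1, cost_usd=0.40),
]
pass_rate = sum(run_passes(r) for r in runs) / len(runs)
print(f"Canary pass rate: {pass_rate:.0%}")
```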
3. GitHub Copilot: org-level custom instructions for the coding agent
Now admins can set organisation-wide guidance for Copilot’s coding agent [7, GitHub Changelog]. We’re talking about style guides, testing requirements, secrets policies — all the “how we write code here” rules enforced consistently across your entire team [8, GitHub Docs].
Why you should care: Instead of every developer interpreting coding standards differently (or ignoring them entirely — you know who you are), you can encode your team’s preferences once and have Copilot respect them automatically. One source of truth, enforced at scale.
References: [7, GitHub Changelog], [8, GitHub Docs]
One tiny next step: Create a simple 10-line “house rules” document covering your lint preferences, test requirements, and commit message format. Deploy it org-wide. Then watch as your PR friction mysteriously decreases. You’re welcome.
Finally! No more "but the linter said..." discussions in code reviews. Well, fewer of them, anyway :)
4. VS Code rolls out a unified agent experience (Agents pane, planning, multi-agent coordination)
VS Code’s latest update consolidates agent sessions and planning into a single, coherent experience [9, VS Code Blog]. It includes Copilot integration and leaves room for other agents to join the party [10, GitHub Blog], [11, VS Magazine].
Why you should care: Agents are becoming first-class citizens in your development environment, not just a fancy sidebar feature you forget about. This is about making AI assistance as natural as using IntelliSense or the debugger.
References: [9, VS Code Blog], [10, GitHub Blog], [11, VS Magazine]
One tiny next step: Enable Agent Sessions in your current project. Run a one-sprint experiment asking yourself: “What tasks can we reliably delegate to the agent?” Document what works and what doesn’t. This is how we learn what AI is actually good at versus what we wish it was good at.
5. OpenAI previews Aardvark (private beta): an autonomous security researcher
Aardvark is an AI agent that reads your code, writes and runs tests, validates vulnerabilities, and even proposes patches [12, OpenAI Blog], [13, TechRadar]. Basically, it’s like having an AppSec teammate who never sleeps and never complains about having to review the same type of bug for the hundredth time [14, eSecurityPlanet]. It’s currently in private beta and being tested on carefully curated repositories.
Why you should care: This pushes the boundary of “AI that actually files useful PRs” into real-world workflows. We’re not talking about autocomplete anymore; we’re talking about an agent that can autonomously identify, validate, and fix security issues.
References: [12, OpenAI Blog], [13, TechRadar], [14, eSecurityPlanet]
One tiny next step: Pick a non-critical service (emphasis on non-critical!) that has some flaky tests. Enable sandboxed patch PRs and measure your Mean Time To Resolution (MTTR) before and after. Treat this as an experiment, not a production rollout. Science first, excitement second.
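To keep that experiment honest, agree up front on how you’ll compute MTTR. A minimal sketch, with made-up timestamps purely for illustration:

```python
# Simple MTTR calculation: mean time from "issue reported" to "fix merged".
# The timestamps below are illustrative, not real data.
from datetime import datetime, timedelta

incidents = [
    # (reported, fix_merged)
    (datetime(2025, 11, 3, 9, 0), datetime(2025, 11, 4, 15, 30)),
    (datetime(2025, 11, 5, 11, 15), datetime(2025, 11, 5, 18, 45)),
]

durations = [merged - reported for reported, merged in incidents]
mttr = sum(durations, timedelta()) / len(durations)
print(f"MTTR: {mttr.total_seconds() / 3600:.1f} hours")
```

Run the same calculation on a window before and after enabling the agent; if it’s earning its keep, the number should drop.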
Remember: even the best AI agent can make mistakes. Start small, test thoroughly, and don't let it touch your production database without supervision. Trust me on this one!
Quick win checklist
Let me share some practical steps based on this week’s news. You don’t need to do all of them immediately. Pick one or two that make sense for your situation:
- Abstract your providers: Keep your LLM calls swappable between Azure, OpenAI, AWS, and GCP. This week’s announcements make it clear that multi-cloud is the way forward. Don’t lock yourself into one vendor.
- Add guardrails: Define allow-listed actions and targets for your agents. Set hard limits on the steps they can take. Log every single tool call. (There’s a small sketch of this right after the checklist.) Yes, it’s extra work upfront, but future-you (and your security team) will thank you.
- Make your codebases agent-friendly: Add data-test-id attributes to your UI components, improve your README files, and create custom instructions at both organisation and repository levels. Think of this as making your code more readable not just for humans but also for AI assistants.
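On the guardrails point, here’s a minimal, framework-agnostic sketch of what “allow-list plus step budget plus logging” can look like. The tool names and the commented-out execute_tool hook are hypothetical; plug this into whatever agent framework you actually use.

```python
# Minimal agent guardrail sketch: allow-listed tools, a hard step budget,
# and a log entry for every tool call. Tool names here are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrails")

ALLOWED_TOOLS = {"read_file", "run_tests", "open_pull_request"}
MAX_STEPS = 20


class GuardrailViolation(Exception):
    pass


class GuardedToolRunner:
    def __init__(self) -> None:
        self.steps_taken = 0

    def call(self, tool_name: str, **kwargs) -> None:
        self.steps_taken += 1
        log.info("tool call #%d: %s %s", self.steps_taken, tool_name, kwargs)

        if self.steps_taken > MAX_STEPS:
            raise GuardrailViolation(f"step budget of {MAX_STEPS} exceeded")
        if tool_name not in ALLOWED_TOOLS:
            raise GuardrailViolation(f"tool '{tool_name}' is not on the allow-list")

        # Hand off to your real tool implementation here.
        # execute_tool(tool_name, **kwargs)  # hypothetical hook


runner = GuardedToolRunner()
runner.call("run_tests", target="payments-service")  # allowed, logged
# runner.call("drop_database")                       # would raise GuardrailViolation
```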
Dealing with information overload
There is so much happening in AI right now. New models, new tools, and new frameworks are dropping every single week, and you might feel overwhelmed trying to keep up with everything. Please accept that you cannot learn or implement it all. Accepting that isn’t a failure; it’s a strategy to stay sane and focus on what matters most for your work.
Just remember to eat well, exercise, take breaks, and enjoy the process of learning what genuinely interests you. The AI field will still be here tomorrow, next week, and next year. You don’t need to absorb it all at once :)
Conclusion
AI development tooling is maturing rapidly. We’re moving from “wow, look what AI can do!” to “here’s how AI integrates into professional workflows.” This week showcased infrastructure scaling (OpenAI + AWS), production tooling (Google’s Vertex upgrades), organisational governance (GitHub’s org-wide instructions), IDE integration (VS Code’s unified experience), and autonomous agents (Aardvark).
The common thread? AI is becoming less of a novelty and more of a practical development tool. And honestly? That’s precisely what we need.
Did you find this helpful? Let me know if you have any comments, questions, or if I missed something important this week.
Stay curious and keep coding!
References
1. OpenAI turns to Amazon in $38B cloud deal — Reuters, Nov 3, 2025
2. OpenAI signs $38bn AWS agreement — The Guardian, Nov 3, 2025
3. AWS and OpenAI announce multi-year partnership — OpenAI Blog
4. More ways to build and scale AI agents with Vertex AI Agent Builder — Google Cloud Blog, Nov 6, 2025
5. Google boosts Vertex AI Agent Builder with new observability and deployment tools — InfoWorld, Nov 6, 2025
6. Vertex AI Generative AI Release Notes — Google Cloud Documentation
7. Copilot coding agent supports organization custom instructions — GitHub Changelog, Nov 5, 2025
8. Add organization instructions for Copilot — GitHub Documentation
9. A unified experience for all coding agents — VS Code Blog, Nov 3, 2025
10. GitHub Copilot in Visual Studio Code gets upgraded — GitHub Blog, Oct 28, 2025
11. Microsoft Details How Agents Took Over VS Code in 2025 — Visual Studio Magazine, Nov 5, 2025
12. Introducing Aardvark — OpenAI Blog (private beta)
13. OpenAI’s new Aardvark tool finds and fixes software flaws automatically — TechRadar, Nov 3, 2025
14. Aardvark: OpenAI’s autonomous AI agent aims to redefine software security — eSecurityPlanet, Nov 3, 2025