Elena's AI Blog

LLM

AI's New Bottleneck


This week, AI signals shifted attention from generic model chatter to concrete releases and constraints. Google launched Gemini 3.1 Flash Live and Lyria 3 Pro for developers, while reports of Anthropic's unreleased Mythos/Capybara model highlighted the safety pressures around high-capability systems. At the same time, U.S. datacenter policy and GitHub's Copilot data-default change reinforced that governance and infrastructure are now product-critical.

Infrastructure Is the New Frontier


This week’s biggest AI signal was not a new frontier model. It was the fast consolidation of infrastructure, distribution, and cost. Nvidia pushed agentic AI and robotics as stack problems. Anthropic invested in enterprise distribution and tested asynchronous delegation. Microsoft, Mistral, and OpenAI advanced the efficiency tier. Xiaomi and Rakuten showed how global and contested the open-weight race has become.

Better Models, Burnout, and a $599 Mac


GPT-5.4 arrived with native computer use and a 1M-token context window. Anthropic moved further toward becoming an enterprise platform. And Block's layoffs, alongside new HBR research on "AI brain fry," made one thing clear: this week's real signal was not just better models, but what AI is doing to work.

AI Is Splitting Into Tiers


Three fast, cheap models landed in the same week — Gemini 3.1 Flash-Lite, GPT-5.3 Instant, and Qwen3.5-9B. That is not a coincidence. The cost-performance frontier just moved.

72.5%, $710B, and a March in London


This week's signals trace a collision between software and reality: Anthropic's leap in computer automation, a record-breaking $710B cloud capex plan, and the resulting shockwaves in consumer electronics prices and global energy policy.

Agentic AI at Scale: New models, $30B, and the UKRI Strategy


Weekly AI Signals for February 12-19, 2026: Claude Sonnet 4.6, Gemini 3.1 Pro, Anthropic's $30B Series G, and UKRI's £1.6 billion AI strategy show how capability, capital, and sovereignty are shaping AI at scale.

AI Improves Itself While We Argue About Permits


This week's signals show AI capability racing ahead while real-world constraints tighten: $2.5B inference funding, self-improving models from OpenAI and Anthropic, ByteDance's restricted video AI, and data centres stuck in permit battles. The gap between what models can do and what infrastructure can support is becoming the central story.

The AI Paradox: Lightning Fast and Gridlocked


This week's signals were about constraints and acceleration at once: AI-assisted cloud attacks, multi-year grid queues in Europe, new infrastructure funds, and consumer GPU pricing pressures — alongside fast adoption of consumer AI apps.

Chips, Capex, and Code Risk


This week’s AI signals were practical rather than flashy: Microsoft’s earnings tied AI to long-term capex, Anthropic pushed export-focused regulation, China approved limited H200 imports, and everyday compute continued to rise. Together, they point to AI becoming infrastructure — budgeted, regulated, and increasingly constrained.

Cursor AI for Python Development


Have you ever wondered if AI will make you a lazy programmer? In this post, I share my journey with Cursor AI after a month of extensive testing.

On AI Coding Assistants


After months of hands-on experience with AI coding assistants, I share my opinion on working with Google Gemini, ChatGPT, and Claude AI.

Vibe coding with Generative AI


I've been getting into "vibe coding" recently, quickly prototyping some of my ideas and working on my pet projects. I must confess that AI-assisted coding is a very addictive activity; it must be approached with caution, since it has security implications and requires careful prompt engineering.

How to Use Claude AI


What is Claude AI? What can we do with it, and how? Let's explore this fantastic AI assistant by Anthropic.

How CustomGPT Mitigates AI Hallucinations


CustomGPT reduces AI errors using specialised knowledge, quality data, and user feedback. Combined with retrieval-augmented generation (RAG), it provides accurate and reliable content for various applications.

Is DeepSeek R1 Secure?


There is a big question about DeepSeek's security (and, in fact, the security of any software product), its safety, and the legality of using it outside of China. I share my opinion and some relevant links on this topic.

DeepSeek R1 With Ollama


This post explores the use of Ollama, a framework for running large language models locally, in conjunction with pre-trained models such as DeepSeek R1.

Generative AI vs. Large Language Models


Generative AI and Large Language Models (LLMs) are both important concepts in artificial intelligence, but they are not the same. Generative AI refers to models that can create various types of content, such as text, images, and music. LLMs are a specific type of generative AI focused on understanding and producing human language. This post explains their differences, highlights key techniques like Transformers and GANs, and mentions important open-source projects.