Elena's AI Blog

This Week in AI: Regulation Heat, Cloud Bets, and Agentic Shopping

23 Jan 2026 (updated: 23 Jan 2026) / 16 minutes to read

Elena Daehnhardt


Generated with DALL·E by OpenAI, January 2026. Prompt: Box-format editorial illustration showing the systems shaping AI: on the left, Lady Justice holding scales in front of a government building symbolising regulation; in the centre, modern cloud infrastructure with data nodes and upward momentum; on the right, a friendly AI assistant completing online shopping on a screen. Include an AI semiconductor chip, subtle circuit patterns, and commerce icons. Clean, modern tech-magazine style, balanced composition, professional illustration, calm blue tones with warm accents.


TL;DR: PwC's CEO survey reveals AI's ROI gap, Meta's child‑safety trial moves closer, U.S. lawmakers push oversight on AI chip exports, Railway raises $100M for an AI‑native cloud, and Google's UCP proposes standards for agentic shopping.

This week felt less like model drama and more about the systems shaping what AI can actually do. Courts, export rules, and commerce protocols are becoming as important as the models themselves. Here are five signals that stood out.

Introduction

You know what? Sometimes the most interesting AI developments have nothing to do with new models or benchmark scores. This week reminded me of that. Whilst everyone obsesses over the latest transformer architecture or chatbot capabilities, the real story is happening in courtrooms, congressional committees, and standards bodies.

I find this fascinating because it mirrors something I’ve observed throughout my career in computer science: the technical capabilities matter far less than the systems and rules that govern how we can use them. It’s like learning to code—you can master Python syntax, but if you don’t understand the broader ecosystem, licensing, and community standards, you’re missing the bigger picture.

So let’s dive into this week’s signals. Fair warning: there’s a minor timing issue I need to address upfront about one of these stories, but I’ll explain that when we get there.

Five Signals That Actually Matter

1. PwC CEO Survey: The AI Returns Gap Widens

PwC 2026 Global CEO Survey

PwC released their 29th Global CEO Survey on 19th January at the World Economic Forum in Davos, and the findings are quite sobering. Based on responses from 4,454 CEOs across 95 countries, only 30% say they’re confident about revenue growth over the next 12 months—down from 38% in 2025 and 56% in 2022. That’s the lowest level in five years.

But here’s the really striking part about AI specifically: only 12% of CEOs report that AI has delivered both cost and revenue benefits. A staggering 56% say they’re getting “nothing out of it” despite significant investments.

This echoes recent MIT research suggesting many enterprises still see little to no measurable ROI from GenAI pilots—a reminder that execution is the hard part. Mohamed Kande, PwC’s global chairman, noted that whilst everyone has moved from asking whether they should adopt AI to simply “everybody’s going for it,” the disconnect between ambition and reality remains vast.

However—and this is crucial—there’s a growing divide between companies piloting AI and those deploying it at scale. CEOs reporting both cost and revenue gains are two to three times more likely to have embedded AI extensively across products, services, demand generation, and strategic decision-making. Companies with strong AI foundations (Responsible AI frameworks, technology environments enabling enterprise-wide integration) are three times more likely to report meaningful financial returns.

Why This Actually Matters

This survey captures something I’ve observed throughout my career: having the technology isn’t enough. Implementation, integration, and organisational readiness matter just as much as the capabilities themselves.

The 12% figure is particularly revealing. It suggests that most organisations are still treating AI as an experimental add-on rather than fundamentally rethinking their operations around it. Those that have established proper foundations—governance frameworks, integrated technology environments, and AI embedded across core functions—are achieving profit margins nearly 4 percentage points higher than those that haven’t.

This reminds me of earlier waves of technology. When cloud computing first emerged, many companies “moved to the cloud” by simply lifting and shifting their existing applications without redesigning them for cloud-native architectures. They got the bills but not the benefits. We’re seeing something similar with AI.

The survey also shows CEOs spending 47% of their time on issues with horizons of less than 1 year, compared to just 16% on decisions with horizons of more than 5 years. That short-term focus might explain why AI implementations are struggling—successful AI deployment often requires significant upfront investment in data infrastructure, governance, and organisational change before you see returns.

What strikes me most is the implicit question: are we in an AI hype bubble, or are we simply in the early stages of a technology that takes longer to implement properly than anyone expected? Based on the data showing that companies with strong foundations are succeeding, I’d argue it’s the latter. The technology works, but most organisations haven’t done the foundational work required to capture its value.

2. Meta’s Child‑Safety Trial Sharpens the Policy Edge

Meta seeks to limit evidence in child safety case

TechCrunch reported on 22nd January that Meta is seeking to narrow the evidence that can be used against it in an upcoming New Mexico child‑safety trial. The company wants to exclude research on youth mental health impacts, stories about teen suicides linked to social media, details about its finances, past privacy violations, and even details of Mark Zuckerberg's university years.

Here’s what’s actually happening: New Mexico’s attorney general filed a lawsuit in late 2023, accusing Meta of failing to protect minors from online predators, trafficking, and sexual abuse on its platforms. The trial is scheduled to begin on 2nd February 2026—that’s less than two weeks away.

Now, it's fairly standard for companies to try to limit the scope of evidence in trials. But according to legal experts who spoke with Wired, Meta's attempt to exclude so much information is unusually broad. The company even wants to prevent any mention of its AI chatbots and of a public health warning issued by former U.S. Surgeon General Vivek Murthy about social media's effects on youth mental health.

Why This Actually Matters

State‑level cases like this can set de facto standards for platform safety and AI‑adjacent features. What makes this particularly interesting is that it's considered the first trial of its kind at the state level. When courts begin defining responsibility and acceptable evidence for platform safety, other companies watch closely.

The closer this gets to trial, the more we’ll see ripple effects across the tech industry. Companies developing AI features—especially those targeting younger audiences—will need to carefully consider how courts might later evaluate their safety measures. It’s not just about compliance; it’s about how we fundamentally think about platform responsibility.

And honestly? I believe this is long overdue. We cannot keep building powerful technologies whilst ignoring their impact on vulnerable populations.

3. House GOP Pushes Oversight of AI Chip Exports

House GOP wants final say on AI chip exports after Trump gives Nvidia a China hall pass

The Register reported on 21st January about a House GOP bill—the “AI Overwatch Act”—that would give Congress oversight of the export of AI chips to China. This follows some controversial moves around Nvidia’s H200 sales. The measure advanced out of the House Foreign Affairs Committee with an overwhelming vote, but it still faces a long legislative path before becoming law.

Let me explain why this matters, because it’s not immediately obvious: AI progress is tightly coupled to chip availability. When you’re training large language models or running inference at scale, you need serious computational power. We’re talking about specialised GPUs and tensor processing units that cost tens or hundreds of thousands of dollars each.

Right now, the most advanced AI chips are manufactured by companies like Nvidia, and they’re in incredibly high demand globally. Export controls on these chips essentially determine which countries and organisations can build frontier-scale AI systems.

Why This Actually Matters

This isn’t just about trade policy or international relations. Export oversight can reshape supply expectations for frontier‑scale training and fundamentally influence how global AI capacity is distributed. If the U.S. restricts chip exports to China, it affects not only which models Chinese companies can build but also the broader competitive landscape in AI development.

I find myself thinking about this quite a lot because I’ve worked with high-performance computing resources throughout my career. When you cannot access the hardware you need, you cannot execute your ideas—no matter how brilliant your algorithms are. It’s that simple.

And there’s another angle: if chip exports are restricted, it might actually accelerate development of alternative hardware or more efficient training methods. Necessity drives innovation, after all.

4. Railway Raises $100M for an AI‑Native Cloud Bet

Railway secures $100 million to challenge AWS with AI‑native cloud infrastructure

VentureBeat reported on 22nd January that Railway raised a $100M Series B funding round to build an AI‑native cloud platform. The round was led by TQ Ventures with participation from FPV Ventures, Redpoint, and Unusual Ventures. Their focus is on fast developer workflows and modern AI infrastructure needs.

Now, you might be thinking: “Another cloud platform? Really? Don’t we have enough of those?” And yes, the market is crowded with AWS, Google Cloud, Azure, and others. But here’s what makes this interesting.

As AI assistants compress build cycles—meaning developers can go from idea to deployment much faster—cloud platforms need to remove every bit of friction in the deployment process. Traditional cloud providers were built for a different era. They’re powerful but often cumbersome. You’ve got to navigate complex console interfaces, configure security groups, set up load balancers, and deal with a thousand tiny decisions before your application goes live.

Why This Actually Matters

This funding suggests new infrastructure players see a genuine path to compete with hyperscalers on speed and developer experience. They’re betting that the next generation of AI development demands infrastructure that just works out of the box.

I’ve deployed applications on various cloud platforms throughout my career, and honestly? The experience varies wildly. Sometimes you just want to push your code and run it, without spending three hours reading documentation on VPC configurations. If Railway can deliver that experience whilst optimising for AI workloads specifically, they might carve out a meaningful niche.

The broader signal here is that as AI becomes more prevalent, we’ll see infrastructure optimised for AI use cases rather than retrofitting general-purpose cloud services.

5. Google’s Universal Commerce Protocol Signals Agentic Rails

Google and retail leaders launch Universal Commerce Protocol

On 19th January, InfoQ covered Google's Universal Commerce Protocol (UCP), an open standard intended to enable AI shopping agents to connect to retailer backends for discovery, checkout, and post‑purchase workflows.

Important timing note: Whilst the InfoQ article was published on 19th January 2026 (within this week’s coverage period), Google’s actual announcement of the Universal Commerce Protocol was made on 11th January 2026 at the National Retail Federation conference—that’s the week before. So, whilst the InfoQ coverage is recent, the underlying story is about 12 days old as I’m writing this. I wanted to include it anyway because the implications are still unfolding, and the InfoQ piece itself is a timely analysis.

Right, now let’s talk about what UCP actually means. Google developed this protocol together with major retailers, including Shopify, Etsy, Wayfair, Target, and Walmart. It’s endorsed by more than 20 companies across the ecosystem, including payment providers. The idea is to create a standardised way for AI agents to interact with e-commerce systems.

Think about it this way: currently, when you want to buy something online, you visit a website, browse products, add items to a cart, enter payment information, and complete checkout. Each retailer has their own system. Now imagine your AI assistant doing this for you. Without a standard protocol, every AI assistant would need custom integrations with every retailer. That’s not scalable.

UCP aims to solve this by creating a common language for AI agents to discover products, initiate transactions, and handle post-purchase activities such as returns and tracking.
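The InfoQ piece doesn't detail the protocol's actual schemas, so here's a purely illustrative Python sketch of what those three phases might look like once they share a common interface. Every class, method, and field name here is my own invention for the example, not anything from the real UCP spec.

```python
from dataclasses import dataclass

# Hypothetical sketch of an agentic-commerce flow in the spirit of UCP.
# None of these names come from the actual protocol.

@dataclass
class Product:
    sku: str
    name: str
    price_gbp: float
    sustainable: bool

@dataclass
class Order:
    sku: str
    status: str  # "confirmed" or "returned"

class RetailerBackend:
    """Stand-in for one retailer implementing the shared protocol."""

    def __init__(self, catalogue):
        self.catalogue = catalogue
        self.orders = {}

    # Phase 1: discovery — the agent queries products against user constraints.
    def discover(self, max_price, sustainable_only=False):
        return [p for p in self.catalogue
                if p.price_gbp <= max_price
                and (p.sustainable or not sustainable_only)]

    # Phase 2: checkout — the agent initiates a transaction.
    def checkout(self, sku):
        order = Order(sku=sku, status="confirmed")
        self.orders[sku] = order
        return order

    # Phase 3: post-purchase — returns, tracking, and so on.
    def request_return(self, sku):
        self.orders[sku].status = "returned"
        return self.orders[sku]

# Because the interface is shared, one agent function works against ANY
# compliant retailer — no custom integration per shop.
def buy_running_shoes(retailer, budget=150.0):
    candidates = retailer.discover(max_price=budget, sustainable_only=True)
    cheapest = min(candidates, key=lambda p: p.price_gbp)
    return retailer.checkout(cheapest.sku)

shop = RetailerBackend([
    Product("A1", "TrailRunner", 120.0, sustainable=True),
    Product("B2", "SpeedMax", 140.0, sustainable=False),
    Product("C3", "EcoStride", 95.0, sustainable=True),
])
order = buy_running_shoes(shop)
print(order.sku, order.status)  # the cheapest sustainable pair, confirmed
```

The point isn't these details (a real checkout involves payment authorisation, identity, and dispute handling) but the shape: once discovery, checkout, and post-purchase share a schema, a single agent can transact with every retailer that implements it.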

Why This Actually Matters

If agentic commerce becomes standardised, assistants shift from merely recommending products to actually completing transactions on your behalf. That’s a fundamental change in how commerce works.

But it also raises fascinating questions about trust and liability. If your AI assistant makes a purchase you didn’t fully authorise, who’s responsible? How do we handle returns or disputes? What about price comparison—will AI assistants always find you the best deal, or might they be influenced by commercial relationships?

I find myself quite sceptical about some aspects of this. We need to be very careful about how much purchasing power we delegate to automated systems. At the same time, I can see the appeal: imagine telling your assistant, “I need new running shoes, budget £150, prefer sustainable brands”, and having it handle the research and purchase whilst you focus on other things.

The protocol itself is open, which is encouraging. Open standards tend to create more competitive markets and better outcomes for users than proprietary systems controlled by single companies.

Closing Reflection

This week felt like a reality-check week, didn’t it? Five signals about AI’s implementation gap, who gets to decide things, which systems are allowed to scale, what rails agents will run on, and whether all this AI investment is actually working.

The PwC survey reveals the stark truth: most companies are struggling to extract value from AI despite massive investments, whilst the Railway funding shows continued confidence in AI infrastructure plays. Meta’s trial addresses how AI systems interact with users and platform responsibility. The chip export oversight speaks to national control over AI development capacity. And the UCP proposal might reshape how AI agents function in commerce.

If these implementation challenges, policy decisions, and infrastructure choices harden in 2026, they will shape how AI evolves just as much as any model release. Perhaps more so. A brilliant model that companies cannot successfully implement, cannot access the chips it needs for training, faces legal restrictions on its deployment, or lacks standardised protocols for integration into existing systems will struggle to reach its potential.

And honestly? I think that’s actually quite healthy. We shouldn’t just be asking “can we build this?” We need to be asking “should we build this?” and “who benefits?” and “what are the risks?” and, crucially, “how do we actually capture value from this once it’s built?”

The Meta trial will help establish what platform responsibility actually means in practice. Chip export controls will influence global patterns of AI development. The PwC survey shows us that having the technology and deploying it successfully are two very different things. The Railway funding suggests the infrastructure layer is still evolving rapidly. And the UCP proposal might reshape how we think about AI agents in commerce.

None of these is a pure technology story. They’re about how technology intersects with law, policy, economics, implementation challenges, and society.

So what feels most consequential to you right now: the AI ROI challenge, regulatory frameworks, hardware access, infrastructure evolution, or commerce protocols? I’d genuinely like to know. Feel free to share your thoughts.

Did you like this post?

Please let me know if you have any comments or suggestions.

References

PwC 2026 Global CEO Survey

MIT report: 95% of generative AI pilots at companies are failing

Meta seeks to limit evidence in child safety case

House GOP wants final say on AI chip exports

Railway secures $100 million for AI‑native cloud

Google and retail leaders launch Universal Commerce Protocol


About Elena

Elena, a PhD in Computer Science, simplifies AI concepts and helps you use machine learning.

Citation
Elena Daehnhardt. (2026) 'This Week in AI: Regulation Heat, Cloud Bets, and Agentic Shopping', daehnhardt.com, 23 January 2026. Available at: https://daehnhardt.com/blog/2026/01/23/this-week-in-ai-regulation-heat-cloud-bets-agentic-shopping/