Elena's AI Blog

AGI Timelines, 3D Vision, and the Reality of AI Scams

05 Dec 2025 / 8 minutes to read

Elena Daehnhardt


Midjourney 7.0: A hacker AI bot calls on the telephone to scam, HD
I am still working on this post, which is mostly complete. Thanks for your visit!


Introduction

This week, the theme was convergence—but with a side of caution.

We saw the convergence of policy and technology as the U.S. Health Department moved to make AI part of its core infrastructure. We saw the convergence of senses, with breakthroughs in how AI sees (3D from 2D) and hears (universal sound understanding).

But we also saw the convergence of AI capabilities and criminal intent. While DeepMind predicts AGI by 2030 and researchers give machines better senses, a story out of Kansas served as a chilling reminder of why we need to stay vigilant right now.

Here are the top six developments you need to know this week.

1. Government Gets Serious: HHS AI Strategy

U.S. health department unveils strategy to expand its adoption of AI technology

On December 4, the U.S. Department of Health and Human Services (HHS) released a 20-page strategy to move AI from experimental pilots to core infrastructure. The plan includes five pillars: governance, tool development, workforce empowerment, R&D standards, and clinical integration.

Critically, the department is forecasting a 70% increase in AI projects for fiscal year 2025 and adopting a “try-first” culture, including department-wide access to tools like ChatGPT.

This is the moment AI becomes bureaucracy—in a good way. When the agency responsible for public health decides to operationalise AI, it signals that the technology is stable enough for high-stakes environments. However, its success hinges entirely on data privacy; "move fast and break things" doesn't work when you're dealing with patient records.

2. The Timeline: DeepMind CEO Predicts AGI by 2030

DeepMind CEO says AGI approaching 'transformative' juncture

At the Axios AI+ SF summit, DeepMind CEO Demis Hassabis put a date on the industry’s biggest milestone. He reaffirmed that Artificial General Intelligence (AGI)—systems matching or surpassing human capability—could be realised by 2030.

Hassabis noted that the next critical step is not just more data, but “world models”—AI that understands physics and cause-and-effect, rather than just predicting the next word in a sentence.

Five years. That is a blinking red light on the dashboard of history. If Hassabis is right, we aren't just looking at better chatbots; we are looking at a fundamental shift in the nature of intelligence within the decade. The race to build "world models" suggests that the era of Large Language Models (LLMs) might be evolving into something much more grounded in reality.

3. Geopolitics: Cohere CEO on the “Trust Moat”

AI startup Cohere CEO says US holds edge over China in AI race

Speaking at the Reuters NEXT conference, Cohere CEO Aidan Gomez argued that the U.S. and Canada maintain a decisive edge over China in the AI race—but not just because of the technology itself.

Gomez acknowledged that China can build competitive models, but argued that commercialization and trust are the real differentiators. He noted that liberal democracies are unlikely to rely on Chinese tech for critical infrastructure, creating a massive “trust moat” for Western AI companies to scale globally.

This is a crucial distinction. In the world of enterprise and government AI, having the best code isn't enough; you need the best relationships. Gomez is pointing out that geopolitics is becoming a feature of the software stack. If you can't trust the vendor's government, you can't use the model.

4. Vision Breakthrough: Meta’s SAM3D

SAM3D: Transforming 3D Scene Modelling

Meta AI dropped a significant release this week with SAM3D, a new system that brings human-level 3D perception to computer vision. The breakthrough? It can reconstruct high-fidelity 3D models of objects and bodies from a single 2D image.

By using specialised architectures for objects and bodies, SAM3D can infer depth, occlusion, and lighting without needing the complex scanning rigs or multiple camera angles previously required.

This dramatically lowers the barrier to entry for 3D creation. Whether it's for game design, VR, or e-commerce, the ability to turn a simple snapshot into a spatial asset changes the workflow entirely. We are moving from a world where we capture flat memories to one where we capture spatial realities.
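SAM3D's actual pipeline isn't reproduced here, but the core geometric idea behind single-image 3D—lifting each pixel into space using a predicted depth value and the pinhole camera model—can be sketched in a few lines. The depth map and camera intrinsics below are made-up illustrative values, not SAM3D outputs:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift an (H, W) depth map into an (H*W, 3) point cloud using the
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a 2x2 "image" where every pixel is 2 metres away.
depth = np.full((2, 2), 2.0)
cloud = backproject_depth(depth, fx=100.0, fy=100.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (4, 3)
```

A system like SAM3D learns to predict that depth (plus occlusion and lighting) from a single photo; the geometry above is what turns the prediction into a spatial asset.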

5. Audio Intelligence: Google’s New Sound Benchmark

From Waveforms to Wisdom: The New Benchmark for Auditory Intelligence

While LLMs have mastered text, sound has remained fragmented—until now. Google Research introduced MSEB (Massive Sound Embedding Benchmark), a new open-source platform to standardise how AI understands audio.

The benchmark tests AI across eight distinct capabilities, from transcription and classification to reasoning and audio reconstruction. The goal is to prove that a single, general-purpose “sound embedding” can handle all auditory tasks, much like how GPT handles all text tasks.

We often forget that intelligence isn't just language—it's perception. For an AI to truly interact with the world, it needs to understand the difference between breaking glass and a ringing phone as instantly as a human does. MSEB is the scorecard that will help us build those ears.
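MSEB itself is a benchmark suite, but the "one sound embedding for all tasks" idea boils down to mapping audio clips into vectors and comparing them: similar sounds should land close together in embedding space. A minimal illustration with made-up 4-dimensional embedding vectors (not MSEB's actual API or dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend a sound model mapped three clips to embeddings (invented numbers).
glass_break_1 = np.array([0.9, 0.1, 0.0, 0.2])
glass_break_2 = np.array([0.8, 0.2, 0.1, 0.3])
phone_ring    = np.array([0.0, 0.9, 0.4, 0.1])

# A good general-purpose embedding places the two glass clips closer
# to each other than either is to the ringing phone.
print(cosine_similarity(glass_break_1, glass_break_2) >
      cosine_similarity(glass_break_1, phone_ring))  # True
```

Benchmarks like MSEB essentially score how well one embedding preserves these distinctions across many tasks at once, from transcription to reasoning.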

6. The Dark Side: The “Mom is Kidnapped” Scam

AI voice scam in Lawrence shows challenge for police

In Lawrence, Kansas, a woman received a phone call from her “mother” claiming she had been kidnapped. The voice was a perfect AI clone, likely scraped from social media or voicemails. It triggered a full police response before being revealed as a fraud.

This incident highlights a growing gap: AI capabilities for fraud are outpacing law enforcement’s training and tools to detect them.

We are entering an era where we cannot trust our own ears. When a scammer can wear your mother's voice like a mask, "verification" becomes a survival skill. We need to normalise "safe words" for families and better authentication protocols for telecom providers. If you get a call like this, try to verify it through another channel immediately.
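A family "safe word" is deliberately low-tech, but if you ever script a verification step—say, a challenge phrase in a hypothetical callback bot—compare the secret with a constant-time check rather than `==`, so response timing leaks nothing about partial matches. A minimal sketch (the phrase below is a placeholder):

```python
import hmac

def verify_safe_word(expected: str, supplied: str) -> bool:
    """Constant-time comparison: timing reveals nothing about how many
    leading characters of the supplied phrase matched."""
    return hmac.compare_digest(expected.encode(), supplied.encode())

print(verify_safe_word("blue-walrus", "blue-walrus"))  # True
print(verify_safe_word("blue-walrus", "bluewalrus"))   # False
```

The same principle applies to any shared-secret check, which is why Python's standard library ships `hmac.compare_digest` for exactly this purpose.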

Conclusion

This week showed us the full spectrum of the AI future. On one hand, we have the brilliance of SAM3D and MSEB, giving machines the ability to perceive the world with human-like fidelity. On the other, we have the HHS strategy and Cohere’s geopolitical take, showing how these tools are being woven into the fabric of nations.

But the story from Kansas is the one that sticks. As AI becomes infrastructure, it also becomes a weapon for the opportunistic. The technology to clone a voice is here, and it’s cheap.

The takeaway? Be excited about the research, be supportive of the strategy, but be vigilant about your personal security. Trust, but verify.

Did you like this post? Please let me know if you have any comments or suggestions.





About Elena

Elena, a PhD in Computer Science, simplifies AI concepts and helps you use machine learning.

Citation
Elena Daehnhardt. (2025) 'AGI Timelines, 3D Vision, and the Reality of AI Scams', daehnhardt.com, 05 December 2025. Available at: https://daehnhardt.com/blog/2025/12/05/agi-timelines-3d-vision-and-the-reality-of-ai-scams/