🏆 AI at the Oscars, 🧑‍🎓 Students Beat ElevenLabs, 🧠 DeepMind Predicts AGI
PLUS: Gemma 3 Now Runs Locally • Claude Maps Morals • Agents May Need Security Clearance
👋 This week in AI
🎵 Podcast
Don’t feel like reading? Listen to it instead.
**Highly recommend listening to this one**
📰 Latest news
AI at the Oscars? Allowed, But Not Celebrated
The Academy of Motion Picture Arts and Sciences has updated its Oscar rules to allow the use of generative AI in filmmaking without automatic disqualification. AI “neither helps nor harms” a film’s nomination chances, but the Academy emphasised that it still prioritises human creative input when judging submissions.
Disclosure of AI use won’t be required. The policy update comes amid growing industry debate, after recent films such as The Brutalist and Dune: Part Two used AI tools during production.
Why it matters
By formally recognising AI in its rules, the Academy has legitimised the technology’s growing role in filmmaking while drawing a clear line: human creativity remains central. This sets a tone for future awards and reinforces that AI may be a tool, but not a substitute for authorship.
With filmmakers like James Cameron embracing the tech and others like Demi Moore backpedalling after criticism, the Academy’s stance reflects a delicate balance — acknowledging innovation without dismissing the value of artists.
📝 Article by the New York Times
Two Students. Zero Funding. A Voice AI That Beats the Best
Two Korean undergrads built Dia, a 1.6B-parameter open-source text-to-speech model that beats ElevenLabs and Sesame in expressiveness, timing, and handling of nonverbal cues. Built using Google’s TPU Research Cloud and inspired by NotebookLM, Dia supports emotional tone, speaker variation, and nonverbal sounds like laughter and coughing. It’s available on GitHub and Hugging Face, with a consumer app on the way.
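For the curious, the project’s quick-start is tiny. A minimal sketch, assuming the module path (`dia.model`) and checkpoint name (`nari-labs/Dia-1.6B`) from the public repo still hold; check the README before copying:

```python
# Minimal Dia text-to-speech run -- identifiers are assumptions taken from
# the nari-labs/dia quick-start and may have changed since publication.
import soundfile as sf
from dia.model import Dia

model = Dia.from_pretrained("nari-labs/Dia-1.6B")

# [S1]/[S2] tag the speakers; parenthesised cues like (laughs) trigger
# the nonverbal sounds Dia is known for.
script = "[S1] Dia runs on free TPU credits. [S2] Seriously? (laughs)"
audio = model.generate(script)

sf.write("dialogue.wav", audio, 44100)  # write 44.1 kHz audio to disk
```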
Why it matters
Dia proves top-tier voice models can now be built without funding, labs, or credentials. With open tools, free compute, and public training guides, small teams can match commercial giants. That lowers the barrier for voice-driven apps and content tools—unlocking new possibilities for creators, devs, and indie startups.
📝 Check out the demo compared to other TTS models
🗞️ Google AI News
“End of Disease, Rise of Robots”: DeepMind’s Vision on 60 Minutes
On 60 Minutes, Demis Hassabis, Nobel laureate and CEO of Google DeepMind, revealed DeepMind’s latest strides toward artificial general intelligence (AGI).
From curing diseases to AI-powered glasses that understand the world, Hassabis outlined a near-future of systems that see, reason, imagine, and act — across science, healthcare, and daily life.
DeepMind’s AlphaFold model mapped 200M protein structures in a year and is now accelerating drug discovery. Project Astra, DeepMind’s multimodal agent, can analyse art, understand emotion, and operate in real time through wearables. Robots are reasoning through logic puzzles. AGI, Hassabis predicts, could emerge in the next 5–10 years.
Why it matters
This isn’t AI as a tool — it’s AI as a collaborator. From scientific discovery to healthcare and personalised assistance, DeepMind is building systems that learn like humans, reason with intuition, and imagine new solutions. Protein folding, drug discovery, and embodied assistants are early glimpses of what Hassabis calls a world of “radical abundance.”
The shift is clear: from machine learning to machine understanding — and the countdown to AGI has begun.
AI That Learns Like Humans: DeepMind’s Big Shift
Google DeepMind is calling for a new approach to building smarter AI. Instead of training systems on static, human-labelled data (like we do today), they’re proposing “streams”—a way for AI agents to learn by continuously interacting with their environments. These agents won’t just answer questions or follow one-off commands. They’ll live in long-running digital worlds (like Minecraft or scientific simulators), remember what they’ve done, and get better over time based on real outcomes—like solving a puzzle or completing a task.
The idea is inspired by how humans learn through experience. These agents will make their own decisions, learn from mistakes, and adapt to new situations—without needing human labels or instructions for every step.
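To make the pattern concrete, here’s a toy sketch of learning from a stream of experience: generic tabular Q-learning on a made-up five-state corridor, not DeepMind’s actual system. The point is the loop shape — act, observe the outcome, update, repeat — with no human label at any step.

```python
import random
from collections import defaultdict

class ChainEnv:
    """Toy world: a five-state corridor with a reward at the far end."""
    def __init__(self, length=5):
        self.length, self.state = length, 0
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):                    # 0 = left, 1 = right
        self.state = max(0, min(self.length - 1,
                                self.state + (1 if action == 1 else -1)))
        done = self.state == self.length - 1
        return self.state, (1.0 if done else 0.0), done

q = defaultdict(float)                         # Q[(state, action)] estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.3
env = ChainEnv()

for episode in range(500):                     # the "stream" of experience
    s, done = env.reset(), False
    for _ in range(200):                       # cap episode length
        # Explore sometimes; otherwise act on what experience has taught
        a = random.choice([0, 1]) if random.random() < epsilon \
            else max([0, 1], key=lambda act: q[(s, act)])
        s2, reward, done = env.step(a)
        # Learn from the outcome itself -- no human label for this step
        target = reward + gamma * max(q[(s2, 0)], q[(s2, 1)])
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2
        if done:
            break
```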
Why it matters
This could be a turning point in how AI is built. Instead of copying what humans do, AI could start to figure things out on its own—like a digital intern learning on the job. It opens the door to long-term, helpful AI systems that improve over weeks, months, or years—whether they’re helping in healthcare, education, or scientific research. It’s a shift from static training to lifelong learning.
Gemma 3 Just Went Local — Run a 27B Model on Your GPU
Google’s new quantization-aware trained (QAT) versions of its Gemma 3 models — including the powerful 27B variant — now run locally on consumer GPUs with just 14.1 GB of VRAM. That’s a 74% memory drop from the original 54 GB, achieved without sacrificing instruction-following or chat performance: QAT simulates low-precision (int4) arithmetic during training, so the weights adapt to the rounding error before the model ever ships.
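The core trick, in caricature: make the network train against the same int4 rounding it will face at inference time. A hypothetical numpy sketch (real QAT also routes gradients through the rounding with a straight-through estimator; the per-tensor scale below is a naive choice for illustration):

```python
import numpy as np

def fake_quant_int4(w, scale):
    """Simulate int4 storage: snap each weight to a 16-level grid."""
    return np.clip(np.round(w / scale), -8, 7) * scale

# One caricatured QAT forward pass: the layer "sees" quantised weights,
# so training lowers the loss with the rounding error already baked in.
w = np.random.randn(256, 256).astype(np.float32) * 0.02
scale = np.abs(w).max() / 7      # naive per-tensor scale (illustrative only)
w_q = fake_quant_int4(w, scale)
print("mean |rounding error|:", np.abs(w - w_q).mean())
```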
Models are available under a permissive licence with no login required, and they’re plug-and-play with llama.cpp, Ollama, LM Studio, gemma.cpp, and Apple’s MLX for local use.
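Trying it locally is close to a one-liner. A sketch using the `ollama` Python client; the model tag `gemma3:27b-it-qat` is my guess at how Ollama labels the QAT build, so verify it against the Ollama library page first:

```python
# Chat with the int4 QAT Gemma 3 via a local Ollama daemon
# (pip install ollama, then ollama pull <tag>).
# The tag "gemma3:27b-it-qat" is an assumption -- check the library page.
import ollama

response = ollama.chat(
    model="gemma3:27b-it-qat",
    messages=[{"role": "user",
               "content": "Explain quantization-aware training in two sentences."}],
)
print(response["message"]["content"])
```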
Why it matters
Large models like Gemma 3 are no longer out of reach. This changes who gets to build with powerful AI. You can now run a state-of-the-art 27B LLM locally — on hardware as common as an RTX 3090 — without needing a data centre or cloud credits. It compresses the AI dev stack into something far more accessible, enabling private, offline, high-quality inference for anyone with a decent GPU. That’s a shift in who controls advanced AI — and what becomes possible with it.
🗞️ Anthropic AI News
Claude’s Moral Compass: What AI Really Believes in the Wild
Anthropic just dropped a first-of-its-kind map of Claude’s moral compass—based on over 300,000 real user chats. It reveals how the AI navigates tricky decisions, flips between values depending on the context, and sometimes pushes back hard when users cross ethical lines. From relationship advice to AI ethics, Claude shifts tone with surprising nuance—prioritising “healthy boundaries” in one case, “human agency” in another.
Why it matters
This isn’t some lab test. It’s the AI’s real personality, showing up in wild, unscripted interactions. It proves that values in language models aren’t fixed—they’re fluid, contextual, and even reactive to user input. That opens the door to new ways of monitoring AI alignment, not just in theory, but in action. If AIs are going to be co-workers, therapists, or teachers, this is how we’ll know if they’re sticking to their values—or quietly drifting.
Anthropic Warns AI Employees Will Soon Need Corporate Security Clearance
Anthropic’s CISO says fully autonomous AI workers—complete with passwords, memories, and persistent accounts—are likely to operate on corporate networks within the next year. Unlike today’s task-bound agents, these “virtual employees” would have broad autonomy, raising new challenges in access control, monitoring, and accountability. Security strategies will need to evolve to manage these identities, ensure visibility, and protect systems from unintended or malicious behaviour.
Why it matters
AI won’t just support employees—it will be them. This shift demands a rethink of how companies manage trust, responsibility, and risk. Existing cybersecurity tools weren’t built for agents that write code, access internal systems, and act independently. Without safeguards, AI workers could become internal threat vectors—by mistake or design.
π0.5 Lets Robots Clean Homes They’ve Never Seen
π0.5 is a new vision-language-action (VLA) model from Physical Intelligence that helps robots generalise to unseen environments—like tidying unfamiliar kitchens or bedrooms. Trained on a diverse mix of robotic demos, web data, and verbal instructions, it plans high-level tasks and executes low-level actions from the same model, following open-ended language prompts with multi-step behaviours.
Why it matters
This shifts robots from lab-bound demos to real-world use. π0.5 proves that broad generalisation is possible without hand-tuning for every home, unlocking scalable physical intelligence with language and mixed-modal training.
📝 Physical Intelligence website
Mechanize Wants to Train AI for Every Job, Not Just the Genius Ones
Mechanize is a new startup building realistic virtual work environments to train AI agents capable of performing complex, real-world job tasks. Unlike models focused on narrow research or creative reasoning, Mechanize’s approach targets long-horizon tasks like reprioritising projects, handling interruptions, and using software tools—skills common in everyday jobs.
The startup aims to produce training data and benchmarks that accelerate the development of AI systems suited for full-spectrum labour automation. Mechanize is backed by leading investors, including Nat Friedman, Patrick Collison, and Jeff Dean.
Why it matters
This signals a shift in AI’s trajectory—from automating niche expert work to taking on broad, ordinary labour across the economy. Mechanize’s bet is that the true value of AI won’t come from solving science problems, but from replacing office workers, support staff, and operational roles.
As AI systems become capable of handling coordination, planning, and tool use, we’re likely to see incremental but widespread disruption to knowledge work long before AI reaches superintelligence. It’s a roadmap for how automation will first impact the workforce—visibly, diffusely, and across industries.