AI at the Oscars, Students Beat ElevenLabs, DeepMind Predicts AGI
PLUS: Gemma 3 Now Runs Locally • Claude Maps Morals • Agents May Need Security Clearance
This week in AI
Podcast
Don't feel like reading? Listen to it instead.
**Highly recommend listening to this one**
Latest news
AI at the Oscars? Allowed, But Not Celebrated
The Academy of Motion Picture Arts and Sciences has updated its Oscar rules to allow the use of generative AI in filmmaking without automatic disqualification. AI "neither helps nor harms" a film's nomination chances, but the Academy emphasised that it still prioritises human creative input when judging submissions.
Disclosure of AI use won't be required. The policy update comes amid growing industry debate, following recent films like The Brutalist and Dune: Part Two using AI tools during production.
Why it matters
By formally recognising AI in its rules, the Academy has legitimised the technology's growing role in filmmaking while drawing a clear line: human creativity remains central. This sets a tone for future awards and reinforces that AI may be a tool, but not a substitute for authorship.
With filmmakers like James Cameron embracing the tech and others like Demi Moore backpedalling after criticism, the Academy's stance reflects a delicate balance: acknowledging innovation without dismissing the value of artists.
Article by the New York Times
Two Students. Zero Funding. A Voice AI That Beats the Best
Two Korean undergrads built Dia, a 1.6B-parameter open-source text-to-speech model that beats ElevenLabs and Sesame in expressiveness, timing, and handling of nonverbal cues. Built using Google's TPU Research Cloud and inspired by NotebookLM, Dia features emotional tone, speaker variation, and sounds like laughter and coughing. It's available on GitHub and Hugging Face, with a consumer app on the way.
Why it matters
Dia proves top-tier voice models can now be built without funding, labs, or credentials. With open tools, free compute, and public training guides, small teams can match commercial giants. That lowers the barrier for voice-driven apps and content tools, unlocking new possibilities for creators, devs, and indie startups.
Check out the demo compared to other TTS models
Check out the GitHub repo
Google AI News
"End of Disease, Rise of Robots": DeepMind's Vision on 60 Minutes
On 60 Minutes, Demis Hassabis, Nobel laureate and CEO of Google DeepMind, revealed DeepMind's latest strides toward artificial general intelligence (AGI).
From curing diseases to AI-powered glasses that understand the world, Hassabis outlined a near-future of systems that see, reason, imagine, and act across science, healthcare, and daily life.
Their AI model mapped 200M protein structures in a year and is now accelerating drug discovery. DeepMind's "Astra" is a multimodal agent that can analyse art, understand emotion, and operate in real time through wearables. Robots are reasoning through logic puzzles. AGI, Hassabis predicts, could emerge in the next 5-10 years.
Why it matters
This isn't AI as a tool; it's AI as a collaborator. From scientific discovery to healthcare and personalised assistance, DeepMind is building systems that learn like humans, reason with intuition, and imagine new solutions. Protein folding, drug discovery, and embodied assistants are early glimpses of what Hassabis calls a world of "radical abundance."
The shift is clear: from machine learning to machine understanding, and the countdown to AGI has begun.
AI That Learns Like Humans: DeepMind's Big Shift
Google DeepMind is calling for a new approach to building smarter AI. Instead of training systems on static, human-labelled data (like we do today), they're proposing "streams": a way for AI agents to learn by continuously interacting with their environments. These agents won't just answer questions or follow one-off commands. They'll live in long-running digital worlds (like Minecraft or scientific simulators), remember what they've done, and get better over time based on real outcomes, like solving a puzzle or completing a task.
The idea is inspired by how humans learn through experience. These agents will make their own decisions, learn from mistakes, and adapt to new situations, without needing human labels or instructions for every step.
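The paper is a position piece, not code, but the loop it describes matches the classic experience-based interaction cycle from reinforcement learning. As an illustration only, here is a minimal sketch of that idea using a toy corridor environment and tabular Q-learning; the environment and all names are hypothetical, not DeepMind's:

```python
import random

class Corridor:
    """Toy long-running environment: positions 0..10, reward grows with position."""
    def __init__(self):
        self.state = 0

    def step(self, action: int):
        self.state = max(0, min(10, self.state + action))
        # The learning signal is a grounded outcome, not a human label.
        return self.state, self.state / 10.0

class StreamAgent:
    """Keeps persistent memory (a Q-table) and improves from its own experience."""
    def __init__(self, actions=(-1, 1), lr=0.5, gamma=0.9, eps=0.3):
        self.q = {}  # memory persists across the whole stream
        self.actions, self.lr, self.gamma, self.eps = actions, lr, gamma, eps

    def act(self, state: int) -> int:
        if random.random() < self.eps:           # occasionally try something new
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.lr * (reward + self.gamma * best_next - old)

random.seed(0)
env, agent = Corridor(), StreamAgent()
state = env.state
for _ in range(2000):        # one long stream, no per-step human instruction
    action = agent.act(state)
    next_state, reward = env.step(action)
    agent.learn(state, action, reward, next_state)
    state = next_state
```

Over the stream, the agent's value estimates should come to favour moves that led to better outcomes, purely from its own history; a real "streams" agent would swap the toy corridor for a rich world like Minecraft or a scientific simulator.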
Why it matters
This could be a turning point in how AI is built. Instead of copying what humans do, AI could start to figure things out on its own, like a digital intern learning on the job. It opens the door to long-term, helpful AI systems that improve over weeks, months, or years, whether they're helping in healthcare, education, or scientific research. It's a shift from static training to lifelong learning.
Read the paper
Gemma 3 Just Went Local: Run a 27B Model on Your GPU
Google's new quantization-aware trained (QAT) versions of its Gemma 3 models, including the powerful 27B variant, now run locally on consumer GPUs with just 14.1 GB of VRAM. That's a 74% memory drop from the original 54 GB. This is achieved without sacrificing instruction-following or chat performance, thanks to a quantization-aware training process that preserves accuracy even at low precision (int4).
Models are available under a permissive licence with no login required, and they're plug-and-play with llama.cpp, Ollama, LM Studio, gemma.cpp, and Apple's MLX for local use.
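The memory figures follow from simple arithmetic on parameter count and bits per weight. A back-of-the-envelope check (weights only; real runtimes add overhead for the KV cache, activations, and a few higher-precision layers, which is why the reported 14.1 GB sits slightly above the raw int4 weight size):

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed just to hold the model weights, in decimal GB."""
    return n_params * bits_per_param / 8 / 1e9

bf16_gb = weight_memory_gb(27e9, 16)  # 16-bit baseline: 54.0 GB
int4_gb = weight_memory_gb(27e9, 4)   # QAT int4 weights: 13.5 GB
saving = 1 - int4_gb / bf16_gb        # 0.75, i.e. 75% smaller on weights alone

print(f"{bf16_gb:.1f} GB -> {int4_gb:.1f} GB ({saving:.0%} smaller)")
```

The same arithmetic explains why a 27B model at int4 fits comfortably in the 24 GB of an RTX 3090 with room left for context.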
Why it matters
Large models like Gemma 3 are no longer out of reach. This changes who gets to build with powerful AI. You can now run a state-of-the-art 27B LLM locally, on hardware as common as an RTX 3090, without needing a data centre or cloud credits. It compresses the AI dev stack into something far more accessible, enabling private, offline, high-quality inference for anyone with a decent GPU. That's a shift in who controls advanced AI, and in what becomes possible with it.
Anthropic AI News
Claude's Moral Compass: What AI Really Believes in the Wild
Anthropic just dropped a first-of-its-kind map of Claude's moral compass, based on over 300,000 real user chats. It reveals how the AI navigates tricky decisions, flips between values depending on the context, and sometimes pushes back hard when users cross ethical lines. From relationship advice to AI ethics, Claude shifts tone with surprising nuance, prioritising "healthy boundaries" in one case and "human agency" in another.
Why it matters
This isn't some lab test. It's the AI's real personality, showing up in wild, unscripted interactions. It proves that values in language models aren't fixed; they're fluid, contextual, and even reactive to user input. That opens the door to new ways of monitoring AI alignment, not just in theory, but in action. If AIs are going to be co-workers, therapists, or teachers, this is how we'll know whether they're sticking to their values or quietly drifting.
Read the post
Anthropic Warns AI Employees Will Soon Need Corporate Security Clearance
Anthropic's CISO says fully autonomous AI workers, complete with passwords, memories, and persistent accounts, are likely to operate on corporate networks within the next year. Unlike today's task-bound agents, these "virtual employees" would have broad autonomy, raising new challenges in access control, monitoring, and accountability. Security strategies will need to evolve to manage these identities, ensure visibility, and protect systems from unintended or malicious behaviour.
Why it matters
AI won't just support employees; it will be them. This shift demands a rethink of how companies manage trust, responsibility, and risk. Existing cybersecurity tools weren't built for agents that write code, access internal systems, and act independently. Without safeguards, AI workers could become internal threat vectors, by mistake or by design.
Article by Axios
π0.5 Lets Robots Clean Homes They've Never Seen
π0.5 is a new vision-language-action (VLA) model that helps robots generalise to unseen environments, like tidying unfamiliar kitchens or bedrooms. Trained on a diverse mix of robotic demos, web data, and verbal instructions, it plans high-level tasks and executes low-level actions from the same model. It follows open-ended language prompts with multi-step behaviours.
Why it matters
This shifts robots from lab-bound demos to real-world use. π0.5 proves that broad generalisation is possible without hand-tuning for every home, unlocking scalable physical intelligence with language and mixed-modal training.
Physical Intelligence website
Mechanize Wants to Train AI for Every Job, Not Just the Genius Ones
Mechanize is a new startup building realistic virtual work environments to train AI agents capable of performing complex, real-world job tasks. Unlike models focused on narrow research or creative reasoning, Mechanize's approach targets long-horizon tasks like reprioritising projects, handling interruptions, and using software tools: skills common in everyday jobs.
The startup aims to produce training data and benchmarks that accelerate the development of AI systems suited for full-spectrum labour automation. Mechanize is backed by leading investors, including Nat Friedman, Patrick Collison, and Jeff Dean.
Why it matters
This signals a shift in AI's trajectory: from automating niche expert work to taking on broad, ordinary labour across the economy. Mechanize's bet is that the true value of AI won't come from solving science problems, but from replacing office workers, support staff, and operational roles.
As AI systems become capable of handling coordination, planning, and tool use, we're likely to see incremental but widespread disruption to knowledge work long before AI reaches superintelligence. It's a roadmap for how automation will first impact the workforce: visibly, diffusely, and across industries.
Launch post