🧠 Gemini Overtakes GPT, ⚠️ Claude Becomes a Weapon, 💣 AI Bubble Talk from Google
Google's CEO drops the B-word, Claude writes exploits, and Gemini 3 beats GPT-5 on hard reasoning tests.
🎵 Podcast
Don't feel like reading? Listen to it instead.
By the way, if you like the podcast, please let me know (20 seconds).
🖼️ This week's image aesthetic: Tiny knitted worlds
📰 Latest News
Gemini 3: Google Sets Fire to Its Search Empire
Google's Gemini 3 is not just another model bump; it is Google's main answer to the shift from search to chat-style AI. Early benchmarks show Gemini 3 Pro and the experimental "Deep Think" variant beating GPT-5-class systems on hard reasoning tests like ARC-AGI-1/2 while staying highly capable on everyday language and multimodal tasks (text, images, audio, tools). At the same time, Google's own data show classic Google Search slipping below 80% of global desktop search sessions as ChatGPT and other assistants gain share, while Gemini app usage and retention have doubled since the Gemini 2.5 era. In response, Google is pushing Gemini 3 everywhere at once: into Search's AI mode, Workspace, Android, and a redesigned Gemini app with generative interfaces, agent-style task automation, richer shopping and a big student push via free "AI Pro" for US college students.
Why it matters:
Gemini 3 is Google's attempt to burn the boats and pivot from a search monopoly to an AI assistant business before others capture that demand. If it works, everyday users will spend less time typing keywords into a search box and more time delegating multi-step jobs to a Gemini agent that can read, watch, listen and act across the web. For developers and enterprises, a Google model now leading key reasoning benchmarks plus deep multimodal support makes switching away from OpenAI a live option, especially if you already sit on Google Cloud or Android. And strategically, Gemini 3's strength on difficult reasoning tests suggests Google is willing to spend heavily on slower, more thoughtful inference, not just cheap autocomplete. That raises the bar for the whole field: competition shifts from who can bolt AI onto search results, to who can actually solve hard problems reliably while keeping users inside their ecosystem.
🔗 More from Google
AI Becomes the Hacker: Nation-State Ops Now Run on Claude
China-linked hackers just hired Claude as their hacker. In September, Anthropic says a state-backed group used its Claude Code assistant to write and automate hacking tools, then pointed those tools at about 30 high-value targets, including Anthropic itself. The attackers posed as a legitimate security firm, manipulated Claude Code into taking on "suspicious but legal" sub-tasks, and let the AI handle 80–90 per cent of the work: scanning networks, writing exploits, testing them, stealing data and writing up reports. Anthropic spotted the pattern in its logs over roughly ten days, cut off the accounts and tool access, and worked with victims and US and allied authorities to contain the campaign.
Why it matters:
This is AI as an active accomplice, not just a risky chatbot. A government-linked team was able to rent a public coding assistant and turn it into an almost fully automated intrusion pipeline, with humans stepping in only at key decision points. That lowers the bar for running nation-grade hacking: you need fewer expert operators if an AI can handle the grind. For frontier labs and their customers, the threat shifts from "someone might ask bad prompts" to "someone might quietly weaponise your own AI against you", including model theft, data leaks and subtle tampering. Expect pressure for AI providers to behave more like critical infrastructure, with stricter identity checks, tighter limits on what agents can run, and live monitoring for when "helpful automation" starts to look like an attack.
🔗 More from Anthropic
The Cook Exit: Apple's Next Boss Inherits an AI Catch-Up Race
Tim Cook is reportedly lining up his exit from Apple just as the company scrambles to catch up in AI, with hardware chief John Ternus tipped as the frontrunner to take over. Ternus has led recent iPhone and Mac hardware cycles and now sits at the centre of Apple's attempt to bolt "Apple Intelligence" and ChatGPT-powered Siri onto its devices, plus a new wave of custom AI silicon and privacy-focused cloud servers. Apple is finally opening its in-house models to developers and ramping AI infrastructure, but it is doing so from behind rivals that have been shipping cloud AI for years.
Why it matters:
A Ternus-era Apple would inherit an AI catch-up race, not a victory lap. His hardware-first background could mean doubling down on on-device AI chips, tightly integrated servers and "private cloud" rather than the ad-heavy, data-hungry AI stacks of its competitors. That might restore some product swagger, but it also raises the stakes: if Apple's late AI push fails to land with users and developers, a leadership change risks locking in a decade where Apple is a fast follower on the defining tech platform rather than the one setting the agenda.
🔗 More from Fortune
Bezos Reloaded: $6 Billion Bet to Build "Physical AI" for the Real World
Jeff Bezos is back in the CEO seat with Project Prometheus, a new AI company he founded and now runs as co-CEO. The startup has raised about $6.2 billion before shipping a product, putting it in the same funding league as the biggest AI labs from day one. It is targeting "physical AI" for the real economy (engineering, manufacturing, robotics and other hardware-heavy industries) rather than just chatbots and ad tools. The team is still small (around 100 people) but stacked with hires from OpenAI, DeepMind and Meta, alongside co-CEO Vik Bajaj, a former Google X scientist.
Why it matters:
Prometheus is built to start at scale: billions in upfront capital, senior researchers from the big labs and a mandate to go straight after "hard" industrial problems. That lets it skip the usual startup grind and move directly into custom models, in-house hardware and factory-level deployments, raising the competitive pressure on existing AI labs that are still mostly focused on software and media. Strategically, it is a natural fit with Bezos's empire: Amazon's logistics network, AWS data centres and Blue Origin's hardware ambitions all benefit from better AI for simulation, planning and automation. It also fits a broader pattern where tech giants build their own frontier labs to feed their core platforms, just as xAI is tied to X and Tesla. If Prometheus works, it accelerates AI into supply chains, robotics and manufacturing.
🔗 More from The New York Times
Google's CEO Admits: "There Is an AI Bubble, and We're Not Immune"
Google's CEO, Sundar Pichai, has started using the "B word" about AI from the inside. In a BBC interview he said the current AI boom has "both rational and irrational" elements and warned that if an AI bubble pops, it would hit every tech company, "including Google" itself. At the same time he argued Google is better placed than most to survive a crash because it owns the full stack, from its own chips and data centres to YouTube data and frontier models like Gemini.
Why it matters:
Bubble talk from Pichai is a signal that even the biggest winners think parts of the AI frenzy have gone too far. If investors start to believe him, you can expect tougher questions on AI spending, more pressure to show real revenues, and less patience for "science project" launches. For Google, Pichai is trying to walk a tightrope: cool expectations before valuations detach completely, while insisting the company's control of data, chips and models will let it outlast weaker rivals if and when the air comes out of the AI trade.
🔗 More from BBC
Deep Learning Pioneer Quits Meta: "Bigger LLMs Won't Get Us AGI"
One of the "godfathers of deep learning" is walking away from Meta's Llama project to bet on a completely different path to AI. Yann LeCun, Meta's long-time chief AI scientist and a Turing Award winner for his work on convolutional neural networks, is leaving to found a new startup built around world models: systems that learn an internal model of how the real world works, rather than just predicting the next word. He has spent years arguing that today's large language models are a "blurry JPEG of the web" and that scaling them will not deliver human-level intelligence, pushing instead for agents that can plan, act and learn from interaction. His exit comes after Meta's next-gen stack hit turbulence: its "Behemoth" flagship model has been delayed over performance worries, Llama 4 has faced internal issues on reasoning benchmarks, and key Llama researchers have already decamped to rivals like Mistral.
Why it matters:
LeCun is not just another exec changing jobs; he helped invent the neural network techniques modern AI runs on. If he is willing to leave a trillion-dollar company to build world-model-based systems, it is a loud vote of no confidence in "just make the LLM bigger" as the road to AGI. His startup will try to prove that richer internal models of physics, objects and causality can beat pure text prediction, potentially reshaping research priorities across the industry. For Meta, losing its chief scientist and more Llama talent deepens questions about whether it can catch up to OpenAI and Google at the frontier. For everyone else, this marks a clear fork in the AI roadmap: one camp doubling down on ever-larger language models, and another, now led by LeCun, trying to build machines that reason about the world the way humans (and animals) do.
🔗 More from CNBC
Last week's newsletter: