🚨 AI Too Dangerous to Release • 🕵️♂️ Altman Exposed • 🚀 Musk’s Orbital AI Factory
Plus: OpenAI just dropped a wild manifesto demanding a 4-day work week and a new tax on bots.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest News
This week’s image style (Flux 2 Pro): the Frank Miller Sin City aesthetic
Too Dangerous for the Public: Anthropic's Unreleased AI Finds Thousands of Zero-Days in Weeks
Anthropic has quietly assembled a cybersecurity super-team. In a massive new initiative called Project Glasswing, the AI company is joining forces with tech giants like Apple, Google, and Microsoft to deploy an unreleased model named Claude Mythos Preview. This model is explicitly designed to hunt down and exploit complex software flaws that have historically evaded human detection. In just a few weeks of testing, Mythos successfully identified thousands of critical zero-day vulnerabilities across every major web browser and operating system. Because Mythos is so extraordinarily capable, Anthropic has completely ruled out a public release. Instead, they are locking the tool behind closed doors and granting exclusive access to Glasswing partners so they can proactively patch critical infrastructure.
Why it Matters
This marks a terrifying but necessary shift in global cybersecurity, fundamentally altering the economics of hacking. Because frontier AI models are now so capable of discovering vulnerabilities at scale, the cost, effort, and level of expertise required to find and exploit zero-days have dropped dramatically. The broader vulnerability landscape is already being flooded with a massive surge of AI-generated exploit code. For the first time, an AI model can autonomously discover and weaponise previously unknown software flaws—a task once strictly reserved for elite human hackers.
By handing this immense power exclusively to defenders first, Project Glasswing aims to patch critical global systems before malicious actors inevitably build similar tools. Anthropic’s outright refusal to release Mythos to the public highlights a profound new era of caution in the tech industry. It is a stark acknowledgement that frontier AI models now possess capabilities so dangerous they simply cannot be released to the masses, establishing a vital new precedent for responsible deployment.
Taxing the Bots: OpenAI Admits AI Will Break the Economy, Pitches a Radical Wealth Fund
OpenAI just dropped a 13-page policy manifesto called ‘Industrial Policy for the Intelligence Age’, and it outlines a radical restructuring of society. Operating on the belief that superintelligence is a rapidly approaching reality, the AI giant is proposing a massive overhaul of the economic system. Key proposals include taxing automated labour and corporate profits to build a public wealth fund that distributes dividends to all citizens. The document also suggests piloting a four-day work week fuelled by AI productivity gains, alongside the development of extreme “containment playbooks” for potentially uncontrollable AI systems.
Why it Matters
This document is a glaring admission from the industry leader that advanced AI will fundamentally break the current social contract. By proposing concrete taxes on AI-driven profits and the creation of a universal wealth fund, OpenAI is shifting the global conversation from theoretical job losses to immediate, redistributive economic policy. It signals that the impending impact on labour markets will render existing economic systems completely inadequate. Furthermore, the explicit call for government coordination on containment protocols shows a company trying to shape its own regulation from the inside, all while continuing to race toward the very superintelligence it warns may need containing.
Secret Memos and "Deception": The Real Reason OpenAI Fired Sam Altman
A bombshell New Yorker investigation just reignited the drama surrounding Sam Altman’s brief 2023 ouster, alleging a deep, documented pattern of deception by the OpenAI CEO. Drawing on over 100 interviews and leaked internal documents, the report unearths secret memos from former chief scientist Ilya Sutskever and private notes from Anthropic CEO (and former OpenAI executive) Dario Amodei. These documents accuse Altman of systematically misrepresenting facts to his own board and executive team, specifically concerning critical safety protocols. This alleged manipulation is what directly triggered the board’s shocking decision to fire Altman—a move that was rapidly crushed by immense pressure from loyal employees and heavyweight investors like Microsoft.
Why it Matters
This isn’t just boardroom gossip; it exposes a massive governance crisis at the world’s most influential AI company. The allegations highlight a dangerous collision between OpenAI’s breakneck commercialisation and the foundational safety principles the organisation was actually built upon. If the leader of a company developing society-altering technology is actively misleading his own board about safety protocols and strategic partner deals, it creates terrifying uncertainty about the firm’s reliability. Ultimately, these claims risk shattering regulatory trust while proving a grim reality: when safety clashes with commercial momentum, mega-investors hold the ultimate power to dictate the leadership and future direction of AI.
Project Terafab: Why Intel is Helping Elon Musk Build AI Chips for Space
Intel is officially joining forces with Elon Musk’s technology empire to build the ultimate semiconductor powerhouse. Dubbed Project Terafab, this joint initiative between Intel, Tesla, SpaceX, and xAI will establish a massive $20 billion to $25 billion manufacturing complex on the north side of the Giga Texas campus in Austin, Texas. The facility is designed to handle every single stage of chip production under one roof, from initial design and lithography to memory production and advanced testing. Intel is bringing its heavyweight manufacturing expertise to the table—including its cutting-edge 18A process and advanced packaging technologies—with a staggering goal: churning out one terawatt of computing power every year. To put this monumental scale into perspective, one terawatt is roughly 50 times the current global annual output of AI chips. According to project roadmaps, 80% of these chips will be radiation-hardened for SpaceX and xAI orbital data centres, while the remaining 20% will power terrestrial products like Tesla’s Cybercabs, Full Self-Driving (FSD) systems, and Optimus humanoid robots.
Why it Matters
This mega-factory is a direct strike against the impending global shortage of specialised AI chips. By building a vertically integrated base on US soil, Musk is aggressively cutting out foreign suppliers to seize total control of his own supply chain. Terafab’s final production goal of one million wafers per month is equivalent to roughly 70% of the entire current global production capacity of industry leader TSMC. This guarantees Musk’s companies the hardware they need to accelerate autonomous driving and space exploration without waiting in TSMC’s deeply backlogged queues, which are reportedly booked solid through 2028. For Intel, whose stock surged roughly 4% following the announcement, Terafab is a monumental lifeline. Securing Musk’s empire as a cornerstone customer validates their expanding Intel Foundry division and proves they can still compete in the elite contract manufacturing market. Ultimately, the sheer scale of this project threatens to permanently shift the balance of power in global semiconductor production.