🎨 Google Tanks Figma | 💀 OpenAI Kills Sora | 🚀 Musk's $25B Space Chips
Plus: How a hilariously sloppy hacker bug just saved NASA and Nvidia from a historic AI data breach.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest News
This week’s image aesthetic (Flux 2 Pro): Surrealist Vintage Collage (The "Monty Python" Look)
Vibe Design is Here: Why Google’s UI-Generating AI Just Wiped 12% Off Figma’s Stock
Google is officially taking on Figma, and the stock market is already pricing in the damage. Following the massive March 2026 update to its free AI design tool, Google Stitch, Figma’s stock plummeted 12% in just two days.
Stitch is no longer just a basic UI generator; it has evolved into a fully AI-native “vibe design” platform. Powered by the new Gemini 3.1 model, Stitch operates on an infinite canvas where users can generate entire design systems and interactive prototypes from text prompts, rough sketches, or even by pasting a live website URL to extract its visual style.
Users can literally speak directly to the canvas using voice commands, instructing a live AI design agent to tweak layouts, adjust colour palettes, and provide real-time design critiques. Crucially, Stitch doesn’t just create flat, static images; it generates usable HTML and CSS, automatically exporting an agent-friendly DESIGN.md file that plugs directly into AI coding assistants like Claude Code and Cursor.
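The newsletter doesn't show what Stitch's exported file actually contains, but an agent-readable design brief of the kind described might look something like this (an entirely hypothetical structure, not Stitch's real schema):

```markdown
# DESIGN.md — hypothetical sketch, not Stitch's actual export format

## Palette
- primary: #1A73E8
- surface: #FFFFFF

## Typography
- headings: Inter, weight 600
- body: Inter, weight 400, 16px base

## Components
- Button: 8px radius, primary fill, white label
- Card: surface fill, 1px border, 16px padding
```

The point of a plain-markdown file like this is that a coding assistant can ingest it as context and apply the same tokens consistently across generated components.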
Why it Matters
This release signals a fundamental shift in how software is built, moving the industry away from meticulous, pixel-pushing workflows and towards conversational “vibe designing”. By allowing users to simply express their intent, solo founders and backend developers can move from a vague concept to a clickable, responsive prototype in a matter of minutes without needing a dedicated designer. The integration of voice commands makes the messy, early stages of brainstorming significantly faster and more intuitive.
Furthermore, by instantly generating the underlying code and exporting strict design system rules, Stitch effectively eliminates the traditional, often-painful handoff process between design and engineering teams. For Figma, which only went public in July 2025, this free, highly capable AI alternative represents a massive threat to its core collaborative business model, and a strong signal that the future of UI design is conversing with an AI co-worker rather than staring at a blank screen.
RIP Sora: OpenAI Kills Its Video Generator to Build the Ultimate Enterprise OS
OpenAI is ruthlessly consolidating its empire to conquer the enterprise desktop. In a massive pivot, the AI giant is merging ChatGPT, its Codex programming platform, and its Atlas web browser into a single unified application.
To make room for this hyper-focused enterprise push, OpenAI has axed its highly anticipated video generator, Sora. This deliberate move sacrifices flashy consumer media to concentrate entirely on lucrative business tools.
While the mobile app will remain a separate product, this desktop consolidation requires massive scale. To execute the rollout, OpenAI plans to nearly double its global workforce to 8,000 employees by the end of 2026.
Why it Matters
This consolidation is a direct assault on the fragmented enterprise software market. By weaving its most powerful tools into one ubiquitous desktop application, and killing off resource-heavy projects like Sora, OpenAI is attempting to become the default operating system for modern knowledge work.
This unified platform is designed to lock corporate clients into a single inescapable ecosystem. But the quiet hiring of a former Meta advertising executive reveals a company desperate for profitability. OpenAI is now executing a ruthless dual-monetisation strategy, securing enterprise contracts while preparing to run ads across its free consumer tools. This aggressive scramble for cash highlights a glaring vulnerability. While much smaller rivals like Anthropic are already printing profit, the industry leader is being forced to squeeze revenue from every possible angle just to cover its astronomical operating costs.
Inside Elon Musk’s Radical One Terawatt "Terafab" Chip Plant
Elon Musk has just announced what he calls “the most epic chip-building exercise in history.” In a staggering $25 billion joint venture between Tesla, SpaceX, and xAI, Musk is launching “Terafab,” a massive semiconductor fabrication plant in Austin, Texas. Revealed at the city’s defunct Seaholm Power Plant, the project aims to vertically integrate the entire production lifecycle under one roof, from chip design and lithography to advanced packaging and testing.
The facility will produce two distinct 2-nanometre chip families: a specialised edge-inference processor designed for Tesla’s vehicles and Optimus humanoid robots, and the “D3”, a radiation-hardened chip built specifically for orbital data centres. Initial small-batch production is targeted for late 2026, with a highly ambitious volume ramp in 2027 and an eventual goal of shipping enough chips each year to add one terawatt of computing power.
Why it Matters
This massive $25 billion gamble is born out of severe supply chain desperation. Musk claims that the combined output of every advanced semiconductor foundry on Earth meets only about 2% of his companies’ projected needs for the coming AI boom. By attempting to bypass external giants like TSMC and Samsung entirely, Musk is trying to secure total control, insulated from geopolitics, over the custom hardware that will power his entire empire.
However, the most radical element of the Terafab strategy is its extra-terrestrial ambition. Musk plans to allocate 80% of the factory’s output to space, arguing that the vacuum of orbit provides superior cooling and significantly greater solar irradiance for AI satellites compared to terrestrial data centres. While semiconductor experts are heavily scrutinising the rapid 2026 timeline, this level of extreme vertical integration might be necessary to achieve his goals.
🔗 More from Datacenter Dynamics
A Sloppy Bug Just Saved NASA and Nvidia From the Biggest AI Hack in History
The AI industry just narrowly avoided a catastrophic digital heist, and we have only a hilariously sloppy hacker to thank for it.
On March 24, 2026, a hacking group successfully compromised a massively popular piece of software called LiteLLM. Think of LiteLLM as a digital master keychain: it holds the passwords and access keys that major organisations like NASA, Netflix, and Nvidia use to connect to AI models from OpenAI, Google, and Anthropic. It gets downloaded over 97 million times a month.
The hackers managed to slip a malicious file into the official LiteLLM update. This wasn’t a virus you had to accidentally click on; the second a developer updated their system, the malware ran automatically in the background, designed to quietly scoop up every master password, cloud server key, and crypto wallet on the machine.

However, the hackers got sloppy. They “vibe coded” the malware, meaning they likely rushed it or used AI to write it without checking the logic. Their code contained a fatal bug that accidentally forced the computer into an endless loop, consuming so much memory that it instantly crashed the infected machines. Because of this loud, crashing error, a developer investigating their frozen computer discovered the breach within an hour of it going live.
Why it Matters
This near-miss exposes the terrifying reality of “supply chain attacks” in modern software. You don’t have to do anything wrong to get hacked; you just have to trust the wrong update. In this case, many developers didn’t even intentionally download LiteLLM; it was automatically pulled into their computers as a background requirement for other popular AI coding tools.
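One standard defence against exactly this kind of poisoned-update attack is hash pinning: record a cryptographic fingerprint of a dependency when you review it, and refuse any later download that doesn't match. Here is a minimal Python sketch of the idea (the function name `verify_artifact` is our own illustration, not part of any real package manager's API):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned hash."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Pin the hash at review time, then check it before every install or update.
payload = b"package contents"
pinned = hashlib.sha256(payload).hexdigest()

print(verify_artifact(payload, pinned))      # True: the artifact matches the pin
print(verify_artifact(b"tampered", pinned))  # False: a modified update fails the check
```

Real tooling works the same way, e.g. pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) rejects any package whose hash differs from the one pinned in the requirements file, which would have blocked a silently swapped release like this one.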
If the attackers had simply written cleaner code, this massive breach would have been completely invisible. The malware would have silently siphoned off the deepest corporate secrets of thousands of companies for weeks or even months before anyone noticed. The fact that a simple software bug was the only thing standing between the tech industry and a historic data breach highlights a massive blind spot: the companies building AI the fastest right now often have the least visibility into the vulnerable, interconnected software running beneath it.