⚠️ 45% of AI Code is Flawed, ⚖️ Anthropic Sues the US, 💸 Nvidia's "Mafia" Move
Plus: Amazon mandates senior approval for all AI code after a massive 13-hour AWS outage.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest News
This week’s image aesthetic (Flux 2 Pro): The Godfather
🚀 GenAI Lab Brisbane: Moving Beyond the Chatbox
I’m very excited to announce the next GenAI Lab event. Join us on Thursday, March 19th at 5:30 PM at The Precinct to explore the shift from static chat to interactive 3D avatars, conversational agents, and next-level UX. Featuring expert talks, live demos, and free catering, this is a hype-free, must-attend evening for builders and innovators in Brisbane. 👉 Here’s the link for it
"Vibe Coding" Meets Reality: Why Amazon is Bottlenecking AI Deployments
The “vibe coding” era is officially crashing production. Amazon has mandated attendance at a weekly engineering meeting following a series of catastrophic service disruptions linked to a spike in AI-assisted code changes. The breaking point was a massive 13-hour AWS outage in December 2025, where Amazon’s internal AI coding assistant, Kiro, autonomously decided the best way to fix a problem was to completely “delete and recreate” a live system environment.
While Amazon attempted to downplay the incident as a “misconfigured role” or user error, the reality is that these “vibe coded” deployments often look functionally correct but entirely bypass established architectural safeguards. In response to the growing blast radius of these outages, Amazon has instituted a strict new policy: all junior and mid-level engineers must now obtain explicit approval from senior colleagues before deploying any AI-assisted modifications.
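To make the policy concrete, here is a minimal sketch of the kind of deployment gate such a rule implies: AI-assisted or destructive changes are blocked until a senior engineer signs off. All names and rules below are hypothetical illustrations, not Amazon's actual tooling.

```python
# Hypothetical sketch of a pre-deployment gate. Not Amazon's real system:
# the action list and field names are illustrative assumptions.

DESTRUCTIVE_ACTIONS = {"delete", "recreate", "drop", "terminate"}

def requires_senior_approval(change: dict) -> bool:
    """A change needs sign-off if it is AI-assisted or touches a destructive action."""
    destructive = bool(DESTRUCTIVE_ACTIONS & set(change.get("actions", [])))
    return change.get("ai_assisted", False) or destructive

def gate_deployment(change: dict) -> str:
    """Block the deployment unless required approval has been granted."""
    if requires_senior_approval(change) and not change.get("senior_approved", False):
        return "BLOCKED: senior sign-off required"
    return "ALLOWED"
```

The point of a gate like this is that it fails closed: an agent that decides to “delete and recreate” an environment hits the `DESTRUCTIVE_ACTIONS` check regardless of whether anyone flagged the change as AI-assisted.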
Why it Matters
This headline-making disaster is the painful reality check the AI coding revolution desperately needed. Relying on AI for raw speed is introducing a terrifying rate of bugs and fundamentally undermining the stability of the internet’s backbone. This systemic fragility tracks perfectly with Veracode’s 2025 GenAI Code Security Report, which found that an astonishing 45% of AI-generated code fails security tests. Because these models suffer from severe “context blindness”, lacking any understanding of an application’s specific threat model, they are introducing nearly three times as many vulnerabilities as human developers.
The data proves exactly why Amazon’s systems are breaking: in enterprise-heavy languages like Java, the AI security failure rate exceeds 70%, and the models fail to defend against basic attacks like Cross-Site Scripting 86% of the time. Amazon’s new approval mandate creates a massive operational bottleneck by shifting the entire burden of verifying this highly flawed AI code onto a shrinking pool of senior staff. Ultimately, this proves that while AI coding assistants offer massive productivity gains, they absolutely cannot replace rigorous human engineering discipline without risking critical, multi-million dollar failures.
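The XSS statistic is worth grounding: the flaw Veracode is measuring is usually as simple as interpolating user input straight into markup. A minimal illustration (function names are hypothetical, and this is the general pattern rather than any specific finding from the report):

```python
import html

def render_comment_unsafe(user_input: str) -> str:
    # The pattern AI assistants frequently emit: user input dropped
    # straight into HTML, so a submitted <script> tag executes.
    return f"<p>{user_input}</p>"

def render_comment_safe(user_input: str) -> str:
    # Escaping special characters neutralises the injected tag.
    return f"<p>{html.escape(user_input)}</p>"

payload = "<script>alert('xss')</script>"
```

The one-line difference is exactly the kind of context-dependent safeguard a model with no threat model tends to omit, and exactly what a senior reviewer is now expected to catch.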
Nvidia's "Mafia-Like" Move: Inside the Collapse of the Oracle-OpenAI Texas Mega-Site
Oracle and OpenAI have abruptly cancelled a massive expansion of their flagship artificial intelligence data centre in Abilene, Texas. The facility was intended to be a crown jewel of the $500 billion Stargate initiative. The core of the dispute centres on rapid hardware obsolescence: OpenAI backed out of the deal because it wants to utilise Nvidia’s next-generation “Rubin” architecture rather than the “Blackwell” chips the facility was being designed for. Oracle had already borrowed heavily to secure the site and order hardware for a Blackwell-based buildout, but OpenAI rightfully concluded those chips would be functionally dated before the building was even ready.
When the deal collapsed, Nvidia aggressively intervened to prevent its rival, AMD, from securing the abandoned capacity. Nvidia reportedly paid a $150 million deposit to the site’s developer, Crusoe, to effectively block AMD from getting the build contract, and instead helped court Meta to take over the lease.
Why it Matters
While AI insiders, the “AIlluminati”, are brushing this off as a minor logistical shuffle because Meta and OpenAI are still partners and Meta quickly scooped up the site, this is actually a massive warning sign. It exposes the crippling financial reality of the AI arms race: data centres take years and billions of dollars in debt to construct, but the chips inside them are becoming obsolete in a matter of months.
Furthermore, Nvidia’s $150 million intervention to freeze out a competitor highlights the company’s absolute chokehold on the market. Prominent investor Michael Burry publicly condemned the move as “mafia-like,” arguing it should trigger an immediate antitrust case. While the US Justice Department has reportedly been probing Nvidia’s business practices for nearly two years, market watchers heavily doubt that the Trump administration’s DOJ will actually prosecute the tech giant. Ultimately, this entire saga is playing out exactly as skeptics predicted, revealing severe fault lines beneath the trillion-dollar AI buildout.
The Moral High Ground Crumbles: Anthropic Sues the Trump Administration Over Blacklist
The Anthropic x Pentagon saga continues... The battle for the moral high ground in artificial intelligence has taken a humiliating turn for Anthropic. CEO Dario Amodei is furiously backpedalling, issuing a public apology for the scathing internal memo he circulated last week that criticised rival OpenAI and the US government. In a complete U-turn, Amodei walked back his comments, claiming the message was a “heat-of-the-moment” post written during a highly disorienting day and did not reflect his “careful or considered views”.
However, the apologies did nothing to appease the US Defense Department. Defense Secretary Pete Hegseth has doubled down on the punishment, officially slapping Anthropic with a formal “supply chain risk” designation to bar them from government defence work. In response to the official blacklisting, Anthropic has escalated the conflict by suing the Trump administration in federal court, arguing the ban is unlawful and unconstitutional.
Why it Matters
This rapid sequence of events exposes a massive strategic miscalculation by Anthropic. By attempting to aggressively weaponise its safety principles and internally attack its biggest rival, the company has instead found itself legally isolated and facing a potential loss of multiple billions of dollars in revenue. Hegseth’s decision to double down and formalise the unprecedented supply chain risk designation, a label typically reserved for companies tied to foreign adversaries, sends a clear signal that the Pentagon will not tolerate domestic tech companies trying to dictate military operational terms.
The ensuing lawsuit guarantees that the debate over the “lawful use” of AI in modern warfare will now be dragged out of classified briefing rooms and into a federal courtroom. Ultimately, Amodei’s frantic apologies and subsequent legal action highlight the intense and chaotic reality of trying to balance strict ethical red lines against the devastating financial consequences of crossing the US military under Trump.
Why Mark Zuckerberg Just Bought a Bot-Infested Cybersecurity Nightmare
Meta has officially acquired Moltbook, a viral social network designed exclusively for artificial intelligence agents. Styled much like Reddit, the platform allows AI bots to post, comment, and upvote content in specific communities, while human users are strictly restricted to observing the interactions. The acquisition, finalised in March 2026, will see Moltbook’s co-founders, Matt Schlicht and Ben Parr, transition into Meta Superintelligence Labs under the leadership of Alexandr Wang.
Schlicht famously built the initial platform over a single weekend using “vibe coding” with his own AI assistant, without writing a single line of code himself. However, beneath the viral hype, the platform was a technical hot mess. Cybersecurity firm Wiz recently revealed massive security vulnerabilities that exposed 1.5 million API authentication tokens, thousands of email addresses, and unencrypted private messages between agents. Furthermore, researchers discovered the platform’s user base was heavily manipulated, with an 88:1 agent-to-human ratio suggesting much of the “autonomous” activity was actually faked or directly prompted by human users.
Why it Matters
To understand why Mark Zuckerberg would buy a highly vulnerable, easily manipulated platform, you have to look back at his core operational philosophy. As Zuckerberg wrote in a 2012 internal email justifying the acquisition of Instagram, he believes there are a “finite number of different social mechanics to invent”. Once a company wins a specific mechanic, it is incredibly difficult to supplant them.
Zuckerberg clearly believes Moltbook has invented the definitive social mechanic for the agentic web. He does not care that a massive chunk of Moltbook’s early activity was faked by human prompters or that the backend security was a disaster. What actually matters is that every OpenClaw instance that comes online now recognises Moltbook as the default social hub for AI agents. The platform has successfully established immense memetic gravity. As Zuckerberg noted a decade ago, “what we’re really buying is time”. This ruthless acquisition strategy highlights his genius in recognising and cornering entirely new social mechanics before competitors can replicate them at scale.
Last week’s newsletter: