💣 Anthropic's Drone Hypocrisy | 🏦 $1B AI Mortgage Fraud | 👓 Meta's Privacy Nightmare
Plus: Dario Amodei warns of a cascading "intelligence explosion" arriving in 2026.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest News
This week’s image aesthetic (Flux 2 Pro): 1930s Rubber Hose Animation (The "Cuphead / Max Fleischer" Look)
Anthropic’s Drone Hypocrisy: How the "Ethical" AI Went to War in Iran
The narrative surrounding Anthropic’s high-profile split from the Pentagon is rapidly unravelling. Following a public standoff where the US Defence Department designated the AI firm a “supply chain risk” for refusing to permit autonomous weapons or mass domestic surveillance, the reality on the ground reveals a stark contradiction.
First, Anthropic’s Claude is currently acting as the operational engine behind the massive joint US and Israeli military campaign in Iran. Embedded within Palantir’s Maven Smart System, Claude is actively processing classified surveillance data to generate real-time targeting recommendations, and it helped coalition forces strike 1,000 targets within the first 24 hours of the attack.
Second, a new Bloomberg report reveals that during their fraught ethics negotiations, Anthropic submitted a proposal to compete in a $100 million Pentagon prize challenge specifically designed to develop voice-controlled and autonomous drone swarms. Anthropic’s pitch involved using Claude to translate a commander’s intent into digital instructions to coordinate a fleet of drones. The company argued this did not cross its “red lines” because humans would retain oversight. Ultimately Anthropic was not selected for the project, and the Pentagon awarded bids to SpaceX, xAI, and defence contractors partnering with OpenAI.
Why it Matters
This revelation severely undermines Anthropic’s public positioning as the uncompromising moral vanguard of the AI industry. Prior to the drone report, Anthropic’s supposed ethical stance triggered a massive consumer backlash against its main rival. A coordinated “Cancel ChatGPT” movement emerged online, with users accusing OpenAI of having “no ethics at all” and “selling their soul” after the company signed its own deal with the US military. This backlash caused daily ChatGPT uninstalls to surge by 295% and propelled Claude to become the most downloaded free app in the US. Anthropic even capitalised on this momentum by releasing a tool allowing users to import their ChatGPT conversation histories directly into Claude.
However, the fact that Anthropic was actively bidding to power offensive military drone swarms while its AI is simultaneously selecting lethal targets in Iran suggests a deep hypocrisy, or at least a highly fluid definition of its own ethical boundaries. The US military is now so dependent on Claude for the Iran strikes that the government plans to forcibly retain the technology during a six-month phase-out period, with one official stating they will not let the CEO’s morals cost American lives. Ultimately, this saga proves that in the race for military AI supremacy, the line between an ethical safeguard and a lost contract is incredibly thin, and the true winners are the labs willing to provide operational flexibility to the Pentagon.
🔗 More from The Washington Post on Claude in Iran
$1 Billion in Fake Mortgages: AI Just Broke the Australian Housing Market
An internal review triggered by whistleblowers has identified approximately $1 billion in suspected fraudulent home loans at the Commonwealth Bank of Australia (CBA). Crucially, some of the falsified documents used to secure these mortgages, such as payslips and bank statements, were generated using artificial intelligence. The fraudulent applications were reportedly funnelled into the bank through mortgage brokers and third-party referral channels. CBA has formally reported the scale of the operation to the police and corporate regulators. However, this is no longer an isolated incident. CBA’s review initially intensified after rival National Australia Bank (NAB) was allegedly defrauded of $150 million by an organised group dubbed the “Penthouse Syndicate”. Since then, the crisis has expanded to the rest of Australia’s “Big Four” banks. Westpac and ANZ have also contacted the police over suspected loan fraud, with combined fraudulent loans across these other institutions expected to exceed $300 million.
Why it Matters
This incident marks a dangerous escalation in modern financial crime. It proves that accessible AI tools can now generate synthetic documents sophisticated enough to bypass traditional banking verification methods that rely on manual review and standard optical character recognition. As a direct result, financial institutions will be forced to implement far more stringent security and identity checks, such as biometric authentication and deeper compliance reviews. This necessary tightening of security will significantly slow down the loan approval process for all legitimate customers. Ultimately, this exposes an escalating arms race where regulators and banks must rapidly deploy advanced AI detection tools simply to keep pace with AI-facilitated deception.
🔗 More from The Financial Review
"Watching You Undress": The Dystopian Reality of Meta's Ray-Ban Smart Glasses
Meta’s Ray-Ban smart glasses are capturing highly intimate photos and videos from the wearer’s perspective, and human eyes are watching everything. A joint investigation by Swedish newspapers revealed that uncensored, point-of-view footage from the devices is being sent to third-party contractors at a Kenya-based company called Sama. Because the glasses record exactly what the user sees, these data annotators are being subjected to a firehose of deeply personal moments. Workers report being forced to meticulously label footage of wearers going to the toilet, watching porn, and even recording themselves having sex. In one particularly disturbing instance, a contractor reviewed footage where a wearer left the glasses on a bedside table, capturing his wife walking into the room and getting undressed completely unaware she was being recorded. Meta maintains that its terms of service clearly state user content may be subject to manual human review.
Why it Matters
This exposes a dystopian privacy nightmare for both the wearers of the seven million pairs sold and the innocent bystanders caught in their gaze. Users are capturing highly sensitive data, from private text messages to credit card numbers at checkout, completely forgetting that the cameras are rolling and that strangers in another country are watching. The situation also highlights the dark and hidden human cost of AI development. The Kenyan contractors are forced to watch and label this violating content or risk losing their jobs, with one worker noting that anyone who questions the process is immediately fired. Meta’s only official advice is simply not to record sensitive information, which offers absolutely no protection to the spouses, friends, and strangers who are unwittingly broadcast to overseas data annotators. This fundamentally shatters the illusion of automated AI and turns consumers into paying participants in a corporate surveillance state.
Anthropic CEO: The "Radical Acceleration" of AI is Already Here
Anthropic CEO Dario Amodei delivered a stark warning about the sheer velocity of artificial intelligence development at the recent Morgan Stanley TMT conference. Completely dismissing rumours that AI scaling has hit a wall, Amodei stated that 2026 will see a “radical acceleration” that catches the world off guard. He compared the current trajectory of AI to being on square 40 of a 64-square chessboard, where the compounding math of exponential growth transforms into unimaginable scale. Internally, Anthropic is already experiencing this fast takeoff in software engineering. By using models to write the code that builds future models, the company has created a compounding feedback loop where AI constructs its own tools and scaffolding.
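The chessboard analogy comes from the classic "wheat and chessboard" problem, where each square holds double the grains of the previous one. A minimal sketch of that arithmetic (the numbers below are the standard doubling sequence, not figures from Amodei's talk) shows why square 40 is the point where the curve becomes hard to intuit:

```python
def grains_on_square(n: int) -> int:
    """Grains on square n (1-indexed) when each square doubles the previous one."""
    return 2 ** (n - 1)

square_40 = grains_on_square(40)           # ~5.5e11, already enormous
square_64 = grains_on_square(64)           # ~9.2e18
ratio = square_64 // square_40             # 2**24 = 16,777,216

# The final 24 squares multiply the total by nearly 17 million,
# which is the "second half of the chessboard" effect the analogy invokes.
print(square_40, square_64, ratio)
```

The point of the analogy is that a constant doubling rate feels gradual early on and then overwhelms everything in a handful of steps.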
Why it Matters
We are standing on the precipice of a cascading intelligence explosion. The recursive gains Anthropic is seeing internally mean that AI development is now self-accelerating. Amodei revealed that these automated systems are already doubling and tripling the rate at which human researchers can produce new things end-to-end. Coding is merely the leading indicator for this transformation. The rapid automation of software engineering proves that the technology will eventually be infused into every aspect of the economy. If models can autonomously manage servers and control clusters today, this compounding “country of geniuses” will soon automate the structural foundations of every other industry at a pace humanity is completely unprepared for.
OpenAI Kills the Corporate Speak With GPT-5.3 Instant
OpenAI has quietly shifted its focus from raw power to everyday usability. The company has rolled out GPT-5.3 Instant as the new default model for all ChatGPT users. This update is specifically designed to drop the robotic corporate speak, providing more direct and accurate answers while slashing unnecessary refusals and disclaimers. It replaces the outgoing GPT-5.2 Instant, which is officially scheduled for retirement on 3 June 2026. Meanwhile, the rumour mill is already spinning for the next generation. References to an unconfirmed GPT-5.4 model have surfaced in OpenAI’s public code repositories. Leaked details point to a massive upgrade in vision capabilities that can process images at full resolution, alongside a new “fast mode” and speculation of a significantly expanded context window.
Why it Matters
The era of the lecturing AI chatbot is officially coming to an end. With GPT-5.3 Instant, OpenAI is actively acknowledging that a smarter model is entirely useless if it constantly scolds the user or refuses basic prompts. By adopting a less rigid and more conversational tone, the technology becomes infinitely more practical for daily tasks. However, the GPT-5.4 leaks reveal where the true technical frontier lies. The ability to analyse visual data without compression could revolutionise fields like engineering and design where precision is paramount. Combined with a “fast mode” and a larger memory bank for extended conversations, these updates directly target the most frustrating bottlenecks in current AI workflows, signalling a massive leap in utility.