💊 AI Exposes Ozempic Secrets • ⚖️ Musk's $134B Legal Ambush • 🛡️ The "Mythos" Cyber Panic
Plus: OpenAI releases a comprehensive new blueprint to combat synthetic child exploitation.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest News
This week’s image aesthetic (Flux 2 Pro): High Renaissance style
Mining 400,000 Reddit Posts: The AI Revealing Weight-Loss Drugs' Hidden Symptoms
The internet has effectively become the world’s largest decentralised clinical trial. In a new study published in Nature Health, researchers from the University of Pennsylvania used artificial intelligence to scan over 410,000 Reddit posts to uncover hidden side effects of popular GLP-1 weight-loss drugs like Ozempic and Mounjaro. Spanning content from May 2019 to June 2025, this “computational social listening” approach audited the unprompted online confessions of roughly 67,000 users. The AI was trained to separate general chatter from personal experiences, translating colloquial internet slang into standardised medical terminology. It found that 43.5% of self-reported users experienced at least one side effect. Crucially, the AI flagged major symptoms that were significantly underreported in official clinical trials. Nearly 17% of these users reported intense fatigue, while others frequently cited reproductive issues like irregular menstrual cycles and extreme temperature shifts such as sudden chills and hot flashes.
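For the technically curious, here is a minimal sketch of what a pipeline like this might look like. The zero-shot classifier, label set, confidence threshold, and slang lexicon below are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of a "computational social listening" pipeline.
# The labels, slang lexicon, and threshold are illustrative assumptions.
from collections import Counter
from transformers import pipeline  # pip install transformers

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["first-hand experience taking a GLP-1 drug", "general discussion"]

# Toy lexicon mapping colloquial phrasing to standardised terms
# (real systems map onto vocabularies such as MedDRA).
SLANG_TO_TERM = {
    "wiped out": "fatigue",
    "sulfur burps": "eructation",
    "period is all over the place": "irregular menstruation",
}

posts = [
    "Week 3 on Ozempic and I'm completely wiped out by noon.",
    "Interesting article on how GLP-1 agonists affect appetite.",
]

symptom_counts, n_experiences = Counter(), 0
for post in posts:
    result = classifier(post, candidate_labels=LABELS)
    # Keep only posts confidently classified as personal experiences.
    if result["labels"][0] == LABELS[0] and result["scores"][0] > 0.7:
        n_experiences += 1
        for slang, term in SLANG_TO_TERM.items():
            if slang in post.lower():
                symptom_counts[term] += 1

print(f"{n_experiences} first-hand reports; symptoms: {dict(symptom_counts)}")
```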
Why it Matters
This represents a profound evolution in post-market drug surveillance. Traditional clinical trials are designed to identify the most dangerous medical reactions, but they often possess massive blind spots when it comes to the daily symptoms that patients are actually most concerned about. By auditing massive amounts of raw and unfiltered social media data, researchers can now use AI as a rapid early warning system to flag unrecognised adverse reactions in newly popular medications. While there are obvious limitations to this data, such as Reddit’s demographic skew and the complete lack of a placebo control group, this AI-assisted method offers a significantly faster and more responsive supplement to notoriously slow traditional drug safety monitoring.
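As a toy illustration of how that early-warning comparison could work: the 17% fatigue figure below comes from the study, but the other mined rates and all of the clinical-trial baselines are invented placeholders.

```python
# Flag symptoms reported far more often online than in trial documentation.
# Only the 17% fatigue rate is from the study; everything else is made up.
mined_rates = {"fatigue": 0.17, "nausea": 0.28, "irregular menstruation": 0.06}
trial_rates = {"fatigue": 0.05, "nausea": 0.25, "irregular menstruation": 0.0}

for symptom, mined in mined_rates.items():
    baseline = trial_rates.get(symptom, 0.0)
    # Flag anything self-reported at least twice as often as in trials.
    if mined > 2 * max(baseline, 0.01):
        print(f"signal: {symptom} ({mined:.0%} online vs {baseline:.0%} in trials)")
```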
The "Mythos" Panic: Why Anthropic's Cyber AI Just Triggered an Emergency Meeting in DC
The AI industry has officially entered a high-stakes cyber arms race, but the two leading companies are taking drastically different approaches to deploying their most dangerous tools. OpenAI just unveiled GPT-5.4-Cyber, a specialised model fine-tuned exclusively for defensive security tasks. This version features significantly lowered restrictions, allowing vetted professionals to execute advanced workflows, such as reverse-engineering binary files to uncover malware and vulnerabilities without access to the source code. Access is being aggressively expanded through OpenAI’s Trusted Access for Cyber (TAC) program to thousands of verified individual researchers and hundreds of enterprise security teams.
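To make that workflow concrete, here is a purely hypothetical sketch of how a TAC-vetted researcher might triage a suspicious binary through the standard OpenAI Python SDK. The model name is taken from the story; whether it is reachable through this endpoint, or through ordinary API keys at all, is an assumption.

```python
# Hypothetical sketch of a defensive binary-triage request. The model
# name comes from the article; its availability via the standard
# chat completions endpoint is an assumption, not a documented fact.
import subprocess
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Disassemble the binary locally (objdump ships with GNU binutils),
# since the model works from the compiled artefact, not source code.
disassembly = subprocess.run(
    ["objdump", "-d", "suspicious.bin"],
    capture_output=True, text=True, check=True,
).stdout[:20000]  # truncate to stay within the context window

response = client.chat.completions.create(
    model="gpt-5.4-cyber",  # per the article; access gated behind TAC
    messages=[
        {"role": "system",
         "content": "You are assisting a verified defensive security team."},
        {"role": "user",
         "content": "Identify likely malware behaviours or exploitable "
                    "flaws in this disassembly:\n" + disassembly},
    ],
)
print(response.choices[0].message.content)
```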
In stark contrast, Anthropic is treating its equivalent model like a systemic weapon. Its unreleased “Mythos” model possesses the extraordinarily dangerous ability to autonomously discover and exploit severe software vulnerabilities, including zero-days. Deeming Mythos far too dangerous for public release, Anthropic has locked it behind Project Glasswing, a restricted defensive alliance granting access only to a tight circle of major tech infrastructure providers like Apple and Google. The sheer threat of Mythos recently triggered an unprecedented emergency meeting in Washington, where the US Treasury Secretary and Federal Reserve Chair summoned Wall Street CEOs to urgently address the severe risks the model poses to the global financial sector.
Why it Matters
These parallel moves represent a terrifying paradigm shift in global cybersecurity: AI can now autonomously identify complex vulnerabilities that have evaded elite human experts for years. However, the differing release strategies highlight a profound tension in managing dual-use frontier models. OpenAI’s broader, identity-verified rollout aims to democratise access and rapidly arm a massive community of defenders with advanced tools. Meanwhile, Anthropic’s hyper-restricted approach prioritises patching the world’s most vital systems in secrecy before the technology inevitably proliferates.
Ultimately, both models permanently lower the cost, effort, and level of expertise required to find significant security flaws. This guarantees a future where the speed and scale of vulnerability discovery will explode, forcing a complete overhaul in how software is secured. As demonstrated by the recent panic among US financial regulators, banks and corporations now face immense pressure and skyrocketing costs to rapidly fortify their complex, legacy supply chains before these AI-driven hacking capabilities fall into the hands of malicious actors.
🔗 More from The Wall Street Journal
The $134 Billion Legal Ambush: Elon Musk Demands OpenAI Fire Sam Altman
Elon Musk has dramatically escalated his legal war against OpenAI, and he is not looking for a personal payout. With a massive trial looming on April 27 in California, Musk has filed a surprise amendment to his lawsuit, officially seeking to oust CEO Sam Altman and President Greg Brockman from their roles. He is demanding that the estimated $79 billion to $134 billion in potential damages be awarded entirely to OpenAI’s original charitable foundation rather than to himself. The core dispute hinges on Musk’s accusation that he was defrauded into bankrolling the company’s early days under the false promise it would remain a non-profit. OpenAI has aggressively fired back. The company dismissed the lawsuit as a “harassment campaign” from a competitor and slammed the last-minute demands as a “legal ambush” designed to inject chaos into the proceedings.
Why it Matters
This courtroom showdown is the ultimate stress test for the AI industry’s rapidly fading altruism. It exposes a massive, fundamental tension between developing world-changing technology for the public good and cashing in on a lucrative commercial boom. If Musk wins, it could trigger a catastrophic restructuring of the world’s most influential AI company. A ruling in his favour would potentially strip Altman of his power and force a complete unwinding of OpenAI’s massive for-profit empire.
Furthermore, this case could set a terrifying new precedent for Silicon Valley. It signals that foundational mission statements are not just marketing spin, but potentially legally binding promises as startups scale and take on investment. Ultimately, the outcome of this trial will likely rewrite the governance models for future tech giants. It also shows that billionaire rivals are increasingly willing to weaponise the legal system to brutally challenge the structure and direction of their biggest competitors.
"Safety by Design": OpenAI Unveils Blueprint to Combat Synthetic Child Exploitation
OpenAI has officially unveiled a comprehensive policy blueprint designed to fortify US child protection frameworks against the rapidly escalating threat of AI-enabled exploitation. Released in April 2026, this “Child Safety Blueprint” is not a new piece of software, but rather a strategic roadmap developed alongside major advocacy groups like the National Center for Missing & Exploited Children. The framework aggressively targets three core pillars: modernising outdated state laws to explicitly criminalise AI-generated abusive material, overhauling the quality of reporting pipelines to law enforcement, and embedding “safety-by-design” safeguards directly into the architecture of AI systems to detect and block misuse from day one.
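As a rough illustration of what a “safety-by-design” safeguard can look like in practice, here is a minimal sketch of an input gate built on OpenAI’s public moderation endpoint. The blueprint is a policy roadmap, not code, so this implementation, including the reporting hook, is entirely an assumption.

```python
# Minimal sketch of a "safety-by-design" input gate: screen every prompt
# before it ever reaches a generative model. Uses OpenAI's public
# moderation endpoint as a stand-in; the blueprint does not prescribe
# this (or any) specific implementation.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def report_to_tipline(prompt: str) -> None:
    # Hypothetical hook: real deployments file structured reports to
    # law-enforcement pipelines such as NCMEC's CyberTipline.
    ...

def screen_prompt(prompt: str) -> bool:
    """Return True only if the prompt is safe to pass to the generator."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    if result.categories.sexual_minors:
        # Block outright and escalate anything touching minor safety.
        report_to_tipline(prompt)
        return False
    return not result.flagged
```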
Why it Matters
Generative AI has fundamentally lowered the barrier to entry for creating synthetic abusive material, allowing horrifying digital crimes to scale at an unprecedented rate. Because existing legal frameworks often possess massive blind spots when dealing with entirely fabricated content, they create severe loopholes that this blueprint is desperately trying to close. This proposal signals a monumental shift in how the tech industry approaches product development, making the case that safety must be built into AI systems by default rather than bolted on as a reactionary afterthought. Ultimately, this framework establishes a vital new baseline for the industry, driving up expectations from both regulators and consumers for verifiable safety features while forcing a much more proactive, coordinated response to combating digital harm.