💀 "The F*ckening", 👁️ Apple’s New Vision, ⚛️ GPT-5.2 Physics Breakthrough
Plus: How to boost prompt accuracy by 75% with one simple tweak.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest News
This week’s image aesthetic (Flux 2 Pro): Claymation / Stop-Motion
The Double-Read Hack: Copy-Pasting Your Prompt Boosts Accuracy by 75%
Google researchers have discovered a “free lunch” in AI optimisation: simply repeating a prompt twice can dramatically improve a model’s performance. In a new paper, the team demonstrated that duplicating the input text allows Large Language Models (LLMs) to better grasp the full context of a query, leading to measurable accuracy gains across seven major models (including Gemini, GPT-4o, and Claude 3.7 Sonnet) and seven different benchmarks. The improvements were consistent and sometimes massive; in one test involving finding a name in a list, the Gemini 2.0 Flash-Lite model’s accuracy skyrocketed from 21.33% to 97.33% just by processing the text twice. Crucially, because the repetition happens during the parallelisable “pre-fill” stage, this technique requires no fine-tuning and results in no increase in latency or generation costs.
Why it Matters
This solves a fundamental structural flaw in how LLMs read. Because these models process text left-to-right, tokens at the beginning of a prompt (the context) cannot “see” or attend to the tokens at the end (the question). This often leads to failures in tasks where the answer depends on understanding the relationship between the context and the specific query. By feeding the prompt into the model twice (<PROMPT><PROMPT>), the system effectively gives every token a second chance to attend to every other token, mimicking the deep contextual understanding of bidirectional models without the associated computational overhead. For developers, this offers a rare efficiency hack: a way to instantly boost intelligence and stability without paying for longer generation times or slower responses.
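The trick is purely a prompt-side transformation, so it can be applied before any API call. A minimal sketch, assuming a plain-text concatenation with a separator (the paper’s exact delimiter format is not specified here; `double_read`, `context`, and `question` are illustrative names):

```python
def double_read(prompt: str, separator: str = "\n\n") -> str:
    """Duplicate the full prompt so that, on the second pass, every
    token can attend to every other token during the parallel
    pre-fill stage -- no fine-tuning, no extra generation cost."""
    return f"{prompt}{separator}{prompt}"

# Example: a context followed by a question that depends on it.
# In a single left-to-right read, tokens in the guest list cannot
# attend to the question that comes after them.
context = "Guest list: Ana, Boris, Chen, Dana, Elif."
question = "Which guest's name starts with 'C'?"
prompt = f"{context}\n{question}"

# After doubling, the second copy of the context is preceded by the
# question, so the model effectively re-reads the list while already
# knowing what it is looking for.
doubled = double_read(prompt)
```

The doubled string is then sent as the user message in place of the original prompt; because the extra tokens are consumed in pre-fill rather than generation, latency and output cost stay flat.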
The End of Premium: Sonnet 4.6 Delivers Flagship Coding Skills for Pennies
The gap between “flagship” and “budget” models has effectively evaporated. On 17 February 2026, Anthropic released Claude Sonnet 4.6, a mid-tier model that statistically matches its heavier sibling in pure coding ability. On the SWE-Bench Verified leaderboard, the gold standard for autonomous software engineering, Sonnet scored 79.6%, sitting within a margin of error of the massive Opus 4.6 (80.8%).
Why it Matters
This drives the marginal cost of software creation toward zero. High-end intelligence is officially a commodity. Developers no longer need to burn budget on premium inference to get state-of-the-art coding capabilities. By delivering flagship performance at a mid-tier price point, Sonnet 4.6 makes automated software engineering economically viable for the first time. It allows companies to deploy autonomous bug-fixing agents at scale without the crippling unit economics of a large model. Ultimately this signals a future where software is no longer a scarce asset but a disposable utility generated on demand.
Siri Gets Eyes: Apple is Building Smart Glasses to Watch Your Life
Apple is quietly building eyes and ears for the iPhone. The company is developing three new AI wearables designed to function as sensory extensions for Siri: smart glasses, a camera-equipped pendant, and updated AirPods with integrated cameras. All three devices are stripped-back accessories without screens. The glasses are slated for a 2027 release and will rely entirely on voice commands and visual intelligence. The pendant and AirPods could arrive sooner, serving as always-on sensors that feed real-time auditory and visual data to the phone.
Why it Matters
This strategy attempts to make computing ambient rather than immersive. By giving Siri the ability to “see” the user’s environment, Apple is shifting the interaction model from deliberate touch inputs to seamless, context-aware assistance. For instance, the AI could proactively identify landmarks or answer questions about what you are looking at without you ever pulling out a phone. It positions Apple to compete in the AI hardware wars by upgrading familiar form factors like glasses and headphones rather than forcing users to adopt entirely new device categories.
From Inference to Invention: GPT-5.2 Solves an ‘Impossible’ Physics Problem
OpenAI’s GPT-5.2 has effectively graduated from inference to invention. In a preprint released in February 2026, a specialised checkpoint of the model autonomously derived a novel formalism for gluon particle interactions, a problem previously considered intractable by human theorists. Crucially, the system did not merely predict the next token; it generated a rigorous formal proof in approximately 12 hours of inference time, which was subsequently verified by physicists.
Why it Matters
This represents the first genuine instance of de novo scientific discovery by a large language model. By autonomously proposing and formally proving a theorem that existed outside its training distribution, GPT-5.2 has demonstrated that transformer-based architectures can perform valid reasoning in highly abstract domains. It’s a paradigm shift where AI moves from a tool for information retrieval to a collaborative co-author capable of expanding the boundaries of human knowledge. The ability to compress decades of theoretical work into a 12-hour inference run suggests we are entering an era of accelerating returns for fundamental research.
The Silicon Blockade Fails: China’s 744B Model Runs Entirely on Domestic Chips
The silicon blockade has officially been breached. In February 2026, Z.ai released GLM-5, a massive open-source language model built on a Mixture-of-Experts architecture with 744 billion total parameters, of which roughly 40 billion are active for any given token. The system employs a specialised attention mechanism to process contexts up to 205,000 tokens and was post-trained using a novel asynchronous reinforcement learning method. The model is available immediately under an MIT licence via the Z.ai API and platforms like Hugging Face.
Why it Matters
This proves that frontier AI can be built without American hardware. GLM-5 was trained entirely on domestic Chinese chips, demonstrating a viable path to state-of-the-art models without reliance on US-sanctioned components like NVIDIA GPUs. The model is currently the highest-performing open-source system on several industry benchmarks, significantly narrowing the capability gap with leading proprietary systems in coding and complex reasoning. This signals a major shift in the global AI supply chain: hardware independence is no longer a theoretical goal for Chinese labs but an operational reality.
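The 744B-total / 40B-active split comes from Mixture-of-Experts routing: a gate picks a handful of experts per token, so only a small slice of the weights does work on each step. Z.ai has not published GLM-5’s routing details here, so the expert counts below are illustrative, not the model’s actual configuration; this is just a generic top-k gate:

```python
import math
import random

TOTAL_EXPERTS = 64   # illustrative only -- not GLM-5's real expert count
ACTIVE_EXPERTS = 4   # top-k experts routed per token in this sketch

def top_k_route(gate_logits: list[float], k: int = ACTIVE_EXPERTS) -> dict[int, float]:
    """Select the k experts with the highest gate scores and return
    softmax-normalised weights over just those experts."""
    top = sorted(range(len(gate_logits)),
                 key=lambda i: gate_logits[i], reverse=True)[:k]
    exps = [math.exp(gate_logits[i]) for i in top]
    total = sum(exps)
    return {i: e / total for i, e in zip(top, exps)}

random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(TOTAL_EXPERTS)]
weights = top_k_route(logits)
# Only k of the 64 experts fire for this token. The same principle is
# why a 744B-parameter model can run with ~40B parameters active per
# token -- roughly 5% of the weights per forward pass.
```

The per-token compute therefore scales with the active parameters, not the total, which is what makes a 744B model economical to serve.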
‘The F*ckening’: Andrew Yang Predicts a White-Collar Job Wipeout in 12 Months
Former presidential candidate Andrew Yang has issued a stark warning about the immediate future of work. In a recent newsletter, he predicts a “great disemboweling” of the American workforce where artificial intelligence will eliminate millions of white-collar jobs within the next 12 to 18 months. Yang argues that any role primarily involving processing information at a desk is now at risk, from coding to middle management. He has termed this displacement wave “The Fuckening” to capture its visceral nature. He notes that once companies begin streamlining with AI, competitors will be forced to follow suit because the stock market will reward headcount reductions and punish those who retain human staff.
Why it Matters
This represents a collapse of the modern social contract. Yang argues the impact will extend far beyond the office, crippling local economies as unemployed professionals stop spending on services like dry cleaning and dog walking. The ripple effects are predicted to be severe. Yang forecasts surging mortgage defaults in wealthy enclaves like Silicon Valley, the devaluation of expensive college degrees as entry-level jobs vanish, and the transformation of city centres into “urban wastelands” as commercial real estate empties out. Ultimately, he warns that the vaporisation of upward mobility will lead to unprecedented social unrest and anger, reinforcing his long-standing argument for Universal Basic Income as the only viable buffer against a future where brainpower creates zero economic value for the average worker.