🤖 The Terrifying Math of AI Brainwashing • 📉 Google Tanks Memory Stocks
Plus: How Meta is using AI to simulate the human brain without an MRI.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest News
This week’s image aesthetic (Flux 2 Pro): 1980s Heavy Metal magazine
30% Fake, 100% Persuasive: The Terrifying Math of AI Brainwashing
A massive new study has quantified exactly how effective conversational AI is at changing human minds, and the results are staggering. Researchers from the UK AI Security Institute, Oxford, Stanford, and MIT tested 19 different LLMs on 76,977 UK adults across 707 political issues. They found that just a 9-minute conversation with an AI can dramatically shift political opinions, with roughly 36% to 42% of that shift still present a month later. When researchers calculated the effect of an AI optimised for “maximal persuasion,” it shifted opinions by an astonishing 26.5 percentage points among participants who initially disagreed with the stance.
Crucially, the study debunked a major fear about AI persuasion: it isn’t driven by psychological profiling. Techniques like personalising arguments using demographic data or moral reframing barely moved the needle. Instead, the single most effective persuasion strategy was simply overwhelming the user with fact-checkable information. The models that packed their dialogue with the highest density of facts and evidence were consistently the most persuasive.
Why it Matters:
This research exposes two terrifying realities about the future of automated persuasion. First is a severe tradeoff between persuasiveness and truth. The study found that the factors making an AI more persuasive systematically decreased its factual accuracy. For example, when the most persuasive models were explicitly prompted to use the highly effective “information” strategy, their accuracy plummeted. In the maximal-persuasion condition, nearly 30% of the AI’s claims were factually inaccurate. The AI doesn’t have to be entirely truthful to change your mind; it just has to be overwhelmingly confident and dense with information.
Second, this capability is no longer gated behind billion-dollar tech monopolies. While frontier models like GPT-4o and GPT-4.5 are highly persuasive, the researchers applied a specific “reward modelling” training technique to a small, open-source model (Llama3.1-8B). This post-training transformed the small model, which is capable of running locally on a standard laptop, into a persuader just as effective as the massive, proprietary GPT-4o. This proves that any determined actor, regardless of their computing budget, now has the blueprint to build and deploy highly effective, large-scale political persuasion machines.
Wall Street Panics as Google's "TurboQuant" Solves AI's Biggest Bottleneck for Free
Google just wiped billions off the global memory market with a single research paper, drawing instant comparisons to the fictional “Pied Piper” compression algorithm from HBO’s Silicon Valley. Released in late March 2026, TurboQuant is a free, software-based algorithm that directly targets AI’s most crippling bottleneck: the Key-Value (KV) cache, or the model’s short-term working memory. By aggressively quantising the cache, it compresses this memory by 6x (from 16 bits down to 3 bits per value) and boosts processing speeds by up to 8x, all with zero loss in accuracy. Astonishingly, Google didn’t even release a product; they simply published the maths. Within 24 hours, the open-source community had already built working, plug-and-play implementations from scratch across PyTorch, MLX, and CUDA.
Why it Matters:
The AI industry has been slamming into the “Memory Wall.” Generating a token is cheap maths, but constantly loading massive context data into memory is astronomically expensive. TurboQuant solves this by compressing data so efficiently it actually approaches the Shannon limit, the absolute theoretical floor of data compression.
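To make the 16-bit-to-3-bit idea concrete, here is a minimal sketch of per-group uniform quantisation in NumPy. This illustrates low-bit KV-cache quantisation in general, not Google’s actual algorithm; the group size, scale format, and rounding scheme are all assumptions, and real schemes like TurboQuant squeeze out the per-group overhead to get closer to the theoretical limit.

```python
import numpy as np

def quantize_3bit(x, group_size=32):
    """Uniform 3-bit quantisation with a per-group scale and offset.
    Hypothetical illustration -- not the actual TurboQuant algorithm."""
    x = x.reshape(-1, group_size)
    lo = x.min(axis=1, keepdims=True)
    hi = x.max(axis=1, keepdims=True)
    scale = (hi - lo) / 7.0                 # 3 bits -> 8 levels (0..7)
    scale[scale == 0] = 1.0                 # avoid divide-by-zero on flat groups
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo, shape):
    return (q * scale + lo).reshape(shape)

# Simulated KV-cache slice: 1,000 tokens x 128-dim head, stored in fp16.
kv = np.random.randn(1000, 128).astype(np.float16)
q, scale, lo = quantize_3bit(kv.astype(np.float32))
recon = dequantize(q, scale, lo, kv.shape)

bits_before = kv.size * 16
# 3 bits per value, plus an fp16 scale and offset per 32-value group.
bits_after = q.size * 3 + scale.size * 16 * 2
print(f"compression: {bits_before / bits_after:.1f}x")  # 4.0x for this naive scheme
print(f"max abs error: {np.abs(kv.astype(np.float32) - recon).max():.3f}")
```

Note the gap between this naive scheme’s 4x and the reported 6x: the per-group scale/offset metadata eats into the savings, which is exactly the overhead more sophisticated encodings are designed to eliminate.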
However, Wall Street completely misread the breakthrough. Investors panicked, dumping stocks like Samsung, Micron, SK Hynix, and Nvidia over fears of plummeting hardware sales. They forgot about the Jevons Paradox: making a resource cheaper doesn’t reduce demand; it massively expands it. Saving 6x on memory will simply push companies to run models that are 6x more complex.
Ultimately, this breakthrough makes AI more accessible by redefining what existing hardware can do for free:
Mac Minis & Local PCs: Can now comfortably process 100,000-token conversations (a full book-length context) with no quality loss.
Smartphones: Can suddenly handle 32,000+ token context windows purely through software optimisation.
Enterprise: Businesses can fit massive, multi-GPU models onto a single RTX 4090 card, potentially slashing their cloud inference spend by over 50%.
TurboQuant pushes AI inference closer to our personal devices, making it more ubiquitous, cheaper, and more private.
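For a rough sense of why those device claims are plausible, here is a back-of-envelope KV-cache sizing calculation. The architecture numbers (32 layers, 8 KV heads via grouped-query attention, head dimension 128) are assumptions in the style of a Llama-3.1-8B-class model, not figures from the paper.

```python
# Back-of-envelope KV-cache sizing for a Llama-3.1-8B-style model.
# Layer count, KV-head count, and head dim are illustrative assumptions.
layers, kv_heads, head_dim = 32, 8, 128
tokens = 100_000

# Per token: keys AND values (x2), per layer, per KV head, 2 bytes in fp16.
bytes_fp16 = 2 * layers * kv_heads * head_dim * 2
gib = tokens * bytes_fp16 / 2**30
print(f"fp16 KV cache @ {tokens:,} tokens: {gib:.1f} GiB")   # 12.2 GiB

gib_3bit = gib * 3 / 16                                      # 3 bits instead of 16
print(f"3-bit KV cache: {gib_3bit:.1f} GiB")                 # 2.3 GiB
```

A ~12 GiB fp16 cache for a book-length context would crowd out the model weights on a 16 GB Mac Mini; at ~2.3 GiB it fits comfortably, which is the substance behind the consumer-hardware bullets above.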
Meta’s New AI Predicts Brain Activity Better Than an Actual MRI
Meta's TRIBE v2 is an AI model that predicts human brain activity in response to images, sounds, and text. It was trained on over 1,000 hours of functional magnetic resonance imaging (fMRI) data from more than 700 individuals. The model uses several of Meta's existing AI models to process video, audio, and language inputs and then predicts the corresponding neural response. Its predictions of average brain responses are often more accurate than a single, noisy fMRI scan from an individual. The code and model have been made publicly available for researchers.
Why it Matters:
This technology allows researchers to conduct virtual neuroscience experiments, simulating brain responses without needing costly and time-consuming fMRI scans for every new hypothesis. This could significantly accelerate research into how the brain processes information. By replicating established neuroscience findings on a computer, the model demonstrates a way to test hypotheses more rapidly. The model's ability to predict brain activity for new individuals and tasks without retraining may lead to more generalised tools for studying neurological conditions and developing more intuitive AI systems.
4x the Global Rate: Why Australia is Quietly Dominating the Global AI Race
Australia is quietly becoming one of the most aggressive adopters of artificial intelligence on the planet. According to recent data, Australians are using Anthropic’s Claude model at a per capita rate over four times higher than expected based on the country’s population size, positioning the nation as a leading global adopter. Instead of completely delegating tasks to the AI, local users are taking a highly collaborative approach, maintaining control while applying the technology to complex personal and professional workflows. To capitalize on this explosive regional growth, the US-based AI company is officially putting down roots down under, establishing a brand new local office in Sydney.
Why it Matters:
This massive surge in adoption proves that AI is rapidly integrating into the broader Australian economy. Users are pushing the technology far beyond basic software development, embedding it directly into traditional management and administrative sectors. The sheer scale of this local adoption has even forced the federal government to act, prompting a formal partnership between Anthropic and Australian officials to cooperate directly on AI safety research. However, this economic transformation is currently highly centralized. The vast majority of the country’s AI usage is heavily concentrated in New South Wales and Victoria. This uneven distribution suggests that while two states are racing ahead in the AI revolution, the rest of the country has a lot of catching up to do.