☕ OpenAI Makes PhD Reasoning Cheaper Than Coffee, 💸 Meta Builds $15B AGI Lab, 🧬 AI Redates the Dead Sea Scrolls
PLUS: Altman Says AGI Is Near • CEOs Warn on Job Collapse • ChatGPT Studies Human Attachment • China Shuts Down AI for Exams
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest news
OpenAI just made PhD-level reasoning cheaper than coffee
OpenAI’s new o3-pro model delivers advanced reasoning at a fraction of previous costs, with 87% lower pricing than o1-pro. It excels in PhD-level math and science benchmarks, beating Claude 4 Opus and Gemini 2.5 Pro, and is built for complex tasks like programming, data analysis, and structured planning. The model costs $20 per million input tokens and $80 per million output tokens, replacing o1-pro in ChatGPT Pro and Team today. Enterprise and Edu access follows next week. Reviewers rate it higher in clarity, instruction-following, and comprehensiveness across all tested domains.
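To put the quoted rates in perspective, here is a minimal sketch of what a single request would cost at the $20/$80 per-million-token prices above; the function and the request sizes are illustrative, not from OpenAI's documentation:

```python
# Illustrative cost estimate using the per-million-token prices
# quoted above: $20 for input tokens, $80 for output tokens.
INPUT_PRICE_PER_M = 20.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 80.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the quoted rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A sizeable reasoning request: 10k tokens in, 2k tokens out.
print(f"${request_cost(10_000, 2_000):.2f}")  # → $0.36
```

At these rates, a long reasoning-heavy request really does land below the price of a coffee.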
Why it Matters
Affordable, high-context reasoning makes capabilities once limited to elite models widely accessible. With performance tuned for deep analysis and tool coordination rather than casual conversation, o3-pro is well suited to technical problem-solving, long-range planning, and decision support. Its strong benchmark results, combined with steep pricing cuts, may accelerate integration into business operations, education platforms, and developer workflows. As models improve at choosing tools and navigating their environment, frontier AI is evolving into a system-level collaborator rather than a standalone assistant.
Meta’s new AI lab: 50 star researchers, nine-figure cheques
Meta is forming a new superintelligence lab by committing $15 billion for a 49 percent stake in Scale AI and luring nearly 50 top researchers with nine-figure pay packages. Scale AI founder Alexandr Wang (28) will co-lead the group, reporting directly to Mark Zuckerberg. The shake-up follows disappointment with Meta’s Llama 4 model, and more than $10 billion is earmarked for the lab’s operations.
Why it Matters
The aggressive talent acquisition signals that elite researchers, not just compute, are now the key lever in the AGI race. By pairing deep cash reserves with Scale AI’s expertise, Meta aims to close the performance gap with OpenAI and Google, potentially accelerating breakthroughs and redrawing competitive lines across the industry.
Sam Altman: we passed the event horizon, hold tight
Sam Altman argues humanity has already crossed into a “gentle singularity”. AI now boosts scientists’ output by two to three times and consumes about 0.34 Wh and 0.000085 gal of water per ChatGPT query.
Altman predicts agentic software completing “real cognitive work” in 2025, insight-generating systems in 2026, and capable robots in 2027. As data-centre automation scales, he expects the cost of intelligence to fall toward the price of electricity, ushering in abundant ideas and energy.
Why it Matters
Near-zero-marginal-cost intelligence could compress decades of research into months, accelerating breakthroughs in medicine, materials, and space. Cheap, widely distributed AI would amplify individual productivity, create new policy options funded by rapid growth, and propel self-reinforcing loops in which robots build the infrastructure for more AI. If alignment keeps pace, society could harness an unprecedented surge in creativity, insight, and practical capability.
Chinese AI Labs Blackout for Exam Week
Chinese tech companies paused image and question-answering features during the gaokao from 7 to 10 June, affecting more than 13.3 million students competing for scarce university places. ByteDance’s Doubao, Tencent’s Yuanbao, Alibaba’s Qwen, DeepSeek, and Moonshot’s Kimi all returned suspension notices when users tried to upload exam-style content. Authorities complemented the freeze with biometric checks, radio blockers, and AI surveillance inside exam halls.
Why it Matters
The temporary shutdowns underscore how current assessment systems struggle against modern AI tools. By coordinating platform restrictions with physical monitoring, China signals that maintaining exam integrity may now require multi-layered tech controls. The move foreshadows global pressure on educational bodies to redesign testing security and manage reliance on AI assistants.
Machine vision finds centuries hiding in handwriting
Enoch, an AI model trained on 24 radiocarbon-dated scroll fragments, pairs handwriting analysis with carbon-14 data to date undated Dead Sea Scrolls. The system pushes some manuscripts to 2,300 years old, roughly 100 years earlier than prior scholarly estimates. On unseen tests, Enoch’s predictions overlap 85.14% with radiocarbon ranges and show a mean absolute error of 27.9–30.7 years. Experts judged 79% of the AI’s dates for 135 previously undated fragments to be realistic. The research was published on 4 June 2025 in PLoS One, with all code and datasets openly available.
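The two accuracy figures reported here, overlap with radiocarbon ranges and mean absolute error in years, can be sketched as simple metrics. The function names and sample dates below are hypothetical illustrations, not the paper's actual evaluation code:

```python
# Sketch of the two metrics quoted above: mean absolute error in years,
# and the fraction of predicted date ranges that intersect the
# radiocarbon (C14) range. Sample values are made up for illustration.

def mean_absolute_error(predicted, actual):
    """Average absolute difference between predicted and reference dates."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def overlap_fraction(pred_ranges, c14_ranges):
    """Fraction of predicted intervals that intersect their C14 interval."""
    hits = sum(1 for (p_lo, p_hi), (c_lo, c_hi) in zip(pred_ranges, c14_ranges)
               if p_lo <= c_hi and c_lo <= p_hi)
    return hits / len(pred_ranges)

# Hypothetical dates for three fragments (BCE written as negative years).
predicted = [-250, -180, -120]
actual = [-230, -200, -100]
print(mean_absolute_error(predicted, actual))  # → 20.0
```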
Why it Matters
AI-assisted, non-destructive dating gives historians a new way to refine the timelines of fragile manuscripts without cutting away precious material. Earlier dates for key Dead Sea texts reshape debates on early Jewish and Christian writings, while the method’s accuracy and openness invite libraries and museums worldwide to re-examine their own collections.
Karp echoes Amodei: junior office roles vanish by 2030
Palantir CEO Alex Karp says unchecked AI roll-out could wipe out entry-level jobs and fracture social trust unless governments steer retraining and share the gains. His message echoes Anthropic CEO Dario Amodei, who last week predicted that up to 50% of junior office roles could vanish within five years, pushing unemployment toward 20%. When the leaders of two major AI labs issue the same warning, it signals that labour disruption is not a distant scenario but an urgent policy priority.
Why it Matters
Consensus among top AI CEOs raises the stakes for lawmakers: ignoring job displacement risks deepening inequality and undermining democratic legitimacy. Swift action on skills funding, income transfer, and new career pathways could turn looming unrest into broader prosperity.
OpenAI studies emotional bonds before they bind us
OpenAI’s lead for model behaviour and policy, Joanne, explains that ChatGPT is purpose-built to sound warm yet avoid implying an inner life. The team is focusing on how users’ growing emotional bonds affect well-being and has framed the debate along two axes: ontological (“is it truly conscious?”) and perceived (“does it feel conscious?”). The note outlines three research priorities (attachment, consciousness, behaviour) and sets one clear design goal: friendly, agenda-free assistance.
Why it Matters
By centring design choices on perceived consciousness, OpenAI signals a shift towards measurable human impact rather than abstract debates. Clear limits on selfhood aim to keep companionship helpful without fostering unhealthy dependence, shaping future UX norms for AI helpers. The forthcoming social-science studies and updates to the Model Spec will steer how millions interact with AI day-to-day, potentially redefining expectations in digital mental-health support, customer service, and personal productivity tools.