📉 OpenAI Bends the Knee, 🧠 AI Predicts Brain Activity, 🛠️ Figma Automates UI
PLUS: Google’s 10M Token Context Leap • Gemini Leads WebDev • Windsurf Powers GPT Coders
👋 This week in AI
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest news
OpenAI Bends the Knee: Nonprofit Retakes Control After Musk Lawsuit, Staff Pressure, and Public Backlash
OpenAI has scrapped plans to become a fully for-profit company, instead restructuring as a Public Benefit Corporation still under full nonprofit control. The nonprofit now holds majority board power and equity in the PBC, enabling it to steer OpenAI’s mission, raise unrestricted capital, and fund external research.
This move follows public pressure from former employees, civil society groups, and investors, as well as a lawsuit from Elon Musk, all challenging OpenAI’s direction and governance. The reversal keeps the nonprofit at the centre of decision-making, while allowing OpenAI to scale operations and continue offering tools like GPT-4 and o3 without disruption to developers.
Why it Matters:
This is a decisive course correction. OpenAI is now aligning with Anthropic and xAI in adopting a Public Benefit Corporation model, but with tighter nonprofit control. It resolves mounting concerns over mission drift and transparency, especially amid legal scrutiny and staff criticism.
Crucially, it allows OpenAI to attract unlimited capital—potentially trillions—while preserving its stated mission to ensure AGI benefits everyone. The promise to open source future models signals a strategic pivot toward openness, setting the tone for how powerful AI tools may be governed and shared.
Gemini 2.5 Pro Now Leads in All Arenas
Gemini 2.5 Pro Preview (I/O Edition) is an early release that enhances coding performance – particularly for front-end and UI development. Now available via the Gemini API in Google AI Studio, it incorporates improved error handling in function calls, supports video-to-code flows, and streamlines feature development. The update is already leading on the WebDev Arena leaderboard and comes ahead of Google I/O.
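For developers who want to try it, here is a minimal sketch of calling the preview through the Gemini API with the google-generativeai SDK. The model ID below is an assumption; check Google AI Studio for the current preview name.

```python
# Minimal sketch: prompting the Gemini 2.5 Pro preview for a front-end
# coding task via Google AI Studio's Python SDK.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.5-pro-preview-05-06")  # assumed model ID

response = model.generate_content(
    "Build a responsive pricing-page component in React with three "
    "tiers and a monthly/annual billing toggle."
)
print(response.text)  # generated code plus explanation
```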
Why it Matters:
This update shifts coding tasks towards greater automation and efficiency, helping developers translate complex ideas into polished, functional apps with less manual intervention. The enhanced performance and improved reasoning capabilities offer faster prototyping, better code quality, and more reliable outputs for creating agentic workflows, directly impacting productivity and user experience.
OpenAI Buys Windsurf to Train the Future of AI Coders
OpenAI has acquired Windsurf — the leading “vibe coding” IDE — for $3 billion, its biggest acquisition to date. Windsurf, formerly Codeium, started as a GPU infra company before pivoting into AI-assisted coding tools in 2022.
It quickly became the go-to agentic IDE, offering structured edits, multi-agent workflows, and VS Code compatibility — and is now used daily by hundreds of thousands of developers, including enterprise teams at JPMorgan and Dell.
The IDE enables everything from autocomplete to full-stack agent deployment, positioning itself at the heart of how modern code is written, reviewed, and shipped.
Why it Matters:
This isn’t about buying users — it’s about buying context. Vibe coding IDEs like Windsurf are quietly becoming the operating system for AI software development.
With agents increasingly doing the building, the value lies in the telemetry — granular data on developer intent, file structure, and workflow patterns. Windsurf gives OpenAI a deeply instrumented feedback loop to train coding agents at scale — and its own IDE layer to deliver them.
As software creation shifts from typing to steering, OpenAI now controls both the brains (GPT) and the hands (Windsurf) of AI-native engineering.
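None of Windsurf’s internal schema is public, so purely as a hypothetical sketch of the kind of telemetry described above, an agentic IDE’s feedback loop might log events like these. Every event name and field here is invented for illustration.

```python
# Hypothetical sketch of IDE telemetry: accept/reject signals on
# agent-proposed edits are the feedback a coding-model trainer wants.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EditEvent:
    """One developer action an agentic IDE might record (invented schema)."""
    file_path: str         # where the edit happened
    intent: str            # e.g. "refactor", "fix-test", "add-feature"
    agent_suggested: bool  # did an agent propose this edit?
    accepted: bool         # did the developer keep it?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

events = [
    EditEvent("src/api/auth.py", "fix-test", agent_suggested=True, accepted=True),
    EditEvent("src/api/auth.py", "refactor", agent_suggested=True, accepted=False),
]
acceptance = sum(e.accepted for e in events) / len(events)
print(f"agent suggestion acceptance: {acceptance:.0%}")
```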
AI Scientist Goes to Work: Finch Runs Biology Experiments Without Prompts
Finch is a new AI agent from FutureHouse that autonomously analyses biology datasets, reproduces results, and generates novel findings with minimal human input. Now in closed beta, it discovered both known and previously unreported cancer-related gene associations from open-ended prompts.
Finch is part of FutureHouse’s public “AI Scientist” platform, which also includes Crow (scientific Q&A), Falcon (deep literature search), Owl (prior work detection), and Phoenix (chemistry experiment planning). All agents are available via API and built to integrate with research workflows.
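The announcement doesn’t document the API surface, so the sketch below is purely illustrative: the endpoint, payload fields, and response keys are invented placeholders showing the shape of a programmatic Finch run, not FutureHouse’s actual interface.

```python
# Hypothetical sketch only: endpoint and fields are placeholders, not
# FutureHouse's real API.
import requests

API_URL = "https://api.example-futurehouse.test/v1/agents/finch/runs"  # hypothetical

payload = {
    "prompt": "Find gene associations with treatment resistance "
              "in this RNA-seq dataset.",            # open-ended task
    "dataset_url": "https://example.test/data.csv",  # hypothetical field
}
resp = requests.post(API_URL, json=payload, timeout=60)
resp.raise_for_status()
run = resp.json()
print(run.get("status"), run.get("findings"))  # hypothetical response keys
```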
Why it Matters:
AI can now run autonomous data-driven research. Finch doesn’t just assist — it inspects results, iterates, and generates hypotheses without direction. By chaining agents like Finch, Crow, and Falcon, FutureHouse compresses weeks of work into hours. With full-text access and transparent reasoning, this shift enables scalable, self-directed AI discovery — turning agents into collaborators, not just tools.
📝 Check out the FutureHouse website
The 10M Token Leap: Superhuman Recall Is Coming
Google DeepMind’s Nikolay Savinov confirms that 10-million-token context windows are already technically viable with near-perfect retrieval, though still too costly to serve widely.
The current priority is optimising 1–2M-token contexts to deliver near-perfect recall affordably. This leap enables large models to operate over vast data sources, including full codebases, video transcripts, and complex documents, in a single pass.
Combined with reasoning, these models can perform extended multi-step tasks, hold persistent state across sessions, and generate long-form outputs exceeding 65K tokens.
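To make the scale concrete, here is a rough sketch that estimates whether a codebase fits in one pass, assuming the common ~4-characters-per-token heuristic (actual counts depend on the model’s tokenizer; this is not Gemini’s).

```python
# Back-of-the-envelope token estimate for a repository, using the
# rough ~4 characters-per-token heuristic.
import os

def estimate_repo_tokens(root, exts=(".py", ".ts", ".js", ".md")):
    chars = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    chars += len(f.read())
    return chars // 4  # ~4 chars per token

tokens = estimate_repo_tokens(".")
for window in (1_000_000, 2_000_000, 10_000_000):
    verdict = "fits" if tokens <= window else "exceeds"
    print(f"~{tokens:,} tokens {verdict} a {window:,}-token window")
```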
Why it Matters:
Mastering long context is the key to unlocking superhuman capabilities in AI systems. Once models reliably handle millions of tokens, they surpass human working memory — tracking every line of a massive codebase or every moment of a 60-minute video without forgetting details.
This transforms agent workflows, coding assistance, and personal knowledge retrieval. AI agents will soon fetch and assemble relevant context automatically, removing a major bottleneck in how users interact with LLMs. As cost falls and quality rises, this evolution reshapes AI into a tool that doesn’t just generate — it comprehends at scale.
From Prompt to Prototype: Figma Make Automates UI Behaviour
Figma has officially transformed from a design tool into a full-stack creative platform. At Config 2025, the company launched five major products:
Figma Sites – a website builder with live publishing, CMS, and code layer support
Figma Make – AI-powered prototyping via prompts
Figma Buzz – template-based content creation for non-designers
Grid Layouts – 2D layout inside auto layout
Figma Draw – 22 new vector editing features, including brushes, textures, and dynamic strokes
All five tools integrate tightly with the core Figma ecosystem, enabling designers, marketers, and developers to go from idea to product — with fewer handoffs and no extra tooling. Sites and Buzz are now in beta; Make is rolling out to full seat holders; Draw and Grid are available today.
Why it Matters:
Figma is no longer just for product design — it’s now positioning itself as the default operating system for building digital experiences. By collapsing the steps between designing, prototyping, publishing, and editing content, Figma is removing traditional barriers between teams. This is key for two audiences:
Designers and devs can now launch responsive websites and prototypes directly, with CMS and code layer support reducing reliance on frontend teams.
Non-designers can safely edit branded assets and campaign materials without breaking design systems.
With AI (Make) generating functional React code and interactive interfaces from plain prompts, and Draw unlocking creative expression with precision, Figma is aggressively targeting both speed and craft. This suite of releases redefines what “designing in Figma” means — turning it from a step in the process into the process itself.
📝 Read the press release from Figma
Your Brain’s Next 5 Seconds? This AI Already Knows
Researchers have developed a Transformer model that can predict future brain activity up to 5.04 seconds in advance using only 21.6 seconds of fMRI data. Trained on data from the Human Connectome Project, the model achieved a correlation of 0.997 when forecasting the next brain state, using time series inputs from 379 regions of the brain. The system uses positional encoding and self-attention to analyse temporal dependencies in brain signals.
The model can generate up to 1,150 time points of synthetic fMRI data and maintains high accuracy for the first seven predictions, equivalent to ~5 seconds of neural activity. It also replicates the brain’s functional connectivity structure at both individual and group levels.
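As a rough sketch of the architecture described (not the authors’ code), the setup maps naturally onto a standard Transformer encoder: 30 input time points (21.6 s at the HCP’s 0.72 s repetition time) across 379 regions, with the seventh autoregressive step reaching 7 × 0.72 s = 5.04 s of future activity. Hyperparameters below are assumptions.

```python
# Minimal sketch of a next-brain-state forecaster as described above:
# windows of 30 fMRI time points over 379 regions, positional encoding
# plus self-attention, predicting the next time point.
import torch
import torch.nn as nn

N_REGIONS, WINDOW = 379, 30  # 30 x 0.72 s = 21.6 s of input

class BrainStateForecaster(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4):  # assumed sizes
        super().__init__()
        self.embed = nn.Linear(N_REGIONS, d_model)
        # Learned positional encoding over the input window
        self.pos = nn.Parameter(torch.zeros(1, WINDOW, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, N_REGIONS)

    def forward(self, x):            # x: (batch, WINDOW, N_REGIONS)
        h = self.encoder(self.embed(x) + self.pos)
        return self.head(h[:, -1])   # predict the next time point

# Autoregressive rollout: feed each prediction back in. Seven steps
# covers the ~5 s horizon reported above.
model = BrainStateForecaster()
window = torch.randn(1, WINDOW, N_REGIONS)  # stand-in for real fMRI data
with torch.no_grad():
    for _ in range(7):
        nxt = model(window)                  # (1, N_REGIONS)
        window = torch.cat([window[:, 1:], nxt.unsqueeze(1)], dim=1)
```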
Why it Matters:
This research shows that a Transformer—similar to those used in ChatGPT—can learn and forecast human brain dynamics with extreme precision. By capturing the temporal patterns in resting-state fMRI signals, the model unlocks the potential for faster and shorter brain scans, better brain-computer interfaces, and early diagnosis of neurological disorders.
Its ability to predict the brain’s next state could become foundational for neuroadaptive AI systems, enabling machines to align with a user’s future intent, not just present input. In neuroscience, it marks a step towards building generative models of brain function, compressing minutes of data collection into seconds while preserving clinical relevance.
How John Deere Is Rebuilding Heavy Industry Around AI
John Deere is embedding AI throughout its operations—from smart sprayers to real-time diagnostics—to reshape how large-scale agricultural and industrial systems operate.
AI now helps identify machine issues instantly, guides farmers through setup and optimisation, and delivers personalised ROI reports. Precision tools like See & Spray use machine vision to reduce chemical usage by up to 70%.
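As a toy illustration of the see-and-spray idea (not John Deere’s system), a vision model scores each camera frame for weeds and a nozzle fires only above a confidence threshold, which is where the chemical savings come from. The detector and threshold below are stand-ins.

```python
# Toy sketch: spray only when a (stubbed) weed detector is confident.
import random

SPRAY_THRESHOLD = 0.8  # assumed confidence cut-off

def weed_confidence(frame) -> float:
    """Stand-in for a real machine-vision model scoring one frame."""
    return random.random()

frames = range(1000)
decisions = [weed_confidence(f) >= SPRAY_THRESHOLD for f in frames]
print(f"sprayed {sum(decisions) / len(decisions):.0%} of frames instead of 100%")
```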
Deere is also shifting to usage-based subscriptions powered by AI insights, aligning revenue with demonstrated customer value.
Why it Matters:
For heavy industries, this is a playbook for becoming AI-native without replacing existing infrastructure. Instead of one-off automation, John Deere is threading AI through every operational layer—support, performance, training, and revenue.
This allows decades-old machinery businesses to scale support to thousands of machines per rep, personalise guidance in real time, and shift toward outcomes-based models.
It’s not just about making equipment smarter—it’s about re-architecting how industrial systems deliver value and stay competitive in a world where customers expect intelligent, adaptive tools.