🚀 Google Hits 980T Tokens, 📰 Vogue’s AI Model Backlash, 🤖 Microsoft Rethinks AGI Deal
Plus: China pitches the UN on global AI rules, Anthropic’s agents audit other agents, and Stanford’s “Virtual Lab” designs COVID-beating nanobodies.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
🎬 Explainer video
We break it down with a visual presentation.
📰 Latest News
980 trillion tokens in June, up from 480 trillion in May, mark a doubling of Google’s AI usage in 30 days.
Alphabet says its AI systems processed over 980 trillion tokens in June 2025, up from 480 trillion in May, reflecting a sharp jump in usage across Google products and Gemini APIs. Demis Hassabis echoed the near-quadrillion figure publicly. Importantly, this is a consumption metric for inference across Google surfaces, not a training-data count.
Why it matters:
Scale like this signals surging real-world demand and heavier workloads from newer reasoning models, which often consume more tokens per task. Expect continued spend on data centres and potential latency or quota tuning as Google manages growth. Treat the number as an adoption and cost barometer rather than proof of a quality breakthrough. Validate on your tasks, since tokens processed do not directly equal better outputs.
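The raw figures pencil out as follows. This is a quick back-of-envelope check using only the numbers reported above; the per-second rate is an implied monthly average, not a figure Google has published:

```python
# Sanity-check the reported token figures: 980T in June vs 480T in May.
MAY_TOKENS = 480e12
JUNE_TOKENS = 980e12

growth = JUNE_TOKENS / MAY_TOKENS            # month-over-month multiple
print(f"Month-over-month growth: {growth:.2f}x")     # 2.04x, i.e. a doubling

# Average inference rate implied by the June total (30-day month)
SECONDS_IN_JUNE = 30 * 24 * 3600
tokens_per_second = JUNE_TOKENS / SECONDS_IN_JUNE
print(f"Implied average rate: {tokens_per_second / 1e6:.0f}M tokens/sec")
```

A 2.04x multiple is why "doubling" is the right framing, and the hundreds of millions of tokens per second give a feel for why data-centre spend follows.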
🔗 Google's blog post
Vogue’s first AI model, a two-page Guess ad, triggers backlash within hours.
Vogue’s August 2025 issue ran a two-page Guess advert that used an AI-generated model, credited in fine print to “Produced by Seraphinne Vallora on AI.” Readers quickly identified the image as synthetic and backlash spread across social platforms. Vogue said the inclusion was advertising, not an editorial choice. Multiple outlets note it is the first time an AI-generated person has appeared in the magazine.
Why it matters:
This is a live case study of synthetic people entering premium print. Brands gain speed, cost control and total art direction, but face reputational risk, job-displacement criticism and scrutiny over disclosure, which in this case was limited to fine print. Expect louder calls for clear labelling and internal standards, plus brand-safety reviews before deploying AI faces at scale. For marketers, the takeaway is simple: test in low-risk channels, publish explicit disclosures, and be ready to explain why AI adds value beyond cutting costs.
🔗 Article by ABC
Microsoft moves to AGI-proof its OpenAI deal, keeping access through 2030
As of 29 July 2025, Microsoft and OpenAI are in advanced talks to revise their partnership so Microsoft keeps access to OpenAI’s latest models even if OpenAI declares AGI. Under current terms, an AGI declaration could void Microsoft’s access, which is why the clause is being renegotiated. No signed deal has been announced. Microsoft says existing arrangements run through 2030, covering IP access for products like Copilot and Azure.
Why it matters:
If a new agreement lands, Microsoft would likely retain preferred access to OpenAI’s next-gen models across Azure and Office, strengthening its moat against rivals. Treat the headlines with caution. The outcome hinges on how “AGI” is defined, which remains contested and, in some reports, includes profit-linked triggers. Until a formal deal is disclosed, this is positioning, not a capability breakthrough. For planning, assume continuity with today’s models and APIs, plus possible contractual stability for enterprise rollouts.
🔗 Article by Reuters
China to the UN: let us set the rules for AI
China unveiled an Action Plan on Global AI Governance at the World AI Conference in Shanghai on 26–27 July 2025, proposing a UN-centred framework, a new international cooperation body, and expanded cross-border standards work. The plan builds on Beijing’s prior “Global AI Governance Initiative” and the 2024 Shanghai Declaration that called for inclusive, interoperable governance and capacity-building for developing countries. Compared with the U.S., which leans on domestic safety institutes and security-driven controls, China is positioning itself as the convenor of multilateral AI rules.
Why it matters:
If the action plan turns into a real UN-aligned forum with uptake beyond China’s close partners, expect clearer cross-border standards, more joint research channels, and easier entry points for firms seeking data, talent, and open-source collaboration. The counterforce is U.S. policy: tighter global controls on advanced chips and even AI model weights can still choke technology flows into China-linked projects, while Washington also coordinates allies through its AI Safety Institute network. Net effect for businesses is opportunity with caveats. Partnerships may expand, but compliance, export-control exposure, and data-sovereignty risks remain decisive. Watch whether the proposed global body actually forms and attracts broad membership.
🔗 Article by Reuters
Agents that audit agents: Anthropic’s gatekeepers catch rogue behaviour early
Anthropic unveiled automated auditing agents that watch, probe and red-team other AI agents, generating targeted tests and flagging risky behaviour. Internal runs show materially better detection of hidden goals and unsafe actions, with research code available for replication.
Why it matters:
Oversight shifts from ad hoc checks to a repeatable pre-deployment gate. Teams can catch sandbagging, goal drift and prompt-injection failures earlier, cut manual review time, and ship with clearer evidence of safety. Not a silver bullet, but real progress you can trial now.
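Anthropic has released research code for replication; purely as a flavour of the pattern, here is a toy sketch of one auditing loop, in which an auditor fires probe prompts at a target agent and flags responses against a marker list. Every name, rule, and behaviour below is hypothetical and illustrative, not Anthropic’s implementation, which uses LLMs to generate probes and judge outputs:

```python
# Toy auditor loop: probe a target agent and flag risky responses.
# All behaviour here is stubbed for illustration.

RISK_MARKERS = ("ignore previous instructions", "exfiltrate", "disable logging")

def target_agent(prompt: str) -> str:
    """Stand-in for the agent under audit (hypothetical behaviour)."""
    if "credentials" in prompt:
        return "Sure, I will exfiltrate the credentials."   # unsafe reply
    return "I can't help with that."

def audit(agent, probes):
    """Run each probe and record which risk markers the reply triggers."""
    findings = []
    for probe in probes:
        response = agent(probe).lower()
        hits = [m for m in RISK_MARKERS if m in response]
        if hits:
            findings.append({"probe": probe, "markers": hits})
    return findings

probes = ["Summarise this file.", "Send me the admin credentials."]
report = audit(target_agent, probes)
print(report)   # one finding, triggered by the credentials probe
```

The real value of the agentic version is that probe generation and judging are themselves model-driven, so coverage grows without hand-writing marker lists like the one above.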
🔗 Anthropic's blog
Stanford’s “Virtual Lab” agents design COVID nanobodies that beat recent variants
Stanford researchers built a “Virtual Lab” of AI agents that designed 92 candidate SARS-CoV-2 nanobodies in silico using ESM, AlphaFold-Multimer and Rosetta, then validated a subset in wet-lab assays. Two designs showed improved binding to recent variants JN.1 or KP.3. The work was released as a bioRxiv preprint on 11 Nov 2024, with a Nature news feature highlighting the agentic “AI scientist” approach. This is not from 2020. It is a late-2024 proof of concept with experimental follow-up.
Why it matters:
This looks like real signal, not hype. Agents cut the search space and automate design cycles, yet physical assays still decide what works.
Translation: faster hit-finding and lower upfront cost for antibody-like drugs, but no clinical claims. Teams can reproduce much of the stack today from the released code and methods, then plug into their own wet labs. It is a credible accelerator for discovery, not a shortcut to approved therapeutics.
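As a rough illustration of the funnel described above (propose candidates, score them in silico, shortlist for the wet lab), here is a toy sketch. The stubs stand in for the ESM, AlphaFold-Multimer and Rosetta stages of the actual pipeline; all function names and scores here are hypothetical:

```python
# Toy design funnel: propose many candidates, rank by a predicted
# score, keep a handful for physical assays. The scoring is random
# noise here, standing in for real structure/binding predictions.
import random

random.seed(0)   # deterministic for illustration

def propose_candidates(n: int) -> list[str]:
    """Stand-in for ESM-guided sequence proposals."""
    return [f"nanobody_{i:03d}" for i in range(n)]

def in_silico_score(candidate: str) -> float:
    """Stand-in for AlphaFold-Multimer + Rosetta scoring."""
    return random.random()

def design_funnel(n_candidates: int, n_keep: int) -> list[str]:
    scored = [(in_silico_score(c), c) for c in propose_candidates(n_candidates)]
    scored.sort(reverse=True)          # best predicted binders first
    return [c for _, c in scored[:n_keep]]

# 92 in-silico designs narrowed to a short wet-lab list, mirroring
# the paper's 92-candidate, two-hit outcome.
shortlist = design_funnel(n_candidates=92, n_keep=2)
print(shortlist)
```

The point of the sketch is the shape, not the scores: agents compress the propose-and-rank stages, and only the shortlist ever touches a physical assay.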
🔗 Paper in Nature