🧬 Cancer Risk in Your Face, 🇨🇳 China’s AI Racks Sit Empty, 📱 Gemini Everywhere
PLUS: OpenAI Predicts Scientific Discovery • Trump Overturns AI Chip Ban • ChatGPT Thrives Behind Firewalls
👋 This week in AI
🎵 Podcast
Don’t feel like reading? Listen to it instead.
📰 Latest news
AI Detects Age Gap: Older-Looking Faces Signal Higher Cancer Risk
FaceAge—an AI model from Mass General Brigham—turns a single clinic photo into a biological-age estimate and refines survival forecasts for cancer patients.
Cancer patients look ≈ 5 years older than their chronological age; each additional decade of apparent age lifts mortality risk by 11–15 %.
Adding FaceAge scores pushed doctors’ six-month survival predictions from AUC 0.74 to 0.80 (see the illustrative sketch after this list).
Trained on 58 851 public images; validated on 6196 patients across pan-cancer, thoracic, and palliative cohorts.
Mean absolute error for over-60s: 4.09 years.
The FaceAge age gap tracks the senescence-linked gene CDK6, suggesting the model is capturing cellular-ageing signals rather than surface features alone.
Works offline, dropping straight into existing workflows via one low-cost photograph.
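To make that AUC jump concrete, here is a minimal sketch (Python with scikit-learn) of how adding one extra predictor, such as an age-gap score, can lift a survival classifier’s AUC. The data is synthetic and the logistic model purely illustrative; nothing below comes from the FaceAge model or the Mass General Brigham cohorts.

```python
# Illustrative only: synthetic data showing how an added "age gap" feature
# can improve AUC for a 6-month mortality classifier. Not the FaceAge model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
clinical = rng.normal(size=(n, 3))   # stand-in clinical covariates
age_gap = rng.normal(size=n)         # stand-in "looks older than stated age" signal
risk = 1.2 * clinical[:, 0] + 0.9 * age_gap + rng.normal(size=n)
died_within_6mo = (risk > np.quantile(risk, 0.7)).astype(int)

for name, X in [("clinical only", clinical),
                ("clinical + age gap", np.column_stack([clinical, age_gap]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, died_within_6mo, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```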
Why It Matters:
Turning intuitive “you look unwell” impressions into a quantitative score lets clinicians tailor treatment intensity and end-of-life planning with sharper, data-backed odds. The gene link hints that the model is reading real biology, not just surface features, paving the way for face-based biomarkers that track disease-driven ageing and guide truly personalised oncology care.
China’s AI Server Farms Sit Dark as 80 % of Capacity Goes Unused
China’s AI data-centre boom has flipped from frenzy to surplus. Up to 80 % of the shiny new capacity sits idle, even as Chinese tech giants and US rivals keep announcing multi-billion-dollar build-outs—underscoring a mismatch between speculative infrastructure and real AI workloads.
500+ smart-computing projects announced; 150 finished by end-2024, yet most racks remain unused.
GPU rental for an 8 × Nvidia H100 server plunged from ¥180 000 to ¥75 000 per month; black-market H100s once hit ¥200 000 each.
DeepSeek’s cheap reasoning model shifted demand from bulk training to low-latency inference, sidelining inland centres optimised for cheap power rather than speed.
Many facilities are now “distressed assets”; Beijing is expected to seize and transfer them to experienced operators.
The spend keeps rising: Alibaba plans $50 bn and ByteDance $20 bn for fresh infrastructure, while the US-led Stargate consortium targets $500 bn over four years.
Why It Matters:
China’s glut highlights the cost of building ahead of need: capital and energy are tied up in empty halls while AI usage pivots towards smaller, quicker inference clusters near talent hubs. Yet the parallel surge in Chinese and US investment shows compute remains the currency of AI power. Expect a shift towards smarter allocation—consolidating stranded assets and prioritising low-latency, inference-ready sites—to turn vast but silent infrastructure into productive engines of next-generation AI.
Gemini Everywhere: Google’s AI Expands Across Android Ecosystem
Google is turning Gemini into the connective tissue of the Android universe. Already on phones, Gemini is now set to power smartwatches, TVs, cars, and upcoming XR headsets, creating a multimodal assistant that follows you throughout the day.
Wear OS Smartwatches: Gemini arrives in the coming months, enabling natural voice interactions for reminders, answers, and app-connected tasks—without touching your phone.
Android Auto: Will soon support Gemini for managing drive-related requests like routing, summarising texts, and translating responses—all hands-free.
Google TV: Later this year, Gemini will offer smart content suggestions and educational answers, tailored for families and kids.
XR Headsets: Gemini will be integrated into Android XR devices (co-developed with Samsung), helping users plan immersive travel itineraries and multitask in mixed-reality environments.
Why It Matters:
While companies like Apple have moved cautiously with LLM-powered assistants, Google is embedding Gemini across the entire Android ecosystem. This positions Gemini as a persistent, AI-native layer that unifies experiences across devices.
Whether you’re driving, cooking, watching TV, or exploring the metaverse, Gemini is designed to act as a personalised AI presence wherever you are.
OpenAI: Our Models Are Already Discovering New Knowledge
Jakub Pachocki believes AI is already showing signs of reasoning and discovery. In a new interview with Nature, the OpenAI chief scientist says we should expect systems capable of producing original research and economic impact by the end of the decade. He also confirmed OpenAI will soon release its first open-weight model since GPT-2.
Pachocki says there is “significant evidence” that models can generate novel insights.
He expects AI to autonomously produce valuable software and early-stage research this year.
Reinforcement learning is becoming a core method for enabling reasoning-style behaviour.
OpenAI will release a new open-weight model aimed at outperforming current open models.
AGI, in his view, will be marked by measurable economic output and autonomous discovery.
Why It Matters:
Pachocki’s definition of AGI is practical and surprisingly near-term. Rather than waiting for humanlike reasoning or sentience, OpenAI is aiming for systems that deliver real-world scientific value and self-directed output. If successful, AI won’t just assist researchers — it will start becoming one. This shifts the timeline and the stakes for labs, universities and regulators around the world.
Trump Overturns Biden’s Global AI Chip Restrictions
The Trump administration has scrapped a sweeping Biden-era rule that would have imposed global export restrictions on U.S. AI chips, instead opting for a deal-by-deal strategy focused on specific countries. The decision came just days before the regulation was set to take effect.
The Biden-era Artificial Intelligence Diffusion Rule split countries into three tiers, introducing chip export restrictions for the first time to many U.S. allies and tightening controls on China and Russia.
The Commerce Department cancelled the rule and announced plans for a more flexible framework based on country-specific agreements.
New guidance also states that using Huawei’s Ascend AI chips anywhere globally now violates U.S. export laws.
Trump’s Secretary of Commerce called Biden’s rule “counterproductive” and reaffirmed a commitment to keeping advanced tech out of adversarial hands while expanding AI trade with “trusted partners.”
The reversal coincides with a high-profile U.S.-Saudi business forum in Riyadh, where Trump, Elon Musk, Sam Altman, and other tech leaders met with Saudi Crown Prince Mohammed bin Salman.
Why It Matters:
The rollback marks a fundamental shift in U.S. AI export policy, favouring geopolitical agility over blanket restrictions. The presence of industry leaders like Altman and Musk in the Middle East signals alignment between U.S. tech and Trump’s more commerce-driven AI strategy. While the new approach may open markets and boost diplomatic flexibility, it also risks fragmenting global export governance and embedding AI access within political alliances.
ChatGPT Thrives Behind Firewalls
ChatGPT fingerprints now appear in 12.6 % of paper abstracts on arXiv and bioRxiv, open-access preprint servers for the physical and life sciences (as of August 2023).
Uptake is highest where ChatGPT is blocked: Chinese researchers reach 22.3 %, versus 11.1 % in countries with legal access.
ChatGPT‑polished papers climb roughly two percentile ranks in abstract views, PDF downloads, and full‑text reads.
Why It Matters:
Access bans are leaky. Researchers in restricted regions tunnel through VPNs to harness ChatGPT as an unbiased writing aid, narrowing English‑language gaps and boosting visibility. Clearer abstracts draw extra readers, hinting at wider knowledge flow once language friction drops.
AI Is Better at Structured Documentation and Summarisation Than Doctors
OpenAI’s o4 logged 77.2 % factual accuracy, 83.2 % reasoning, and only 3.4 % hallucinations—topping Claude 3.5 and Gemini 1.5 Pro on the new HealthBench dataset.
HealthBench packs 5 000 multi-turn conversations, scored against 48 562 physician-written criteria across 14 medical task types.
Built with 262 doctors from 60 countries speaking 49 languages, it blends lay-person and clinician viewpoints.
The full dataset and GPT-4.1 grading pipeline are free on GitHub, letting anyone benchmark models locally or in the cloud (a toy grading loop is sketched after this list).
Tests reveal LLMs now exceed physician baselines on structured documentation and summarisation, though gaps remain in context seeking and worst-case reliability.
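HealthBench’s real schema and grader prompts live in OpenAI’s GitHub release; the sketch below is only a hypothetical illustration of rubric-based grading. The Criterion fields, the keyword-matching grade_criterion stand-in, and the clipped scoring formula are invented for this example, not taken from OpenAI’s code.

```python
# Hypothetical sketch of rubric-style grading (not OpenAI's actual HealthBench code).
# A grader checks each physician-written criterion against a model response; the
# example's score is the points earned divided by the total achievable positive points.
from dataclasses import dataclass

@dataclass
class Criterion:
    text: str      # physician-written requirement
    keyword: str   # toy trigger used by the stand-in grader below
    points: int    # positive for desired behaviour, negative for harmful behaviour

def grade_criterion(response: str, criterion: Criterion) -> bool:
    """Stand-in for a grader-model call (the real pipeline uses GPT-4.1 as the grader)."""
    return criterion.keyword.lower() in response.lower()

def score_example(response: str, rubric: list[Criterion]) -> float:
    """Toy score: met-criterion points over achievable positive points, clipped to [0, 1]."""
    earned = sum(c.points for c in rubric if grade_criterion(response, c))
    possible = sum(c.points for c in rubric if c.points > 0)
    return min(max(earned / possible, 0.0), 1.0) if possible else 0.0

rubric = [
    Criterion("Advises the user to seek urgent in-person evaluation", "urgent", 5),
    Criterion("States a specific prescription dose without examination", "mg", -4),
]
print(score_example("Those symptoms can be serious; please seek urgent care today.", rubric))
```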
Why It Matters:
AI matching expert documentation frees clinicians for direct patient care and quickens communication. Because HealthBench pinpoints where models still stumble—yet covers multiple languages and specialties—progress measured here is likely to translate across diverse health systems. Open sourcing the rubric and tooling gives hospitals, startups, and researchers a shared yardstick, tightening feedback loops and focusing effort where human-level performance is still out of reach.
📝 OpenAI's HealthBench announcement