🥋 Robots in the Ring, 💻 Claude Turns Criminal, 🌈 Photons Beat Pixels
Claude was used in a live data-theft and extortion campaign that hit at least 17 organisations across critical sectors.
🎵 Podcast
Don’t feel like reading? Listen to it instead.
By the way, if you like the podcast, please let me know (20 seconds).
📰 Latest News
Small Bot, Big Kick: Unitree G1 Upstages WAIC Beside Dana White
Unitree’s G1 humanoid took a star turn in Shanghai, throwing a roundhouse kick beside UFC president Dana White and stealing attention at WAIC with ring-style demos. Clips show a G1 stepping toward White before handlers intervened, and separate videos highlight boxing sequences and high-torque joints. The G1 is a compact, electric humanoid sold for research and entertainment use.
Why it matters:
Strong balance and dynamic motion are moving into smaller, cheaper robots, which makes live shows, training, and R&D more accessible. The global race is heating up. Figure is piloting humanoids with BMW in Spartanburg, where a reported 20-hour endurance run is under evaluation. Amazon’s work with Agility’s Digit continues, and Tesla keeps posting Optimus gait upgrades. Sanctuary AI is showing new Phoenix iterations, and Apptronik raised a large round to scale Apollo. Together these signals point to faster progress in real settings, not just lab videos.
Weaponised AI: Anthropic Says Claude Code Fueled a Multi-Target Ransom Spree
Anthropic says a criminal group abused Claude Code to run a large-scale data-theft and extortion operation—using the tool to scout targets, break in, steal credentials, sort stolen files and even draft ransom notes. The campaign hit at least 17 organisations across critical sectors before Anthropic disrupted it. Security outlets report the same case and note copycats building ransomware with AI assistance.
Why it matters:
AI coding tools can speed up cybercrime and lower the skill needed to launch it. That means faster attacks, more tailored threats and wider impact when these tools are misused. Expect closer scrutiny of how agentic features are gated, how access is monitored and how quickly vendors can detect and shut down abuse. It’s a clear signal that AI now sits on both sides of the security line.
Your Favourite Podcasts Now Sound Like ChatGPT, FSU Study Finds
Florida State University analysed 22.1 million words of unscripted spoken English, including conversational podcasts, and found a clear rise in LLM-associated buzzwords after ChatGPT’s release. Words like “delve,” “intricate,” “garner,” “surpass,” and “meticulous” show marked increases, while close synonyms did not. The study was accepted to AIES 2025 and links the shift to AI influence rather than normal news-driven spikes.
Why it matters:
Podcast talk is picking up AI’s vocabulary, which shapes how listeners learn about tech and how creators frame topics. Expect more AI-flavoured language in mainstream shows, more audience familiarity with niche terms, and more scrutiny of how AI trends steer public conversations. For producers, this is a signal to explain terms plainly and watch for echo-chamber phrasing that can confuse or bias audiences.
📝 More from Florida State University News
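For readers who want to poke at this themselves, here is a minimal sketch of the kind of before/after frequency comparison the study describes. The word list comes from the article above; the tiny toy corpus, cutoff date handling, and per-million-word metric are illustrative assumptions, not the FSU team’s actual pipeline.

```python
# Illustrative sketch: compare per-million-word rates of LLM-associated
# buzzwords in podcast transcripts before and after ChatGPT's release.
# The transcripts below are stand-in strings; the FSU study used a
# 22.1-million-word corpus and a more careful statistical design.
import re
from collections import Counter
from datetime import date

BUZZWORDS = {"delve", "intricate", "garner", "surpass", "meticulous"}
CHATGPT_RELEASE = date(2022, 11, 30)

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def rate_per_million(transcripts: list[str]) -> dict[str, float]:
    counts, total = Counter(), 0
    for text in transcripts:
        tokens = tokenize(text)
        total += len(tokens)
        counts.update(t for t in tokens if t in BUZZWORDS)
    return {w: 1_000_000 * counts[w] / max(total, 1) for w in sorted(BUZZWORDS)}

# toy corpus: (episode date, transcript text)
episodes = [
    (date(2021, 5, 1), "today we garner a few listener questions and keep it simple"),
    (date(2024, 3, 1), "let us delve into this intricate topic with a meticulous breakdown"),
]

before = [text for d, text in episodes if d < CHATGPT_RELEASE]
after = [text for d, text in episodes if d >= CHATGPT_RELEASE]
print("before:", rate_per_million(before))
print("after: ", rate_per_million(after))
```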
Excavator Gets Fingers: Japan’s Robot Hand Sorts Quake Rubble Safely
Japan is trialling a giant, excavator-mounted robot hand that uses AI and fingertip sensors to pick through earthquake rubble with much finer control than a bucket. The prototype comes from the government-backed CAFE project (Tsukuba University, NAIST, Kumagai Gumi, ETH Zurich) and was publicly demoed in July, showing delicate lifting of buried objects for disaster clean-up.
Why it matters:
Rescuers can keep people out of unstable zones while the machine sorts debris without crushing it, cutting risk of secondary collapse and speeding paths to survivors and critical gear. If trials continue to hold up in the field, expect faster, safer clearances after quakes and floods, using equipment crews already know how to operate.
📝 More from Interesting Engineering
VaxSeer vs Flu: MIT Tool Outscores WHO Picks in Most Seasons
MIT’s VaxSeer uses AI to forecast which flu strains will dominate and which vaccine picks will best match them. Trained on decades of viral sequences and lab assay data, it combines a “dominance” predictor with an “antigenicity” predictor to score candidate vaccines months ahead of the season. In a 10-year retrospective, VaxSeer’s choices beat or matched the WHO selections in most seasons for H3N2 and H1N1. It is being evaluated, not yet in broad public use.
Why it matters:
Better picks mean better protection. If adopted, health agencies and manufacturers could choose strains with more confidence, reduce mismatches, cut waste, and start production earlier. That translates to fewer illnesses, fewer clinic visits, and steadier planning for vaccine supply.
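For the technically curious, here is a minimal sketch of the two-predictor idea described above: weight each candidate’s antigenic match by how dominant each strain is forecast to be, then rank candidates by that expected coverage. The numbers, names, and scoring formula are assumptions for illustration, not VaxSeer’s actual model or data.

```python
# Illustrative sketch of combining a "dominance" forecast with an
# "antigenicity" (match) forecast to rank flu vaccine candidates.

# Predicted share of circulation for each strain next season ("dominance").
predicted_dominance = {
    "strain_A": 0.55,
    "strain_B": 0.30,
    "strain_C": 0.15,
}

# Predicted antigenic match between each vaccine candidate and each
# circulating strain, on a 0-1 scale ("antigenicity").
predicted_match = {
    "candidate_X": {"strain_A": 0.9, "strain_B": 0.4, "strain_C": 0.2},
    "candidate_Y": {"strain_A": 0.6, "strain_B": 0.8, "strain_C": 0.7},
}

def expected_coverage(candidate: str) -> float:
    """Dominance-weighted antigenic match: sum over strains of share * match."""
    return sum(
        share * predicted_match[candidate].get(strain, 0.0)
        for strain, share in predicted_dominance.items()
    )

ranked = sorted(predicted_match, key=expected_coverage, reverse=True)
for c in ranked:
    print(f"{c}: expected coverage {expected_coverage(c):.2f}")
```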
AI at Light Speed: UCLA Prints Images in a Single Optical Shot
UCLA built an optical generative AI that makes images with light instead of GPUs. A small digital encoder creates a phase “seed,” a spatial light modulator writes it onto a laser beam, and trained diffractive layers act as a physical decoder so the picture appears in a single shot. The team also shows a multi-step optical mode that mimics digital diffusion, and a multi-colour setup where each wavelength decodes with its own matched optic, hinting at built-in privacy. Today, speed is mostly limited by the display hardware in front of the optics, not by computation.
Why it matters:
One optical pass can deliver real-time output at very low energy, which suits lab instruments, smart cameras, and AR devices where power and latency matter. It is still a lab system that mixes optics with electronics, but the direction is clear: faster and greener image generation as photonic components improve.
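To make the single-shot pipeline concrete, here is a toy numerical sketch: a stand-in digital encoder produces a phase-only seed, and a fixed linear map plays the role of the trained diffractive decoder, so the “image” emerges from one pass. Everything here (the random decoder, the encoder, the sizes) is an assumption for illustration; it is a simplified simulation, not UCLA’s architecture, optics, or training procedure.

```python
# Toy simulation of the single-shot idea: phase seed in, one linear
# "optical" pass, intensity image out. Not the real optical physics.
import numpy as np

rng = np.random.default_rng(0)
N = 16 * 16  # flattened 16x16 output image

# Stand-in for the trained diffractive decoder: a fixed complex linear map.
decoder = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(N)

def encode_to_phase(latent: np.ndarray) -> np.ndarray:
    """Stand-in digital encoder: map a latent vector to a phase seed in [0, 2*pi)."""
    # a real encoder would be a small trained network; here we just project and wrap
    return (latent @ rng.normal(size=(latent.size, N))) % (2 * np.pi)

def optical_decode(phase_seed: np.ndarray) -> np.ndarray:
    """One 'optical' pass: unit-amplitude field with the given phase,
    sent through the fixed decoder; the camera records intensity."""
    field_in = np.exp(1j * phase_seed)   # what a spatial light modulator would write
    field_out = decoder @ field_in       # stands in for the diffractive layers
    return np.abs(field_out) ** 2        # detected intensity = output image

latent = rng.normal(size=8)
image = optical_decode(encode_to_phase(latent)).reshape(16, 16)
print("output image shape:", image.shape, "mean intensity:", round(image.mean(), 3))
```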