Meta's Multi-token Prediction 🔥 AI Can Read Your Mind 🧠 OpenAI Gets Hacked 🚨
Your weekly dose of AI insights
Welcome to this week in AI.
This week, Meta launched a new lightning-fast LLM, researchers reconstructed images from brain activity, OpenAI backed a new personalised AI health coach, and Kenyan youth used AI to combat corruption.
Let's get into it!
👋 If you're new here, welcome!
Subscribe to get your AI insights every Thursday.
Meta Just Changed the Game: Multi-token Prediction
Meta AI has developed a new approach to training large language models (LLMs) called multi-token prediction.
This technique allows the model to predict several upcoming tokens at once, unlike the traditional method of predicting one token at a time.
Meta has open-sourced four new language models that use this technique, each with 7 billion parameters and designed for code generation tasks.
The models were trained on large datasets of code, with two models trained on 200 billion tokens and the other two on 1 trillion tokens.
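To make the idea concrete, here is a minimal sketch in PyTorch of how multi-token prediction can be wired up: a shared trunk feeds several independent output heads, one per future token. This is an illustrative toy with assumed names and sizes, not Meta's actual architecture.

```python
import torch
import torch.nn as nn

class MultiTokenLM(nn.Module):
    """Toy multi-token predictor: a shared transformer trunk with one
    output head per future position (illustrative, not Meta's code)."""

    def __init__(self, vocab_size: int, d_model: int = 256, n_future: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        # One unembedding head for each of the next n tokens (t+1 .. t+n).
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(n_future)
        )

    def forward(self, tokens: torch.Tensor) -> list[torch.Tensor]:
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.trunk(self.embed(tokens), mask=mask)  # (batch, seq, d_model)
        # A single forward pass yields logits for n future positions;
        # at inference, the extra heads can draft tokens ahead for speed.
        return [head(h) for head in self.heads]        # n x (batch, seq, vocab)
```

Training would simply sum each head's cross-entropy loss against its shifted target, so the extra predictions come almost for free.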
Why it matters
Meta's multi-token prediction models have shown significant improvements in accuracy and speed over traditional LLMs.
They have demonstrated:
3 times faster text generation than traditional models.
17% improvement on the MBPP benchmark (a test of roughly 1,000 Python coding tasks)
12% improvement on HumanEval (a benchmark of hand-written Python programming problems)
These advancements could revolutionise the field of LLMs, leading to more efficient and accurate code generation, which could streamline software development and boost productivity.
The ability of these models to learn longer-term patterns, especially when dealing with small units of information like individual letters or bytes, opens up new possibilities for applications where predefined vocabularies are not available.
AI Can Read Your Mind
Researchers have developed an AI system capable of reconstructing images viewed by humans and animals based on their brain activity.
By analysing recordings from fMRI scans or implanted electrodes and focusing on specific brain regions, the AI can generate remarkably accurate images, particularly when reconstructing AI-generated images viewed by macaques.
While this breakthrough offers potential for restoring vision, it also raises ethical concerns about the potential misuse of such technology.
Why it Matters
This advancement in AI not only holds the promise of restoring sight to the visually impaired, but also opens up a Pandora's Box of ethical dilemmas.
This technology could potentially be used to extract images directly from our minds, raising serious privacy and security concerns.
It raises the question of who controls our thoughts and images, and how this technology might be used to manipulate or surveil individuals.
OpenAI Joins Forces With Thrive Global to Develop Personalised AI Health Coach
Chronic diseases affect a staggering 129 million Americans and consume a vast portion of healthcare spending, with 90% of the $4.1 trillion annual healthcare budget directed towards treating these conditions.
However, AI offers a promising solution through personalised health coaching.
By leveraging individual data and focusing on five key behaviours - sleep, food, movement, stress management, and social connection - AI can guide individuals towards healthier choices.
Thrive AI Health, a new venture backed by OpenAI and Thrive Global, is set to develop an AI-powered health coach.
Why it Matters
This innovative approach could empower individuals to take control of their health by addressing the root causes of chronic diseases through behaviour change.
Moreover, the personalised nature of AI coaching ensures that recommendations are tailored to individual needs and preferences, increasing the likelihood of sustained behaviour change.
This is not just about treating diseases. It's also about prevention and promoting a healthier lifestyle.
With the right collaboration among stakeholders, AI-driven behaviour change could transform the healthcare landscape, making it more effective, accessible, and equitable.
📰 Article co-authored by OpenAI's CEO
Kenya's Youth Use AI to Expose Corruption
In Kenya, a wave of youth-led anti-government protests has taken a tech-savvy turn, with demonstrators leveraging AI tools to amplify their voices and challenge the political establishment.
Protesters have developed AI-powered chatbots like "Corrupt Politicians GPT" and "Finance Bill GPT" to expose corruption, disseminate information about controversial legislation, and mobilise support.
These tools have proven particularly effective in translating complex legislative jargon into actionable information, fostering informed activism among a wider audience.
This surge in tech-driven activism is unsurprising given Kenya's burgeoning tech ecosystem and its large population of young developers.
Why It Matters
The Kenyan protests offer a glimpse into the future of political activism, where AI plays a pivotal role in empowering citizens and holding governments accountable.
This innovative use of technology could serve as a model for other social movements around the world, demonstrating how AI can be harnessed for good.
📰 Article by Semafor
OpenAI Changes Board Structure Again Amid Regulatory Scrutiny
Microsoft has relinquished its observer seat on OpenAI's board, a position that granted access to board meetings and confidential information without voting rights.
This move comes amid growing antitrust concerns over Microsoft's significant investment in OpenAI.
OpenAI has confirmed it won't appoint any more observers, including Apple, which was reportedly considering joining. Instead, OpenAI is establishing a new approach to engaging with strategic partners and investors through regular stakeholder meetings.
Why it Matters
An observer seat offers a strategic vantage point, allowing an investor to stay informed about a company's direction and decisions without directly influencing them.
Microsoft's departure, coupled with OpenAI's decision to discontinue the observer role altogether, signals a shift towards a more independent and autonomous approach for OpenAI.
It also suggests a response to regulatory concerns about potential control and influence by major investors.
Stable Diffusion's Dramatic Downfall
Stable Diffusion, a revolutionary image generation model developed by Stability AI, marked a significant milestone in AI.
Unlike its predecessors, which relied on approaches such as GANs, Stable Diffusion achieved impressive results using a diffusion process that gradually refines random noise into an image. This made image generation far more accessible and kick-started the consumer image generation tools we're all using today.
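For the curious, here is a heavily simplified sketch of the reverse (denoising) loop a diffusion model runs at generation time. `model` stands in for a trained noise-prediction network, and the schedule values are illustrative assumptions, not Stability's implementation:

```python
import torch

def sample(model, shape, timesteps=1000):
    """Toy DDPM-style sampler: start from pure noise and repeatedly
    subtract the model's predicted noise (simplified for clarity)."""
    betas = torch.linspace(1e-4, 0.02, timesteps)  # noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                         # begin as pure noise
    for t in reversed(range(timesteps)):
        eps = model(x, t)                          # predicted noise at step t
        # DDPM mean update: remove the predicted noise component.
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:                                  # re-inject a little noise
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x                                       # the generated image tensor
```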
However, the company has faced a turbulent stretch since the release of the model's third iteration, SD3.
The model's restrictive licensing terms, a stark departure from its open-source origins, sparked outrage within the AI community.
Concerns arose about the terms stifling innovation and favouring commercial interests over collaborative development. Despite attempts to appease the community with a revised license, the damage was done.
The issues surrounding SD3 extended beyond licensing disputes. The model's launch was plagued by technical difficulties, including incompatibility with existing systems and performance issues on consumer-grade hardware.
Why It Matters
The events at Stability AI underscore the complex challenges AI companies face as they navigate the delicate balance between commercialisation and open-source collaboration.
The controversy surrounding SD3 also demonstrates the power of collective action within the AI community, showing how user feedback can significantly influence the development and direction of AI technologies.
📰 Article by Umbra AI
📄 License update post by Stability
OpenAIβs Security Breach
In early 2023, OpenAI suffered a security breach where a hacker accessed internal discussions about its AI technology.
While not deemed a national security risk, the incident raised concerns about protecting AI innovations, particularly amidst the US-China AI race.
OpenAI addressed the breach internally and has since strengthened its security measures.
Why it Matters
The breach highlights the vulnerability of even leading AI companies to cyber threats.
As AI advances, safeguarding these technologies becomes crucial, especially with growing concerns about intellectual property theft and potential misuse.
This incident emphasises the need for robust security protocols and ongoing vigilance in the rapidly evolving AI landscape.
📰 Article by The New York Times
Another Startup Acquihire by Big Tech
Big tech companies like Amazon and Microsoft are using a new strategy to circumvent antitrust concerns in the AI industry: "reverse acquihires." Instead of buying AI startups outright, they hire most of the employees and license the technology, sidestepping regulatory review.
This trend is exemplified by Amazon's recent absorption of talent from Adept AI and Microsoft's similar move with Inflection AI.
This strategy is driven by intense scrutiny from EU and US regulators, who have increasingly moved to block acquisitions to reduce monopolisation in the tech sector.
Why It Matters
This strategy could further consolidate the AI industry, potentially stifling innovation and reducing competition.
For established players, it's a way to quickly acquire talent and technology without regulatory hurdles.
The rise of reverse acquihires highlights the need for regulators to adapt to new strategies and ensure fair competition in the rapidly evolving tech landscape.
📰 Article by The Verge
That's a Wrap!
If you want to chat about what I wrote, you can reach me through LinkedIn.
Or give my editor a bell through his LinkedIn here.
If you liked it, give it a share!