πŸ”₯ Breaking
Anthropic Unveils Claude Mythos Preview and Project Glasswing

Anthropic pulled the covers off its most capable model yet this week, announcing Claude Mythos Preview together with Project Glasswing, a carefully scoped initiative that puts the new model into the hands of cybersecurity defenders first. Announced on 7 April, the rollout is unusual: Anthropic is holding Mythos back from general release and instead routing it through partners working on the world’s most critical software.

Mythos performs strongly across standard benchmarks, but its most striking capability is in computer security. Over the past several weeks, Anthropic says it used Mythos Preview to autonomously identify thousands of previously unknown vulnerabilities across every major operating system and browser, including a 17-year-old remote code execution flaw in FreeBSD. Through Project Glasswing, the model is being made available to a small group of partners including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft and Nvidia so they can shore up critical defences before models with similar capabilities become widely available.

The announcement matters because it reframes how a frontier lab can ship a model that is plainly too dangerous to release openly. Rather than sitting on the work or relying solely on refusals, Anthropic is giving defenders a head start. For enterprises, it also signals a new reality: the bar for offensive cyber capability in commercial AI is climbing fast, and the window to patch long-standing vulnerabilities is shrinking.

Why This Matters: Editor's Analysis

This is not just a product launch; it is a new playbook for shipping dangerous capabilities. For years, the AI safety debate has lived at the two extremes: release everything and let the community react, or lock everything down and hope nobody else catches up. Anthropic has carved out a third path with Glasswing. By routing Mythos to a vetted group of defenders running some of the world's most critical infrastructure, the company is turning a "too dangerous to ship" moment into a coordinated defensive exercise that favours the good guys. It is the most concrete example so far of what "responsible scaling" looks like when the risk is no longer theoretical.

The commercial and regulatory implications are enormous. Enterprises that assumed they had years to patch long-standing vulnerabilities now know that a frontier model can find thousands of zero-days on its own. Defenders with Glasswing access get a head start; everyone else is suddenly on a shorter clock. Expect CISOs, regulators and insurers to start asking hard questions about whether their suppliers are inside Glasswing or its equivalents, and whether mandatory disclosure rules should follow this kind of discovery at scale.

The bottom line: Mythos is a reminder that frontier capability no longer arrives with a press release and an API. The real question for 2026 is not who has the most powerful model, but who gets to use it first, under what conditions, and with what obligations to the rest of us.

Also Major This Week: Runners-Up
πŸ“Š
Key Statistics & Insights
The numbers that defined AI this week
Apr 4 – 10, 2026
The Week's Defining Trend: Intelligence Brief

The Great AI Capability Split. This week made it clear that the AI industry is splitting into two parallel races. On one side, closed frontier labs like Anthropic and OpenAI are spending tens of billions on compute and choosing who gets to use their most dangerous capabilities first. On the other, Chinese and open-weight players like Z.AI are closing the gap on benchmark performance and shipping models anyone can run. Everything from the Anthropic-Google-Broadcom deal to GLM-5.1’s SWE-Bench Pro win and the Glasswing launch points at the same reality.

The signal for enterprise leaders: raw capability is no longer the scarce resource. Trusted, well-governed deployment is. The companies that figure out how to use frontier models safely, inside proper permissioning and audit frameworks, will pull ahead faster than those still chasing the biggest benchmark number.

πŸ”
This Week’s Spotlight
Deep-dive on the stories every AI professional needs to understand
πŸ“’
Google Syncs Notebooks Across Gemini and NotebookLM
Google made one of its most consequential product moves of the year this week, launching Notebooks as a first-class feature inside the main Gemini app and tying it directly to NotebookLM. Announced on 8 April, the update brings the source-grounded research workspace that turned NotebookLM into a cult favourite directly into the assistant most paying Google customers already use every day. Inside Gemini, Notebooks act as persistent project workspaces where you can collect chats, uploaded files, links and custom instructions in one place. Anything you add to a notebook in Gemini shows up in NotebookLM and vice versa, so you can start a research project in one app and jump into the other to use features like Video Overviews, Audio Overviews and Infographics without re-uploading anything. Google AI Ultra, Pro and Plus subscribers on the web are first in line, with mobile, more countries and free users following. For schools, newsrooms and professional services firms, a single Google AI subscription now covers chat, research, agents and audio overviews.
Read on Google Blog β†’
πŸ›‘οΈ
Project Glasswing: Frontier AI Gets a Defender-First Rollout
Alongside Claude Mythos Preview, Anthropic launched Project Glasswing, a new model of how frontier AI capabilities reach the world. Instead of an open release, Mythos is going to a tightly scoped group of defenders at AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft and Nvidia, all of whom run infrastructure the rest of us depend on. Over a few weeks of testing, Mythos autonomously surfaced thousands of previously unknown vulnerabilities across every major operating system and browser, including a 17-year-old remote code execution flaw in FreeBSD. The Glasswing approach effectively says: if a model is capable enough to find zero-days at scale, the first people to use it should be the ones fixing them. It is a sharp contrast to how compute and capability used to reach the market, and it sets up a harder question for regulators about whether this should become the default for all future frontier models with similar offensive capabilities.
Read on Anthropic β†’
Spotlight: GLM-5.1 Breaks the Closed-Source Lead on SWE-Bench Pro (Open Source)

Chinese lab Z.AI released GLM-5.1 on 7 April: a 754-billion-parameter Mixture-of-Experts model published under the MIT licence with a 200K context window. GLM-5.1 topped SWE-Bench Pro with a score of 58.4, narrowly beating GPT-5.4 (57.7), Claude Opus 4.6 (57.3) and Gemini 3.1 Pro (54.2). It is the first open-weight model to lead the most demanding software engineering benchmark.

GLM-5.1 was built explicitly for long-horizon autonomous execution. Z.AI reports the model sustained up to 8 hours of independent software engineering work across thousands of tool calls in internal tests. Combined with permissive licensing, this makes GLM-5.1 the strongest argument yet that open-weight models can match or beat frontier closed models for the hardest enterprise coding and agent tasks.

Read the coverage β†’
πŸ’‘
ChatGPT & OpenAI News
All things OpenAI: models, products, business, and funding
β˜€οΈ
Claude & Anthropic News
Everything from Anthropic: Claude updates, research, and business moves
✨
Google Gemini News
Gemini models, Google AI products, and DeepMind research
πŸ’Ό
Corporate AI Developments
Big Tech AI investments, product launches, and strategic moves
πŸš€
AI Innovations
New models, open-source releases, and technical breakthroughs
πŸ“ˆ
AI in Business Applications
Real-world AI deployment across industries
βš–οΈ
Responsible AI
Policy, regulation, safety, ethics, and governance
🧠
Model Tracker
All frontier AI models: status, specs, and benchmark highlights
15 models tracked