
A new cybersecurity-focused variant of ChatGPT and an expanded access program put OpenAI in direct competition with Anthropic’s Project Glasswing — and raises fresh questions about who gets to wield the most powerful security AI.



Illustration shows the ChatGPT logo on a smartphone in Washington, DC, on March 15. (Photo by Olivier Douliery/AFP via Getty Images)

OpenAI said it is expanding its Trusted Access for Cyber program to "thousands of individuals and organizations," which will use the company's technology to root out bugs and vulnerabilities in their products.

The program will also incorporate GPT 5.4 Cyber, a new variant of ChatGPT that OpenAI says is specifically optimized for cybersecurity tasks. OpenAI's goal with this release is to make advanced cybersecurity tools more widely accessible.

The company said access to the program and cybersecurity-focused model will still be governed by "strong" Know-Your-Customer and identity verification rules to help prevent the model's spread to bad actors.

"Our goal is to make these tools as widely available as possible while preventing misuse," the company said in a blog post published Tuesday. "We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn't."


OpenAI’s announcement comes one week after Anthropic rolled out Project Glasswing, a similar effort that seeks to provide major tech companies with Claude Mythos, an unreleased model that Anthropic officials have claimed is too dangerous to sell commercially.

OpenAI officials noted that they publicly announced the Trusted Access for Cyber program months earlier. They have also quietly avoided direct comparisons between Mythos and GPT 5.4 Cyber.

Cybersecurity experts in the U.S. and UK have described Mythos as a significant improvement over previous frontier models at identifying (and potentially exploiting) cybersecurity vulnerabilities, though debate and speculation remain about the model's ultimate impact on information security.

Similarly, GPT 5.4 Cyber has been fine-tuned for testing and vulnerability research, though OpenAI says it plans to make iterative improvements to the program as lessons are learned.

The company plans to allow a broader group of cyber operators to use the model to protect critical infrastructure, public services and other digital systems. The company said it is also leery of having too much influence over which industries or sectors ultimately take part in the program.

"We don't think it's practical or appropriate to centrally decide who gets to defend themselves," the blog stated. "Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability."

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor's degree in print journalism from Hofstra University in New York and a master's degree in public policy from George Mason University in Virginia.
