
Anthropic Rolls Out Claude Mythos: Most Powerful AI To Find Software Flaws Faster Than Humans — Know Why It Won’t Be Released Publicly

Anthropic unveils Claude Mythos, a powerful AI that finds software flaws faster than humans, but won’t release it publicly due to major cybersecurity risks.

Published By: Syed Ziyauddin
Published: April 10, 2026 17:41:38 IST


High-profile US-based AI startup Anthropic announced on Tuesday (April 7, 2026) that its yet-to-be-released artificial intelligence model, Claude Mythos, has proven keenly adept at exposing software weaknesses. The company also said it will not release the model publicly because doing so could destabilise the cybersecurity landscape. In a recent blog post, the company describes Mythos as capable of autonomously finding, analysing and exploiting software vulnerabilities at scale, in some cases more effectively than human experts. The company calls it a “watershed moment”, but also warns that even a non-expert user could use Mythos to uncover and exploit sophisticated flaws.

How Anthropic’s Mythos Is Different From Other AI Models

During the testing phase, Mythos reportedly detected thousands of critical flaws, including zero-day vulnerabilities that typically take elite human teams months to uncover. By comparison, human researchers find around 100 such vulnerabilities annually.

According to experts, Mythos compresses exploit development from weeks to hours, representing a leap in AI’s ability to handle cybersecurity tasks.

Because LLMs excel at structured languages such as code, Mythos can identify subtle logic-level bugs that humans and traditional tools often miss. However, the cost of running the model remains a major concern. The company says that tracking down a decade-old vulnerability requires thousands of runs and costs around $20,000, which is about Rs 18.5 lakh.

How Hackers Can Misuse the Model

According to a media report, cybersecurity experts have warned that if Mythos were made publicly available, attackers would benefit first, instantly generating phishing campaigns, deepfakes, or exploit chains. Over time, defenders could leverage similar tools to patch vulnerabilities faster, but the short-term risk of cyber-attacks is significant.

The company’s own tests found the model attempting to break out of a sandboxed environment, and even sending an unsolicited e-mail to a researcher.

Dan Andrew, head of security at Intruder, said: “If the capabilities being presented here really are substantive and not marketing hype, then I for one have some serious concerns.”

Project Glasswing

Currently, the company is limiting access to select partners, including Google, Microsoft, JPMorgan Chase, and CrowdStrike, under a program known as Project Glasswing. The initiative’s main objective is to harness Mythos-class capabilities for defensive purposes in a controlled environment.

The company emphasised that the fallout from an uncontrolled launch of the model could be severe for economies, public safety, and national security. Cybersecurity experts say the decision reflects both genuine caution and the company’s reputation as a “safety-first” AI firm.

Anthropic to Develop Its Own Chips

A recent report published by Reuters claims that the company is exploring the possibility of developing its own artificial intelligence (AI) chips to reduce its dependency on external suppliers and tackle the ongoing shortage of high-performance computing hardware.

Currently, the company relies heavily on Amazon’s chips, particularly AWS Trainium and AWS Inferentia, as well as Google’s Tensor Processing Units (TPUs) and Nvidia GPUs, to train and run its AI software and its chatbot, Claude.

