
Why Are Anthropic And OpenAI Hiring Experts On ‘Dirty Bombs’? The Shocking Reason Behind Big AI Firms’ New Safety Push Amid US-Israel-Iran War

Amid rising tensions in the Middle East following US-Israeli strikes on Iran and Tehran’s retaliatory actions in the Gulf, concerns are growing over whether Iran could opt for a less lethal but highly disruptive tactic, such as deploying a 'dirty bomb'.

Published by Namrata Boruah
Published: March 24, 2026 13:19:35 IST

US-based AI company Anthropic is hiring a specialist in chemical weapons and high-yield explosives, not to build such weapons but to prevent its AI systems from being misused. The position, advertised on LinkedIn, calls for a background in chemical weapons defence and an understanding of radiological dispersal devices, otherwise known as dirty bombs. The company’s stated aim is to reinforce safety guardrails so that its AI tools cannot be used to produce harmful instructions or assist in dangerous operations.

Why Are Anthropic And OpenAI Hiring Experts On ‘Dirty Bombs’?

According to a report by the BBC, Anthropic is not alone in this approach. OpenAI, the company behind ChatGPT, has advertised similar positions covering biological and chemical risks, offering high salaries to attract top specialists. These hires are part of a broader trend in the AI sector of working proactively to avoid worst-case scenarios, as worries intensify that highly capable AI models could be exploited to create or enhance weapons. The approach reflects a growing focus on AI safety, with companies seeking to pre-empt threats before they materialise.

Nevertheless, the move has generated apprehension among professionals. Critics point out that it may be unsafe to let AI systems handle sensitive knowledge, even in the name of safety. Other scholars caution that, to date, no international framework regulates AI engagement with high-risk areas such as chemical or radiological weapons, raising concerns about control and accountability. At the same time, tensions are rising between AI companies and governments, especially over the military use of such technologies, according to the BBC.

Who Might Use A ‘Dirty Bomb’?

More broadly, these hiring moves appear to be part of a larger shift in the AI race. Companies such as Anthropic are attempting to strike the right balance between the speed of innovation and accountability, so that their systems do not become instruments of destruction. As AI grows more capable, the industry is turning to domain experts for prevention, seeking to ensure the technology is not abused while navigating complicated ethical and geopolitical questions.

