US-based AI company Anthropic is hiring a specialist in chemical weapons and high-yield explosives, not to build such weapons but to prevent its AI systems from being misused. The position, advertised on LinkedIn, calls for a background in chemical weapons defence and an understanding of radiological dispersal devices, otherwise known as dirty bombs. The company's stated aim is to reinforce its safety guardrails so that its AI tools cannot be used to produce harmful instructions or assist in dangerous operations.
Why Are Anthropic And OpenAI Hiring Experts On ‘Dirty Bombs’?
As stated in a report by the BBC, Anthropic is not alone in this approach. OpenAI, the company behind ChatGPT, has advertised similar positions covering biological and chemical risks, offering high salaries to attract top specialists. These hires are part of a broader trend in the AI sector of taking a proactive stance against worst-case scenarios, particularly as concerns have grown that highly capable AI models could be exploited to create or enhance weapons. The approach reflects an intensified focus on AI safety, with companies seeking to preempt threats before they materialise.
Nevertheless, the move has generated apprehension among professionals. Critics point out that it may be unsafe to let AI systems handle sensitive knowledge, even in the name of safety. Other scholars caution that no international framework yet regulates AI engagement with risk areas such as chemical or radiological weapons, raising concerns about control and accountability. At the same time, tension is rising between AI companies and governments, especially over the military use of such technologies, according to the BBC.
Who Is Going To Drop A ‘Dirty Bomb’?
Overall, the hiring controversy appears to be part of a broader shift in the AI race. Companies such as Anthropic are attempting to strike a balance between the speed of innovation and accountability, so that their systems do not become instruments of destruction. As AI capabilities grow, the industry is moving towards prevention, bringing in domain experts to ensure the technology is not abused and to navigate the complicated ethical and geopolitical issues involved.