
Stephen Hawking, Elon Musk, and Hundreds More Call for Ban on Autonomous Weapons

Written by Josh Wolford
According to Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, and hundreds of AI and robotics researchers, governments should ban autonomous weapons in order to prevent a “military AI arms race.”

In a letter with more than 1,000 signatories, Musk, Hawking, and others say that most AI researchers “have no interest in building AI weapons, and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.”

The letter, which will be officially announced at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires, is organized by the Future of Life Institute. FLI describes itself as “a volunteer-run research and outreach organization working to mitigate existential risks facing humanity. We are currently focusing on potential risks from the development of human-level artificial intelligence.”

According to the organization, its mission is “to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future.”

And to FLI and the signatories of this open letter, flying death robots do not an optimistic future make.

Here’s the full text of the letter:

    Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

    Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

    Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

    In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

Elon Musk, Steve Wozniak, and Stephen Hawking have all gone on record plenty of times with concerns about artificial intelligence.

Image via Stephen Hawking, Facebook
