
A Dogfight Renews Concerns About AI’s Lethal Potential


Alphabet’s DeepMind helped pioneer deep reinforcement learning. A California company used the technique to create an algorithm that defeated an F-16 pilot in a simulated dogfight.

IN JULY 2015, two founders of DeepMind, a division of Alphabet with a reputation for pushing the boundaries of artificial intelligence, were among the first to sign an open letter urging the world’s governments to ban work on lethal AI weapons. Notable signatories included Stephen Hawking, Elon Musk, and Jack Dorsey.

Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.

The episode shows DeepMind caught between two conflicting desires. The company doesn’t want its technology used to kill people. At the same time, publishing research and source code helps advance the field of AI and lets others build on its results, but it also allows them to adapt that code for their own purposes.

Others in AI are grappling with similar issues, as more ethically questionable uses of AI, from facial recognition to deepfakes to autonomous weapons, emerge.

A DeepMind spokesperson says society needs to debate what is acceptable when it comes to AI weapons. “The establishment of shared norms around responsible use of AI is crucial,” she says. DeepMind has a team that assesses the potential impacts of its research, and the company does not always release the code behind its advances. “We take a thoughtful and responsible approach to what we publish,” the spokesperson adds.

The AlphaDogfight contest, coordinated by the Defense Advanced Research Projects Agency (Darpa), shows the potential for AI to take on mission-critical military tasks that were once exclusively done by humans. It might be impossible to write a conventional computer program with the skill and adaptability of a trained fighter pilot, but an AI program can acquire such abilities through machine learning.
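To make that distinction concrete, the sketch below shows the core idea of reinforcement learning in miniature: instead of hand-coding rules for every situation, the program learns which actions pay off by trial and error against a reward signal. This is an illustrative, hypothetical toy (a one-dimensional “pursuit” task with made-up rewards and hyperparameters), not the far larger neural-network systems used in the AlphaDogfight trials.

```python
# Illustrative only: minimal tabular Q-learning on a toy 1-D "pursuit" task.
# The environment, reward, and hyperparameters are invented for this example.
import random

N_POSITIONS = 10          # the agent can stand on cells 0..9
TARGET = 7                # the cell the agent should learn to reach
ACTIONS = [-1, +1]        # move left or move right

# Q-table: estimated future reward for each (state, action) pair
q = {(s, a): 0.0 for s in range(N_POSITIONS) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(2000):
    state = random.randrange(N_POSITIONS)
    for step in range(20):
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])

        next_state = min(max(state + action, 0), N_POSITIONS - 1)
        reward = 1.0 if next_state == TARGET else 0.0

        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

        state = next_state
        if reward > 0:
            break

# After training, the greedy policy moves toward the target from any starting cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_POSITIONS)])
```

Deep reinforcement learning of the kind DeepMind popularized replaces this lookup table with a neural network, which is what lets the approach scale from toy problems to tasks as complex as air combat maneuvering in simulation.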

“The technology is developing much faster than the military-political discussion is going,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, the organization behind the 2015 letter opposing AI weapons.

The US and other countries are rushing to embrace the technology before adversaries can, and some experts say it will be difficult to prevent nations from crossing the line to full autonomy. It may also prove challenging for AI researchers to balance the principles of open scientific research with potential military uses of their ideas and code.

Without an international agreement restricting the development of lethal AI weapons systems, Tegmark says, America’s adversaries are free to develop AI systems that can kill. “We’re heading now, by default, to the worst possible outcome,” he says.

US military leaders—and the organizers of the AlphaDogfight contest—say they have no desire to let machines make life-and-death decisions on the battlefield. The Pentagon has long resisted giving automated systems the ability to decide when to fire on a target independent of human control, and a Department of Defense Directive explicitly requires human oversight of autonomous weapons systems.

But the dogfight contest shows a technological trajectory that may make it difficult to limit the capabilities of autonomous weapons systems in practice. An aircraft controlled by an algorithm can operate with speed and precision that exceeds even the most elite top-gun pilot. Such technology may end up in swarms of autonomous aircraft. The only way to defend against such systems would be to use autonomous weapons that operate at similar speed.

“One wonders if the vision of a rapid, overwhelming, swarm-like robotics technology is really consistent with a human being in the loop,” says Ryan Calo, a professor at the University of Washington. “There’s tension between meaningful human control and some of the advantages that artificial intelligence confers in military conflicts.”

AI is moving quickly into…

