The use of artificial intelligence in combat robots has sparked numerous debates, with scientists and entrepreneurs urging authorities worldwide to formulate a convention on the use of killer robots or ban them altogether.
Killer drones may have attacked people on their own for the first time, New Scientist reported, citing a report from the United Nations Security Council’s Panel of Experts on Libya. The North African country has been mired in civil war since the toppling of Muammar Gaddafi.
According to the documents, last year a Turkish-made drone, the STM Kargu-2, “hunted down” soldiers loyal to General Khalifa Haftar without being ordered to do so. The report did not elaborate on whether anyone was killed during the incident, but noted that if someone had been, it would mark the first known case of an artificial intelligence-based robot killing a person on its own.
According to the description on the manufacturer’s website, the Kargu-2 drone is designed for asymmetric warfare and anti-terrorist operations. It can be carried by a single person and operated in both autonomous and manual modes. The device uses machine learning to identify and attack targets. It carries an explosive charge and is used in kamikaze-style attacks.
The news is likely to reignite the debate over autonomous killer robots, which has been going on for several years. Proponents of AI-based robots argue they will reduce the risk…