
Autonomous weapons lower threshold to war

15 January 2024

Killer robots? Self-firing drones? There is a strong suspicion that an autonomous weapon system is already in use in Libya. However, regulation is still lacking. PhD student Taís Blauth studies the legal and ethical questions raised by these weapons.

Text: Jurgen Tiekstra / Photos: Henk Veenstra

When the Arab Spring caused political upheavals from Tunisia to Syria in 2011, civil war also broke out in Libya. More than a decade later, the conflict is still ongoing. By now, the Libyan battleground even has the dubious honour of probably being the first place in the world where an autonomous weapon system has attacked people on its ‘own’ authority.

At least, that is what researchers working for the United Nations are suggesting. In May 2020, the remains of a Turkish-made drone, the STM Kargu-2, were found in the eastern part of the capital city Tripoli. There is a suspicion that this drone was able to fire at rebel groups autonomously. This discovery shows that artificial intelligence is also a reality in war. From her present home in Sneek, Brazilian PhD student Taís Blauth studies the ethical and legal questions raised by the use of these lethal autonomous weapon systems. Blauth is an expert in international law. After completing a Bachelor’s degree at the Universidade Feevale in the Brazilian city of Novo Hamburgo and a Master’s degree at Durham University in the UK, she found a place at the University of Groningen’s Campus Fryslân.

Taís Blauth
"At first, it was about a ban, but how can something that may already exist, be banned?"

Stop killer robots

Within the UN context, there have been discussions about regulating autonomous weapon systems for years, but an international agreement is still out of reach, says Blauth. ‘This debate gained more attention because of an open letter in 2015 signed by many well-known personalities, including Elon Musk. Even before that open letter, these weapon systems were already a topic of discussion within the United Nations, but at the time without much publicity. A number of NGOs also launched the Stop Killer Robots campaign, through which they pushed the issue to the top of the UN agenda.’

‘Within the UN, there have been all kinds of discussions in recent years about what type of regulation we should have,’ says Blauth. ‘At first, it was about a ban, but how can something that may already exist be banned? Then, the discussion shifted to the principle of “meaningful human control over autonomous systems” as a minimum requirement for these weapons. What do we mean by that, though?’

UN resolution adopted

‘In early November,’ Blauth continues, ‘the United Nations General Assembly adopted a resolution that stated for the first time the need for the international community to address the challenges surrounding autonomous weapon systems. Even such a resolution still received five votes against: from Russia, Belarus, India, Mali, and Niger. There were also eight abstentions, including China, Israel, and Turkey.’ The NATO countries, including the Netherlands, voted in favour of the resolution. That does not mean these countries are against autonomous weapon systems, however. In a 2016 parliamentary letter, the Rutte II cabinet wrote that offensive autonomous weapons are part of a ‘permanent high-technological Dutch military force’.

Soldier using drone
"What if such an autonomous system comes into the hands of terrorists? ‘They will not worry about meaningful human control." Photo: armyinform.com.ua/ Wikicommons

‘Meaningful human control’

At the time, the government made what it saw as an all-important distinction between fully autonomous weapons and weapon systems under ‘meaningful human control’. There is still no international consensus on the definition of the latter term, but the cabinet’s interpretation at the time was that ‘humans do play an explicit role in programming the characteristics of the targets to be attacked and in the decision to deploy the weapon.’ Or, as then-PvdA minister Koenders and VVD minister Hennis-Plasschaert wrote in that parliamentary letter: ‘This puts people in the so-called wider loop of decision-making.’ The Dutch position has not changed since, as evidenced, for example, by a 2021 parliamentary letter signed by then-ministers Sigrid Kaag and Ank Bijleveld.

Offensive weapon systems are in demand

Defensive autonomous weapon systems have existed for years. The Dutch navy uses the Goalkeeper system, which automatically intercepts incoming missiles; the United States has a similar system called Phalanx. When autonomous weapons become offensive, however, it is a whole new ball game. Countries are keen to use them because they are thought to be faster and more accurate than humans, and they can reach the most dangerous places. But Blauth, along with others, has many doubts. For example: what if such an autonomous system comes into the hands of terrorists? ‘They will not worry about meaningful human control,’ says Blauth. Another concern is the potential ‘liability gap’: who can be held responsible if a war crime has been committed? ‘Another reason why it’s important to have a human behind the system.’

Taís Blauth
"We would not be able to programme ethical standards into a computer system."

Ethical standards

Yet another problem: is international law programmable? How do you teach a machine that civilians look different from military personnel (the ‘principle of distinction’)? How does a machine know just how far it is permitted to go (the ‘principle of proportionality’)? ‘Suppose,’ says Blauth, ‘that an autonomous weapon is tasked with killing a particular person, but that person is at a school with sixty children. Would that weapon system try to kill that terrorist or not? Those kinds of decisions are now made by military personnel who have been extensively trained on this and carry these ethical standards in their heads. We would not be able to programme anything like that into a computer system.’ A further problem is the psychological phenomenon of ‘automation bias’: how independent is human judgement, really, when set against an autonomous weapon system? It is often observed that people tend to value the conclusion of a computer system over their own understanding.

Taís Blauth with laptop
"That is one of the objections to these weapons: they lower the threshold to war, also because there is no risk of losing many soldiers."

A regular day at the office

Finally, there is the question of whether autonomous weapon systems, such as autonomously firing drones, might actually bring war closer. Investing in new offensive, rather than defensive, capabilities may increase the likelihood of countries turning to violence. Moreover, autonomous weapon systems are a further step in making war more clinical. Drones over a battlefield are already being controlled from military bases a thousand kilometres away. A next step could be that those drones are no longer operated at the actual moment of fighting, but are programmed in advance on a regular day at the office, by people in smart suits, in a nice air-conditioned room, a cup of coffee in hand, as Blauth describes. ‘They are not on a battlefield, with the smell and the noise, where you see people suffering and where you are in an environment that impacts your most natural and human moral values. A sense of distance is created that way, which sometimes allows violence to be used more easily. That is one of the objections to these weapons: they lower the threshold to war, also because there is no risk of losing many soldiers.’

This article has been taken from our alumni magazine Broerstraat 5 (in Dutch)

