Geneva launches new government round on autonomous weapons:

The danger of machines learning to kill by themselves


The acronym is LAWS: lethal autonomous weapon systems, also called killer robots. Far from being futuristic hallucinations of science fiction writers, they constitute one of the cores of the current arms race.


The US is pursuing an aggressive plan to develop various types of semi-autonomous and autonomous weaponry, conducting both basic and applied research. Central aspects include improving machines’ perception, reasoning and intelligence, as well as collaborative interaction between machines and humans.


As an example, one of the main programs – conducted by DARPA (Defense Advanced Research Projects Agency) – is OFFSET (OFFensive Swarm-Enabled Tactics), which aims to integrate swarms of small smart drones with ground robots and small infantry units operating in urban environments.


Another case is Project Maven – recently challenged by Google employees over the company’s involvement in it – which uses artificial intelligence to identify potential drone bombing targets.


The importance that the US Department of Defense attaches to autonomous systems is reflected in its investment plans. Expenditure planned for their development between 2016 and 2020 is $18 billion[1], in addition to other government-funded research initiatives on Artificial Intelligence (totaling some $1.1 billion in 2015).


But in addition to state investment, the US military industry relies on a widespread civilian “reserve army”, made up of companies, universities and innovative individuals, which dilutes the boundary between commercial and military applications. The case of autonomous armament is no exception, especially in relation to advances in artificial intelligence.


The new technological cold war: Artificial Intelligence


This US attempt to maintain an advantageous position in the war effort is challenged by several countries that also see AI as the key to future technological and economic supremacy.


China’s State Council released in July 2017 a roadmap that aims to make the country the global innovation centre in AI by 2030. To this end, it has a budget of more than one trillion yuan (about 147.8 billion US dollars).[2] In January 2018, the construction of an industrial park dedicated to AI was announced, 30 km from Beijing, in which the government aims to house some 400 companies in the sector. According to findings reported in the journal Nature, a war has broken out over AI talent – still relatively scarce – with companies attracting researchers with lavish salaries.


In the field of armament, China is determined to imitate the US model, opening the door to a web of civil-military research and application, as announced in the August 7, 2016 issue of the military newspaper PLA Daily.[3] It has even set up a Scientific Research Steering Committee, which industry analysts see as the Chinese counterpart of the DARPA agency.


South Korea, Russia, Israel, India, France and Great Britain in turn have their own plans. The South Korean government, following the Japanese model, is pursuing strong development of industrial and domestic robotics, but also military robotics, with an emphasis on unmanned drones and surveillance systems; it shares with Israel the dubious honour of having installed autonomous robot sentries on its borders.


Israel is one of the leading exporters of autonomous armed systems in the air, sea and land domains. Russia likewise does not want to be left behind in this scientific-technological battle. Hence – according to the SIPRI report “Mapping the Development of Autonomy in Weapon Systems” – it launched the Robotics 2025 program and created a Robotics Development Center in Skolkovo (SRC) in 2014.


According to the report, “French Defence Minister Jean-Yves Le Drian announced in February 2017 that AI will play an increasingly important role in the development of new military technologies to ensure that France is not placed at a disadvantage vis-à-vis its allies, the US and Britain.” With regard to the latter, the report states that “Great Britain is the country in Europe that invests the most in military research and development (R&D)”. Robotics and autonomous systems “are one of the 10 priority areas of the 2017 Industrial Modernization Strategy”, the report adds.


Something similar happens in Germany – the world’s second largest producer of industrial robotics – and India, where military research in this field is carried out by the Centre for Artificial Intelligence and Robotics, located in Bangalore, Karnataka State.


Beating the best, with no prior knowledge, in just 41 days


This headline, worthy of an advertisement for some mediocre training academy, accurately describes a dramatic advance announced in December 2017. Its protagonist is not human (at least not after its creation), and its name is AlphaGo Zero.


It is an artificial intelligence algorithm applied to the Chinese game of go, which allowed this new digital “player” to dethrone the reigning world champion – its also-electronic predecessor, AlphaGo – winning one hundred games without a single defeat. Both the incoming and outgoing champions were developed by DeepMind, a company acquired by Google (Alphabet Inc.) in 2014.


The “dramatic breakthrough” is literal. Until now, this digital technology’s ability to learn rested on processing terabytes of previously supplied data. In games such as go, chess, poker or noughts and crosses, that meant a large number of games played by the most brilliant human contenders.


AlphaGo Zero was not fed any games at all. It played against itself, starting from rookie level, until in 41 days it became – according to the note on the company’s website – “probably the best go player in the world”. The powerful computational structure used is known as a “neural network”, trained through “reinforcement learning”[4].
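The principle of learning purely from self-play can be illustrated with a toy sketch. This is not AlphaGo Zero’s actual architecture (which combines deep neural networks with Monte Carlo tree search); it is a minimal, assumed illustration of tabular self-play reinforcement learning applied to noughts and crosses, where the agent starts knowing nothing and improves only by playing against itself:

```python
import random

# Lines of three positions that win a noughts-and-crosses game.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def train(episodes=20000, alpha=0.3, epsilon=0.1, seed=0):
    """Learn a value table from self-play only (no human games)."""
    rng = random.Random(seed)
    value = {}  # board state (string) -> estimated value for 'X'
    for _ in range(episodes):
        board = [' '] * 9
        history = []
        player = 'X'
        while True:
            moves = [i for i, cell in enumerate(board) if cell == ' ']
            if rng.random() < epsilon:      # explore a random move
                move = rng.choice(moves)
            else:                           # exploit the learned values
                def score(m):
                    board[m] = player
                    v = value.get(''.join(board), 0.5)
                    board[m] = ' '
                    return v if player == 'X' else -v
                move = max(moves, key=score)
            board[move] = player
            history.append(''.join(board))
            w = winner(board)
            if w or ' ' not in board:
                # Reward from X's perspective: win 1, loss 0, draw 0.5.
                target = 1.0 if w == 'X' else (0.0 if w == 'O' else 0.5)
                # Back the result up through the states just visited.
                for state in reversed(history):
                    old = value.get(state, 0.5)
                    value[state] = old + alpha * (target - old)
                    target = value[state]
                break
            player = 'O' if player == 'X' else 'X'
    return value

value = train()
```

After training, positions where X has won carry higher learned values than positions where O has won, purely as a result of games the program played against itself. Scaling this idea up from a lookup table to deep networks and tree search is, in essence, the leap AlphaGo Zero made.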


This fabulous capacity for learning, so similar to human behaviour, is nevertheless not general but limited to a specific problem. But what if that limited problem were killing enemies? Suppose a current lethal instrument were coupled with an algorithm of this kind, together with image- and language-recognition systems tuned to specific parameters – for example, those of a peasant population in an area classified by military strategists as “hostile” – and suppose this system could also interact within seconds with online or previously collected databases. What would prevent such systems from ordering, on their own, the liquidation of those enemies?


There would certainly be “collateral damage” – to use the disastrous military jargon that separates planned killings from victims not involved in the conflict – but would it not be more “efficient” and “surgical” than simply bombing entire villages, as is done today in Yemen, Syria, Gaza, Iraq, Afghanistan or Somalia?


The same could be imagined with regard to the elimination of critical infrastructure, and a war could end in a matter of days. Forgetting its subsequent aftermath, of course.


It is precisely this argument, cynical and fateful as it sounds, that is put forward by those who promote the investment, research and development behind lethal autonomous weapons.


Beyond the arguments, there are clear reasons implicit in this deadly application of artificial intelligence systems to weapons. In addition to maintaining war supremacy, which implies the possibility of eliminating any emerging resistance to geostrategic hegemony, it is a question of preserving or increasing technological supremacy. This means clear economic benefits for the winners and enormous damage for the losers.


The winners belong to the North and the losers inhabit the global South, which will only widen the socio-economic gap and deepen technological colonialism.


The companies involved in this arms race anticipate multimillion-dollar contracts and spillover profits from the use of Artificial Intelligence in both military and civilian fields, while governments invest in an attempt to pull stagnant capitalism out of its fateful economic inertia.


It is precisely the arms business that undermines the argument for autonomy applied to armaments. Through the massive commercialization and corresponding dissemination of these systems, things can quickly turn against their promoters, with the danger that some of this technology will be supplied by a government or arms dealer to irregular groups.


Or that, finally, in a simplified version affordable for “domestic use”, autonomous weapons will be used by some deranged or psychologically disturbed citizen to play video games with his or her own fellow citizens as the targets.


Refined killing machines: time to stop this madness


The crazy ones are not just those who pull the trigger – who, in the case of autonomous weaponry, would in any case be difficult to identify. It is time to curb the imbalances of power, the manufacturing corporations, and the governments and banks that finance their production.


The enormous technological advance of Artificial Intelligence will not be halted, but it is possible to ban the use of AI for military purposes internationally.


A new round of the Convention on Certain Conventional Weapons (CCW) will begin on 9 April at the United Nations in Geneva, where government representatives will discuss the issue of autonomous armaments for the fifth consecutive year.


According to the Campaign to Stop Killer Robots, “after 3 years of exploratory meetings involving 80 states, a Group of Governmental Experts (GGE) was formalized in 2016” to address the issue. In addition to the meeting to be held from 9 to 13 April, a second meeting is scheduled for 27 to 31 August.


No substantive decisions will be taken at these meetings, but a report with proposals for the future will be produced and adopted during the Convention on Certain Conventional Weapons, which will take place between 21 and 23 November this year.


Activists point out that “empowering autonomous weapons systems to decide targets and attacks erodes the fundamental obligation under International Humanitarian Law (IHL) and violates established human rights”.


The Campaign to Stop Killer Robots, which brings together more than 50 international non-governmental organizations, aims to achieve a ban on the development, production and use of fully autonomous weapons systems through a binding international treaty, as well as through national laws and other measures.


On the other hand, whether the weapons achieve full autonomy or not, academics and experts indicate that the partial application of Artificial Intelligence in this field is already producing devastating effects.


Similarly, the arms race unleashed by these new developments must be seen as a squandering of resources that are urgently needed to improve the lives of billions of dispossessed people.


The one thing on which both promoters and detractors agree is that these weapons represent the third technological revolution in warfare, after gunpowder and nuclear weapons. This time we are called upon to prevent the revolution from taking place.



[1] Bornstein, J., ‘DOD autonomy roadmap: autonomy community of interest’, cited in the SIPRI report Mapping the Development of Autonomy in Weapon Systems, Boulanin, V. and Verbruggen, M., November 2017, p. 95.


[3] Cited in the SIPRI report, ibid., p. 103.

[4] Neural networks are a type of computational learning structure in which information flows through a series of interconnected nodes. Reinforcement learning is a computational self-learning technique in which the agent learns by interacting with its environment. (Based on “The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence, No. 8”, produced by the United Nations Institute for Disarmament Research (UNIDIR).)
