Experts on the malicious use of artificial intelligence and challenges to international psychological security
Today, we are closer than ever to the end of human history. Further progress in AI technologies and the large-scale malicious use of AI (MUAI) as a psychological weapon may lead to an even greater delay in an adequate public response to the dangerous behavior of antisocial actors.
The present publication is a result of the research project titled “Malicious Use of Artificial Intelligence and Challenges to Psychological Security in Northeast Asia,” funded by the Russian Foundation for Basic Research (RFBR) and the Vietnam Academy of Social Sciences (VASS). The responses to a targeted survey of nineteen experts from ten countries, and their subsequent analysis, aim to highlight the range of the most serious threats to international psychological security (IPS) posed by the malicious use of artificial intelligence (MUAI), and to determine how dangerous these threats are, which measures should be used to neutralize them, and what the prospects for international cooperation in this area are. The publication also attempts to determine whether MUAI will raise the level of threats to IPS by 2030. Special attention is paid to the situation in Northeast Asia (NEA), where the practice of MUAI rests on a combination of highly developed AI technologies in the region’s leading countries and a complex of acute regional disagreements.
Artificial intelligence (AI) technologies, despite their high significance for social development, raise threats to international psychological security (IPS) to a new level. There is a growing danger of AI being used to destabilize economies, political situations, and international relations through targeted, high-tech, psychological impacts on people’s consciousness. Meanwhile, crisis phenomena are rapidly increasing in frequency, number, and severity worldwide.
There is no need to explain here why, in 2020, the Doomsday Clock was set to 100 seconds to midnight for the first time in history and remained unchanged in 2021 (Defcon Level Warning System, 2021). Nor is there a need to explain why the UN Secretary-General is serving as a megaphone for scientists, warning bluntly that failure to slow global warming will lead to more costly disasters and more human suffering in the years ahead (Dennis, 2021). And there is no need to explain why the growth of the world’s billionaires’ combined fortune from 8 to 13 trillion dollars in the crisis year of 2020 (Dolan, Wang & Peterson-Withorn, 2021) does nothing to solve these and other acute problems of our time. That growth occurred against the backdrop of the sharpest economic decline in recent decades, hundreds of millions of newly unemployed people, and, according to the UN, a rise in the number of hungry people in the world from 690 million in 2019 (Kretchmer, 2020) to 811 million in 2020 (World Health Organization, 2021).
Economic problems, the degradation of democratic institutions, social polarization, and internal political and interstate conflicts, all against the backdrop of the ongoing COVID-19 pandemic and under conditions of rapid AI development, create extremely favorable ground for the malicious use of AI (MUAI). MUAI is an intentional antisocial action, whether explicit or implicit in form. Antisocial circles, ranging from individual criminals and criminal organizations, to corrupt elements in government, financial and commercial structures, and the media, to terrorists and neo-fascists, are already increasingly taking advantage of this situation, which is favorable to their purposes.
The manipulation of public consciousness is especially destructive at historical moments of crisis. The inhumanity of fascism became fully apparent only after the deaths of 50 million people in the flames of the Second World War. Yet the technology of manipulating public consciousness, backed by funding from certain corporate structures, ensured Hitler’s victory in the Reichstag elections of 1933, a distant year but a highly instructive one for those alive today. It is hardly by accident that, today, the governments and parliaments of the USA, China, Russia, India, the EU countries, and other states and associations show, to varying degrees and in different ways, growing concern about the threat of high-tech disinformation on the Internet and the role of the leading media platforms that actively use AI technologies. The question is clear: can humanity collectively find a way out of an increasingly difficult situation, given a quantitatively and, increasingly, qualitatively higher level of manipulation of public consciousness?
Published 16/12/2021.