16.02.23 Analytics

Digital Policy Divergence: authoritarian states are attempting to construct digital totalitarianism while European countries fight against it

Re: Russia
The Roskomnadzor data leak, which has been widely scrutinised over the past week, shows that the Russian authorities are rapidly building a digital surveillance ecosystem comprising algorithms for monitoring and controlling digital networks, facial recognition, and political profiling. The Kremlin is trying, so far with limited success, to follow in the footsteps of China, where a robust system of ‘digital totalitarianism’ is already operational and, indeed, so successful that its developments are now being shared with other authoritarian regimes. Some EU governments would also like to expand their use of digital monitoring technologies for security purposes, but they face considerable pushback from civil society and from MEPs who seek to ban facial recognition altogether.

Last week, a series of investigations into the wide-reaching Roskomnadzor data breach was published by independent Russian media outlets. It was revealed that the Kremlin intends to employ neural networks to find and manage banned content on the Russian sector of the Internet, as well as artificial intelligence tools to predict where and when protests may occur.

The Bell has summarised the main findings from these investigations in a special review. First, the leaks reveal that, after the full-scale invasion began on February 24, Roskomnadzor began to prioritise monitoring anti-war publications online, including reports of civilian deaths in Ukraine, destruction of social infrastructure, Russian army casualties, and stories about Russian citizens refusing to participate in the war. Second, it appears that Roskomnadzor has a large social media monitoring department which searches for and analyses critical statements relating to Vladimir Putin personally. The collected data is then used to predict public reaction to his public statements (any mention of the president's health is monitored separately). Third, we learned from this report that between February 24 and November 10, Roskomnadzor identified 169,000 news stories as fake and flagged 40,000 Internet posts as 'encouraging protests'. The watchdog had taken down 102,000 and 27,000 posts respectively, and blocked a further 15,600 and 3,200. Finally, it has become evident that the Radio Frequency Centre, which is effectively the executive body of Roskomnadzor, is responsible for making inquiries into foreign agents on behalf of the Ministry of Justice.

Thus far, the search for content deemed objectionable has been conducted semi-automatically: simple keyword filters generate candidate material, and the results are then sorted manually. At the same time, work is underway to automate this process by developing neural networks that will serve as tools for the automatic monitoring not only of opposition figures but of all Runet users. Three AI-based automated monitoring systems are currently in development. The ‘Vepr’ system will look for ‘information pressure points’ and forecast where and when protests might erupt. ‘MIR’ will handle fully automated searches for prohibited content. ‘Oculus’ is designed to identify calls for protest in images and videos, and to recognise the faces of demonstrators. However, according to the very same ‘leaked’ Roskomnadzor papers, none of these systems is yet operational. Their development has been hampered by sanctions and by a shortage of skilled specialists willing to work for the Russian government.
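To illustrate the scale of the gap between this semi-automatic stage and full automation, the keyword-filter step can be sketched in a few lines of Python. Everything below — the keyword list, function name, and sample posts — is hypothetical and illustrative only; it is not drawn from the leaked documents and says nothing about Roskomnadzor's actual implementation:

```python
import re

# Hypothetical keyword list -- purely illustrative, not an actual filter set.
FLAGGED_KEYWORDS = ["protest", "rally", "casualties"]

def flag_posts(posts, keywords=FLAGGED_KEYWORDS):
    """Return the posts matching any keyword, i.e. the queue for manual review.

    This is the 'semi-automatic' part: the filter only narrows the stream;
    a human must still read and sort every flagged item.
    """
    pattern = re.compile("|".join(map(re.escape, keywords)), re.IGNORECASE)
    return [post for post in posts if pattern.search(post)]

posts = [
    "Weather update for the weekend",
    "A rally is planned downtown on Saturday",
]
print(flag_posts(posts))  # only the second post is flagged
```

The weakness of such filters is apparent even from this sketch: they match strings, not meaning, so euphemisms pass through while innocuous posts containing a keyword are flagged — which is precisely why everything still ends in a manual sorting queue, and why the authorities want neural-network classifiers instead.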

In January 2021, Russian police began actively using smart cameras to identify protesters in Moscow. Whereas the authorities could previously detain participants only at the protest itself, facial recognition now makes it possible to identify protesters retroactively and bring administrative charges after the fact.

The campaign against this technology is led by the civic organisation Roskomsvoboda, which collects the stories of individuals affected by the use of facial recognition technology. Human rights groups have pointed out that the use of such systems is, first and foremost, illegal (processing biometric data requires the individual's consent), and that their error rate is quite high, with many people falling victim to mistakes made by facial recognition algorithms.

Another element of ‘digital authoritarianism’ is political profiling: the use of digital technology for the systematic collection of information on individuals deemed potentially untrustworthy. The recent report ‘Network Profiling Technologies’ by the Network Freedoms Project describes the authorities' progress in such uses of digital technology. Systems to monitor compliance with quarantine restrictions, for example, became an important step towards legalising citizen surveillance during the pandemic. However, as the report notes, the Interior Ministry's plans to develop a system integrating data from disparate regions and departments into a single database have so far faltered.

Russia is following in the footsteps of China, the world leader in total surveillance and automated political control, in the development of digital authoritarianism. A recent report by the American Atlantic Council provides a detailed description of how China is designing and constructing its surveillance ecosystem to monitor both its citizens and the information space. Experts argue that the Golden Shield project, also known as the Great Firewall of China, and the national closed-circuit television (CCTV) network used for street surveillance lie at the core of this ecosystem. Beijing adheres to the concept of cyber sovereignty, wielding full control over China’s cyberspace and seamlessly conducting digital surveillance using artificial intelligence, big data, and biometrics collected by the government.

According to the Atlantic Council, China has achieved these successes through fostering public-private partnerships with large technology companies (a trend that can also be observed in Russia). Not only has Beijing become the poster child for digital authoritarianism, but it is also actively exporting digital surveillance technologies to the Global South. China's success has inspired authoritarian governments around the world as they attempt to build ‘indestructible’ dictatorships based on artificial intelligence.

It is worth noting, however, that democratic governments also use these digital surveillance technologies and plan to apply them more actively and extensively. A report published in 2021 by European Digital Rights (EDRi) describes how biometric mass surveillance technologies, such as the EURODAC database, are increasingly being applied to EU citizens. European governments first adopted them to monitor compliance with quarantine restrictions, but the EU now increasingly employs advanced technology, artificial intelligence, and biometrics to tighten border control and stop illegal migration. According to the latest draft of a European-wide bill to regulate the use of artificial intelligence, such technology can be used to identify illegal migrants and refugees. European NGOs and human rights activists contend that using artificial intelligence to control migration in this way is immoral and infringes basic human rights, as it is intended to identify individuals who would be labelled undesirable in the EU according to set criteria, with a rating assigned to each entrant. The use of such technology is opposed by civil society organisations, human rights activists, and MEPs. Germany has already passed legislation regulating the use of smart cameras: it is permitted only in exceptional circumstances, such as with the victim's voluntary consent, when there is a threat of a crime, or to comply with a court order.

In December 2022, representatives from 195 European civil society organisations published an open letter proposing amendments to the Artificial Intelligence Act to limit any potential mass surveillance of vulnerable groups. According to Politico, the European Parliament's three largest factions are proposing a total Europe-wide ban on the use of facial recognition technologies and other artificial intelligence tools in public spaces. This proposed European artificial intelligence law would specify where surveillance cameras could and could not be installed. MEP Dragos Tudorache of the Renew Europe faction has publicly stated that Europe must ‘avoid any and all risks of mass surveillance.’

Authoritarian and democratic governments alike have engaged in heated debates and traded accusations over the use of data to violate human rights. In response, major corporations such as IBM, Amazon, and Microsoft have suspended sales of facial recognition technology to government agencies. Experts at Politico believe that attempts to ban such technologies at the EU level will face stiff opposition from national governments, which are under pressure from populist discourses surrounding national security. Proponents argue that mass digital surveillance has the potential to reduce crime. As a result, EU interior ministers are advocating for a number of exemptions to allow law enforcement agencies to use mass surveillance technologies in cases such as the search for missing children, the prevention of terrorist attacks, and the apprehension of armed criminals.