GeoTech Cues December 18, 2020

The West, China, and AI surveillance

By Kaan Sahin (Guest Author)

Risks and opportunities

It is the year 2027: China has continually perfected its full-fledged nationwide surveillance architecture in the form of smart and secure cities as well as the social credit system. The results cannot be denied: thanks to artificial intelligence (AI), surveillance systems throughout the streets plaster the faces of jaywalkers on billboards, and drivers of speeding cars are immediately notified of their fines, leading to a new record low in traffic accidents.

At the same time, however, the government has employed AI surveillance systems as big-brother-type instruments of repression. For instance, AI tools have been honed to the degree that they can automatically grade comments critical of the government—be they made online or offline—and discipline citizens according to their statements. The punishments range from the reduction of social benefits to forced labor in detention camps. For non-nationals, preemptive entry bans have already been imposed. Civil society groups observe that these AI surveillance applications consolidate the robustness of authoritarian regimes, lead people to preemptively adjust their behavior in favor of the government's positions, and heavily compromise the human dignity of citizens.

Western governments are in a tricky situation: the effectiveness and sophistication of these systems are convincing. On the downside, authoritarian states use AI surveillance to track and control the movements of their citizens and non-nationals, collect data about their faces and gaits, and reuse the information for repressive purposes. Meanwhile, Chinese companies, which are at the forefront of developing and employing these systems, have been busy striking deals with several countries to export and install their smart city packages. Due to the lack of internal consolidation in Western states and of cooperation between them, as well as the absence of a dedicated approach towards AI surveillance, efforts to contain the spread of these systems and their destructive side effects have not been successful.

Given the current state of AI surveillance as well as the speed of development, the above scenario is not an unrealistic Orwellian dystopia, but rather a potential continuation of current international trends. AI surveillance tools in various forms are spreading globally, from facial recognition and early outbreak detection to predictive policing and gait recognition. Despite different legal restrictions, authoritarian and democratic states alike are increasingly employing these instruments to track, surveil, anticipate, and even grade the behavior of their own citizens.


The application of these AI surveillance tools is a cornerstone of an emerging trend towards digital authoritarianism: the collection and application of information by states using digital tools to achieve repressive levels of societal control. These tools serve as exponential accelerants of preexisting surveillance practices, through which regimes might achieve unprecedentedly effective authoritarian rule. They could also strengthen the attractiveness of digitally driven, authoritarian practices for fragile democracies.

Because of its high technological ambitions and authoritarian outlook, China is at the leading edge of these trends and is confronted with the allegation of exporting ‘authoritarian tech’ to other states in order to expand its political and economic influence and advertise a governance model opposed to democratic notions. Against this backdrop, Western actors like the United States, which possesses the innovation edge in most technologies, as well as the European Union (EU) and its member states, face a difficult challenge: balancing the development, use, and export of AI surveillance systems while not abandoning democratic norms like their authoritarian counterparts. 

The difficulty of this task, namely the effective use of technology on the one hand and the preservation of privacy, human rights, and dignity on the other, is particularly apparent in the ongoing COVID-19 pandemic. Whereas some authoritarian states are using AI- and data-driven tracking systems to mitigate the crisis in an unrestricted fashion, a debate is brewing in the West about whether state authorities are using the crisis to inch towards a surveillance state.

Without a doubt, the pandemic has revealed the risks of AI surveillance tools and has the potential to further accelerate the use of technologies for social control—especially in light of an ever-more data-intensive economy and society. Thus, the crisis presents an opportunity to kick off an international debate on how to set boundaries and use technology benevolently.

Western governments must find a way to address this growing trend of AI surveillance, especially since it will be difficult to persuade authoritarian regimes such as China to refrain from using it; the presumed advantages are all too tempting. Here, the United States, the EU, and other like-minded states should seize the moment created by the pandemic-driven increase in AI surveillance and adopt a multi-layered approach: Western states first have to work out for themselves the right balance between the effective use of AI surveillance systems and preserving the privacy, human rights, and dignity of their own citizens. Building on that, the West should present an alternative model to digital authoritarianism that comprises the use of AI surveillance tools for democratic ends. And last, a nuanced approach towards China and other authoritarian states employing these systems has to be developed, one that encompasses the will to cooperate where possible and to sanction collectively where necessary.

In order to shed light on the international trends in AI surveillance, this article will first describe the associated developments with a particular focus on China, the United States, and the EU; second, it will connect these trends to the ongoing pandemic; and last, it will present recommendations for enhancing an international approach to AI surveillance.

Defining AI surveillance

The AI Global Surveillance (AIGS) Index outlines three pivotal AI surveillance tools: smart/safe city platforms, facial recognition systems, and smart policing. These tools appear in different forms, are technically sophisticated, and are continuously evolving; facial recognition, for instance, is increasingly complemented by speech and gait recognition. Irrespective of their fields of use, the advantages of these systems in the eyes of state authorities are manifold: cost efficiency, reduced reliance on human workers, precise data analyses, and, more broadly, unprecedented possibilities for societal control.

Three aspects concerning the characteristics of AI surveillance are noteworthy in this context: first, these surveillance tools are not unlawful per se; their legitimacy always depends on the specific application and its societal context. For instance, AI surveillance tools can be used both on the battlefield and for wildlife preservation.

Second, while AI surveillance is one of the key elements of a growing digital authoritarianism trend, other digital instruments also fuel this globally spreading development, including Internet censorship and firewalls, government-made spyware, state-supported disinformation campaigns, and other forms of surveillance via drones or GPS tracking.

And third, there are diverging views concerning the ways in which and to what extent these tools should be used and deployed, illustrated in their development and application in the Chinese, US, and European contexts.

China

There are several reasons that China is globally at the forefront of the development, use, and export of these AI surveillance systems. First, in light of Beijing's "A Next Generation Artificial Intelligence Development Plan" and its general push for technological supremacy, the country is home to several cutting-edge AI surveillance companies and so-called unicorns (start-ups with a current valuation of $1 billion or more). Large companies such as Huawei, Hikvision, Dahua, and ZTE are developing these technologies in various forms, from AI-based video surveillance to full-fledged smart city packages. AI start-ups such as SenseTime (valued at $7.5 billion), Megvii ($4 billion), CloudWalk, and Yitu (both $2 billion) are the leading global players in facial recognition technology. In general, Chinese surveillance companies are in a dominant position; according to estimates, they "will have 44.59% of the global facial recognition market share by 2023."

Second, Chinese state authorities are striving to establish a social credit system—"a big-data-fueled mechanism, to become a powerful tool for enforcement of laws, regulations or other party-state targets […] The idea is to centralize data on natural persons and legal entities under a single identity (the Unified Social Credit Number), then rate them on the basis of that data, and treat them differently according to their behavior." The system is neither completed nor nationwide yet, but it is due to expand with the adoption of AI surveillance tools. The concept of collecting information about citizens in a centralized way is not in itself new, even in Western societies: the United States has criminal records and credit scores, and EU member states keep healthcare histories. What is new is that China is tracking and using types of data that most Western countries would refrain from, and that AI helps automate and scale surveillance to a degree that could realize the system's best and worst tendencies alike. China's strong AI-related industrial and technological sectors, authoritarian tendencies, government involvement in production and research, lax data privacy laws, enormous population, and even a certain degree of societal acceptance of state practices all create the perfect environment for AI surveillance development and deployment.
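To make the quoted mechanism more concrete, the following minimal Python sketch illustrates the general pattern the quote describes: records centralized under a single identifier, rated on the basis of accumulated data, and mapped to differential treatment. All field names, weights, and thresholds here are invented for illustration and do not reflect any documented implementation.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the pattern described above: data centralized
# under one identifier, scored, and translated into differential treatment.
# All labels, weights, and thresholds are invented, not documented values.
@dataclass
class CitizenRecord:
    unified_id: str                                               # single identity number
    events: list[tuple[str, int]] = field(default_factory=list)  # (label, points)

    def score(self) -> int:
        # Start from a neutral baseline and accumulate recorded behavior.
        return 1000 + sum(points for _, points in self.events)

def treatment(record: CitizenRecord) -> str:
    # Differential treatment derived purely from the centralized score.
    s = record.score()
    if s >= 1050:
        return "fast-track access to services"
    if s >= 950:
        return "standard treatment"
    return "restricted access (e.g., travel limits)"

record = CitizenRecord(unified_id="ID-0001")
record.events.append(("paid taxes on time", +30))
record.events.append(("traffic violation", -80))
print(record.score(), "->", treatment(record))  # 950 -> standard treatment
```

The point of the sketch is how little machinery such a system needs once the data is centralized: the political questions lie entirely in what is recorded, how it is weighted, and what consequences are attached.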

In this context, the current focal point of international criticism of China's AI surveillance usage is the Xinjiang region, which has become "an unfettered surveillance laboratory for surveillance giants to hone their skills and their technology platforms without the usual constraints." The combination of the suppression of Uighur and other minorities on the one hand and the testing and deployment of cutting-edge technology on the other is one of the most striking examples of digital authoritarianism. Recent revelations showed that Huawei has allegedly developed and tested a so-called "Uighur alarm," an AI-based face-scanning camera system that can detect members of the Muslim minority group and alert Chinese authorities in Xinjiang. According to the reports, Huawei developed these AI surveillance tools in cooperation with several domestic security firms.

Third, Chinese companies lead the way in exporting AI surveillance technologies internationally, reaching sixty-three recipient countries, with Huawei at the forefront, supplying at least fifty of them. Uganda, for example, acquired a nationwide system of surveillance cameras with facial recognition capabilities from Huawei in August 2019, and from 2018 onwards, state authorities in Zimbabwe have acquired facial recognition technologies from Hikvision and CloudWalk for border security and mass surveillance. The gathered data is also sent back to the Chinese companies' headquarters, "allowing the company to fine-tune its software's ability to recognize dark-skinned faces, which have previously proved tricky for its algorithms." Other countries that have received technologies from Chinese companies include Eritrea, Kenya, Serbia, Sri Lanka, the Philippines, Uzbekistan, and Venezuela. Even though China leads in the global export of these technologies, opinions vary on whether Beijing has an intentional strategy for spreading digital authoritarianism as a new ideological blueprint. Regardless, experts fear that China will provide these technologies to other countries in the context of its Belt and Road Initiative (BRI) in order to conduct state espionage.

Chinese technology companies such as ZTE, Dahua, and China Telecom are also eager to sway international standards bodies such as the International Telecommunication Union (ITU) on several forms of AI surveillance, including facial recognition, video monitoring, and city and vehicle surveillance. The standards proposed by Chinese companies grant state authorities broad application possibilities and rights, such as vast storage requirements for personal information, and name expansive fields of application, from "the examination of people in public spaces by the police [to the] confirmation of employee attendance at work."

Irrespective of whether or not China is intentionally promoting digital authoritarianism via its export of AI surveillance tools, it is providing mechanisms for unprecedented societal control all over the world. Moreover, its domestic deployment of these tools and the notions it has presented to international standards bodies differ from the practices and ideals of liberal democracies.

The United States

The blatant use and export of AI surveillance systems by Beijing has become an issue in the US-Chinese tech confrontation. In January 2020, then-US Defense Secretary Mark Esper said that China is becoming "a 21st century surveillance state with unprecedented abilities to censor speech and infringe upon basic human rights. Now, it is exporting its facial recognition software and systems abroad." This opinion was echoed by members of Congress from both parties, including House Intelligence Committee Chairman Adam Schiff and Senator Marco Rubio. Democratic Senator Brian Schatz even proposed the "End Support of Digital Authoritarianism Act" in the summer of 2019, which would have barred companies from countries with poor human rights records from participating in the Face Recognition Vendor Test (FRVT) held by the National Institute of Standards and Technology (NIST), known as the gold standard for measuring the consistency of facial recognition software.

The most salient reaction by US authorities occurred in October 2019, when the Commerce Department put eight Chinese companies and twenty Chinese government agencies on its entity list. Those companies and agencies are accused of "human rights violations and abuses in the implementation of China's campaign of repression, mass arbitrary detention, and high-technology surveillance against Uighurs, Kazakhs, and other members of Muslim minority groups in Xinjiang." US companies are now prohibited from exporting high-tech equipment to the listed Chinese agencies and firms, among them the three facial recognition unicorns SenseTime, Megvii, and Yitu, as well as the world-class surveillance camera manufacturers Dahua and Hikvision. However, some of these blacklisted companies have managed to circumvent the sanctions and still export their products to Western countries.

However, the United States itself is confronted with accusations of hypocrisy: in the aftermath of the September 11 terror attacks, US intelligence services massively expanded their surveillance practices, as the Snowden revelations made apparent in 2013. Major US-based tech companies have also exported AI surveillance technologies all over the world. Surveillance systems have likewise been used by federal or state authorities beyond intelligence gathering—for instance, on the US-Mexico border, where "an array of high-tech companies purvey advanced surveillance equipment." Still, it would be misleading to draw parallels between the Chinese and US approaches: there are no signs that US authorities will deploy a similarly all-encompassing system to publicly surveil or even grade citizens. Furthermore, companies headquartered in the United States are characterized by largely transparent corporate structures within a rule-of-law framework, as opposed to the Chinese model.

In general, federal and state authorities in the United States are at the very beginning of considering how to regulate AI surveillance deployments, especially facial recognition. Existing stipulations impose varying degrees of restriction and control, leading to a patchwork of regulation across the country. In San Francisco, San Diego, and Oakland, city agencies are banned from using facial recognition technologies, while other cities such as Detroit allow restrained use of facial recognition by their police departments. Recently, the Portland City Council prohibited both public and private use of facial recognition technology.

In the absence of nationwide regulations, federal lawmakers have begun efforts to pass national legislation on facial recognition technology. For instance, Senator Jeff Merkley proposed the "Ethical Use of Facial Recognition Act," which would "forbid the use of the tech by federal law enforcement without a court-issued warrant until Congress comes up with better regulation" and would establish a commission to further assess facial recognition technology and propose guidelines. Discussion of regulating facial recognition technology has also gained traction in congressional hearings over the last year.

In light of George Floyd's murder and the subsequent protests against racism and police brutality across the country in early June 2020, IBM was the first company to announce, in a letter addressed to lawmakers, that it will cease to sell, develop, or research general-purpose facial recognition technology. Microsoft and Amazon followed to some extent and announced a pause in the sale of such technologies to police forces. Both tech giants have publicly stated that they will not offer facial recognition technology to state and local police departments until national laws that respect human rights are enacted, and both companies have already called on lawmakers for such regulation.

In sum, the United States has recently increased its attention to and activity around these issues at home and abroad. However, a values-based approach addressing the trend of digital authoritarianism has played a comparatively minor role in the context of the current technological rivalry with China. Further, the regulatory landscape for AI surveillance, in particular facial recognition technology, remains a patchwork across states and cities, and comprehensive national legislation is still absent.

The European Union

In terms of artificial intelligence and the digital realm in general, the European Union has been following a 'human-centered approach,' which it is eager to promote globally as a unique selling point. Under the previous European Commission of President Jean-Claude Juncker (2014-2019), the High-Level Expert Group on Artificial Intelligence issued the Ethics Guidelines for Trustworthy Artificial Intelligence, promoting the idea that so-called trustworthy AI should be lawful, ethical, and robust. These guidelines already point to the necessity of "differentiating between the identification of an individual vs the tracing and tracking of an individual, and between targeted surveillance and mass surveillance."

The new Commission under Ursula von der Leyen (2019-2024) has signaled that this human-centered approach will be developed further. In terms of AI surveillance, the leak of the first draft of the Commission's white paper on AI in January 2020 made headlines for its envisioned five-year ban on the use of facial recognition systems in public areas. However, the official version issued one month later contained watered-down language with no mention of a potential ban. The document instead adopts a risk-based and sector-specific approach to setting boundaries for AI systems, including facial recognition software. It says that the "gathering and use of biometric data for remote identification purposes, for instance through deployment of facial recognition in public areas, carries specific risks for fundamental rights." In order to identify these risks and potential areas for regulation concerning remote biometric identification, "the Commission will launch a broad European debate on the specific circumstances, if any, which might justify such use, and common safeguards." The white paper also puts particular emphasis on existing data protection rules, for instance the EU's General Data Protection Regulation (GDPR), which allows the processing of biometric data only in very specific cases. Concrete EU legislative proposals on how to regulate AI applications are planned for the first half of 2021.

The white paper on AI, and the EU in general, falls short of addressing the global trend of digital authoritarianism. The EU follows an inward-looking approach and has yet to display any grand aspirations to directly tackle the global element of the AI surveillance challenge. Against this backdrop, EU officials and high-ranking politicians from member states have been rather hesitant to criticize China's social credit system or the repressive use of AI surveillance in Xinjiang.

The balancing act between an outright ban and the restrained use of these technologies is visible in their implementation. A German case illustrates the difficulty of the trade-off between effective crime control and data privacy concerns: after the Federal Ministry of the Interior tested facial recognition cameras at the Berlin-Südkreuz train station, it planned to deploy these systems at one hundred and thirty-four railway stations and fourteen airports all over Germany. However, after lawmakers from the opposition and even from coalition partners, along with civil society, protested these plans, they were put on hold.

The approach of French state authorities is less restrained. In October 2019, the Ministry of the Interior announced its plans to use facial recognition technology in the framework of its national digital identification program, named Alicem, which would make France the first EU member state to use the technology for digital identity. However, the plans have provoked criticism from civil society, and questions have been raised about whether the deployment conforms with the GDPR. In another case, French regulators ruled against the use of facial recognition technology in high schools. In order to provide overall legal clarity, the French government announced in January 2020 that a legal framework for developing facial recognition for security and surveillance would be established soon. Furthermore, in the aftermath of the terror attacks in Nice in October 2020, several high-ranking French politicians have called for the installation of AI surveillance tools in public spaces to tackle terrorism.

As in the United States, Europe is at the beginning of regulating these technologies and finding a balance. However, with the AI white paper's risk-based and sector-specific approach and the GDPR, the EU has sketched a potential framework for promoting its positions in the international debate.

COVID-19 and tech surveillance: A mixed blessing

The deployment of AI by several countries to monitor, track, and surveil individuals during the ongoing pandemic is controversial. As the country with the first COVID-19 cases, China has been at the forefront of using AI surveillance systems to monitor whether individuals are adhering to social distancing measures and to trace the contacts of suspected or confirmed infected persons. Chinese facial recognition start-ups such as Megvii, Hanwang, and SenseTime modified their systems to identify those not wearing face masks in public and to detect fevers. Digital platforms and mobile networks are also used to track people's radius of movement and to process this data in AI-driven systems. Baidu, one of the Chinese tech giants, has installed infrared and facial recognition technology at the Qinghe railway station in Beijing that automatically takes a picture of each person's face and can check more than two hundred people per minute. In addition, "Chinese authorities are deploying drones to patrol public places, conduct thermal imaging, or to track people violating quarantine rules."

Moreover, state authorities have introduced the method of 'risk scoring,' which allocates "a color code—green, yellow, or red—that determine[s]…ability to take transit or enter buildings in China's megacities." The Chinese government has dubbed the extensive use of its surveillance technology for crisis mitigation "an all-out people's war on coronavirus." Thus, it is no surprise that some tracing apps were developed jointly by commercial enterprises, state authorities, and law enforcement agencies.
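The quoted color-code mechanism is essentially a set of rules mapping centrally held health and movement data to access rights. The sketch below is a purely hypothetical Python illustration of that kind of rule; the fields, time windows, and thresholds are invented, since the actual scoring logic of these health-code apps is not public.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Purely hypothetical sketch of a rule-based health color code; the fields
# and time windows are invented, as the real apps' logic is not public.
@dataclass
class HealthRecord:
    last_contact_with_case: Optional[date]  # most recent known exposure, if any
    left_outbreak_area: Optional[date]      # date of leaving a flagged region, if any

def color_code(record: HealthRecord, today: date) -> str:
    # Red: recent exposure -> barred from transit and buildings.
    if (record.last_contact_with_case is not None
            and today - record.last_contact_with_case < timedelta(days=14)):
        return "red"
    # Yellow: recent travel from a flagged area -> restricted movement.
    if (record.left_outbreak_area is not None
            and today - record.left_outbreak_area < timedelta(days=7)):
        return "yellow"
    # Green: no flags on file -> free movement.
    return "green"

print(color_code(HealthRecord(None, date(2020, 3, 1)), today=date(2020, 3, 5)))  # yellow
```

Even this toy version makes the governance problem visible: the rules are opaque to the person being scored, and whoever controls the underlying records controls the outcome.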

Other authoritarian states are deploying similar measures, though usually not to such an extent. For instance, AI data-crunching apps for contact tracing have been frequently used by members of the Gulf Cooperation Council, the political and economic alliance of all Arab states of the Persian Gulf. Russia, considered one of the main global drivers and implementers of AI surveillance alongside China, is using the pandemic to test its newly developed systems, too. In Moscow, for instance, a large-scale surveillance network of more than 100,000 cameras with facial recognition technology scours the city to determine whether people are violating their quarantine restrictions.

These cases show that AI surveillance technology already installed in predominantly authoritarian states can be repurposed to mitigate the crisis. At the same time, however, there is concern among civil society that the COVID-19 crisis will be exploited to solidify surveillance states. Even some democracies—mostly in East Asia—relied on similar tools early in the crisis. One reason that states such as Taiwan or South Korea are more willing to deploy these tools is that East Asian societies tend to be more collectivist than Western ones, particularly regarding issues of privacy. Yet these cases clearly illustrate the dilemma for democracies: choosing between effectively managing a crisis and potentially reducing privacy and human rights.

Recommendations for starting the international debate

The COVID-19 pandemic is an important moment in the global use and potential containment of AI surveillance. As put by Nicolas Wright, a recognized tech-surveillance expert at University College London (UCL): "Just as the September 11 attacks ushered in new surveillance practices in the United States, the coronavirus pandemic might do the same for many nations around the world. […] But neither the United States nor European countries have used the widespread and intrusive surveillance methods applied in East Asia."

Therefore, the COVID-19 crisis and the recent global awareness of these issues should be taken as a reason for the West to find a consolidated approach towards AI surveillance. The fact that globally operating US tech giants have stopped or restricted their involvement in facial recognition technology might add momentum to the discussion.

To that end, Western countries have to adopt a threefold approach. First, as seen with the domestic situations in the United States and the EU, Western governments and institutions must find the right way to regulate the application of AI surveillance systems domestically before they can engage successfully on the international level. The AI legislative proposals expected from the European Commission will be important for setting clear positions against the undemocratic use of AI surveillance tools; in the best case, these proposals will be imitated by other like-minded states. The US Congress should answer the calls of major tech companies and civil society and push for federal legislation to overcome the current domestic patchwork of AI rules. However, this will not be a one-time exercise for EU and US authorities, since the fast-paced development of AI applications will require constant and quick adaptation.

Second, Western states need to develop and present an alternative, human- and democracy-friendly model of AI surveillance to counter the trend of digital authoritarianism, so that other governments have a genuine choice. Here, potential disagreements between the EU and the United States concerning AI have to be resolved. It is well known that the two actors have different notions about the appropriate degree of AI regulation, with the latter preferring a more 'laissez-faire' approach. However, there are indications that, especially for AI surveillance, the disagreements are not too great. For instance, the recently adopted Statement on Artificial Intelligence and Human Rights from the Freedom Online Coalition (FOC), of which the United States and several EU states are members, notes the importance of preserving human rights in light of AI developments.

International collaboration on restricting the areas for AI surveillance is therefore critical, and the increase of AI surveillance tools in the midst of the pandemic is a unique opportunity to further drive the conversation in international fora. Public leaders might begin this discussion by building on the existing work of other organizations and countries—for example, the "Recommendation of the Council on Artificial Intelligence" by the Organisation for Economic Co-operation and Development (OECD). Other blueprints or references are the EU's white paper on AI and the UN Roadmap for Digital Cooperation, which foresees "multi-stakeholder efforts on global AI cooperation […] and use of AI in a manner that is trustworthy, human rights-based, safe and sustainable, and promotes peace."

According to a study by the German think tank Stiftung Neue Verantwortung, however, the current landscape is characterized by a "complex web of stakeholders that shape the international debate on AI ethics [and by] outputs that are often limited to non-binding, soft AI principles." Besides expert groups at the EU level, there are, for example, the Ad Hoc Committee on Artificial Intelligence (CAHAI) of the Council of Europe in the wider European context, two expert groups at the OECD level (comprising European states and the United States), and the Ad Hoc Expert Group for the Recommendation on the Ethics of AI at UNESCO. In none of these fora are all three actors—the EU, the United States, and China—members. Finding the right format for dialogue and developing consensus on these issues will therefore be an enormous challenge, given the different approaches among countries and the emerging triad of digital autocrats, fragile democracies or 'digital deciders,' and liberal democracies.

However, with signals from the incoming Biden administration pointing to more engagement in multilateralism, the EU and other like-minded states should at least agree among themselves on basic principles surrounding AI surveillance in order to convince others to adhere to their notions. Among the international fora mentioned, the OECD seems best suited, given its ongoing work on AI principles and its AI observatory as well as its membership. Furthermore, similar to the cyber consultations in which states exchange views on opportunities and threats in cyberspace, 'strategic AI consultations' between the foreign ministries of like-minded states can help to better grapple with the challenge of AI surveillance.

Even though the international debate on AI surveillance is preoccupied with the dangers the technology poses to human rights, there are positive examples worth mentioning. AI surveillance tools are used for taming wildfires, and AI recognition tools developed by Microsoft have repeatedly been used to detect and protect endangered species. Another field is the use of AI for medical applications. However, these positive examples have to be expanded, since the scale of their impact is limited compared to digital authoritarian implementations, including the envisaged public mass surveillance of 1.5 billion people by the Chinese government. In a common effort, several 'tech for good' areas can be jointly developed, which would also help mature and 'enshrine' an alternative to 'authoritarian tech.'

Concerning AI and data privacy, governments, private companies, and NGOs should further develop collaboration, standards, and international awareness for privacy-enhancing technologies (PETs). According to the European Union Agency for Network and Information Security (ENISA), PETs refer to "software and hardware solutions, i.e. systems encompassing technical processes, methods or knowledge to achieve specific privacy or data protection functionality or to protect against risks of privacy of an individual or a group of natural persons." The deployment of PETs, if well-conceived and properly executed, could strike a balance between using AI surveillance for effective crisis management on the one hand and protecting privacy on the other. This privacy-by-design approach has been repeatedly called for by Margrethe Vestager, the European Commission's Executive Vice-President for a Europe Fit for the Digital Age, and is one of the principles outlined in the EU's GDPR. In the United States, Senator Kirsten Gillibrand's recently proposed Data Protection Act contains the same themes. Alongside PETs, standards for these purposes could ease many data protection concerns (even though this will have limited effect for facial recognition).
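To illustrate what a PET can look like in practice, the following Python sketch shows one widely used technique, differential privacy: a data holder publishes a deliberately noisy aggregate so that the contribution of any single individual cannot be confidently inferred. This is a generic illustration of the concept, not a technique prescribed by ENISA or any of the sources cited above; the scenario in the comments is invented.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(flags: list[bool], epsilon: float = 0.5) -> float:
    # A counting query has sensitivity 1: one person's data changes the true
    # count by at most 1, so Laplace noise with scale 1/epsilon yields an
    # epsilon-differentially-private release.
    return sum(flags) + laplace_noise(1.0 / epsilon)

# Invented example: publish roughly how many people in a monitored crowd
# lacked face masks, without exposing whether any specific individual did.
crowd = [random.random() < 0.2 for _ in range(2000)]
print(f"true count: {sum(crowd)}, private release: {private_count(crowd):.1f}")
```

The smaller epsilon is, the stronger the privacy guarantee and the noisier the published figure; that tunable trade-off between crisis-management utility and individual privacy is exactly what the privacy-by-design approach envisions.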

Third, an approach towards China and other troubling users of AI surveillance tech has to be nuanced: cooperate where possible, but impose restrictive measures such as sanctions if needed.
The discussion in international fora should include China, which has its own notion of AI ethics and regulation (the Beijing AI Principles), as well as other authoritarian states. Beyond international collaboration, however, governments should further scrutinize companies exporting AI surveillance tools used in human rights abuses. In light of the US-Chinese tech confrontation, companies such as Huawei and ZTE have provoked criticism from governments and civil society groups alike for supplying surveillance technology as an instrument of political repression. Given the growing interrelationship between technological advances and possibilities for political repression, public leaders should consider international sanctions and clearly state that punitive actions will be imposed on companies and states in response to human rights abuses, not for reasons of economic and military competition.

Unequivocally, the private sector must be part of the debate, as the endeavor requires a multi-stakeholder approach. Companies should contribute their expertise in developing and handling these technologies and clearly show the benefits and challenges of applying AI surveillance tools. Private-sector leaders can commit themselves to supplying technology only for lawful use, as has already happened to some degree among American tech leaders. IBM has recently even called on the US Department of Commerce to develop new export rules for "the type of facial recognition system most likely to be used in mass surveillance systems, racial profiling or other human rights violations."

For like-minded Western countries, finding areas of cooperation with authoritarian states, especially China, will be of great importance. At the same time, however, certain practices that run contrary to the values of human rights and rule of law must be clearly addressed. With the Biden administration at the helm, a more cooperative spirit in the transatlantic relationship and beyond is expected. Further, the preservation and promotion of democratic values will most probably receive more attention. AI surveillance must come to the fore in the dispute between democracy and authoritarianism. Otherwise, the dystopian scenario proposed at the beginning of this article is only a matter of time.     

Kaan Sahin is a Research Fellow in Technology and Foreign Policy at the German Council on Foreign Relations (DGAP).
