Technology & Innovation

GeoTech Cues

January 25, 2023

AI generates new policy risk

By Jonno Evans

Over the last few months, there has been a surge of interest in artificial intelligence (AI) as a plethora of new tools have been released. Social media feeds have been awash with generative art produced by DALL·E 2 and Shakespearean sonnets written by ChatGPT. But these tools were not developed overnight.

In 2015, Elon Musk, Sam Altman, and other investors including Reid Hoffman and Peter Thiel created OpenAI – a research company that would make its AI research open to the public. A major paper published in 2018 led to the first version of OpenAI’s Generative Pre-trained Transformer (GPT) software. These language models are built around text prediction: given a prompt, the model generates what it predicts should come next, drawing on patterns learned from a massive training data set. GPT-1 was released in 2018, GPT-2 in 2019, and GPT-3 in 2020. GPT-4 is due to be released later this year and is expected to be an even bigger leap forward. OpenAI is reportedly in talks with investors that would value the company at $29bn, making it one of the most valuable startups in the world.
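
To make that prediction loop concrete, the sketch below generates a continuation with GPT-2, the openly released 2019 model – a minimal illustration using the Hugging Face transformers library, not the way OpenAI’s hosted models are accessed, and the prompt is invented for the example:

```python
# Minimal sketch of next-token text prediction with the openly released
# GPT-2 model, via the Hugging Face `transformers` library
# (pip install transformers torch).
from transformers import pipeline

# Load a small generative language model.
generator = pipeline("text-generation", model="gpt2")

# Give the model a prompt; it generates what it predicts should come
# next, one token at a time, based on patterns in its training data.
prompt = "Artificial intelligence will change public policy because"
output = generator(prompt, max_new_tokens=40, do_sample=True)

print(output[0]["generated_text"])
```

Larger models like GPT-3 work on the same principle; the leap in quality comes chiefly from scale, not from a different mechanism.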

There are good reasons for the excitement. GPT-3 and similar models (Google, Facebook, and others all have teams working on comparable projects) are incredibly powerful and are being used in increasingly creative ways.

For instance, ChatGPT was released by OpenAI in November 2022 and enables users to generate paragraphs of text or code in a conversational style. The tool went viral with rapid user adoption: it took Netflix 3.5 years to reach 1m users, Facebook 10 months, the iPhone 74 days, and ChatGPT just 5 days. This advance has also led to the creation of new word processors like Lex, which integrates this software and generates text based on what has been written previously, as well as tools like Feather AI, which sends summaries of podcasts or videos to your inbox. 
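
To illustrate the pattern behind tools like Lex, the sketch below sends a draft to OpenAI’s completions API and appends the model’s continuation – a hypothetical example, assuming the pre-1.0 openai Python package, an API key in the environment, and the text-davinci-003 model available at the time of writing:

```python
# Hedged sketch of an 'autocomplete for writing' in the style of Lex.
# Assumes: `pip install openai` (pre-1.0 API) and OPENAI_API_KEY set.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The text the writer has produced so far.
draft = "The three biggest policy questions raised by generative AI are"

# Ask the model to continue where the writer left off.
response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3.5-era completions model
    prompt=draft,
    max_tokens=80,
    temperature=0.7,
)

print(draft + response.choices[0].text)
```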

This ability to extrapolate text is encroaching on the search market. It has been reported that Microsoft (an investor in OpenAI) is embedding ChatGPT into Bing, its search engine, which has put Google on red alert. But more bespoke search engines are also being produced – for example, Metaphor is a general search engine designed to return links rather than text; Elicit is designed for academic research and provides summaries of research papers; and PubMed GPT has been developed for biomedicine.

Beyond text, tools like DALL·E 2, another application from OpenAI, as well as Stable Diffusion and Runway, are focussed on generating images and video, with platforms like Hugging Face making such models widely accessible.

The applications that these tools are enabling are also of interest: automating email replies, creating presentations from text prompts, and writing or debugging code, for instance. Those building blocks are enabling even more creative outputs, like computer games, animations, and music, while companies like Cradle Bio are already exploring how this technology can be leveraged to improve scientific research – in their case, applied to proteins.

Some of these tools are also, inevitably, being used in more problematic ways: generating clickbait New York Times-style articles from Twitter posts (‘GPT Times’), creating synthetic reality, and enabling cybercrime.

All these applications have already been, or are in the process of being, built with existing technology. GPT-4 is expected to deliver a similar step change in functionality when it arrives. But even from where we are now, it’s easy to start extrapolating some implications.

For one thing, creative jobs are going to look very different, while these AI tools are going to augment most of what we do online – an ‘autocomplete for everything’. But it will also become far more difficult to differentiate between what is authentic online and what’s not, and tools will be used for nefarious ends, including imitation, scams, and hacks. 

The second order implications are more difficult to predict but will impact how we work, how politics and campaigning operate, how our institutions function, and what issues and resources are fought over by nation states. And beyond core issues around ‘AI safety’, these are the sorts of issues that policymakers are going to have to grapple with, and in some cases, try to regulate. To take a few examples:

  1. If it is possible to replicate the voice and face of someone in real time, what does that mean for security, or the tools built to do Know Your Customer (KYC) checks using biometric data?
  2. How is copyright going to work? There are already issues with the data sets used to train these AI models drawing on artists’ work without the artists being compensated. The lawsuits are already starting. But what happens when it is possible to ‘create’ a song in the style of Taylor Swift, recorded in Abbey Road Studios, in less than a minute?
  3. Who is going to control the rents from these new and potentially vast markets, and what are the implications for inequality, as well as competition/anti-trust policy? 
  4. How will AI tools disrupt education systems beyond just automated essay writing – how can they be harnessed for delivering more tailored teaching, and how will the sort of education we need change as a result of an economy with these tools embedded? 
  5. Content moderation and misinformation are going to become even more complicated. While tools like ChatGPT return answers to prompts that appear truthful, in practice, and at present, they largely are not (see this paper for details). They have also been found to exhibit gender and race biases.
  6. Our security systems are going to be challenged. If it becomes possible to ask GPT-4 to find people who work in a particular building and might be open to manipulation, that will present profound challenges for the security services.
  7. What systems should be put in place to ensure that the models themselves are robust and resistant to cyber-attacks? It will be important to ensure confidence in the robustness of a model that is being deployed as the autocomplete for everything, perhaps leveraging tools like Advai.
  8. Political campaigning will also become more of a science with rapid automated testing of arguments and narratives, and customized messaging based on individual characteristics. How will this application be regulated and managed to avoid abuse? 
  9. AI is already an increasingly important part of warfare. Companies like Anduril, Modern Intelligence, Shield AI, Helsing, and Saab are all building in this space, including next-generation autonomous weapons, while companies like Palantir and Recorded Future are supporting Ukraine on the front line.
  10. The battle for control of semiconductor chips will only intensify, as more of the world becomes dependent on these models (and the chips that enable the models to run). Future control of the internet, and the people that spend increasing amounts of time on it, will depend on compute power and the hardware that enables it.

Policymakers, to their credit, have been thinking about these issues for many years. But the viral nature of the latest tools, and the potential power of GPT-4 and its successors, have added new urgency to these challenges. Existing government reports and strategies are a good starting point, including the UK’s National AI Strategy, the EU’s AI Act, the US White House’s Blueprint for an AI Bill of Rights, and even NATO’s AI strategy. But these documents are just catching up with the status quo or setting out principles upon which future work can build, as opposed to hard-coded legislation (the EU’s plans are the most serious to date).

There is a very difficult balance to strike. These new AI tools will have a huge impact on how people work and live, and it is surely right to embrace this technology as a powerful primitive that will support innovation and productivity in a wide variety of sectors. But it is also going to be important to be mindful of its potential for misuse. Governments, policymakers, and other stakeholders must be proactive to ensure that these tools are not exploited for the wrong reasons. 

Whatever one’s views on the technology, the genie is out of the bottle. Everyone should prepare to spend far more of their time thinking about AI in the future. 

Jonno Evans OBE was Private Secretary to two Prime Ministers in 10 Downing Street as well as a British diplomat in Washington DC. He now advises technology companies at Epsilon Advisory Partners.


GeoTech Center

Championing positive paths forward that societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

Image credit: DeepMind via Unsplash