AI will democratise the production of content – fake or otherwise – making the process cheaper and easier. It will also lead to many more questions about how data is used, and by whom, writes Nina Schick.
Artificial intelligence, so long in the realm of science fiction, is fast becoming a practical reality and promises to change society beyond recognition. The mass availability of data, combined with the improving ability of machines to churn through it all, means that AI can ‘learn’ to do almost anything when ‘trained’ on the right data.
One way this is upending the world is through the birth of so-called ‘synthetic media’: any type of media – video, audio, image or text – that is generated or manipulated by AI.
The ability of machines to generate artificial content from scratch has emerged only in the past five years, and it is evolving rapidly. We are only at the very beginning of the synthetic media revolution. It is no exaggeration to say that the future will be synthetic: all media will increasingly be made using AI.
And in an age of exponential technological transformation powered by data, society is inevitably struggling with the magnitude of the changes that are overtaking it. This is one more reason why now is the time to discuss and build an ethical framework for the safe deployment of synthetic media and all other data-powered applications of AI.
For example, an AI model trained on vast amounts of human-written text can already generate articles that appear to have been written by a person. AI can be trained to clone someone’s voice, even after they have died – an old recording of John F Kennedy has been used to make a clip in which the former US President appears to read the Book of Genesis. AI trained on a dataset of human faces can generate convincing fake images of people who do not exist. With the right data, AI can even be taught to insert people into videos they never appeared in. One amusing example is the YouTube project to insert the actor Nicolas Cage into every movie ever made.
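To give a concrete sense of how low the barrier already is, the sketch below shows roughly what generating synthetic text looks like in practice. It assumes the open-source Hugging Face transformers library and the publicly downloadable GPT-2 model; both are illustrative choices for this article, not tools named by the author, and the prompt is invented for the example.

```python
# Minimal sketch of synthetic text generation.
# Assumes the Hugging Face `transformers` library (pip install transformers torch)
# and the freely available GPT-2 model; these are illustrative choices only,
# not tools referenced in the article itself.
from transformers import pipeline

# Load a small, publicly available text-generation model.
generator = pipeline("text-generation", model="gpt2")

# A short prompt is enough to produce plausible-looking prose.
prompt = "In a statement released this morning, the minister said"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(result[0]["generated_text"])
```

A handful of lines of freely available code, run on an ordinary laptop, will produce text that reads like it was written by a person – which is precisely the point: the capability is no longer confined to specialist labs.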
For next to no cost, a single individual can already replicate, and arguably even improve upon, the visual effects created by leading Hollywood studios. Earlier this year, a lone YouTuber spent a week with free AI software and improved upon the multimillion-dollar production techniques used on Martin Scorsese’s blockbuster The Irishman.
Before long, anyone will be able to make Hollywood-level AI-generated content using only their smartphone. Some experts estimate that within five to seven years, 90% of all video content online will be synthetic.
Even if this is an ambitious estimate, the direction of travel is clear. While there are many exciting commercial uses of synthetic media – in business, anything from corporate communications to employee training – it is inevitable that the technology will also be misused.
The power to generate high-fidelity fake content will lead to the most sophisticated mis- and disinformation ever known, especially as the amount of training data needed continues to shrink. Indeed, synthetic media is already being used in malicious ways.
Too often we build exciting technology without considering how it might amplify the worst parts of human intent. We should consider the risks as well as the opportunities.
Nina Schick is an author and broadcaster. Her book, Deepfakes, explores how synthetic media will be weaponised as a tool of mis- and disinformation.
For more on how data is rapidly changing the world we live in without us having a common way to discuss and debate the effects of it, read the companion article to this piece.