Beyond human imagination: How AI is pushing the boundaries of entertainment

What you need to know about the creative pursuits of artificial intelligence

Ever wondered what you’d look like as an elf? Or wished a personalised piece of music soundtracked your day? Or that someone would write a love poem about you? With artificial intelligence (AI) tools you can create all these things before your next coffee. I’ll even get OpenAI’s ChatGPT text tool to “write” a haiku for you:

“Words on a page shine,
Eyes dance with the lines you’ve read,
Thank you, dear reader.”
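
That haiku took one sentence typed into the ChatGPT website, but the same model also sits behind OpenAI’s developer API. For the curious, here’s a rough sketch of how you might ask it for a haiku programmatically – a minimal example assuming the official openai Python package (version 1 or later) is installed and an API key is set in your environment; model names and method details change between releases.

```python
# A minimal sketch: asking an OpenAI chat model for a haiku via the API.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any available chat model will do
    messages=[
        {"role": "user", "content": "Write a haiku thanking the reader of this article."}
    ],
)

print(response.choices[0].message.content)  # the model's haiku
```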

ChatGPT and similar AI tools have been making headlines recently because they’re now easier to access and use than ever before – ChatGPT’s user base has grown by a mind-blowing 100 million users in only a few months.

These tools are also incredibly convincing – you’d have believed the haiku above was written by a person if you didn’t know otherwise, right? The potential for AI in arts and entertainment is exciting, but people are worried. Creators are concerned their roles are being replaced by machines. Teachers are already banning AI tools in schools, while researchers are finding it’s almost impossible to differentiate between human-written and AI-produced text.

Granted, new technology is often perceived as a threat to creative authenticity. Back in the day, reactions to early synthesisers were hilariously dramatic, with some claiming these “soulless machines” made sounds that “come from a world in which there are no humans, only devilish beings.”

It’s easy to shrug off worries about AI as yet more fear of innovation – after all, limitless creativity on demand sounds like a dream come true to many. But this tech is evolving so quickly that we run the risk of ignoring genuine concerns about its future. Like, how do we navigate the legal minefield of deepfakes? Who truly owns AI-generated content? Are people in the creative industries about to be permanently out of a job? And is the stuff that AI makes really any good?

AI prompts. Credit: Donato Fasano / Getty Images.

AI is a field of computer science that involves giving machines the ability to perform tasks that typically require human-like intelligence, such as learning, problem-solving, and decision-making.

One specific branch of AI is ‘machine learning’ – providing machines with vast amounts of data so they can learn to identify patterns on their own. So, for example, a machine might not be able to tell the difference between a dog and a fox. But if it’s “fed” tens of thousands of images of dogs labelled “dog”, and images of foxes labelled “fox”, it can then learn how to differentiate between them.
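
To make that concrete, here’s what “feeding” labelled images to a machine can look like in practice – a rough, illustrative sketch using the PyTorch library, assuming a made-up folder of photos sorted into “dog” and “fox” sub-folders. Real systems are far bigger, but the principle is the same: show the machine examples, let it guess, and nudge it towards the right answer each time it’s wrong.

```python
# An illustrative sketch of supervised learning with labelled images, using PyTorch.
# Assumes a hypothetical folder layout: data/train/dog/*.jpg and data/train/fox/*.jpg
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Resize the photos and turn them into tensors the model can ingest
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder infers the labels ("dog", "fox") from the sub-folder names
train_data = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pre-trained on ImageNet and swap in a two-class output layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One pass over the labelled images: predict, compare to the label, adjust the weights
model.train()
for images, labels in loader:
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

After enough passes over the labelled photos, the network’s internal weights encode the visual differences between the two animals, and it can label pictures it has never seen before.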

‘Generative AI’ uses machine learning techniques to create original content and is a hot topic in the news right now. Unlike other types of AI that analyse data or spot patterns, generative AI can produce new and unique outputs. For instance, instead of asking a generative AI tool to differentiate between a fox and a dog, we can give an AI image system like DALL·E 2 a text-based prompt asking it to create images of a fox wearing a top hat on a date with a dog wearing a bow tie in the style of an impressionist oil painting:
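
For anyone who’d rather type code than use the web interface, the same kind of prompt can be sent to DALL·E 2 through OpenAI’s developer API. Again, this is a hedged sketch, assuming the openai Python package (v1+) is installed and an API key is set in your environment:

```python
# A rough sketch: generating an image from a text prompt with DALL·E 2 via OpenAI's API.
# Assumes the openai package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A fox wearing a top hat on a date with a dog wearing a bow tie, "
    "in the style of an impressionist oil painting"
)

result = client.images.generate(
    model="dall-e-2",   # the image model referenced in this article
    prompt=prompt,
    n=1,                # one image
    size="1024x1024",
)

print(result.data[0].url)  # a temporary link to the generated picture
```

The response contains a temporary link to the generated image, which you can download like any other file.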

And if generative AI can create paintings, it stands to reason that there are many other fields in the creative arts to which it can be applied – something creators in the entertainment industry are starting to realise.

In recent months, we’ve seen the extent to which creators have adapted to the new technology across music, gaming, film, TV and beyond. David Guetta recently said “the future of music is in AI” after adding the AI-generated “voice” of Eminem to a new song. He even used AI to produce the lyrics: “This is the future rave sound / I’m getting awesome and underground.”

Similar tools have been used to make TV shows – or at least approximations of them. An AI-generated episode of Seinfeld streamed continuously on Twitch from late 2022. Called ‘Nothing, Forever’ or ‘Infinite Seinfeld’, it had all the ingredients of a typical Seinfeld episode but looked like a blocky and bizarre animation. It was pulled offline on February 6, 2023, after Twitch banned it for transphobic content.

Artificial intelligence has been used in gaming for years, but one of our favourite generative AI examples is AI Dungeon, a text-based fantasy game that uses AI to create new content in real time. This means the game you play isn’t constrained by how it was originally designed. Instead, you get to explore infinite possibilities for new worlds, characters and scenarios.

@deeptomcruise on TikTok: “Here’s how I suit up!” – original sound: Metaphysic.ai

If you think these AI creations sound interesting but not entirely convincing, then let us introduce you to deepfakes – realistic renderings of another person’s face or voice layered over someone else’s. DeepTomCruise is the TikTok account that became a monster viral hit for its hilarious Tom Cruise videos – except he isn’t in any of them; they’re all deepfakes. Surprisingly, Cruise has made no legal overtures to take the account down, and Metaphysic, the company behind the account, are now using similar AI technology to help de-age the cast of the latest Tom Hanks movie.

None of these technologies are especially new, but the high degree of access people now have to them is. Beyond ChatGPT, there are plenty of other tools you could use right now, like AIVA or Soundraw to create AI-generated music, or Midjourney and DALL·E 2 for AI-made images.

Euan Lawson, a commercial lawyer and Managing Partner at Simkins who specialises in media and entertainment law, told NME about the legal implications of AI-generated content. He says there are two major issues to consider: what data is used initially, and who owns the content afterwards.

“If training data incorporates any copyright work that’s owned by a third party, like music, recordings, films, images, then, unless the use has been authorised, it’s very likely to be infringing,” Lawson says. He tells NME that the Eminem track that David Guetta produced is a prime example.

David Guetta and Eminem. CREDIT: Getty Images

The initial data is problematic, but what about the final output? “Generally, in the UK, there must be a human author for a copyright to arise in a work,” Lawson says. Things get murky when we’re talking about computer-generated works, as the law covering them doesn’t currently extend to sound recordings or films. “It remains to be seen how that definition might be interpreted where an AI has been developed by one person or business, and an unrelated person provides the prompts to generate a piece of content,” Lawson tells us.

Put simply, right now it’s almost impossible to know who can and should lay claim to a piece of AI-generated content. What’s more, the rules might differ across countries and between AI tools. For example, OpenAI’s DALL·E terms state that users own the output of the service. Free users of Midjourney, meanwhile, are granted a non-commercial Creative Commons licence, so have no right to use any AI-generated content commercially.

Despite the many legal ramifications to consider, Lawson is surprisingly optimistic. “Creative industries are generally quick to embrace new technologies, and there will always be trade-offs between the risks and opportunities presented by AI,” he tells NME.

There’s clearly a need for greater clarity and legal frameworks to ensure that creators and rights holders are properly compensated for their work, but what are the broader concerns? Does David Guetta’s stunt signal that it’s OK for people to use anyone’s likeness or voice however they choose? A group of artists is currently suing the companies behind AI image generators Stable Diffusion and Midjourney for allegedly using their art as the basis for new images, which may set a new legal precedent.

Another concern: as the technology becomes increasingly sophisticated and able to produce high-quality content, will people permanently lose their jobs to AI? Possibly – although some hope that the application of AI will be confined to routine task automation, like background music creation or special effects rendering, freeing up time for other aspects of the creative process.

‘Spider-Man: No Way Home’. Source: Sony/Marvel Studios

Alexis Wajsbrot, a visual effects supervisor and artist at Framestore who has overseen effects on the movies Thor: Ragnarok and Spider-Man: No Way Home, tells us that he thinks of AI as “another tool in our arsenal to reduce time spent on intensive tasks across the VFX pipeline.”

“The work of a roto artist is painfully meticulous and time-consuming, cutting out shapes in live-action footage to be replaced by computer-generated effects,” Wajsbrot says. “AI can do a first pass, shaving off hours if not days of time for each artist.” He also gives the example of using AI to create highly complex backgrounds – think vast cities and crowds, or forests with hundreds, even thousands, of trees.
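
To give a flavour of what an automated “first pass” can mean – this is an illustrative sketch, not Framestore’s actual pipeline – a pretrained segmentation model can produce a rough matte of the people in a frame in a few lines of Python, assuming torchvision and Pillow are installed and a hypothetical frame.png exists:

```python
# An illustrative sketch (not a production VFX pipeline): a pretrained segmentation
# model producing a rough "first pass" matte of the people in a single frame.
# Assumes torchvision and Pillow are installed and a hypothetical frame.png exists.
import torch
from PIL import Image
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()

frame = Image.open("frame.png").convert("RGB")
batch = weights.transforms()(frame).unsqueeze(0)  # preprocess to the model's expected input

with torch.no_grad():
    output = model(batch)["out"][0]  # per-pixel class scores
labels = output.argmax(0)            # most likely class for each pixel

person_class = weights.meta["categories"].index("person")
matte = (labels == person_class).to(torch.uint8) * 255  # white wherever a person is detected

Image.fromarray(matte.numpy()).save("rough_matte.png")  # an artist would refine this by hand
```

An artist would still refine the edges by hand, but the rough cut-out appears in seconds rather than hours.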

But AI is helping the Framestore team rather than replacing them. “When you get into the details of exactly what is needed in a scene or a design, that’s where AI stops and the artist’s mind is needed,” says Martin Macrae, Head of Art Department at Framestore. “You just can’t substitute a human-to-human discussion round the table with a text prompt.”

Tessa Thompson as Valkyrie in ‘Thor: Ragnarok’. CREDIT: Alamy Stock Photo

Is AI-generated content any good? It depends where you look and who you ask. There’s no point comparing the state-of-the-art software used by professional movie makers with the free-to-use apps you can try at home, but the quality gap is closing fast. Some argue AI could dilute creativity and produce cookie-cutter songs and mediocre movies. But that’s why, as Macrae says, there’s only so much an AI can and should be doing. AI tools may never be able to make better films or music than humans can, but they could enable creators to make better films or music than they ever thought possible.

Let’s hope we can enjoy these tools while also staying grounded, recognising their limitations and learning to be more discerning about what’s real and what’s (deep)fake. At first glance these technologies may seem magical – creating pastiches of favourite TV shows, watching celebs perform unlikely acts, chinwagging with a chatbot that tells better stories than your partner – and these are undeniably fun and cool things. But none of that removes the need to seek proper consent before using someone’s face in a video, harvesting their creativity for a design, or replacing their hard-won skills in the workplace.
