In 2023, in the months following the launch of OpenAI’s ChatGPT, artificial intelligence entered the mainstream. With the wider public astonished by the seemingly boundless possibilities of the latest generative models, the AI boom began in earnest.
Fast-forward to the end of 2024 and what was once rightly seen as groundbreaking has lost some of its “wow” factor. Rather than merely toying with it, we are now learning how to truly apply AI – and we are understanding how it needs to improve in order to be genuinely impactful.
One way of measuring the pace of progress in this regard is to talk with Henry Ajder, whose expertise has helped shape global AI legislation and corporate strategy. When Ajder freely admits there are some questions he cannot answer – for the simple reason that his responses could be outdated by the time they are published – you know this is a revolution that is continuing to unfold at remarkable speed.
“If someone were to ask me how to spot a deepfake, for example,” Ajder says, “I might be able to highlight some vulnerabilities or inconsistencies that can be identified right now. But I couldn’t guarantee that’s still going to be the case tomorrow.”
Against this challenging backdrop, the issue of AI’s capacity for both positive and negative disruption is arguably more pertinent than ever. Does the evidence to date suggest we are gradually mastering a force for good or unleashing a potentially uncontrollable source of risk?
In tandem, there remains perhaps the most profound question of all, not least given the current rate of advances: will machines finally be able to achieve human levels of consciousness? As both a philosophy graduate and a world-leading AI authority, Ajder is especially fascinated by this enduring puzzle.
From enthralment to application
If a week is a long time in politics, as Harold Wilson famously remarked in the 1960s, a year may be a veritable eternity in AI. Few of the users who were enthralled by the capabilities of ChatGPT in 2023 could have imagined that by 2024, on balance, they would be frustrated by its inadequacies.
“We’re moving beyond the moment of wonder,” Ajder says. “People are recognising the limitations of the early AI models and looking for actual value creation. We’re seeing a maturation of the market, with a lot more players getting involved and a lot of fine-tuning for certain applications.”
In September 2024, in a bid to stay ahead of the pack, OpenAI released o1. Unlike its predecessors, this model is designed to “think” before it responds to an input – allowing it to reason its way through more complex tasks and solve harder problems. Meanwhile, a growing number of businesses are developing AI systems of their own.
“I would hesitate to say generative large language models have become boring to many people,” Ajder says. “But their integration has undoubtedly become far more commonplace, as a consequence of which we’ve now reached the actual implementation stage.
“This involves a kind of course correction, because organisations don’t want to invest millions or even billions of dollars in AI and then find they’re not getting the level of return they expected. They want something that really makes a difference to workflows and outcomes.”
The next big things?
Ajder believes there are two areas in particular where generative AI has made “giant leaps” during the past year. The first is largely creative – at least at present – whereas the second is more obviously business-centric.
“I predicted at the end of 2023 that video was going to be a big frontier that would be breached this year,” Ajder says. “OpenAI’s Sora, which creates video from text, was the first model to showcase what could be accomplished. The market has now become more saturated, and people are starting to grasp both the creative and the commercial possibilities.
“This is why one of the biggest TV and film moguls in the US, Tyler Perry, recently put on hold an $800 million plan to expand his studio complex in Atlanta. He saw what Sora is capable of and immediately paused the construction of a dozen new sound stages. He said in an interview afterwards: ‘I had no idea of what it’s able to do. It’s shocking to me.’”[1]
Coding is the other field in which breakthroughs are coming thick and fast. Cutting-edge platforms such as GitHub, the world’s largest host of source code[2], now provide an “auto-complete” function – a sort of predictive text for programmers.
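To make the idea concrete – this is a generic sketch rather than the output of any particular product – an AI coding assistant reads the context a developer has already typed, such as a function signature and docstring, and proposes the code most likely to follow:

```python
import re

# The developer types a signature and a docstring...
def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a plausible email address."""
    # ...and the assistant suggests a body the developer can accept,
    # edit or reject. A typical suggestion might look like this:
    pattern = r"^[\w.+-]+@[\w-]+\.[\w.-]+$"
    return re.match(pattern, address) is not None
```

The suggestion is a statistical best guess based on patterns in vast amounts of existing code, which is why it still benefits from human review before it is accepted.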
Amazon CEO Andy Jassy recently revealed the company’s own AI “assistant”, Amazon Q, has saved the equivalent of 4,500 years in developer work by using this method to dramatically streamline software upgrades. It has also enhanced security, boosted productivity and delivered an estimated $260 million in annualised efficiency gains[3].
“What’s most interesting to me is not that businesses are now using these tools in a more formal and sanctioned way,” Ajder says. “It’s how they’re encouraging employees to experiment and to deploy AI more generally. Companies are increasingly seeing this as a strength for their operations, their clients and other stakeholders.”
The importance of authenticity
As AI’s power and influence grow, trust and transparency are assuming ever-greater importance. Deepfakes have yet to decide an election – which some commentators feared could happen during a year in which more than 50 countries went to the polls – but it is becoming harder to distinguish between what is real and what is not.
“The narrative of the past five or six years has been that deepfakes are a gun waiting to be fired,” Ajder says. “The expectation has been that the impact would be catastrophic. We haven’t really seen that yet, probably because the tools that are being used are still pretty nascent.
“The places where deepfakes have been effective so far have been precisely the same places where more traditional forms of disinformation have proved successful. A video of Donald Trump doing something terrible might convince Never Trumpers, for example, or a video of Kamala Harris saying she’ll turn the Democrats into a Communist party might have convinced die-hard Trump supporters. It hasn’t been the game-changer some people were worried it would be. But that’s not to say there won’t be a bigger impact in the future, of course.”
Ajder believes the deepfake phenomenon underscores a vital consideration in the AI sphere as a whole, which is that people do not like to be fooled. This applies in multiple arenas and a broad range of industries. “I think what we’re seeing now, especially when our daily lives are being flooded with so much synthetic content, is a real yearning for something more authentic and human,” he says.
“Imagine your favourite band releases a new album that’s based on the members’ life experiences. There’s a song about love, a song about grief, a song about happiness. You listen to it all and think: ‘Wow, this is amazing.’ And then you find out every single aspect of it, from the lyrics to the instrumentation to the vocals, was AI-generated – and something you absolutely loved suddenly becomes a cynical cash-grab.
“There are plenty of instances of AI-generated content that’s explicitly synthetic and which people still enjoy and engage with. But the point here is that we want to know exactly what we’re consuming.”
Probability and possibility
The prospect of computers perfectly emulating humans has existed beyond the realm of science fiction for around three quarters of a century. Alan Turing, the genius who cracked the Enigma code during World War Two, led the way with his “imitation game”, which in many ways is still the go-to means of assessing a machine’s ability to exhibit human-like intelligence.
State-of-the-art generative AI models can now pass Turing’s test with aplomb. Looking ahead, Ajder expects a further acceleration in the shift towards agentised AI – in effect, a move from co-pilot to auto-pilot – which in turn is likely to require closer attention from policymakers, legislators, platforms and other sources of oversight.
But does any of this mean machines are on the verge of real consciousness? Despite the advances of the past year and those that are likely to emerge in the near future, Ajder is not yet convinced.
“The AI models we have at the moment can feel very human in certain contexts. They’re very, very carefully constructed. You can see why some people start to react towards them and think they’re expressing some form of sentience or self-awareness,” Ajder says.
“But at the end of the day, from where I’m standing, this hasn’t been achieved yet. Effectively, we’re still dealing with probability machines – programs that are based on huge amounts of data, learn very well and understand what the next token or word in a sequence should be.
“To me, based on my theory of mind, that’s not human-level consciousness. But if you were to ask me whether it’s theoretically possible – well, I would have to say yes. I think we all appreciate this journey is still only just beginning.”
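Ajder’s “probability machine” framing can be made concrete with a toy example. The sketch below uses invented probabilities purely for illustration; a real model derives its distribution from billions of learned parameters, but the basic mechanism – scoring every candidate next token and sampling from the result – is the same:

```python
import random

# Toy next-token distribution for the context "The cat sat on the".
# These probabilities are invented for illustration; a real large
# language model computes them from its learned parameters.
next_token_probs = {
    "mat": 0.55,
    "sofa": 0.20,
    "floor": 0.15,
    "moon": 0.10,
}

# Generation is repeated weighted sampling from such distributions.
tokens = list(next_token_probs)
weights = list(next_token_probs.values())
choice = random.choices(tokens, weights=weights, k=1)[0]
print(f"The cat sat on the {choice}")
```

However fluent the resulting text feels, each step is a weighted guess about what plausibly comes next – the gap Ajder highlights between prediction and consciousness.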
About Henry Ajder
Henry Ajder is a world-leading expert on artificial intelligence, deepfakes and the synthetic revolution. Renowned for anticipating AI’s seismic impact in his acclaimed BBC documentary series, The Future Will Be Synthesised, he is an expert adviser to organisations including Adobe, the European Commission, EY, Meta, the Organisation for Economic Co-operation and Development, the US government and the World Economic Forum.
Henry is the founder of Latent Space Advisory, which specialises in helping business and political leaders navigate the fast-moving AI landscape. He was previously Head of Policy at Metaphysic, Hollywood’s leading AI company, and Head of Threat Intelligence at Sensity, the world’s first deepfake-detection start-up.
Watch Henry’s Creating The Future talk.
[1] See, for example, Hollywood Reporter: “Tyler Perry puts $800m studio expansion on hold after seeing OpenAI’s Sora” – https://www.hollywoodreporter.com/business/business-news/tyler-perry-ai-alarm-1235833276/.
[2] See, for example, Wikipedia: “GitHub” – https://en.wikipedia.org/wiki/GitHub.
[3] See, for example, Yahoo! Finance: “Amazon CEO Andy Jassy says company’s AI assistant has saved $260m and 4.5k developer years of work” – https://finance.yahoo.com/news/amazon-ceo-andy-jassy-says-213018283.html.