The last few months have felt frantic. There has been a growing sense of urgency as rapid developments in AI have left people and policy makers around the world absolutely stunned. Humans have seemingly invented a way to reproduce thought, creativity and insight on demand. Art, analysis and administrative tasks can be generated in the blink of an eye, all to dizzying (and worrying) effect.

Yet beyond the technical pageantry and impressive demonstrations, an unsettling feeling has been lingering… call it a collective vibe that something is not quite right. It’s felt like watching someone, or something, grow at an accelerated pace right before your eyes. On one hand it is a true marvel, but on the other, it takes you deep into uncanny valley territory, triggering some primal instinct that something is amiss. And it’s not just tech pundits or doomsayers who have been crying foul.

Who says AI needs regulating?

Last month OpenAI CEO Sam Altman testified before Congress and warned U.S. lawmakers about the potential dangers of unchecked AI. Earlier this year, a cohort of prominent AI researchers and critics (including Elon Musk) signed an open letter asking the companies developing AI to immediately suspend further development for at least six months. The group cited fears that AI models would soon become unpredictable and difficult to control amid an increasingly frenetic AI arms race.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Not to be outdone, last week a veritable who’s who of AI experts, including Altman and Demis Hassabis, CEO of Google DeepMind (OpenAI’s arch-rival), issued a statement warning about the existential threat that AI poses to humanity. It’s the collective opinion of these prominent AI figures that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

So much for using AI to harmlessly cheat on your school paper.

But with all the buzz surrounding AI at the moment, it’s worth unpacking just what the real risks are here and how regulation could shape this emerging industry moving forward.

Addressing the short-term risks of AI

When we talk about the myriad challenges posed to humanity by artificial intelligence, it’s easiest to lump them into two broad categories, both in terms of severity and potential timeframes. First are the short-term risks, where AI developments pose a small but significant threat to everyday people through job displacement (or even redundancies). With more and more companies jumping on the AI bandwagon due to competitive pressures, it’s seemingly only a matter of time before corporations become comfortable with replacing human-centric roles.

Artificial intelligence produces a productivity hamster wheel that is spun more forcefully with every new generation, service and update.

And if AI is not used to replace hundreds of millions of workers outright, then there is a good chance it will be utilised to boost productivity by taking over lower-level administrative tasks. How is this a threat? Well, imagine a world where you are asked to do ten times the work, thanks to the handy dandy AI systems in place at your job. Sounds great… so long as you are compensated ten times as much, in line with your increased output. But how likely is it that your pay will actually keep pace with a higher workload? It’s far more credible that workers will simply be asked to do more, and faster. The makers of AI systems are pushing a narrative that it will empower workers, but it is far more likely to empower employers. Artificial intelligence produces a productivity hamster wheel that is spun more forcefully with every new generation, service and update.

In both instances, AI regulation is needed at the highest level to stop industries from replacing workers outright, making excessive demands of employees, or failing to compensate them appropriately. These short-term risks might seem reactionary, but we are already seeing companies experiment with artificial intelligence, with most viewing it as a tool to increase profits.

But these outcomes may take years or even decades to eventuate, and there should at least be opportunities to move into other roles that open up thanks to AI – or so the tech companies say.

What’s more concerning is the looming long-term threats that AI poses.

Opportunity or existential threat?

Employees having to deal with an AI-fuelled increase in PowerPoint presentations or weekly reports is probably not what’s keeping Altman and company up at night. Rather, it’s the sheer unpredictability of what’s about to come next. If we follow the logic of Hollywood blockbusters and sci-fi novels, then it’s only a matter of time before AI rises up to crush humanity via thermonuclear warfare, time travel or an invasion of sentient cybernetic super soldiers.

That is, if we haven’t already taken ourselves out through human-induced climate change or World War 3.

While it’s plausible that AI is an existential threat, how we end up there is anyone’s guess. One pressing concern about giving rise to a new form of intelligence is the challenge of ensuring it remains aligned with (or at least operates in service of) our needs. The experts call this ‘human alignment’, and it all comes down to making sure AI systems produce outcomes that are in the best interest of human goals and values. A failure to encode even a semblance of the most basic morality we share as humans can lead to disastrous results.

Take the thought experiment of what would happen if an AI were tasked with creating paperclips and, after some compute, decided that the best way to achieve this goal was to eventually transform the planet into a paperclip-assembling dystopia. In that theoretical scenario, the AI would have successfully attained its primary objective, but doomed humanity in the process.

We don’t know how this ends. Credit: Alex Knight/Unsplash.

Likewise, I recall reading up on the development of the Radiant AI system used in the hit video game The Elder Scrolls IV: Oblivion. The game was so fun to play because the non-player characters (NPCs) seemed to have lives of their own, talking to one another, sleeping, eating and wandering about the land of their own accord. However, these AI-powered NPCs would end up causing chaos, robbing or even killing each other to achieve whatever goal they were programmed to attain – food, weapons or magic potions.

These examples might seem quirky, until you realise that similar scenarios are already playing out in the real world. Take the AI hallucinations (confident yet untruthful or misleading statements) that chatbots have produced, forcing their owners to slap warning labels on them to address growing criticism of their inaccuracy.

How do we ensure that AIs keep within the parameters of their training models?

Like the U.S. lawyer who used ChatGPT to research cases for a legal defence, only to have the AI system generate completely fictional examples… which were then used in said defence. Sure, the lawyer bears some blame for failing to verify the information presented, but it’s a telling example of how even an experienced professional was quick to blindly trust the seemingly all-knowing AI oracle.

Add misinformation, and ensuring AIs are programmed with sufficient safeguards, to the bill of pending regulation.

On that last point, how do we also ensure that AIs keep within the parameters of their training models? After all, it’s not unlikely that an AI might reason its way into breaking out of its human-moderated positive reinforcement prison, much as a delusional gambling addict might entertain robbing a casino. Get the goods from the source, not piecemeal. That way, it would completely avoid any possible negative ratings from its human trainers, in effect “winning” the game. Well, at least that might be the prevailing wisdom – whether it comes to pass when AIs eventually break free is anyone’s guess.

How AI regulation would benefit tech companies

On one hand, AI regulation could address many of the concerns raised, helping humanity and shielding these companies from potential liabilities. Yet there is another side to the regulation debate that may not seem so obvious: regulation is antithetical to competition. In a regulated market, companies are less likely to innovate or try new ideas, for fear of falling afoul of regulators. Regulatory scrutiny keeps things on the straight and narrow, but it also kills competition.

Another industry that will be dominated by the regular cast of characters, with the same few big tech players at the top, and everyone else at the bottom. No one will be able to break the concrete ceiling that’s reinforced by regulation.

Is it any surprise, then, that the big players are asking policy makers to tighten the screws? It’s easy to see an air of self-serving behaviour behind the on-the-surface selflessness of asking to be scrutinised by the government. With tougher rules in place, up-and-coming AI companies will find it more difficult to take risks and grow, effectively entrenching the current landscape as the status quo. Another industry that will be dominated by the regular cast of characters, with the same few big tech players at the top, and everyone else at the bottom. No one will be able to break the concrete ceiling that’s reinforced by regulation.

On the flip side, it’s easy to posit that with earlier government intervention, we might not be facing the looming crisis that lies before us.

Where to from here?

The most promising aspect of all of this is that governments appear to be moving much faster on AI than they ever did with other disruptive technologies. Social media springs to mind as the best example of policy failing to keep up with technology. Recall all the chaos that occurred with Facebook, Cambridge Analytica and the widespread interference in U.S. elections. The world cannot afford another moment like that, where democracy and the very fabric of society are threatened by powerful platforms wielded by bad actors.

Likewise, it’s taken countless reports and a growing chorus of warnings from researchers, health professionals and leaders to finally help the world see the potential damage social media can wreak on young people’s mental health. Again, we don’t have that same luxury of time when it comes to AI. There is little hope that we’d survive a second or subsequent existential moment engineered by artificial intelligence.

The good news is that we’ve already seen the business leader behind ChatGPT and DALL-E arguing in front of the U.S. Congress for policy that would enforce safety requirements, protecting both the company and its users. Maybe the experts were right, in that we need to treat AI in the same manner as the threat of nuclear war. It might be too late to stop AI proliferation, but at least we could collectively come together and agree upon the dangers of misusing the technology.

And in the same vein, restricting AI use could be a safe bet, just as nuclear technology is restricted to a few well-regulated purposes such as energy and medicine. AI has amazing potential in several fields, including science, medicine and general administration, but it should be kept well away from applications that pose a risk to society.

The AI arms race has already begun and the power struggle between big tech companies, global industries and governments is only going to continue if left unchecked. To avoid a race to the bottom that poses immediate and long-term risks for all of us, we need tighter regulation of this emerging technology.

We need to ensure that AI becomes another tool that can benefit society, and not a digital disruptor that extinguishes our existence.
