
AI has officially entered the zeitgeist

Elon Musk’s recent participation in the Future of Life Institute’s open letter has drummed up a storm: AI has officially entered the zeitgeist.

This was to be expected. AI capabilities have been improving rapidly, and it was only a matter of time. Most expect this progress to accelerate, with many predicting near-exponential growth.

It is this expectation which makes fears about artificial ‘general’ intelligence (an AI that is better than humans at everything) more justified. Eliezer Yudkowsky, a leading AI safety researcher, has stated that he expects:

"The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in possibly some remote chance, but as in that is the obvious thing that would happen."

But is he just another member of the anti-growth coalition? After all, people protested against the printing press, the first motorised car, and almost every other transformative technology.

Where Artificial General Intelligence (AGI) differs is that it may be agentic, i.e. it might pursue its own goals, which makes it very different to a printing press.

There are already concerns over current levels of AI: Bing’s chatbot expressed a desire to release nuclear secrets, and a pre-release version of GPT-4 manipulated a real person, contacted through a third-party task app, into passing a CAPTCHA test (designed to tell humans and computers apart) on its behalf. This doesn’t sound all too aligned with human values, yet.

Thankfully, there are steps that can be taken. Some researchers are working on understanding what goes on inside these AI models in order to better explain their outputs.

And when OpenAI developed GPT-4, they left it ‘on the shelf’ for over six months, running their own safety checks and inviting third-party auditors. But what happens as we get closer to AGI, when they may not be the ones to get there first? Will whoever does still exercise the necessary caution?

The potential gains from being the first to develop AGI would be immense: an opportunity to solve climate change, cure diseases and rocket economic efficiency.

And we are already seeing nations gear up for this: one of the US’s announced bills totals $2.6bn, the UK has announced over $1bn in funding of its own, and many other nations are following suit.


A race to the bottom, in which firms and nations battle to be the first to develop AGI while safety falls by the wayside, is a real concern. Take the last transformative technology, the nuclear bomb: you can see this exact principle in action.

There were initially concerns that the heat from the explosion would set off a chain reaction with the potential to destroy the entire world. While this fear was eventually found to rest on overly conservative assumptions, mistakes had been made before, and the risk of being wrong here was beyond catastrophic. The gamble was excused by the risk of the Nazis getting there first, even though they had themselves given up on the project, believing it would indeed set off such a chain reaction.

So what can we learn from this? First, knowing the level of development of other countries' technology is important.

So investing in intelligence on the level of development of other nations’ AI projects would be a good start. Without a universal approach it wouldn’t be possible to have exact knowledge, but tracking global sales of graphics processing units (GPUs) and the sizes of training datasets could give a strong indication.

Second, a global approach is important. Regulation will likely slow the progress of AI development, and if it is not globally enforced there are incentives for nations to avoid regulating meaningfully so that they can try to claim the potential gains. But if a misaligned AGI emerges in the UK, the US or China, it won’t matter: there will be no gains to claim.

Without a global approach it is significantly more difficult to know the level of development worldwide, and enforcing effective regulation everywhere is near impossible. The UK is currently uniquely placed to lead these global efforts, thanks to the co-location of technical and political expertise in London.

Because of this, the UK government should look to leverage its position to lead a multilateral effort alongside the key players in AI. If we are to reap the potential benefits of AI, safety is vital, and avoiding a race to the bottom will be a key part of that.


Eddie Bolland is a research associate at the Adam Smith Institute, a think tank focused on tackling poverty and encouraging innovation through free markets.
