OpenAI founders worried “super-intelligent” AIs would destroy the world if not strictly regulated

Godfrey Elimian

It appears that concerns about the destruction AI could wreak on humanity go beyond the preconceived notions spread by opponents of the technological revolution. Even the technology’s own developers and advocates are now worried about the potential consequences of man’s pursuit of a more AI-driven world.

In a short note published on the company’s website, the leaders of OpenAI, the creators of ChatGPT, have called for stricter regulation of “super-intelligent” AIs in order to save the world from destruction at the hands of these machines.

Co-founders Greg Brockman and Ilya Sutskever and chief executive Sam Altman say that an equivalent of the International Atomic Energy Agency (IAEA), the body that oversees the use and application of atomic and nuclear energy globally, is needed to protect humanity from the risk of accidentally creating something with the power to destroy it.

In the post, they called for an international regulator to begin working on how to “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” such systems could pose, as reported by The Guardian.

“It’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” they write.

“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”

In the shorter term, the trio called for “some degree of coordination” among companies working on cutting-edge AI research, in order to ensure the development of ever-more powerful models integrates smoothly with society while prioritising safety. That coordination could come through a government-led project, for instance, or through a collective agreement to limit growth in artificial intelligence capability.


Sam Altman makes U-turn

I recently had the opportunity to be in the same room as Sam Altman, co-founder and CEO of OpenAI, as he spoke about the potential for generative AI to significantly improve the world and help humans raise the bar for their own productivity.

Sam Altman in Lagos: OpenAI’s Sam Altman said Nigeria, among all the countries on the continent, has been the biggest adopter of the company’s technologies

He noted that there had been much pessimism and concern, not just in the US but globally, about some of the risks that come with adopting artificial intelligence, hence the need for a global response. But he also said he was more worried about over-regulation of the technology, since the real problems are not yet known.

“Different countries are going to set their own rules but honestly, I’m worried about over-regulation. I think it’s easier to get a new technology like this wrong by over-regulating. And in general, you want to see what the actual problems are before you address them.”

Sam Altman

Now, in a surprising U-turn, it seems the OpenAI CEO might be corroborating the fears of many in the space, fears that his former co-founder, Elon Musk, has also expressed. Beyond Musk, researchers have warned of the potential risks of superintelligence for decades, but as artificial intelligence development has picked up pace, those risks have become more concrete.

So, does this mean that AI can actually destroy humanity and, if so, in what context exactly?


Can AI destroy humanity?

The US-based Center for AI Safety (CAIS), which works to “reduce societal-scale risks from artificial intelligence”, describes eight categories of “catastrophic” and “existential” risks that AI development could pose.

While some worry about a powerful artificial intelligence completely destroying humanity, accidentally or on purpose, CAIS describes other more pernicious harms.

A world where AI systems are voluntarily handed ever more labour could lead to humanity “losing the ability to self-govern and becoming completely dependent on machines”, a scenario CAIS describes as “enfeeblement”. And a small group of people controlling powerful systems could “make AI a centralising force”, leading to “value lock-in”: an eternal caste system between ruled and rulers.

These concerns have previously been voiced by nations, experts, and developers globally. One such area is the future of work, where AI may alienate people, take their jobs, lead to widespread poverty, or even result in death.

Global value systems and the education sector are two more. The increasingly invasive development of superintelligent machines poses a threat of alienation and devastation to long-reigning cultures and societal systems, especially in Africa.


While it’s possible that this won’t happen overnight, it might happen sooner rather than later given the current wave of research and discourse about the possibilities.

OpenAI’s leaders say these risks mean “people around the world should democratically decide on the bounds and defaults for AI systems”, but admit that “we don’t yet know how to design such a mechanism”. However, they say the continued development of powerful systems is worth the risk. But is it?

“We believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity),” they write.

They warn it could also be dangerous to pause development. “Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”
