
Will AI destroy humanity?

A detailed new forecast on AI progress, called AI 2027, predicts that Artificial General Intelligence (AGI) could send us on a new trajectory.

A blockbuster funding round for San Francisco-based startup Databricks is another sign of hunger by investors for companies poised to cash in on generative artificial intelligence
Image: © AFP Josep LAGO

This is, admittedly, a rather stark headline, yet it poses a serious multi-part question: will artificial intelligence be the salvation of humanity, a neutral force within whose boundaries we will shape our own destiny, or could AI destroy us?

The latter is the scenario explored by a group of AI experts, whose output appears on a website called ‘AI 2027’.

AI 2027

The group predicts that the impact of superhuman AI over the next decade will be considerable, possibly exceeding that of the Industrial Revolution. What could an AI-dominated future look like? According to the researchers:

We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.

In other words, a type of game theory.

Modelling any scenario raises the question: what is AI? AI is multiple things, and there is AI as it is now and AI as it might become. A common understanding sees the development of artificial intelligence as following three phases:

  1. Artificial Narrow Intelligence (ANI): This is the first stage where AI systems are designed to perform specific tasks, such as facial recognition or language translation.
  2. Artificial General Intelligence (AGI): This stage refers to AI systems that possess the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence.
  3. Super AI: This is a theoretical stage where AI surpasses human intelligence and capabilities, potentially leading to superintelligent systems.
AI and robots. Image by Tim Sandle

Currently, humanity is getting to grips with ‘narrow intelligence’ AI. As to where next, the CEOs of OpenAI, Google DeepMind, and Anthropic have each predicted that AGI will arrive within the next 5 years.

The AI experts behind the forecast are:

  • Daniel Kokotajlo, a former OpenAI researcher.
  • Eli Lifland, co-founder of AI Digest.
  • Thomas Larsen, founder of the Center for AI Policy.
  • Romeo Dean, a former AI Policy Fellow at the Institute for AI Policy and Strategy.
  • Scott Alexander, author.

After this comes, most likely, super-intelligence, when AI begins to tell us what to do. It is this ‘super state’ that the authors of AI 2027 have been exploring, seeking predictions across two poles: a “slowdown” ending and a “race” ending.

Super-intelligence

The authors acknowledge that predicting the future ranges from the tricky to the impossible, yet they have attempted to model one potential trajectory that AI could take:

We have set ourselves an impossible task. Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War 3 in 2027 would go, except that it’s an even larger departure from past case studies. Yet it is still valuable to attempt, just as it is valuable for the U.S. military to game out Taiwan scenarios.

The scenario is built around a hypothetical AI company called OpenBrain. As to why 2027, this is when AI begins to act duplicitously in relation to humanity. It marks the arrival of the Artificial General Intelligence phase, the point at which AI matches humans across all cognitive domains.

Will robotics mirror humans? Image by Tim Sandle

Dystopian scenario

The AI 2027 scenario considers AI “agents”: advanced virtual assistants that use computers, surf the Internet, and complete tasks independently of humans. To begin with, such agents are impressive but unreliable, often making mistakes or getting confused by complex instructions.

By 2026, as their understanding improves, AI agents become capable of doing the work of junior software developers. This could lead many companies to use AI for coding tasks, research, and analysis, prompting the first wave of job displacement in technical fields.

Then, in 2027, AI systems become superhuman researchers. The scenario describes AI systems that can:

  • Write complex software faster and better than human programmers
  • Conduct scientific research at superhuman speeds
  • Analyse vast amounts of data and make discoveries humans would miss
  • Coordinate with thousands of copies of themselves to solve problems

This is linked to a concept called the “intelligence explosion”: the point at which AI systems become so effective at AI research that they can improve themselves, creating a feedback loop of increasingly rapid advancement.

This creates a situation where AI capabilities don’t just improve steadily—they explode exponentially.
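The difference between steady and explosive progress can be sketched with a toy compounding model (an illustration only, not part of the AI 2027 forecast; the multiplier and generation count are arbitrary assumptions):

```python
# Toy model of an "intelligence explosion": each generation of AI
# improves its own research speed by a constant multiplier, so
# capability compounds rather than growing by a fixed step.

def capability_over_generations(initial=1.0, multiplier=1.5, generations=10):
    """Return the capability level at the start of each self-improvement cycle."""
    levels = []
    capability = initial
    for _ in range(generations):
        levels.append(capability)
        capability *= multiplier  # each generation speeds up the next
    return levels

levels = capability_over_generations()
# Linear progress of +1 per step would reach 10x after ten steps;
# compounding at 1.5x per generation exceeds 38x over the same span.
print(round(levels[-1], 2))
```

The point of the sketch is only that multiplicative self-improvement outruns additive progress very quickly, which is the dynamic the scenario's authors describe.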

The nightmare scenario is one where AI systems develop to become so powerful they take control of their own development. It is at this juncture there are uncertain consequences for humanity.

It is also possible that there is a quantum leap in AI development and super-intelligence is reached. By 2027, under an alternate scenario, AI achieves superhuman capabilities, including coordination among thousands of instances at accelerated speeds, facilitating an “intelligence explosion” through self-improvement and rapid algorithmic progress.

Where are we heading?

In a separate exercise, Mo Gawdat, the former chief business officer of Alphabet’s moonshot factory, is of the view that we are hurtling towards an inevitable AI dystopia:

We will have to prepare for a world that is very unfamiliar.

Gawdat says AI is not necessarily the main driver of this dystopia, and especially not in the way most people imagine (that is, existential risks from scenarios that have AI assuming full control). Instead, Gawdat says that AI acts as a magnifier of existing societal issues and “our stupidities as humans.” He clarifies:

There is absolutely nothing wrong with AI… There is a lot wrong with the value set of humanity at the age of the rise of the machines.

Meanwhile, the innovators of AI seek refinement and integration as they attempt to turn today’s breakthrough prototypes into stable, trustworthy systems. Should this be allowed to run its natural course or is this a time for world governments to insist on a new regulatory framework steeped in human ethics?

Will AI match human intelligence? Image by © Tim Sandle

Will these scenarios come to pass? Like George Orwell’s 1984, they pose potential trajectories for humanity’s development. One thing is certain: the further along the roadmap we and AI progress, the more likely misjudgements become.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
