
Tech & Science

AI has no idea what it’s doing: Does this pose a threat?

AI is reshaping Western legal and ethical landscapes at unprecedented speed, but it may also be undermining democratic values.

How do humans interact with AI models? (Barbican Centre, London) — Image by © Tim Sandle

As artificial intelligence advances and extends into most social systems, it is seemingly reshaping law, ethics, and society at speed. What is the impact of this on human society? Can we classify this as a form of threat?

Dr. Maria Randazzo of Charles Darwin University warns that current regulation fails to protect rights such as privacy, autonomy, and anti-discrimination. The “black box problem” leaves people unable to trace or challenge AI decisions that may harm them.

AI and human rights

Randazzo observes that current AI regulation fails to prioritise fundamental human rights and freedoms such as privacy, anti-discrimination, user autonomy, and intellectual property rights – largely due to the untraceable nature of many algorithmic models.

Calling this lack of transparency the “black box problem,” Randazzo explains that decisions made by deep-learning and machine-learning processes are impossible for humans to trace. This makes it difficult for users to determine whether an AI model has violated their rights and dignity, and to seek justice where necessary.

“This is a very significant issue that is only going to get worse without adequate regulation,” Randazzo states.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behavior. It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

Market-centric, state-centric, or human-centric?

Currently, the world’s three dominant digital powers – the U.S., China, and the European Union – are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models, respectively.

Randazzo’s research suggests that the EU’s human-centric approach is the preferred path to protect human dignity, eschewing the U.S. and China models. However, she cautions that without a global commitment to this goal, even that approach falls short.

Human dignity in the age of Artificial Intelligence

Randazzo notes: “Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, empathy and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition.”

Randazzo concludes: “Humankind must not be treated as a means to an end.”

The research paper, “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes,” was published in the Australian Journal of Human Rights.

The paper is the first in a trilogy Randazzo will produce on the topic.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is also a practising microbiologist and an author, with interests in history, politics, and current affairs.
