As artificial intelligence advances into most social systems, it is reshaping law, ethics, and society at speed. What is the impact of this on human society, and can it be classed as a form of threat?
Dr. Maria Randazzo of Charles Darwin University warns that current regulation fails to protect rights such as privacy, autonomy, and anti-discrimination. The “black box problem” leaves people unable to trace or challenge AI decisions that may harm them.
AI and human rights
Randazzo observes that current AI regulation fails to prioritise fundamental human rights and freedoms such as privacy, anti-discrimination, user autonomy, and intellectual property rights – largely because many algorithmic models are untraceable.
Calling this lack of transparency the “black box problem,” Randazzo explains that decisions made by deep-learning and machine-learning processes are impossible for humans to trace. As a result, users struggle to determine whether and why an AI model has violated their rights and dignity, and to seek justice where it has.
“This is a very significant issue that is only going to get worse without adequate regulation,” Randazzo states.
“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behavior. It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”
Market-centric, state-centric, or human-centric?
Currently, the world’s three dominant digital powers – the U.S., China, and the European Union – are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models, respectively.
Randazzo’s research suggests that the EU’s human-centric approach is the preferred path to protect human dignity, eschewing the U.S. and China models. However, she cautions that without a global commitment to this goal, even that approach falls short.
Human dignity in the age of Artificial Intelligence
Randazzo notes: “Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, empathy, and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition.”
Randazzo concludes: “Humankind must not be treated as a means to an end…”

The research paper, titled “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes,” was published in the Australian Journal of Human Rights.
The paper is the first in a trilogy Randazzo will produce on the topic.
