Scientists warn that rapid advances in AI and neurotechnology are outpacing our understanding of consciousness, creating serious ethical risks. New research contends that developing scientific tests for awareness could transform medicine, animal welfare, law, and AI development.
Yet identifying consciousness in machines, brain organoids, or patients could also force society to rethink responsibility, rights, and moral boundaries. What does it mean to be conscious, or unconscious? That question has never been more urgent, the researchers argue, or more unsettling.

Defining consciousness
The researchers point out that explaining how consciousness emerges is now an urgent scientific and moral priority. A clearer understanding could eventually make it possible to develop scientific methods for detecting consciousness. This breakthrough would have far-reaching consequences for AI development, prenatal policy, animal welfare, medicine, mental health care, law, and emerging technologies such as brain-computer interfaces. It could also deepen our understanding of what it means to be human.
The scientists warn that if we become able to create consciousness, even accidentally, it would raise immense ethical challenges and, in the case of AI, even existential risk.
The challenge of defining sentience
Consciousness, commonly described as awareness of both the world around us and ourselves, remains one of science’s most difficult puzzles. Despite decades of research, scientists still lack agreement on how subjective experience emerges from biological processes.
To date, scientists have identified brain regions and neural activity linked to conscious experience, but debate continues over which brain systems are truly necessary for consciousness and how they interact to produce awareness. Some researchers even question whether this approach captures the problem correctly.
The new review examines the current state of consciousness science, future directions for the field, and the possible consequences if humans succeed in fully explaining or even creating consciousness. This includes the possibility of consciousness emerging in machines or in lab-grown brain-like systems known as “brain organoids.”
Societal benefit
The researchers argue that developing evidence-based tests for consciousness could transform how awareness is identified across many contexts. These tools could help detect consciousness in patients with brain injuries or dementia and determine when awareness arises in foetuses, animals, brain organoids, or even AI systems.
In medicine, for example, this could improve care for unresponsive patients who are assumed to be unconscious. Understanding the biological basis of subjective experience may also help researchers develop better therapies for conditions such as depression, anxiety, and schizophrenia.
Warning
While this would represent a major scientific advance, the researchers caution that it would also create difficult ethical and legal questions. Determining that a system is conscious would force society to reconsider how that system should be treated.
Such insights will reshape how we see ourselves and our relationship to both artificial intelligence and the natural world. AI that gives the impression of being conscious, for instance, will raise profound societal and ethical challenges.
To fully understand what conscious AI means, the researchers argue that scientific work should place greater emphasis on phenomenology (what consciousness feels like) alongside studies of function (what consciousness does).
To read the discussion, see Frontiers in Science and the paper titled “Consciousness science: where are we, where are we going, and what if we get there?”
