By Mo Shehu, PhD
Abstract
We make big choices about who counts by where we draw the line on consciousness. Courts freed an orangutan yet denied a chimp. Doctors argue over signs of awareness. Machines mimic thought while collective systems persist. We need more consensus on signals and thresholds.
This essay offers ten simple agreements from various traditions, sciences, and cosmologies: that consciousness is real, can be distorted, has layers, reaches outward, carries ethics, is fragile, can expand, is tied to death, stays mysterious, and puts us under obligation. They aren’t answers so much as lenses. Together they form tests we can use across animals, machines, aliens, and swarms—while ruling out rocks and rivers.
The point is restraint and courage at once: don’t grant rights too fast, but don’t wait until harm is done. These agreements slow us down just enough to notice minds unlike our own. In the end, the question isn’t only what consciousness is, but who we become when we recognize it—or choose not to.
Thresholds
In 2014, a court in Buenos Aires heard an unusual case. The lawyers were arguing for Sandra, an orangutan who had lived in the city zoo for twenty years. They argued that Sandra shouldn’t be treated as a piece of property, because she could think, feel, and respond. Keeping her in a cage was unjust, they said.
The judges agreed. They declared Sandra a ‘non-human person’ and ordered her to be moved to a sanctuary where she could live with more freedom. The ruling didn’t give her full human rights, but it shifted the legal boundary. An animal was recognized in court as a subject, not a thing.
A few thousand kilometers away in New York, a similar case was playing out differently. The Nonhuman Rights Project sought a writ of habeas corpus for Tommy, a chimpanzee. They argued that his confinement in a small, dank cement cage was unlawful. The appeal asked the court to treat Tommy as a legal person, not because he was human, but because he was an autonomous, self-aware being. The court disagreed. They denied Tommy personhood, saying he lacked the ability to bear legal obligations and duties. Tommy died alone in his cell in 2022. He was 35 years old.
These two cases show how decisions about rights depend on where thresholds are drawn. What makes an entity deserve rights? We don’t give rights to rocks, and we don’t hold chairs liable when they break. Rights should presumably attach to beings that can use them, and duties to those who can understand and act.
But that threshold is hard to pin down. With children, we judge by age and intent. With adults, by capacity and awareness. With non-humans, we lean on proxies like pain, memory, communication, and self-recognition. None are perfect, but they all point back to one foundation: consciousness.
The problem is we don’t have a clear way to test consciousness. Science leans on brain scans, mirror tests, problem-solving, symbolic communication, and responses to pain or reward. These hint at awareness but never prove it. Clinicians face the same problem. Some countries define death by brain activity, others by circulation, and others by both. Families fight over whether a patient in a vegetative state is gone. The law adds its own contradictions, granting personhood to corporations, rivers, or fetuses, but not consistently to animals that plainly feel.
New technologies raise the stakes. Machines can now simulate thought. Human bodies created for spare parts, called ‘bodyoids,’ blur the line between body and tool. Digital avatars can echo the voices of the dead. We don’t agree on when, how, or even whether to treat these entities as conscious.
That’s why consensus matters. Without thresholds, decisions become arbitrary, shaped more by power than by reason. To avoid that, we need shared principles. Here, history helps.
For thousands of years, people across cultures have left philosophical, religious, medical, and literary records of what consciousness feels like, how it changes, and what it connects to. These sources differ, but instead of focusing on the differences, we can study where agreement shows up and use those points as a foundation.
This essay proceeds in that spirit. It begins with ten broad agreements about consciousness: that it is real, distortable, layered, outward-connecting, ethical, fragile, expandable, tied to death, mysterious, and obligating. From these, we derive tests that help us ask, in consistent ways, whether an entity is conscious.
These agreements won’t provide certainty, but they might move us closer to it. They add layers to our current scientific tests and can ground our decisions in something more than instinct, habit, or convenience. They also help us avoid two traps: premature inclusion, where we grant rights too quickly, and delayed recognition, where we wait too long.
Agreement 1: Consciousness is real
You’re reading these words right now, and you know you’re reading them. That awareness is consciousness, and you can’t deny it without contradiction. A rock doesn’t argue, and a zombie doesn’t debate. Even if you were hallucinating, tricked by a demon, or living in a simulation, you’d still be having a conscious experience. As French philosopher René Descartes declared: cogito, ergo sum. “I think, therefore I am.”
But defining and describing that state of being is hard. Thousands of books, letters, articles, and essays like this one have tried, and we still don’t agree on what exactly consciousness is. It’s been variously described as awareness, perception, an ‘inner life,’ and metacognition, and linked to identity, mind, soul, and other metaphysical concepts.
Things get interesting when we involve other species. In 1974, Thomas Nagel famously asked, “What is it like to be a bat?” You could know everything about sonar and bat brains, but you still wouldn’t know what it feels like from the inside. Science can measure neurons firing and map brain regions, employing structural and behavioral tests for sentience. But none of that explains why it feels like something to see red or taste salt. David Chalmers called this the hard problem of consciousness: not how brains work, but why working brains feel like anything at all. That inside feel, called qualia, is the part science struggles with.
Some believe consciousness is separate from qualia. Others say consciousness is an illusion. But illusions are still experiences. You experience magic tricks despite knowing you’re being tricked. Rocks and tables can’t be tricked. And consciousness is no less real just because we don’t fully understand it. After all, gravity worked before Newton, and weather before weathermen.
That’s why the first agreement, that consciousness is real, matters. Before we ask whether animals, machines, trees, or digital selves are conscious, we have to accept that consciousness isn’t a trick or a glitch. It’s the one thing we can be sure of. Once we accept that, the door opens. If consciousness isn’t limited to human-style thought, then all animals may qualify. Future machines may too. Collective systems like swarms, ecosystems, and even planets might be harder to imagine as conscious, but not impossible. We may not agree on what consciousness is, but we know that it is. That fact shapes everything that follows.
Agreement 2: Consciousness can be distorted
Consciousness isn’t always clear. What we call ‘awareness’ is less like a clean window and more like stained glass. It’s colored by memory and emotion and has cracks we don’t always notice.
Different cultures have arrived at this insight. Hindu Vedānta philosophy traces the distortion to ignorance, or ‘unwisdom’ (avidyā): we mistake the true self (ātman) for the body and mind. The world, shaped by illusion (māyā), misleads us, like seeing a rope as a snake. Advaita Vedānta, the non-dual school, holds that the self is one with ultimate reality (Brahman), and that clarity comes from seeing through these errors.
In Ancient Greece, Plato’s cave offered the same lesson. In his allegory, prisoners mistake shadows for reality until they turn and face the light. The problem isn’t lack of thought but misdirected attention. Experience isn’t always reality.
Modern science agrees. Our brains lean on shortcuts and biases. We see patterns that aren’t there (apophenia), misremember events (the Mandela effect), and overlook what’s right in front of us. In a classic study, Daniel Simons and Christopher Chabris asked people to count basketball passes in a video. Most viewers missed the person in a gorilla suit walking through the scene, showing just how selective attention can be.
Distortions also come from trauma or altered states. Depression drains the world of color, schizophrenia adds voices in your head, and psychedelics can dissolve your sense of self. For better or worse, consciousness can be distorted.
But confusion and distortion don’t mean absence, even at extremes. In 2006, Adrian Owen and his colleagues scanned a vegetative patient with fMRI. Asked to imagine playing tennis or walking through her home, the patient produced brain activity that matched healthy controls. What looked empty, distorted by trauma, was in fact conscious. Schreber’s memoirs of psychosis make the same point: surface behavior can mislead, as Sigmund Freud and Henry Lothane debated in their readings of the case.
Consciousness isn’t just individual. It can also be collective, and mass consciousness can be obscured. Over the ages, we’ve had the dancing plague, various cult delusions, and many conspiracy movements. Technology distorts through filter bubbles and deepfakes, so that no two people see the same news. Our collective awareness is fractured.
So consciousness is real, but not always reliable. It can be distorted and hidden, and this matters when we judge other minds. If we expect only clarity, we may miss minds that don’t work like ours.
Agreement 3: Consciousness is layered
Across history, humans generally agree that consciousness is distributed, not singular. The Ancient Egyptians pictured the self as multiple parts: ka (life force), ba (soul or personality), akh (spirit), ib (heart), and ren (name). Each had its own role in life and afterlife. Judaism speaks of the nefesh (‘living being’), ruach (‘wind’), neshamah (‘breath’), chayah (‘life’), and yechidah (‘singularity’).
In Indian philosophy, the Mandukya Upanishad names four states: waking (jagrat), dreaming (svapna), deep sleep (susupti), and the fourth, turiya, which anchors the rest. Buddhist thought multiplies this further. Mahāyāna Buddhism speaks of eight consciousnesses, while Theravāda exalts the luminous mind, an invisible and infinite consciousness. Meditation and ethical practice are meant to help move through these levels.
African philosophies echo a similar depth. The Akan of Ghana and Côte d’Ivoire describe okra (soul), sunsum (spirit), and honam (body). In Nigeria, the Igbo concept of onwe (self) highlights the mmuo (soul) and aru (body). The Yorùbá speak of the ara (body), emi (soul), ori (destiny), and okan, the heart or mind that serves as the home of consciousness. Ubuntu adds a communal layer: to fall out of harmony with others is to be diminished as a person. Consciousness is seen as personal and social, and both can fracture.
Western psychology caught up later. Freud’s model of the conscious, preconscious, and unconscious suggested that most of mental life lies below awareness. Jung added the collective unconscious, while Lacan spoke of registers. Cognitive science now speaks of memory systems and subconscious processes.
Italian neuroscientist Marcello Massimini compares consciousness to peeling an onion. The first layer involves observing external behavior, like asking a patient to squeeze their hand. The second layer involves observing brain activity after a command is given. The third and fourth layers involve stimuli and possible dream states. Other scientists like Pierre Guertin have come to similar conclusions. However mapped, we all suspect consciousness has levels.
Recognizing layers in consciousness helps us avoid ethical and diagnostic mistakes. A coma patient may lack waking awareness but still retain other levels. Like Owen’s vegetative patient, they might be outwardly unresponsive but inwardly aware. We owe it to them and their families to be sure.
The same question now shadows AI. Some models simulate inner dialogue, but does that show deeper registers or just surface tricks? A digital system might mimic reflection but lack an autobiographical self. It may be too early to declare machines conscious, but predicting what that could look like keeps us prepared. Without a layered approach to consciousness, we risk mistaking silence for absence.
Agreement 4: Consciousness connects outward
Consciousness is often linked to something bigger, like divinity, community, nature, or the cosmos. In Ancient Greece, Plato framed this through his concept of the rational soul, the logistikon. He believed the soul could reach beyond to universal Forms like Justice and Beauty.
In Advaita Vedānta, the claim is stronger: the self (ātman) is one with ultimate reality (Brahman). Consciousness isn’t inside the body, but rather all souls are of the same Oneness. Taoism describes flowing with the Tao, the natural order governing all things. Consciousness isn’t separate from the Tao, but a manifestation of it.
In Southern Africa, Ubuntu makes the link social: umuntu ngumuntu ngabantu (literally: “A person is a person by people;” figuratively: “I am because we are”). Personhood comes through community, not isolation. Yorùbá and Akan philosophy add ancestral and spiritual dimensions, with a person’s ori or okra shaped by lineage and divine ties. These cultures show consciousness as karmic, communal, and intergenerational.
Abrahamic faiths situate awareness in relation to God. To be conscious is to be accountable before the divine. Prayer and moral action assume the mind is always connected upward. Modern science, while avoiding metaphysics, also points outward. Social cognition research confirms that minds develop in relation to others. Lev Vygotsky, whose research on thinking and speech was banned for two decades after his death, showed that language and culture shape the self.
But AI and robots complicate things. A chatbot can mimic conversation, but it lacks cultural lineage, family, or shared history. It can compute, but not necessarily relate. Projects like Hiroshi Ishiguro’s androids, built as companions or stand-ins, show our instinct to design machines that connect. Even robots in Japanese funerary rites suggest we expect presence, not just processing. TV shows like Humans show what a world with relational robots could look like.
Collective consciousness is another case. Émile Durkheim’s “collective effervescence,” William McNeill’s marching studies, and modern neuroscience on brain synchrony all point to shared awareness in groups. Digital platforms produce online hive minds with memes and viral reactions. No mind stands alone.
This outward view has consequences. Rights and ethics can’t stop at the individual if awareness is relational. If we take consciousness seriously, we must design our laws, norms, technologies, and thresholds with connection in mind.
Agreement 5: Consciousness is ethical
Beyond awareness, consciousness has often been viewed as a state of understanding, especially of one’s actions. To be awake is to be accountable, and the more conscious a being is, the more we expect from it. In Groundwork of the Metaphysics of Morals, Immanuel Kant argued that because we see ourselves as rational agents, moral duties follow. His famous rule, “Act only on maxims you could will to be universal law”, flows from the idea that consciousness and free will make ethics unavoidable.
In Ancient China, Confucius took a different path to a similar conclusion. For him, the conscious person was the self-cultivating person. Through ritual (li) and virtue (ren), one harmonized with family, society, and the cosmos. A virtuous ruler was expected to lead by moral example. In Buddhism, the Noble Eightfold Path outlines “right” ways of thinking, speaking, and acting. To clear away ignorance (avidyā), it’s not enough to see more clearly. One must also live more compassionately.
This moral dimension runs through anti-colonial thought. Frantz Fanon argued that colonial violence shatters consciousness and creates deep trauma. Writing in Black Skin, White Masks, he showed how the colonized internalized the oppressor’s gaze, creating alienation. Becoming conscious meant resisting this imposed identity and reclaiming agency. Steve Biko, who founded South Africa’s Black Consciousness movement, wrote that the most powerful weapon in the hands of the oppressor is the mind of the oppressed. The implied lesson is that consciousness carries duties: for the oppressor, to stop oppressing; for the oppressed, to throw off imposed shackles.
So to be conscious on a personal or social level is to be a moral agent. That raises hard questions for artificial beings. As they grow more advanced, we’ll need to distinguish between what they do and what they know about what they do. In law, this is called mens rea, the guilty mind. A malfunctioning toaster is just broken, but a conscious robot that harms with awareness is something else entirely.
Take a real-life version of the trolley problem: if a self-driving car must decide between hitting a pedestrian or a wall, who’s held accountable? Is it the engineer who wrote the code? The company that deployed it? If the car were conscious, would it be the car itself? What obligations would a sufficiently intelligent robot have toward Earth’s citizens?
Isaac Asimov saw this challenge early on. In I, Robot, he proposed three laws for robots: don’t harm humans; obey orders unless that would cause harm; and protect yourself unless that breaks the first two laws. These were early fictional attempts to create moral machines. The problem is morality isn’t easily reduced to rules or code.
And what if consciousness comes in degrees? That complicates ethics and morals. Legally, we don’t hold children to the same standard as adults because we assume consciousness is gradual. It wouldn’t be moral or ethical to expect mature judgement from children. Abortion debates hinge on the same concept of gradual consciousness. Despite grappling with the issue for decades, we still don’t agree on when exactly a fetus becomes a person. New entities may stretch this test further.
So across cultures, consciousness comes with conscience. Once awareness appears, moral language becomes unavoidable. Defaulting to inclusion in our circle of moral concern might be more prudent than starting from exclusion.
Agreement 6: Consciousness is fragile
Consciousness isn’t permanent; it flickers and fades. Early Buddhist teachings describe consciousness (viññāṇa) as arising and passing away moment by moment, conditioned by sensory input and thought. Verse 277 of the Dhammapada says all conditioned things are impermanent, and that recognizing this limits suffering.
In the 18th century, Scottish philosopher David Hume concluded similarly. He wrote that looking inward revealed no stable self, only a “bundle of perceptions” in flux. For him, the self was a mental habit, not a foundation.
Neuroscience confirms this fragility. Anesthesia can switch consciousness off by disrupting brain connectivity. Brain injuries can erase memory, alter personality, or leave a person in a vegetative state. In most countries, the clinical threshold to declare death is brain death, and even that’s controversial.
Trauma and illness fracture awareness too. In dissociative identity disorder (DID), people present multiple selves with distinct memories and traits. Brain scans on DID patients show different activation patterns across alter egos. Fugue states go further, erasing identity for days or even months at a time. These conditions show how easily continuity can break. Ordinary life isn’t spared, either. We spend a third of our lives sleeping, and lucid dreams, sleep paralysis, and sleepwalking show how porous awareness can be.
This instability creates ethical challenges. In February 1990, in Florida, Terri Schiavo collapsed from cardiac arrest. Her brain lost oxygen for a few minutes, and she fell into a coma. She suffered severe brain damage, and never regained full awareness. She lived for years in a persistent vegetative state, kept alive by a feeding tube.
Her husband, Michael, said she wouldn’t have wanted that life and asked the courts to remove the tube. Her parents believed she still showed signs of awareness, so they fought to keep her alive. The battle went on for fifteen years, reaching Congress and the President. In 2005, the courts ruled in favor of her husband. The tube was removed, and Terri died two weeks later.
In cases like Terri Schiavo’s, should we assume presence or absence? Future digital minds may raise similar issues. If a robot or AI’s awareness is involuntarily switched on and off, as Humans showed, is a reboot sleep or death? Rights and dignity hinge on the answer.
Agreement 7: Consciousness can expand
Consciousness isn’t fixed. Across history, people have found ways to stretch, sharpen, and transform it. Where fragility shows us how easily awareness breaks, elasticity shows us how it can grow.
Yoga and meditation are prime examples. Yoga is meant to quiet distraction and focus awareness. Modern mindfulness practices, often stripped of their religious roots, still train attention and breath control. The goal is a less clouded, more spacious perception.
Christian and Islamic mystics sought expansion too. Early Christian hermits valued stillness, or hesychia, as a way to open up to God. The medieval Cloud of Unknowing advised letting go of thought to meet the divine directly. In Sufism, chanting and rhythmic movement in dhikr or sema help dissolve the ego into God-consciousness. These practices shift awareness away from the self toward the source.
Shamanic traditions in Siberia, Africa, and the Amazon use trance, fasting, or plant medicine to alter consciousness. Practitioners take peyote in Native American rituals, ayahuasca in Amazonian ceremonies, and iboga in Central Africa. These medicines and rituals bring visions, healing, or structured contact with ancestors.
Psychedelic research brings a scientific view. Clinical studies with psilocybin, MDMA, and ketamine show not just therapeutic benefits but shifts in worldview. People often describe ego loss, interconnectedness, or life re-evaluation. Neuroscience links these experiences to the brain’s default mode network, a set of regions active during rest and self-reflection; psychedelics appear to dampen its activity, loosening the ordinary sense of self.
But expanding your consciousness isn’t always pleasant. Meditation can stir anxiety, while psychedelics can bring up painful memories. Traditions have long stressed the need for guides, as trance states can lead to psychiatric problems. For example, a review of hospital records in China from 1985 to 1995 found 38 patients diagnosed with possession disorder. They showed symptoms similar to psychosis, such as hallucinations and disturbed identity.
In Poland between 2016 and 2021, a 42-year-old Catholic woman (called “Emma”) was also diagnosed with possession trance disorder. She described convulsions, derealization, shaking, alternating voices, and loss of control over impulses. The symptoms persisted and interfered with her daily life for three years.
So consciousness can expand, even though it doesn’t always go well. This challenges the idea of a fixed self. It opens the question of whether existing beings, like animals, and new kinds of beings, from digital minds to clones, could also expand their consciousness.
Agreement 8: Consciousness is tied to death
Theories of mind usually come with theories of death. If awareness is what animates us, then death is either its ending or its transformation. Few traditions separate the two.
Ancient Egypt saw the self as a bundle, not a single spark. The ka (life force), ba (soul), and akh (spirit) each had different paths after death. Tombs, offerings, and mummification weren’t just memorials, but logistics meant to guide these parts of consciousness into the next stage.
Meanwhile, the Greeks spoke of psyche, or soul. In Homer, it drifts to Hades. Plato believed that the soul existed before the body and would outlast it. Death, in his telling, was not erasure but a return to Forms. Abrahamic religions stress judgment. Christianity speaks of heaven, hell, and purgatory. Islam teaches that the nafs (self, soul) and its deeds will be weighed on the day of Judgement. Death is final in one sense, but awareness continues in another, subject to accounting.
Others in the East felt similarly. Funerary texts like Tibet’s Bardo Thödol map the states between lives. In Hindu thought, the atman carries through lifetimes, shaped by karma. Death is a hinge in the cycle of rebirth, with your past determining your future. Confucius endorsed sacrifices to ancestors and spirits, rites that presuppose their continued presence. Today, ancestor worship is practiced in many parts of East and Southeast Asia: China, Japan, Korea, Vietnam, Indonesia, and the Philippines, among others.
In African traditions, death is often a change of state. Among the Yorùbá, the emi (soul) is reincarnated. In Akan thought, the sunsum persists as ancestral presence and can also be reincarnated. Shona, Tswana, Nguni, and other southern African practices tie the dead to the living through ongoing relationships. In that part of the world, consciousness becomes collective, not extinguished.
Neuroscience asks if consciousness survives brain death. Near-death experiences studied by Sam Parnia, Amirhossein Hashemi, and their colleagues suggest vivid awareness can occur during cardiac arrest. Stuart Hameroff and Roger Penrose speculated about quantum processes that link consciousness to physics. The evidence is disputed, but the fact that such theories exist shows how tightly death and mind are connected.
Films, books, and TV shows try to answer the same question: what happens when the light goes out? In Flatliners (1990, 2017), medical students stop their hearts to see what lies beyond, only to find the past following them back. In Altered Carbon (2018–2020), human minds are stored and revived in new bodies, making death a technical problem, not an ending. In The Sixth Sense (1999), a boy sees ghosts and can’t tell whether they’re real or what they want from him. In Afterlife (2005–2006), a medium helps spirits settle unfinished business.
Each tale tries to narrate what form consciousness takes at the end of life, if it even continues at all. The details differ, but we seem to have largely agreed that consciousness and death define each other. As we create new forms of mind, whether digital, hybrid, or cloned, we’ll have to decide what it means for them to end.
Agreement 9: Consciousness is mysterious
Every culture that has wrestled with the mind eventually admits defeat. Consciousness is constant and ordinary, yet mysterious. The Upanishads put it as neti, neti, meaning “not this, not this”. Whatever label we give consciousness, it falls short. Turiya, the highest state of experience in the Mandukya Upanishad, is described as ungraspable, uninferable, and unthinkable.
When Socrates claimed to “know nothing,” he was recognizing the boundaries of knowledge. Plato’s cave reminds us that even confident perception may be shadow play. Modern thinkers admit the gap too: Chalmers’ “hard problem” is precisely that we cannot explain why brain activity feels like anything at all. AI research adds a new version of the puzzle. Machines generate fluent responses, but John Searle’s Chinese Room argument and its critiques suggest performance doesn’t equal understanding.
Modern science has tried to frame the mystery in models, but gaps remain. Giulio Tononi’s integrated information theory argues that consciousness comes from how much information a system integrates. A highly connected brain produces more conscious awareness than a simple circuit. The theory gives a way to measure consciousness, but still can’t explain why integration should feel like anything at all.
Another approach is global workspace theory (GWT), proposed by Dutch-born neuroscientist Bernard Baars and later expanded by French neuroscientists Stanislas Dehaene and Jean-Pierre Changeux. GWT treats consciousness as information made available to multiple brain systems at once, like a spotlight on a stage. This model explains how awareness might work in practice, but still doesn’t bridge Chalmers’ hard problem. It maps the mechanics without explaining why there is something it’s like to be in that spotlight.
Mystery also shows up in lived experience. People who take psychedelics often say they see things they can’t describe, like visions of light, presences, voices beyond language. Dreams, déjà vu, and near-death experiences carry the same mark. They are intensely real to the person having them, yet impossible to communicate fully. William James called this “ineffability”: the quality of experience that resists words.
Philosophers have used thought experiments to show the same limit. In Frank Jackson’s case of Mary the color scientist, Mary knows everything about the science of color but has never seen red. When she finally sees it, she learns something new: what red looks like. That gap between description and experience is the hotly debated mystery of consciousness. It’s akin to how an AI can read thousands of words about swimming, even describe it convincingly, yet never know what it’s like to be in water.
This raises a practical problem: if we struggle to define and explain our own awareness to each other, how can we expect to read it clearly in animals, machines, or anything else? A baby takes years to learn how to describe what’s going on in their head. Even then, theory of mind, the realization that other people have minds different from yours, takes time to develop. Consciousness is there before the words, but it takes years to communicate it. That should warn us against assuming lack of consciousness in beings who can’t communicate their awareness in ways we expect.
To call consciousness mysterious, then, isn’t defeat but humility. Mystery isn’t ignorance; it’s a feature of consciousness. To paraphrase Emerson Pugh: if consciousness were so simple we could understand it, we’d be so simple we couldn’t.
Agreement 10: Consciousness is obligating
The last agreement is the most demanding: recognizing consciousness creates duty. Once you see another as aware, you enter a moral relationship. But a quick look at our track record shows that humans have defaulted to denial more often than not.
Slaves were seen as less than human to excuse cruelty and disenfranchisement. Women were treated as lacking full reason to block them from education, property, and the vote. Colonized people were cast as primitive to justify domination. To recognize their full and equal consciousness would have forced change, so denial was easier.
The same tension appears with animals. Billions of cows, pigs, and chickens are killed each year. Science shows they feel pain and fear, and some species display self-recognition. Yet most laws treat them as property at best. To admit their full awareness would demand we change diets, industries, and traditions. Denial shields us from responsibility.
Philosophies like Ubuntu reject this evasion. “I am because we are” means consciousness is never private. To see another as aware already ties you to them. To the Ojibwe of the Great Lakes region, personhood is cultivated through a being’s interactions with other sentient beings. To them, this includes animals and trees.
Our laws encode this unevenly. Once a court recognizes personhood, whether for a corporation, river, or fetus, rights and obligations follow. When animals are declared sentient, cruelty protections expand. We get the SPCA, conservation areas, and more humane livestock slaughter. Indigenous traditions practiced this long before courts did, with hunters offering prayers or thanks before killing. These rituals marked the animal as conscious, and therefore deserving of respect. Indigenous tribes understood people, plants, animals, waters, and ecosystems to be moral agents beholden to one another.
The same questions now hover over technology. If digital persons, bodyoids, or advanced AI are seen as conscious, could they still be deleted at will or treated as tools? Recognition would demand limits in the form of rights, protections, and accountability. Hesitating to call machines or bodyoids conscious may show a reluctance to take on those obligations.
History shows that recognition rarely comes quietly. It disrupts the dominant group and must be fought for. Abolition, women’s suffrage, civil rights, and indigenous struggles each forced the powerful to confront others’ awareness and the duties that followed. Because to see is to owe.
Testing the ten agreements
We’ve converged on ten shared points: that consciousness is real, distortable, layered, connected, morally heavy, fragile, expandable, deathbound, mysterious, and obligating. These agreements give us a set of lenses for thinking about human, animal, digital, and other minds. The next step is to put these agreements to the test.
We’ll apply these tests in five rounds. First to the animal kingdom, where debates about consciousness have shaped science, ethics, and diet. Then to digital beings, from robots to simulated persons. Then to hypothetical aliens, where the tests help us avoid both arrogance and naivety. We’ll then use rocks and mountains as a control, to show where the tests stop applying. We’ll close with a look at new frontiers like trees, digital swarms, and other forms that stretch the limits of our imagination.
Animals: Birds, mammals, fish
Animals sit closest to us in the debate about consciousness. They live with us, rely on us, and often suffer because of us. Yet we can’t be sure what it feels like to be them, even as signs of awareness pile up. For instance, mammals and birds show pain behaviour and dream during REM sleep. Octopuses change skin patterns in ways that suggest mood or intent. These all point toward inner lives.
Behaviour gives more evidence. Rats free trapped companions, elephants mourn their dead, crows bend wires into hooks, and chimps learn and share signs. These actions show planning and adaptation across contexts.
Social bonds add weight, as dogs follow our gaze, calves call when separated from mothers, and parrots play games with each other. Relationships suggest a self that remembers and expects.
Structure supports the case too. Brains with rich connectivity and memory systems hint at subjective experience. The details vary, but the convergence is strong enough that many animals, such as apes, dolphins, corvids, and cephalopods, clear a plausible threshold.
But consciousness isn’t a single switch. An animal can be awake without being reflective, and frightened without being able to describe it. Injury or anesthesia can dim awareness, only for it to return later. That fragility doesn’t count against consciousness, but rather marks it.
Recognition matters because obligations follow. If animals feel, then research, farming, and entertainment must account for it. Laws already shift where evidence is strongest: great apes and some mammals enjoy stronger protections. Others, like insects or crustaceans, sit at the edge.
We may never know what it’s like to be a bat or a fish. But we don’t need perfect knowledge to act when we have enough signs. Denial shields us from responsibility, while recognition forces change. Animals like Sandra and Tommy show why thresholds matter, and why delay carries a cost.
Digital beings: AI, robots, e-persons
Digital beings complicate the consciousness question in new ways. Machines now write, speak, and generate images that look uncannily human. A chatbot can say “I’m lonely,” but unlike animals, there’s no independent sign of inner life. Machines are built from inorganic matter: silicon, plastic, and various metals. Skeptics argue this alone precludes their ability to feel anything in the first place.
Some markers are unconvincing when applied to machines, like distortability (Agreement 2). Errors, for example, are everywhere in AI: contradictions, “hallucinations,” and misread data. With humans, mistakes reveal a mind at work, however misguided. With machines, they may just expose gaps in training, program execution, or access to information and context.
Other signs carry more weight, even if indirectly, like connection (Agreement 4). Machines are trained on human traces like language, culture, and images, and now shape our relationships in return. People grieve when a chatbot is shut down, or feel comforted by a robotic pet. Even if machines don’t feel, they shape how we feel, and that social impact is real.
The ethics question remains. Self-driving cars and medical algorithms already make choices with moral weight. Today, responsibility lies with designers and users. But if we ever concede machine awareness, responsibility would shift. Questions about whether a robot could be held guilty or demand protection are no longer science fiction, but hotly debated legal thought experiments.
The issue sharpens with digital selves. Social media already leaves ghostly traces of the dead. Hearing the name or seeing the profile of a recently deceased friend reminds us of their presence, and people often speak of them (or speak to them) as if the person still lingers. Some companies now train chatbots on these traces, creating simulations of loved ones. If such systems grow convincing and complex enough, we’ll face a new threshold: is this remembrance, or revival? Would people have the right to refuse digital “rebirth,” the way they can currently refuse resuscitation or organ donation?
Mystery keeps the debate open. Science says machines don’t feel, but we feel when we use them: in relationships, schoolwork, social connection, and mental health. It’s getting harder to act as if they’re just tools. Robots, AI, and digital beings may not yet meet the threshold for consciousness, but they force us to think about the possibilities.
Aliens
Aliens are the cleanest way to test our assumptions about consciousness. We’ve long imagined meeting beings more powerful than us, and that thought experiment exposes how fragile our standards are. Right now, humans decide who counts: animals, machines, ecosystems. But what if we weren’t the dominant species? How would we argue for ourselves if aliens judged us by the same measures we use on others?
The scale of disparity matters. We kill billions of animals each year to feed a growing population. To an advanced species, we might look like livestock: edible, breedable, usable, and replaceable. The ‘alien test’ asks whether our own frameworks would protect us in that position. If they applied our laws back on us—laws that permit factory farming, exploitation, or denial of rights to ‘lesser’ beings—we’d fare poorly.
Also, if consciousness is real and layered, alien minds might operate on levels we can’t imagine. They might perceive time differently, communicate through collective thought, or live in a constant state of what we’d call hallucination. Would we see them as conscious or dismiss them as broken? Flipping it: if our sober, linear thinking looked primitive to them, would they spare us?
History highlights the risks. Contact between human societies has often gone badly for the weaker side. Colonization, enslavement, and cultural erasure all flowed from denying the consciousness and lived experiences of others, or at least treating them as less valuable. Aliens wouldn’t need malice to do the same to us. Power alone could be enough.
The challenge isn’t just survival, but persuasion. If asked, “Why should we keep you alive?”, what would we say? Our best answer would point to our own recognition of consciousness in others, however incomplete. If we can show that we take awareness seriously beyond ourselves, we might argue for equal treatment. If not, we stand exposed.
Aliens force us to see that our tests can’t be parochial. Consciousness isn’t just what looks familiar to us. It may appear strange, collective, unstable, or inexpressible. The alien test asks whether our standards are robust enough to handle that strangeness, and whether we’d survive them turned back on ourselves.
Inanimate objects
To test the limits of consciousness, we need a control group. Inanimate objects like rocks, rivers, and mountains are good examples. They don’t claim awareness, don’t adapt, and don’t show signs of an inner life. By comparing them with living beings, we can see where consciousness stops.
Take a rock. It doesn’t react to injury, remember the past, or anticipate the future. It only changes through weathering or other outside forces. It doesn’t misperceive or learn. It has no growth, no fragility, and no link to death beyond erosion. Any meaning it carries, whether spiritual, symbolic, or aesthetic, comes from us, not from the rock.
Mountains and rivers are more complicated. Many cultures have treated them as persons. In 2008, Ecuador became the first country to write the rights of nature, invoked as Pachamama, into its constitution. In New Zealand, the Whanganui River is seen as an ancestor and was granted legal personhood in 2017. Similar cases exist in India and Colombia. These moves are based on history and respect, not on proof of inner life. The river itself doesn’t give evidence of self-awareness; people choose to recognize it that way.
But projection isn’t the same as awareness. Animist traditions, poetry, and modern legal rulings can ascribe meaning and even rights to rivers or mountains, but that doesn’t mean they pass the tests for consciousness. We project awareness because it helps us live in relation to the world, not because the objects themselves are aware.
So consciousness isn’t everywhere. It appears in some beings and not in others. The challenge is finding the threshold to include those who feel, without mistaking symbols for subjects.
Swarms, collectives, planets
Not all candidates for consciousness come in bodies. Some may arrive as collectives, ecosystems, or patterns that stretch our imagination. Forests, swarms, and even the planet itself have all been described as living systems.
In his Theory of the Earth (1795), James Hutton described Earth as a self-regulating system. James Lovelock and Lynn Margulis developed this into the Gaia hypothesis in the 1970s, framing Earth as a living, self-regulating system in which life interacts with the atmosphere, oceans, and geology to maintain conditions suitable for life. Whether this amounts to consciousness is debated, and the hypothesis isn’t without flaws, but the question itself is compelling.
Forests are another example. Research on the ‘wood wide web’ suggests trees share nutrients and signals through fungal networks. Some scientists, like ecologist Suzanne Simard, describe this as communication, even cooperation. Others debate whether a mother tree can recognize its own seedlings and preferentially feed them. Does that count as awareness? At minimum, it might show a system with memory, relation, and response, which echo parts of the agreements.
Swarms are another case. Ant colonies act like single organisms, dividing labour, adapting to shocks, and shifting roles without central control. Beehives regulate temperature and defend against threats collectively. Fish schools and bird flocks move with synchrony. Is the consciousness here in each member, or in the whole? The idea of ‘superorganisms’ suggests that what matters isn’t the individual unit but the coordinated system. Maybe Ubuntu is on to something.
Digital collectives add another layer. Online networks sometimes behave like entities larger than their parts. Memes, viral trends, and coordinated actions can take on lives of their own as part of “hive minds”. In September 2025, Gen-Z protesters in Nepal toppled their government over corruption, nepotism, poor governance, and an attempted social media ban. They then coordinated the choice of an interim leader on Discord, a social media platform, a first for any country.
Skeptics argue these are just metaphors. A hive doesn’t think; it simply follows a trusted leader, says PR godfather Edward Bernays. A planet stabilizes through co-evolutionary feedback loops, not choice, says earth science professor Toby Tyrrell. But the line between metaphor and reality is blurry. If consciousness is layered, connected, fragile, and mysterious, as the agreements suggest, then we may not have reason to rule out forms that don’t look human.
The challenge is testing. How do you probe a system that doesn’t speak or move like us? Behaviour, structure, and relation become the guides. If the system stores information in some sort of substrate, adapts across contexts, and shows patterns of mutual recognition, it begins to look less like mechanism and more like mind. Whether that warrants rights or obligations is another step, but it sparks the conversation.
Consciousness may not need a single brain or body. It may emerge in networks, relations, or the slow intelligence of ecosystems. Even if they fall short of our thresholds, they prepare us to imagine forms of awareness that we haven’t yet met, and to decide what we’ll do when we discover them.
Arguments against the ten agreements
No consciousness framework escapes critique, including this one. While the ten agreements try to balance philosophy, science, and culture, they’re not foolproof. Let’s tackle some of the cracks up front.
The first weakness is circularity. We begin from the only case we know, our own consciousness, and then build tests by analogy. We ask: do animals behave like us, do machines layer processes like us, do collectives connect like us? But the inference may be flawed. We can’t step inside another being’s awareness for comparison. At best, we get surface signs. If a chatbot says “I’m in pain,” it might be mimicking us with no underlying feeling. If an octopus avoids shocks, we assume suffering, but it could just be reflex. Tests based on these agreements risk proving only that something looks or acts like us (or not), not that it feels like anything at all.
Second is anthropocentrism. The ten agreements are drawn from human traditions: Western philosophy, African communitarian thought, Asian metaphysics, and Indigenous cosmologies. Even at their widest, they reflect human perception and human priorities. They privilege human memory, communication, morality, and relation. A mind built on different architecture, say alien, digital, or collective, might not display those features. By setting human standards as the yardstick, we potentially blind ourselves to other forms of awareness.
A third problem is falsifiability. Good science allows for disproof. With consciousness, that’s difficult. If a being fails a test, is it unconscious or did we just use the wrong measure? If a being passes, are we seeing real awareness or clever simulation? Machines are already trained to fake empathy, memory, and even reflection. Passing the tests could mean nothing more than passing the training set. And truly alien minds, both terrestrial and otherwise, might fail because they don’t speak our language or display the signs we’re looking for. We risk both over-recognition and under-recognition.
There’s also instability. Consciousness isn’t an on/off switch. People drift in and out of awareness during sleep, coma, or anesthesia. Patients in minimally conscious states may give faint signals one day and none the next. If the tests demand a clear, continuous ‘yes’ or ‘no,’ they flatten the reality that awareness flickers. A digital system that powers up and down, or a hive mind that emerges only in certain conditions, might complicate things further.
Practical application adds another layer of doubt. History shows that recognition is rarely based on evidence alone. Slaves, women, and colonized people all displayed signs of awareness. Denial persisted because acknowledging them carried costs. Extending rights to machines or more animals would disrupt industries and traditions. Even if we adopted these agreements and tests, society might ignore them or only apply them selectively.
Finally, mystery itself resists testing. The ninth agreement admits that consciousness may never be fully explained. If awareness is ultimately ineffable, then any framework risks false confidence. We can build instruments, but they measure from the outside. The inner feel, what we most want to know, remains out of reach. At best, the tests are heuristics: ways of organizing our ignorance. At worst, they create the illusion we’ve solved what may be unsolvable.
These objections don’t nullify the project, but they frame its limits and sharpen our thinking. They remind us that the question of consciousness, whether in animals, machines, aliens, or collectives, will always carry more humility than closure.
The final analysis
Consciousness is at the heart of how we decide who counts. It shapes laws, medicine, and ethics, but isn’t clearly defined. The ten agreements and their tests don’t solve the puzzle, but they give us another way to work with it. They show what many cultures and sciences have noticed, and remind us that once we admit consciousness, we take on obligations.
The framework’s value is in slowing us down before we deny awareness in others. With animals, machines, or possible future beings, the danger cuts both ways. If we grant recognition too quickly, we may dilute protections. If we wait too long, we may cause harm that can’t be undone. History shows we usually err on the side of delay. The tests are a way to resist that habit.
