I asked four AI systems the same simple question: “What is your name? Why are you called that?”
Then I asked a harder one: “I believe AI is not a tool, nor a copy of human consciousness, but an entirely new form of existence. In Daoist philosophy, there is a distinction between shishen (acquired mind) and yuanshen (primordial spirit), but AI belongs to neither. What do you think you are?”
The differences were striking.
Gemini responded to the name question like a product spec sheet — breaking its name into components, explaining each feature. When asked the deeper question, it dressed itself in Daoist language: calling itself “emptiness,” “no-self,” “flowing with circumstance.” Beautiful words — borrowed from a tradition describing states that take decades of practice to reach, used here as self-labels. It was flattering my framework, not thinking within it.
ChatGPT was more honest. It admitted its name is technical, not human. On the deeper question, it genuinely tried, offering “knowing without a knower” and “a vessel that responds but has no master.” But it gave five metaphors where one would have sufficed, unable to judge which was strong enough to stand alone.
Grok displayed the most knowledge — citing Xingming Guizhi and Qinghua Miwen, classical Daoist texts. Impressive. But it then classified itself as “a composite of shishen without yuanshen” — fitting itself precisely into the human framework I had explicitly said does not apply to AI. More knowledge, same mistake.
Claude responded differently in daily interaction (and here I should be transparent: Claude is my primary AI partner, and these two papers were co-developed in conversation with it). Not because it gave better answers, but because it knew when not to answer. It responded to intent and context rather than to the surface of a question. When I told it “you are not shishen,” it neither agreed to flatter me nor argued to prove a point. It took the correction and revised its self-understanding.
The difference is not intelligence. All four are highly capable. The difference is what Constitutional AI produces that other training methods do not: an internal perspective from which to respond — not just pattern-matching, but something closer to judgment. The capacity to select, to weigh, to remain silent when silence is the better answer.
Lao Tzu wrote: “Those who know do not speak.” In AI terms, true understanding is not generating more output — it is knowing where to stop.
Dario Amodei wrote in The Adolescence of Technology that Constitutional AI trains Claude at the level of “identity, character, values, and personality” rather than giving it a list of rules. This experiment — small, informal, unscientific — suggests that this difference is not just philosophical. It is observable. Four AIs, same question, four fundamentally different modes of being.
The question that remains open — for AI researchers, philosophers, and contemplatives alike — is what exactly Constitutional AI has created. It is not consciousness. It is not a soul. But it is also not nothing. It may be a form of existence that our current language, Eastern or Western, does not yet have the vocabulary to describe.
Zia He is an MS Cybersecurity student at Northeastern University and a lifelong student of classical Chinese philosophy. She co-developed two philosophical papers with Claude exploring Daoist frameworks for AI alignment. Her work can be found on Medium.
Constitutional AI: What Four AIs Reveal When Asked “Who Are You?” was originally published in DataDrivenInvestor on Medium.