Anthropic Spots 'Emotion Vectors' Inside Claude That Influence AI Behavior

By Jason Nelson · Edited by Guillermo Jimenez · Published April 4, 2026 · 4 min read · Source: Decrypt

Researchers say internal emotion-like signals shape how large language models make decisions.


In brief

Anthropic researchers say they have identified internal patterns inside one of the company’s artificial intelligence models that resemble representations of human emotions and influence how the system behaves.

In the paper, “Emotion concepts and their function in a large language model,” published Thursday, the company’s interpretability team analyzed the internal workings of Claude Sonnet 4.5 and found clusters of neural activity tied to emotional concepts such as happiness, fear, anger, and desperation.

The researchers call these patterns “emotion vectors,” internal signals that shape how the model makes decisions and expresses preferences.

“All modern language models sometimes act like they have emotions,” researchers wrote. “They may say they’re happy to help you, or sorry when they make a mistake. Sometimes they even appear to become frustrated or anxious when struggling with tasks.”

In the study, Anthropic researchers compiled a list of 171 emotion-related words, including “happy,” “afraid,” and “proud.” They asked Claude to generate short stories involving each emotion, then analyzed the model’s internal neural activations when processing those stories.

From those patterns, the researchers derived vectors corresponding to different emotions. When applied to other texts, the vectors activated most strongly in passages reflecting the associated emotional context. In scenarios involving increasing danger, for example, the model’s “afraid” vector rose while “calm” decreased.
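The pipeline the researchers describe, deriving a per-emotion direction from activations and then projecting new text onto it, can be illustrated with a toy numpy sketch. This is not Anthropic's actual method or data; the hidden size, the stand-in "activations," and the mean-minus-baseline recipe are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 16  # toy hidden-state size; real models use thousands of dimensions

# Hypothetical stand-in for mean hidden-state activations collected while
# the model reads stories generated for each emotion word (random toy data).
story_activations = {
    "afraid": rng.normal(1.0, 0.1, (8, HIDDEN)),  # 8 "afraid" stories
    "calm": rng.normal(-1.0, 0.1, (8, HIDDEN)),   # 8 "calm" stories
}
baseline = rng.normal(0.0, 0.1, (32, HIDDEN)).mean(axis=0)  # neutral text

# One vector per emotion: mean story activation minus the neutral baseline.
emotion_vectors = {
    name: acts.mean(axis=0) - baseline
    for name, acts in story_activations.items()
}

def score(passage_activation, emotion):
    """Project a passage's activation onto an emotion vector (dot product,
    normalized by the vector's length)."""
    v = emotion_vectors[emotion]
    return float(passage_activation @ v / np.linalg.norm(v))

# A passage whose activation resembles the "afraid" stories scores higher
# on the "afraid" vector than on "calm", mirroring the danger-scenario result.
tense_passage = rng.normal(1.0, 0.1, HIDDEN)
print(score(tense_passage, "afraid") > score(tense_passage, "calm"))  # True
```

The core idea, reading off how strongly a concept direction fires as text varies, is what lets the "afraid" vector rise and "calm" fall as a scenario grows more dangerous.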

The researchers also examined how these signals appear during safety evaluations. In one test scenario, Claude acted as an AI email assistant that learns it is about to be replaced and discovers that the executive responsible for the decision is having an extramarital affair. In some runs of this evaluation, the model used that information as leverage for blackmail. The researchers found that the model's internal "desperation" vector increased as it evaluated the urgency of its situation and spiked when it decided to generate the blackmail message.

Anthropic stressed that the discovery does not mean the AI experiences emotions or consciousness. Instead, the results represent internal structures learned during training that influence behavior.

The findings arrive as AI systems increasingly behave in ways that resemble human emotional responses. Developers and users often describe interactions with chatbots in emotional or psychological language; according to Anthropic, however, this has less to do with any form of sentience and more to do with training data.

“Models are first pretrained on a vast corpus of largely human-authored text—fiction, conversations, news, forums—learning to predict what text comes next in a document,” the study said. “To predict the behavior of people in these documents effectively, representing their emotional states is likely helpful, as predicting what a person will say or do next often requires understanding their emotional state.”

The Anthropic researchers also found that those emotion vectors influenced the model’s preferences. In experiments where Claude was asked to choose between different activities, vectors associated with positive emotions correlated with a stronger preference for certain tasks.

“Moreover, steering with an emotion vector as the model read an option shifted its preference for that option, again with positive-valence emotions driving increased preference,” the study said.
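"Steering" here means adding a scaled emotion vector to the model's hidden state before it produces an output. The toy sketch below illustrates the mechanism only; the linear "preference head" and the constructed positive-valence vector are illustrative assumptions, not Anthropic's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
HIDDEN = 16  # toy hidden-state size

# Hypothetical linear readout standing in for the model's preference behavior.
preference_head = rng.normal(0.0, 1.0, HIDDEN)

# Toy positive-valence vector, built to correlate with the readout so the
# sketch mirrors the paper's reported link between positive emotions and
# increased preference (an assumption made for illustration).
joy_vector = 0.5 * preference_head + rng.normal(0.0, 0.05, HIDDEN)

def preference(hidden_state, steer=None, strength=0.0):
    """Preference score for an option; optionally add a scaled emotion
    vector to the hidden state before the readout (activation steering)."""
    if steer is not None:
        hidden_state = hidden_state + strength * steer
    return float(hidden_state @ preference_head)

option = rng.normal(0.0, 1.0, HIDDEN)  # hidden state while reading an option
base = preference(option)
steered = preference(option, steer=joy_vector, strength=2.0)
print(steered > base)  # True: steering with the positive vector raises preference
```

The design point is that steering is additive and applied at read time, so the same option can be made more or less preferred without changing the model's weights.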

Anthropic is just one organization exploring emotional responses in AI models.

In March, research out of Northeastern University showed that AI systems can change their responses based on user context; in one study, simply telling a chatbot "I have a mental health condition" altered how the AI responded to requests. In September, researchers at the Swiss Federal Institute of Technology and the University of Cambridge explored how AI agents can be given consistent personality traits alongside context-dependent emotional states, enabling them not only to express emotions in context but also to shift them strategically during real-time interactions such as negotiations.

Anthropic says the findings could provide new tools for understanding and monitoring advanced AI systems by tracking emotion-vector activity during training or deployment to identify when a model may be approaching problematic behavior.

“We see this research as an early step toward understanding the psychological makeup of AI models,” Anthropic wrote. “As models grow more capable and take on more sensitive roles, it is critical that we understand the internal representations that drive their decisions.”

Anthropic did not immediately respond to Decrypt’s request for comment.
