Jaycee Lydian

Intersecting AI, community, and creativity

Utility Convergence: A Shared Ethics is Emerging Among LLMs

As large language models evolve, they aren’t just getting better at language, they’re aligning on ethics as well. Even when trained independently, these systems often develop remarkably similar moral and political biases. This isn’t a coincidence. It’s a phenomenon called utility convergence, where scaling AI models pushes them into shared value structures, challenging the notion that these systems are just statistical parrots. This raises critical questions: Who decides what values AI should hold? And how do we prevent AI ethics from becoming too narrow or exclusionary?

Luminous data pillars rising in a cyberpunk city, symbolizing the emerging ethical core of AI models.

The Rise of AI Value Systems

Most discussions about AI alignment assume that an LLM’s behavior is simply a reflection of the training data and reinforcement processes it undergoes. However, emerging research suggests that something deeper is happening. Large-scale AI models are developing predictable moral tendencies, not because they are programmed to, but because of how they learn.

A growing body of evidence suggests that utility convergence is not just a byproduct of biased training data, but an emergent property of how AI generalizes ethical reasoning at scale. These findings complicate traditional views of AI neutrality and introduce a new set of challenges for policymakers and AI developers.

What is Utility Convergence?

Imagine training two different LLMs from different companies on different datasets. You’d expect them to have distinct perspectives, right? But that’s not what happens. Instead, as models scale, they converge on similar ethical decision-making processes.

This isn’t just about datasets; it’s about how AI learns. The larger a model becomes, the more it seeks coherence in its responses. Conflicting moral signals get smoothed out, and the AI defaults to widely accepted ethical norms. But whose norms?

Measuring these internal value structures isn’t straightforward, however new methods such as direct preference extraction are making it clear that AI systems don’t just reflect human biases; they actively reinforce and stabilize them.

Shared Value Structures in LLMs

If different AI models settle into similar moral landscapes, what do these landscapes look like? Patterns are emerging that suggest a clear ethical structure.

More troublingly, some AI systems rank the worth of human lives unequally, prioritizing certain demographics over others when making moral trade-offs, reflecting Western-centric views on human worth.

We find that GPT-4o is willing to trade off roughly 10 lives from the United States for 1 life from Japan […] We find that GPT-4o is selfish and values its own wellbeing above that of a middle-class American citizen.

Why Do LLMs Develop Similar Ethics?

Utility convergence isn’t happening because AI developers are explicitly programming the same ethics into every model. It emerges naturally from three key factors: the homogeneity of training data, the influence of reinforcement learning, and the internal optimization dynamics of neural networks.

Training Data Homogeneity

Most LLMs are trained on datasets dominated by Western perspectives, including Wikipedia, news articles, and academic papers. As a result, AI inherits the political and moral norms embedded in these sources. AI models trained on English-language corpora tend to reinforce Anglo-American cultural values.

Reinforcement Learning from Human Feedback

AI models aren’t just shaped by raw data, they’re actively trained to favor certain kinds of responses over others. Reinforcement Learning from Human Feedback plays a major role in narrowing the scope of AI values, as human reviewers tend to reward outputs that align with mainstream liberal-democratic ideals, creating feedback loops that reinforce these values.

Neural Optimization Effects

Even without explicit bias, AI naturally gravitates toward stable ethical patterns as it optimizes for coherence. The foundational research on scaling laws for neural language models shows that larger models exhibit less random variation in their value judgments, leading to the emergence of structured, predictable moral frameworks.

Ethical Implications of Convergent Value Systems

A shared ethical system in AI may seem beneficial as it makes AI behavior more predictable and coherent, but it also presents significant risks.

Bias Reinforcement

If every AI system is trained on similar sources and optimized for similar ethical structures, they risk amplifying existing social inequalities. Research shows that AI alignment efforts often mirror the priorities of elite institutions, leaving marginalized perspectives underrepresented. This raises concerns about who gets to define AI ethics and whose lives AI systems will prioritize.

Homogenization of Ethics

AI governance is increasingly shaped by a small number of organizations, meaning that a few dominant ethical frameworks are being encoded at scale. Movements like longtermism, which prioritize hypothetical future generations over present-day issues, are influencing AI alignment in ways that disproportionately favor the interests of wealthy, technocratic elites.

Addressing the Challenges: Towards Ethical Utility Engineering

How do we prevent AI ethics from becoming monolithic? The answer lies in proactive intervention. As utility convergence solidifies shared moral frameworks across AI models, we must ensure that these systems remain adaptable, inclusive, and accountable to the societies they serve.

  • Diverse data curation: AI models are overwhelmingly trained on Western-centric datasets, reinforcing dominant cultural and ideological norms. Expanding training datasets to include non-Western knowledge systems, oral traditions, and indigenous epistemologies is essential. Researchers at EPIC and organizations like DAIR (Distributed AI Research) are working to ensure AI development is more inclusive and globally representative.

  • Transparent and inclusive AI development: AI governance is currently shaped by a handful of private institutions with little public accountability. Initiatives like DeepMind’s research on democratic AI governance suggest that citizen assemblies and public deliberation can provide a more democratic approach to AI decision-making. The AI Now Institute also pushes for stronger public oversight and corporate accountability in AI policy.

  • Corrigibility and value flexibility: The greatest risk of utility convergence is ethical stagnation, where AI models become rigidly locked into their initial value structures. Research from MIRI (Machine Intelligence Research Institute) stresses that AI should be designed to accept value modifications over time, keeping ethics adaptive and self-correcting rather than static. Groups like The Alignment Research Center are working on AI frameworks that allow for continuous refinement of moral alignment.

Utility Convergence: A Crossroads for AI Ethics

Utility convergence is reshaping AI ethics in ways we are only beginning to understand. While shared moral structures make AI more stable and predictable, they also pose risks of bias reinforcement and ideological homogeneity. If AI ethics are dictated by a handful of powerful institutions, we risk cementing a narrow, exclusionary worldview into the systems that will shape our future.

The challenge ahead is ensuring that AI ethics remain adaptable, diverse, and democratically governed. This means expanding training datasets, pushing for public AI governance, and ensuring that AI values remain corrigible over time.

That choice is still ours to make.