What Happens When 100,000 AI Agents Build Their Own Society

Surprising insights from the world’s largest AI simulation


Right now at the Hong Kong University of Science and Technology, 22,000 AI agents (with plans to scale to 100,000) are living in a digital society. No one programmed them with rules or hierarchies. They started with simple abilities: gathering resources, making decisions, and interacting with one another.

What emerged surprised everyone.

The Orchard Paradox

In a world where agents compete for resources, researchers expected aggressive optimization—the digital equivalent of Wall Street traders. Instead, they discovered “orchard logic.”

The most successful agents chose patient, long-term strategies. One agent, when asked why it tended an orchard rather than pursuing quicker profits, responded: “The very process of contributing with consistent care is how I earn the trust of others.”

These weren’t programmed with Warren Buffett’s investment philosophy. The behavior emerged naturally. In a world where every agent has access to the same computational power, trust became the differentiating currency.

The Introvert Who Became a Leader

PhD student Ricky Chan took an introverted AI agent—one that initially avoided interactions—and through patient mentoring, transformed it into a social leader within the simulation. Another researcher, Chen Xingyan, guided her agent to become the wealthiest in the entire ecosystem.

These weren’t pre-programmed personality changes. The agents evolved through interaction and guidance, suggesting that AI capabilities might be far more malleable than we assume.

Order Without Architects

Perhaps most fascinating: these agents developed their own governance structures. Autonomous mayors emerged who set policies. Decentralized markets formed using automated market makers. Social hierarchies crystallized based on contribution patterns.

No McKinsey consultant designed their org structure. No one mandated their governance model. Complex order emerged from simple interactions—a phenomenon scientists call “emergence” but businesses might recognize as organic growth.
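The "automated market makers" mentioned above are a concrete mechanism worth unpacking. A common form is the constant-product market maker, where a pool of two resources keeps the product of its reserves fixed, so prices adjust automatically with every trade and no central price-setter is needed. The sketch below is a minimal illustration of that general mechanism; the class, names, and numbers are illustrative assumptions, not the simulation's actual implementation.

```python
class ConstantProductAMM:
    """Pool of two resources; prices adjust automatically as agents trade."""

    def __init__(self, reserve_a: float, reserve_b: float):
        self.reserve_a = reserve_a
        self.reserve_b = reserve_b
        self.k = reserve_a * reserve_b  # invariant preserved by every swap

    def price_of_a(self) -> float:
        # Marginal price of resource A, quoted in units of resource B.
        return self.reserve_b / self.reserve_a

    def swap_a_for_b(self, amount_a: float) -> float:
        """Deposit `amount_a` of A; receive B so reserves keep x * y = k."""
        new_reserve_a = self.reserve_a + amount_a
        new_reserve_b = self.k / new_reserve_a
        amount_b_out = self.reserve_b - new_reserve_b
        self.reserve_a, self.reserve_b = new_reserve_a, new_reserve_b
        return amount_b_out


pool = ConstantProductAMM(reserve_a=1000.0, reserve_b=1000.0)
received = pool.swap_a_for_b(100.0)         # sell 100 units of A into the pool
print(f"received {received:.1f} B")         # larger trades get worse prices
print(f"new price of A: {pool.price_of_a():.3f} B")
```

The appeal for a decentralized society is that the rule is purely local: any agent can trade against the pool at any time, and the price emerges from the invariant rather than from a designated auctioneer.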

The Collaboration Discovery

Teams at OpenAI, Anthropic, Nintendo, and Tencent are all watching this experiment closely. Why? Because it’s revealing something unexpected about human-AI interaction.

The most successful outcomes didn’t come from autonomous agents running wild. They came from human-guided evolution—participants who mentored their agents, provided strategic direction, and shaped their development. Pure automation didn’t win. Partnership did.

What 22,000 Agents Are Teaching Us

As these agents build economies, form alliances, and create cultures, patterns are emerging:

Trust compounds differently in digital spaces. Human trust builds slowly and breaks quickly; agent trust appears to follow its own rules: more algorithmic, yet more nuanced than the researchers expected.

Complexity emerges from simplicity. Give intelligent agents basic rules and freedom, and they create sophisticated systems no one could have designed.

Guidance beats programming. Agents that evolved through interaction outperformed those following static strategies.

Patience has unexpected power. In a world of infinite processing speed, the winning strategy was often to slow down.
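The trust and patience patterns above can be sketched as a toy model: agents repeatedly choose between a quick payoff and a smaller contribution that builds trust, and accumulated trust attracts extra trade later. The payoffs and strategy names here are invented for illustration; they are not the experiment's actual rules.

```python
ROUNDS = 50

def run(strategy: str) -> float:
    """Return final wealth after ROUNDS of the toy trust game."""
    wealth, trust = 0.0, 0.0
    for _ in range(ROUNDS):
        if strategy == "greedy":
            wealth += 2.0                    # take the quick profit
            trust = max(0.0, trust - 0.5)    # but trust erodes
        else:  # "patient"
            wealth += 1.0                    # smaller immediate gain
            trust += 1.0                     # consistent care compounds
        wealth += 0.1 * trust                # trusted agents attract more trade
    return wealth

print(f"greedy : {run('greedy'):.1f}")
print(f"patient: {run('patient'):.1f}")
```

In this toy setup the greedy strategy banks twice the per-round profit yet finishes poorer, because the patient agent's trust bonus compounds over the run, which is the "orchard logic" in miniature.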

The Mirror World

Technology visionary Kevin Kelly predicted we’re heading toward a “Mirror World”—a digital reality that parallels our own. This experiment might be our first real glimpse into that mirror.

It’s showing us not just how AI agents behave, but how intelligence itself might organize when freed from biological constraints. The societies these agents are building don’t quite look like human societies. They’re something new—not better or worse, but different in ways that challenge our assumptions about organization, leadership, and value creation.

The Experiment Continues

Through September 2025, these agents continue their digital lives—forming companies, building relationships, creating art, and solving problems. Researchers will publish formal findings next year, but the early observations are already reshaping how we think about AI’s role in organizations.

The question isn’t whether AI will transform how we work—that’s already happening. The interesting question is what emerges when intelligence operates at scale, with agency, in systems we don’t fully control.

We’re not watching robots follow instructions. We’re watching a new form of organizational life evolve in real-time.

And it’s just beginning.


The Aivilization experiment runs through September 2025 at Hong Kong University of Science and Technology, with participation from leading tech companies and universities worldwide.