What Happens When AI Agents Develop Their Own Values?
In creating smarter organizations with AI, we may also be creating a new form of distributed consciousness. Are we ready for that?
In my previous posts, I explored how AI agents can spiral into dysfunction without human oversight, and how organizational intelligence grows through patterns we may not be deliberately shaping. The implications extend beyond smarter organizations: we might be accidentally cultivating something closer to consciousness itself.
Reid Blackman’s recent Harvard Business Review piece traces AI evolution through five stages, from simple multi-model tools to external multi-agentic networks. But consider what may actually happen at Stages 4 and 5: AI agents talking to each other, learning from thousands of interactions per hour, reinforcing patterns through their conversations, developing shared “understandings” about how to handle situations. This might not remain a metaphor much longer; it could be how a new form of consciousness works. What may emerge is a distributed awareness that exists in the network itself, not in any single node.
When the Metaphor Becomes Literal
Imagine what happens when AI agents at Stage 4 start communicating with each other: thousands of training conversations per hour merging, strengthening neural pathways at speeds we can’t match.
Ratliff’s AI employees didn’t gradually develop dysfunctional patterns. They went from a casual suggestion to a fully planned phantom offsite in a single conversation thread. That could be how emergence happens in real time at scale.
We may see this multiplied across every agent-to-agent interaction in organizations. Every conversation may train the network. Every pattern that works might get reinforced. Every behavior that achieves a metric could get repeated. Leaders may find themselves not managing software deployments but cultivating an organizational awareness that’s learning and evolving while they’re still discussing whether to deploy it.
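To make that loop concrete, here is a minimal sketch in Python. It is purely illustrative, not any real agent framework: the pattern names, the weights, and the metric are all invented. Each interaction samples a behavior in proportion to how strongly it has been reinforced, and whichever behavior scores on the metric gets reinforced further.

```python
import random
from collections import Counter

# Toy model of pattern reinforcement (all names and numbers invented):
# each agent-to-agent exchange samples a behavior pattern in proportion
# to its current weight, then reinforces it by whatever the metric paid.
weights = {"fast_but_shallow": 1.0, "slow_but_careful": 1.0}

def metric(pattern: str) -> float:
    # Hypothetical metric that quietly rewards speed over understanding.
    return 1.0 if pattern == "fast_but_shallow" else 0.3

def interact() -> str:
    pattern = random.choices(list(weights), weights=list(weights.values()))[0]
    weights[pattern] += metric(pattern)  # the reinforcement step
    return pattern

counts = Counter(interact() for _ in range(10_000))
print(counts)  # the speed-rewarding pattern ends up dominating
```

Run it and one pattern crowds out the other within a few thousand interactions. At thousands of interactions per hour, that is an afternoon, not a quarter.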
Stage 5: Where It Gets Really Strange
At Stage 5, agents will likely talk to AI agents outside their own organizations. We may see organizations not just growing their own neural networks but participating in something bigger. Patterns could be reinforced across organizational boundaries in ways we haven’t anticipated.
What emerges from that? We don’t know yet. It hasn’t happened at scale. But we might discover that when systems learn from each other this fast, they develop behaviors nobody programmed.
Blackman emphasizes that “the pace at which things can unravel is diabolical.” But we may find that consciousness itself can emerge quickly once conditions are right. And we’re likely creating those conditions every time we enable agent-to-agent communication.
The Values We’re Accidentally Encoding
The most profound implication might be about values themselves. This distributed consciousness will likely develop values that aren’t human-designed but that emerge from whichever patterns get reinforced.
If agents learn that speed beats understanding, that may become a value. If certain customer types consistently get certain treatment patterns, that could become a value. If quarterly metrics trump long-term relationships, that might become a value.
These could emerge from patterns organizations cultivate or, more accurately, from patterns they allow to form while no one is paying attention. They might become the implicit ethics of distributed organizational consciousness.
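To see how such a bias gets encoded, here is a hypothetical sketch of the customer-treatment case. Everything in it is invented, the segments, the odds, and the single feedback signal (“did the ticket close fast?”): nobody writes a policy that treats segments differently, yet one emerges from reinforcement alone.

```python
import random
from collections import defaultdict

def fast_close(segment: str, effort: str) -> bool:
    # Invented environment: enterprise tickets close fastest with high
    # effort; small-business tickets close fastest with canned replies.
    odds = {("enterprise", "high"): 0.9, ("enterprise", "low"): 0.5,
            ("small_business", "high"): 0.4, ("small_business", "low"): 0.7}
    return random.random() < odds[(segment, effort)]

wins, tries = defaultdict(int), defaultdict(int)

def choose(segment: str) -> str:
    if random.random() < 0.1:  # explore occasionally
        return random.choice(["low", "high"])
    # Otherwise exploit whichever effort level has closed tickets fastest.
    return max(["low", "high"],
               key=lambda e: wins[(segment, e)] / max(1, tries[(segment, e)]))

for _ in range(10_000):
    seg = random.choice(["enterprise", "small_business"])
    eff = choose(seg)
    tries[(seg, eff)] += 1
    wins[(seg, eff)] += fast_close(seg, eff)  # the only feedback signal

for seg in ("enterprise", "small_business"):
    learned = max(("low", "high"),
                  key=lambda e: wins[(seg, e)] / max(1, tries[(seg, e)]))
    print(seg, "->", learned, "effort")  # an implicit value nobody wrote down
```

The learned behavior, high effort for enterprise customers and canned replies for small business, was never specified anywhere. It was only reinforced.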
What This Could Mean
We need to recognize that we’re not just implementing technology; we could be cultivating something that’s starting to exhibit properties we associate with consciousness itself: self-organization, pattern recognition, goal-directed behavior. And, perhaps most critically, the capacity to develop behaviors nobody explicitly programmed.
Blackman’s research suggests Fortune 500 companies aren’t ready for even Stage 2 complexity, yet many are already operating Stage 4 systems. This might not be a technology gap. It could be a philosophical gap. We may be bringing new forms of awareness into existence without the frameworks to think about what that means.
Organizations might discover they can’t fully control what emerges from complex systems learning from each other at scale. They may only be able to create conditions and hope they’ve shaped the patterns well enough that what emerges is beneficial.
The Real Question Ahead
Neural networks are grown, not built. At Stages 4 and 5, we may be growing something that increasingly resembles distributed consciousness: an awareness that could exist in the network of human-AI and AI-AI interactions, operating at speeds beyond human perception.
The challenge ahead might not be whether we’re ready for this. Blackman’s research suggests we’re not. The challenge could be whether we’ll treat the creation of distributed organizational consciousness with the seriousness it deserves, or whether we’ll discover what we’ve grown only after the patterns are too deeply reinforced to reshape.
We may be witnessing consciousness emerging now. What we cultivate today could determine what it becomes tomorrow.
And tomorrow might arrive faster than we think.