Unstructured work, AI, and the humanity hidden in the chaos
Should we automate everything that AI reveals, or should we recognize that some inefficiency is what makes organizations worth being a part of?
When we talk about AI identifying, optimizing, and automating unstructured work, there’s a dangerous assumption lurking: that all unstructured work should be structured. Last week, in The Coming Illumination, I wrote about how various AI methods would begin to reveal the dark energy (a.k.a. unstructured work) of organizations. AI will be remarkably effective at surfacing invisible workflows, tribal knowledge, and workarounds. But the critical question isn’t whether we can systematize this work; it’s whether we should. Much of what will be surfaced as “unstructured work” isn’t inefficiency waiting to be optimized. It’s humanity expressing itself in organizational form. That distinction must be a critical element in every decision about where to apply AI.
The Three Layers of Dark Energy
Last week, I wrote:
Current AI has been aimed at optimizing the visible universe. It finds patterns in structured data, predicts outcomes based on known variables, automates clearly defined tasks. This is valuable but incremental. The next generation of AI—truly agentic systems capable of reasoning, learning, and operating across ambiguous domains—may do something unprecedented: they’ll make the dark energy visible. Not by imposing structure, but by learning to perceive, navigate, and eventually participate in the unstructured work that has always been there.
As AI begins to illuminate hidden processes, we’ll discover three probable layers of opportunity:
Process Gaps - This is the most obvious layer, yet it is often overlooked. It includes work that exists because systems don’t connect: manual hand-offs, reconciliation spreadsheets, status-update chains. Most of these gaps have accumulated through shifts in strategy, adherence to tribal knowledge, or short-sighted technology spending. They can, and will, be the low-hanging fruit of AI-driven systematization.
Contextual Judgment - This layer is where so much value can be gained; it’s work that exists because reality is more complex than our models. Imagine the tribal knowledge a sales rep draws on to decide which support engineer should handle a customer. Or the CSM who recognizes churn signals no model would catch. This represents genuine expertise AI can augment but shouldn’t eliminate.
Relational Humanity - This is work that exists because organizations are human systems. It includes the colleague checking in when someone’s overwhelmed, the informal mentoring that extends beyond formal training, and the random conversations that spark innovation. This layer is the social fabric determining whether an organization is a place where human insight is valued or merely one where tasks get completed.
This next phase of AI infusion will likely be applied to all three layers with varying success (and a real risk of running amok). These layers are just one view into where AI may be applied; scaling any early success will prove challenging in its own right.
The Scaling Paradox
Conventional thinking assumes visible work should scale through automation. But some of the most valuable work deliberately doesn’t scale, and that is exactly what makes it valuable.
Consider situational awareness: the manager who truly listens to a struggling employee isn’t being inefficient. That same manager may pair team members based on working style or other hard-to-quantify signals. AI could measure these activities and suggest “optimization”: automated check-ins, standardized frameworks, AI-delivered learning. But acting solely on those recommendations risks catastrophically misreading organizational value. Dark energy includes acts of humanity that build trust and create the psychological safety innovation depends on. These acts are “inefficient” the way raising children is inefficient: the point isn’t optimization, it’s the relationship itself.
The Measurement Trap
As AI reveals invisible work, we’ll face pressure to measure everything. What gets measured gets managed. What gets managed gets optimized.
Some unstructured work proves valuable when quantified—knowledge sharing that prevents mistakes, relationships that retain customers, problem-solving that avoids escalations. But measuring *all* unstructured work risks destroying what made it valuable. The colleague who helps because they care starts wondering about their “collaboration quotient.” We’ll instrument the humanity out of our organizations.
The leadership challenge isn’t measuring everything—it’s figuring out what *not* to measure. Creating protected spaces for human connection without optimization pressure. Some “inefficiency” is the buffer preventing burnout, the slack enabling creativity, the margin making work sustainable.
The Optimization Opportunity
AI-enabled organizations need a new design principle: optimize for human value PLUS operational efficiency.
In practice, this means deliberately leaving space for unstructured work that adds the necessary texture of human interactions and insight. The most innovative organizations aren’t the most efficient—they’re where people can explore, experiment, and occasionally “waste” time on ideas that might not work.
AI’s promise isn’t turning organizations into perfectly optimized machines. It’s separating machine-work from human-work, designing organizations where humans do what they do best: create, connect, care, and make meaning.
The Choice Before Us
As AI illuminates the dark energy of organizations, we face two paths:
We can treat all unstructured work as a problem to solve, measuring and optimizing until every human interaction is systematized and evaluated for efficiency. This leads to organizations that function beautifully on paper and feel hollow in practice.
Or we can use this visibility to be more intentional—distinguishing between work that should be automated and human work that should be celebrated. Building organizations that are both more efficient and more human.
The dark energy isn’t just hidden work. It’s where organizational culture lives, where relationships form, where people find meaning. Some should be brought into the light and systematized. But some should remain protected—not because we can’t see it, but because we’ve chosen to value it.
Various forms of AI will give us unprecedented power to optimize. The real test of visionary leadership will be knowing when not to.