The Missing Graph: Are You Ignoring the Most Valuable Data Structure Around Your Customers?
Surveillance-based CX has always had a data architecture problem hiding beneath its ethical one — it captures nodes while the real commercial value lives in the edges.
In my last article, I laid out the case that surveillance-driven customer experience isn’t just ethically compromised; it’s likely commercially self-defeating. I want to push that argument further, because there’s a more technical dimension that hasn’t gotten enough attention: the surveillance model doesn’t just collect the wrong data. It actively suppresses the formation of the most valuable data structure in your customer ecosystem.
That structure is, essentially, a hidden graph. And most enterprise CX platforms are architected in ways that make it invisible.
Here’s the frame I want to borrow from something I wrote earlier: Your Company’s Neural Network Is Growing Right Now—Whether You’re Managing It or Not. The premise there was that organizational intelligence forms through repeated interaction patterns; the data flowing through your systems aren’t just records, they’re training signal. The same logic applies to customers. Every interaction produces signal, but the surveillance model is capturing the wrong layer of it.
When your CRM logs a customer interaction, it captures a node: this customer, this product, this timestamp, this outcome. What it systematically fails to capture is the edge activity: this customer referred that friend, this user’s endorsement influenced that group, this complaint traveled through a community forum and influenced six renewal decisions. The node-level data is what surveillance optimizes for. The edge-level data, where the interaction network resides, is where the actual commercial value lives. And because surveillance-based systems can’t obtain it consensually, they approximate it with behavioral inference that is, structurally, a poor substitute.
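The distinction is easy to see in code. Below is a minimal sketch, with entirely hypothetical field names, of the two layers: the node-level record a typical CRM stores, and the edge-level graph that the same interactions imply but that node-only capture discards.

```python
from dataclasses import dataclass

# Node-level record: what a typical CRM captures (hypothetical schema).
@dataclass(frozen=True)
class InteractionRecord:
    customer_id: str
    product: str
    timestamp: str
    outcome: str

# Edge-level record: the relational layer the node view omits.
@dataclass(frozen=True)
class Edge:
    source: str   # e.g. the referring customer
    target: str   # e.g. the referred friend, influenced group, or forum
    kind: str     # "referral", "endorsement", "complaint_spread", ...

class CustomerGraph:
    """Minimal edge-list graph over customers and communities."""
    def __init__(self):
        self.edges: list[Edge] = []

    def add_edge(self, source: str, target: str, kind: str) -> None:
        self.edges.append(Edge(source, target, kind))

    def influence_of(self, customer_id: str) -> set[str]:
        """Everyone directly reached by this customer's outgoing edges."""
        return {e.target for e in self.edges if e.source == customer_id}

g = CustomerGraph()
g.add_edge("alice", "bob", "referral")
g.add_edge("alice", "forum_42", "complaint_spread")
print(sorted(g.influence_of("alice")))  # ['bob', 'forum_42']
```

A model trained only on `InteractionRecord` rows sees alice as one conversion; the graph view sees her as the origin of two downstream commercial events.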
Think about what this means architecturally. You’re running ML models trained on incomplete graphs. You’re building recommendation engines that optimize for individual conversion signals while ignoring the network dynamics that actually drive revenue at scale. Transaction-focused data capture ignores the graph layer that drives long-term value, especially in consumer-oriented businesses.
This is the idea I’ve been building in The Great Unwinding: From Digital Panopticon to Collaborative Intelligence. The panopticon architecture, which centralizes observation and enforces asymmetric data control aimed at one-directional signal extraction, doesn’t just create compliance risk and erode trust. It produces structurally degraded data. The behavioral signals you extract from a relationship built on surveillance are contaminated by the defensive behaviors surveillance provokes. A customer who knows they’re being watched modifies their behavior. Your model trains on the modified behavior, not on the genuine signal. The feedback loop degrades the very intelligence it claims to build.
The technical solution is a different data architecture entirely. That’s what MyTerms, the working name for IEEE 7012, the Standard for Machine Readable Personal Privacy Terms, actually represents. At a technical level, MyTerms introduces a bilateral contract layer that sits between the customer and the company’s service system. The customer’s agent presents machine-readable terms: data scope, interaction parameters, consent expiry conditions, escalation protocols. The company’s system acknowledges and operates within those terms.
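To make the bilateral contract layer concrete, here is a sketch of how a company-side system might check a proposed data use against customer-presented terms. The field names and structure are my own illustration, not the actual IEEE 7012 schema, which is still maturing.

```python
from datetime import datetime, timezone

# Hypothetical machine-readable terms presented by a customer's agent.
# Field names are illustrative only, not the IEEE 7012 vocabulary.
customer_terms = {
    "data_scope": {"email", "stated_intent"},        # what may be collected
    "purposes": {"order_fulfillment"},               # what it may be used for
    "consent_expires": "2027-01-01T00:00:00+00:00",  # consent expiry condition
    "escalation_contact": "agent@customer.example",  # escalation protocol
}

def request_permitted(terms, fields, purpose, now=None):
    """Company-side check: does a proposed use fall within the presented terms?"""
    now = now or datetime.now(timezone.utc)
    expiry = datetime.fromisoformat(terms["consent_expires"])
    return (
        now < expiry                          # consent has not expired
        and set(fields) <= terms["data_scope"]  # only permitted fields
        and purpose in terms["purposes"]        # only permitted purposes
    )

jan_2026 = datetime(2026, 1, 1, tzinfo=timezone.utc)
print(request_permitted(customer_terms, ["email"], "order_fulfillment", now=jan_2026))  # True
print(request_permitted(customer_terms, ["location"], "ad_targeting", now=jan_2026))    # False
```

The point of the sketch is the direction of the handshake: the customer’s agent presents the terms, and the company’s system operates within them rather than the reverse.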
What this creates, technically, is a consent layer that enables higher-quality signal. When a customer voluntarily shares context like stated intent, explicit preferences, and defined relationship parameters, you’re working with first-party intentional data rather than third-party behavioral inference. The difference in training signal quality is significant: intent data doesn’t require a proxy. It arrives labeled and permissioned.
The graph problem also begins to resolve under this architecture. When customers engage through contractual, agentic frameworks, the referral and influence dynamics become legible in ways surveillance never enabled. A customer who trusts the relationship will share context about their network, not because they’re being tracked, but because sharing context serves their own interests. That’s opt-in edge data. It’s not theoretical; it’s the kind of signal historically available only through the high-touch relationships that have been the foundation of commerce for millennia. With MyTerms as a core component, those relationships gain a scalable protocol, which is critical for high-value co-creation in an AI-agentic world.
I’ve made the case about what this unlocks in The Intention Economy Meets AI: Why Your Customers Are About to Become Your Best Partners. When permissioned AI agents operate on both sides of a customer interaction, with the customer’s agent presenting intentions and the company’s agent responding within stated terms, you have a new class of interaction data entirely. Not “what did this customer click” but “what did this customer’s agent negotiate for, and what was the resolution.” That’s a semantically richer, structurally cleaner, and commercially more valuable data type than anything the surveillance stack produces.
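The contrast between the two data classes can be sketched in a few lines. Both record types below are hypothetical illustrations: one carries only an observed behavior, the other carries the intent label inside the record itself, which is what makes it directly usable as training signal.

```python
from dataclasses import dataclass

# Click-level event: all a surveillance stack can observe (hypothetical schema).
@dataclass(frozen=True)
class ClickEvent:
    customer_id: str
    element: str        # what was clicked; intent must be inferred

# Agent-negotiation record: intent and resolution arrive explicitly.
@dataclass(frozen=True)
class NegotiationRecord:
    customer_id: str
    stated_intent: str  # what the customer's agent asked for
    offered_terms: str  # what the company's agent proposed
    resolution: str     # "accepted", "countered", "declined"

def is_labeled_intent(event) -> bool:
    """Intent data needs no proxy: the label travels with the record."""
    return hasattr(event, "stated_intent")

click = ClickEvent("alice", "pricing_page")
deal = NegotiationRecord("alice", "annual plan under $900",
                         "12 months at $880", "accepted")
print(is_labeled_intent(click), is_labeled_intent(deal))  # False True
```

The click event forces you to guess what alice wanted; the negotiation record tells you, along with how the exchange resolved.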
The reality is that MyTerms has just been approved by the IEEE, and the technical development is still in its infancy. The agentic infrastructure for customer-side AI agents is early-stage, with interoperability challenges that haven’t been fully resolved. Most enterprise CX platforms would need significant architectural work to receive and process bilateral contract terms. This is a two-to-five year transformation, not a quarterly roadmap item.
But the architectural direction is clear enough to begin planning for. The surveillance model was never optimal from a data quality standpoint; it was simply the only available approach when customers lacked the tools to present their own terms. Those tools are arriving. The companies investing now in consent-first data architectures, customer-presented preference layers, and agentic interaction frameworks will find themselves operating on a fundamentally cleaner and more valuable graph when the standards mature.
The missing graph has always been there. We just haven’t built systems capable of seeing it honestly.


