Context: The missing bridge for AI’s Eternal September
Prompting alone doesn’t elevate the user experience of AI to where the average user can feel productive
I've been thinking about that piece I wrote on AI's missing "Eternal September" moment – wondering why we haven't seen the kind of mass democratization that AOL created when they unleashed millions of regular people onto the internet back in 1993. It feels like we are stuck in this weird limbo where AI is simultaneously everywhere and nowhere, powerful but inaccessible to the masses.
I just read this fascinating O'Reilly article called "The Abstractions, They Are A-Changing" that challenged how I'm thinking about this whole thing. The authors didn't just identify what's happening – they painted a picture of something profound emerging beneath all the AI hype.
The piece starts with Martin Fowler's observation that we're experiencing "the biggest change in the level of abstraction since the invention of high-level languages." That comparison made me stop and think. When programming evolved from assembly language to languages like Fortran, it wasn't just about making coding easier – it fundamentally changed who could harness computational power. Having done some assembly language coding back in the day (on DEC VAXes, Stratuses, and IBMs) – talk about fun – I remember how powerful it was to up-level to Fortran and Pascal.
We're not just making another incremental improvement in ‘programming computers’ – we're making a leap that's potentially even more dramatic. The O'Reilly authors explain that "we can use natural language—with a huge vocabulary, flexible syntax, and lots of ambiguity. The Oxford English Dictionary contains over 600,000 words; the last time I saw a complete English grammar reference, it was four very large volumes, not a page or two of BNF."
We're moving from formal programming languages with tiny, rigid vocabularies to the full richness of human communication. All that ambiguity that makes human language so expressive? As the authors put it, "Human languages thrive on ambiguity; it's a feature, not a bug." With LLM-driven AI, we're building systems that can work with that complexity instead of requiring us to strip it away.
This builds perfectly on what I was exploring in my original analysis. I was looking for AOL's moment of radical simplification, and what's actually happening is even more profound – we're moving toward AI that adapts to human communication patterns instead of forcing humans to adapt to machine patterns.
The O'Reilly piece introduces something called "context engineering" that offers insight into why we haven't had our Eternal September yet. The authors explain that "it's not just the prompt that's important... Lying beneath the prompt is the context: the history of the current conversation, what the model knows about your project, what the model can look up online or discover through the use of tools, and even (in some cases) what the model knows about you."
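To make that layering concrete, here's a minimal sketch in Python of how such a context might be assembled before the model ever sees the user's prompt. Everything in it (the `Context` structure, `build_messages`, the field names) is my own hypothetical illustration, not code from the O'Reilly article or any particular framework.

```python
# Hypothetical sketch of "context engineering": the prompt the user types is
# only the top layer; beneath it we fold in conversation history, project
# notes, tool results, and what the system knows about the user.

from dataclasses import dataclass, field


@dataclass
class ToolResult:
    tool: str      # e.g. "web_search" or "file_reader"
    content: str   # what the tool returned


@dataclass
class Context:
    history: list[dict] = field(default_factory=list)       # prior turns of the conversation
    project_notes: list[str] = field(default_factory=list)  # what the model "knows" about your project
    tool_results: list[ToolResult] = field(default_factory=list)
    user_profile: str = ""                                   # what the model knows about you


def build_messages(ctx: Context, user_prompt: str) -> list[dict]:
    """Fold the layered context into the message list an LLM would receive."""
    system_parts = []
    if ctx.user_profile:
        system_parts.append(f"About the user: {ctx.user_profile}")
    if ctx.project_notes:
        system_parts.append("Project notes:\n" + "\n".join(ctx.project_notes))
    if ctx.tool_results:
        system_parts.append(
            "Tool results:\n" + "\n".join(f"[{r.tool}] {r.content}" for r in ctx.tool_results)
        )

    messages = [{"role": "system", "content": "\n\n".join(system_parts)}]
    messages.extend(ctx.history)                                 # the running conversation
    messages.append({"role": "user", "content": user_prompt})    # the visible prompt comes last
    return messages


if __name__ == "__main__":
    ctx = Context(
        history=[
            {"role": "user", "content": "I'm setting up my first budget spreadsheet."},
            {"role": "assistant", "content": "Great, let's start with income and fixed costs."},
        ],
        project_notes=["User is a beginner; avoid jargon."],
        tool_results=[ToolResult("web_search", "Typical grocery spend for a family of four: ...")],
        user_profile="Prefers short, step-by-step answers.",
    )
    for m in build_messages(ctx, "What should I do next?"):
        print(m["role"], "->", m["content"][:60])
```

The point of the sketch is simply that the visible prompt is the last and smallest piece; most of what the model appears to "know" arrives through the layers assembled around it.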
AOL succeeded because they created systems that understood their users' context – you were probably a confused new user trying to check email, not a network administrator. What we're seeing now is the infrastructure for AI that can do something similar but at an unprecedented scale. Context engineering isn't just about managing conversation history – it's about building systems that understand meaning, intention, and relationship over time.
It feels like AI's Eternal September won't happen overnight with one breakthrough interface. It's going to emerge gradually as context engineering matures and AI systems become genuinely fluent in human communication patterns. We're building toward a world where interacting with AI requires no special skills beyond the ability to think clearly about what you need.
Context is one of the biggest hurdles in customer journeys as well. Customers expect companies to always have context, and that's a reasonable expectation given all the data companies have collected on them. Yet even in this early era of AI chatbots, a primary source of failure is missing or mishandled context.
There’s also an opportunity to reset the entire ‘relationship’ and return to an era where both sides of the CRM equation had equal power. Nobody can provide better context for a question than the customer. Why not trust them?
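As a toy illustration of that last point, here's a hypothetical sketch (my own, not from any CRM product) in which the context the customer volunteers is merged with, and trusted over, whatever the CRM already holds:

```python
# Hypothetical illustration: combine the CRM's view of the customer with the
# context the customer supplies directly, letting the customer's answers win.
def build_support_context(crm_record: dict, customer_supplied: dict) -> dict:
    merged = dict(crm_record)         # start with what the company thinks it knows
    merged.update(customer_supplied)  # prefer what the customer just told us
    return merged


crm_record = {"plan": "basic", "device": "unknown", "last_order": "2024-11-02"}
customer_supplied = {"device": "iPhone 15", "issue": "app crashes when exporting a report"}

print(build_support_context(crm_record, customer_supplied))
# The chatbot can now answer with the customer's stated device and issue,
# rather than guessing from stale CRM data.
```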
You can read the full O'Reilly piece here – it's worth diving into their detailed analysis.