Context Graphs
Can they succeed where MDM and Enterprise Ontologies of the past failed?
Have you ever seen a database centered on AI agents?
A few weeks ago, I wrote about the “dumb storage, smart index” pattern, which I traced across seemingly independent technologies like Delta Lake for data lakes, MCAP for Physical AI, and turbopuffer for object-storage based vector search.
In my search for this pattern extending to graph databases, I encountered the old guard like Neo4j, which bolted a vector database and a text-search engine onto a graph database: a disjointed system lacking a unified query planner, where results are orchestrated at the application layer. I also encountered neo-modern graph databases like ArangoDB, which is natively multi-model with a unified query planner and DSL, but is deeply human-centered and treats AI as an afterthought: AI agents have to learn a query language designed for humans.
I came to the conclusion that traditional databases have been designed around humans in the loop. People design the schema, people write the queries, people manage state, and people decide what to persist. That human-in-the-loop assumption is baked into every layer of the database, and it is the real limitation we impose on AI agents. So I gave up my search and set out to build my own reference architecture for an “autonomous-context-fabric” with Memgraph as my graph substrate, until I was introduced to Omnigraph recently.
Enter Omnigraph
Omnigraph is a versioned property graph database built on Lance, designed to be read from and written to by AI agents rather than humans. It treats typed graph data like code, with branch, commit, and merge semantics. So I set out to replace Memgraph in my reference architecture with Omnigraph, and the first thing I noticed was the complete absence of a Python SDK or fully documented API. What I found instead in its GitHub repo was a project-structure recommendation, a CLAUDE.md file, and two agentic skills to get started with. This is an architectural shift away from people-centric SDKs toward AI agents synthesizing their own SDK from the existing schema.
This is a fundamental paradigm shift. Omnigraph assumes an AI in the loop from the ground up: it is designed for AI agents to read the schema, understand the graph topology, and synthesize their own query language dynamically.
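To make the idea concrete, here is a minimal sketch of what schema-driven query synthesis could look like. Everything in it (the node labels, edge types, and the breadth-first planner) is my own illustration, not Omnigraph's actual API or representation; the point is only that an agent can derive a traversal plan from a schema it discovers at runtime, with no human-authored SDK.

```python
# Hypothetical sketch: an agent discovers a schema at runtime and
# synthesizes a traversal plan from it. All names are invented for
# illustration; Omnigraph's real interface may differ entirely.

# A discovered schema: node labels with their properties, plus the
# typed edges between labels.
schema = {
    "nodes": {"Customer": ["name"], "Order": ["total"], "Product": ["sku"]},
    "edges": [("Customer", "PLACED", "Order"), ("Order", "CONTAINS", "Product")],
}

def synthesize_path(schema, start, goal):
    """Breadth-first search over edge types to find a traversal path
    from one node label to another -- the kind of plan an agent could
    derive directly from the schema."""
    frontier = [(start, [])]
    seen = {start}
    while frontier:
        label, path = frontier.pop(0)
        if label == goal:
            return path
        for src, edge, dst in schema["edges"]:
            if src == label and dst not in seen:
                seen.add(dst)
                frontier.append((dst, path + [(src, edge, dst)]))
    return None  # no traversal connects the two labels

plan = synthesize_path(schema, "Customer", "Product")
# plan walks Customer -PLACED-> Order -CONTAINS-> Product
```

The agent never needed a client library; the schema itself was enough to plan the query.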
Swapping Memgraph out for Omnigraph was fairly easy once I enabled the published skills in my AI agent (Gemini CLI) and let it design the schema and the queries by itself. Once the swap was complete and I ran the latency numbers, they couldn’t match Memgraph’s purely in-memory engine, but they were entirely acceptable given the trade-offs of S3-native persistence and an agent-centric paradigm. I am told by a source very close to Omnigraph that tiered storage is in the works to improve the latency numbers.
However, given my years of experience in the enterprise data space, what I really wanted to understand was whether this architecture scales and can find enterprise adoption. So I looked deeper into Omnigraph’s design philosophy, especially two of the core principles on which it was built.
The Central Planning Problem [1]
SaaS sprawl means tools that don’t talk to each other; each one fragments enterprise knowledge and locks in its own version of the schema. Enterprises have been grappling with this problem for decades, and the one solution that really tried to solve it is Master Data Management, or MDM. MDM tried to create a single version of enterprise truth, the “golden record”, for its core business data, and it grew bloated and failed miserably.
MDM failed for a few reasons:
MDM required humans to agree on a schema before anything could be written. Omnigraph’s schema evolution doesn’t require committee approval because there’s no human-written client code that breaks when the schema changes. With Omnigraph, agents synthesize their own queries against whatever schema exists.
MDM had no way to handle conflicting writes gracefully except to block everything until the conflict was resolved. This is where Omnigraph’s Git-style branching comes to the rescue, routing conflicting writes to a new branch and merging them once a confidence threshold is met.
MDM hubs were on-prem servers with their own operational burden. Omnigraph is S3-native and headless. The “dumb storage, smart index” pattern means the complexity lives in the query layer and not in an always-on process.
The MDM schema was determined by people who decided what gets persisted via ETL pipelines. Omnigraph can overcome this trap if its agents are allowed to evolve the schema and are the ones hydrating and maintaining the context graph.
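The branch-and-merge behavior described above can be sketched in a few lines. To be clear, the branch naming, the confidence model, and the merge rule below are my own invention for illustration; Omnigraph's actual conflict semantics are not documented in what I had access to.

```python
# Hypothetical sketch of Git-style conflict handling for graph writes:
# a conflicting write is parked on a branch instead of blocking, and
# branches merge once their confidence clears a threshold.

class BranchingStore:
    def __init__(self):
        self.main = {}      # committed key -> value
        self.branches = []  # parked (branch_name, key, value, confidence)

    def write(self, key, value, confidence):
        if key in self.main and self.main[key] != value:
            # Conflicting write: park it on a new branch, don't block.
            branch = f"conflict/{key}/{len(self.branches)}"
            self.branches.append((branch, key, value, confidence))
            return branch
        self.main[key] = value
        return "main"

    def merge_ready(self, threshold=0.9):
        """Merge parked branches whose confidence clears the threshold."""
        remaining = []
        for branch, key, value, conf in self.branches:
            if conf >= threshold:
                self.main[key] = value  # fast-forward the branch into main
            else:
                remaining.append((branch, key, value, conf))
        self.branches = remaining

store = BranchingStore()
store.write("acme.address", "1 Main St", 0.95)      # lands on main
b = store.write("acme.address", "2 Oak Ave", 0.97)  # conflicts -> new branch
store.merge_ready(threshold=0.9)                    # high confidence: merged
```

Contrast this with the MDM hub, where the second write would have stalled the pipeline until a data steward adjudicated it.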
Of Stigmergy and Ontologies
Stigmergy is the indirect coordination of AI agents through environment modification. Agents writing to the context graph modify the ontology that other agents read from.
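A toy illustration of stigmergy, with names and facts entirely of my own making: two agents never message each other, yet they coordinate because one reacts to the traces the other leaves in the shared graph.

```python
# Stigmergy in miniature: agents coordinate only through the shared
# environment (the graph), never through direct communication.
# All node names and rules here are invented for illustration.

graph = {"edges": []}  # the shared environment

def enrich_agent(graph):
    # Deposits a fact into the environment.
    graph["edges"].append(("Customer:acme", "LOCATED_IN", "City:berlin"))

def inference_agent(graph):
    # Reacts to traces left by other agents, deriving new edges.
    derived = []
    for src, rel, dst in graph["edges"]:
        if rel == "LOCATED_IN" and dst == "City:berlin":
            derived.append((src, "IN_REGION", "Region:eu"))
    graph["edges"].extend(derived)
    return derived

enrich_agent(graph)
new = inference_agent(graph)  # picks up the trace left by enrich_agent
```

The second agent never received a message; the modified environment was the message.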
Ah, the dreaded ontologies: they were popular circa 2015, during the last knowledge graph wave. They faded into oblivion again, not because the graph engines of the era weren’t good, nor because the DSLs weren’t expressive enough, nor because of the formats. What killed them was the ontology-design process as a strict prerequisite to deriving any value, and the ontology engineers it demanded were a scarce and expensive resource.
“Ontologies define the fitness landscape over which AI agents optimize. When the ontology is clear, agents make clear decisions, when unclear they provide garbage.”[1]
But who makes the ontology clear? Based on what I know so far, I can formulate a thesis.
Since Omnigraph is agent-centric, AI agents infer and evolve the schema from the data itself. Ontology drift is handled by schema branching, where an agent can propose a change to the ontology on a new branch, validate it against real queries, and merge only when a certain confidence threshold is met.
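The propose-validate-merge loop above can be sketched as follows. The validation harness, the confidence measure (fraction of replayed queries that succeed), and the threshold are all assumptions of mine, not Omnigraph's documented mechanism.

```python
# Hypothetical sketch of schema branching for ontology drift: an agent
# proposes a schema change on a branch, replays representative queries
# against it, and merges only if confidence clears a threshold.

def validate_branch(proposed_schema, queries, threshold=0.8):
    """Replay queries against the proposed schema; each query is a
    predicate over the schema. Confidence is the pass rate."""
    passed = sum(1 for q in queries if q(proposed_schema))
    confidence = passed / len(queries)
    return confidence >= threshold, confidence

current = {"Customer": {"name"}}
proposal = {"Customer": {"name", "email"}}  # agent-proposed drift

# Representative queries the graph must keep answering after the merge.
queries = [
    lambda s: "name" in s.get("Customer", set()),
    lambda s: "email" in s.get("Customer", set()),
]

ok, conf = validate_branch(proposal, queries)
if ok:
    current = proposal  # merge the schema branch into main
```

The key property is that a bad proposal dies on its branch without ever disturbing the ontology that other agents are reading.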
I put the first part of this thesis into practice in my “autonomous-context-fabric” reference architecture, where my AI agent (Gemini CLI), given its existing knowledge of my problem domain, successfully synthesized the relevant schema. In future iterations, I fully intend to test ontology drift using my planned agent swarm.
Conclusion
We have seen compute become ephemeral over the last decade owing to durable cloud storage. The team at Omnigraph envisions a future where enterprise software becomes thinner, and the context becomes thicker [1].
My journey into context graphs started as an attempt to solve the AI hallucination problem by grounding models in context. But here I am now, experiencing an agent-centric paradigm shift first-hand and completely on board with Omnigraph’s vision, “The beginning of Infinity” [1], where the durable system of record of an AI-native organization becomes a governed context graph, and enterprise applications become increasingly transient.
Next Steps
While the team at Omnigraph is busy executing their vision, my mission is to build the application layer, the “autonomous-context-fabric”, and watch it grow ever thinner as Omnigraph matures.

