I wrote a book about semantic spacetime and its role in empowering AI.
After two decades in technology, including years building graph systems and AI agents, I’ve discovered why current approaches to AI memory fundamentally miss the mark, and what we need to do about it.
We’re at a critical juncture in AI development. As we build increasingly sophisticated agents for personal assistance, emotional support, and complex decision-making, we’ve hit a wall that most practitioners don’t even recognize: our memory systems are fundamentally broken.
The problem isn’t storage capacity or retrieval speed. It’s that current AI memory architectures — whether vector embeddings, traditional knowledge graphs, or hybrid approaches — miss something essential about how understanding actually works. They store information but lose the causal relationships that drive human experience and decision-making.
The Vector Embedding Trap
Vector embeddings have dominated AI memory systems, and for good reason — they’re computationally efficient and initially produced impressive results. But as we’ve scaled from simple retrieval to complex reasoning tasks, their limitations have become glaring.
Consider this example: the sentences “I hate my wife” and “I love my wife” often end up surprisingly close in vector space, especially without temporal context. The mathematical similarity masks a semantic chasm that renders the system useless for nuanced understanding. More critically, these high-dimensional representations are opaque — we can’t explain why certain concepts cluster together or how the system reaches its conclusions.
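You can see this for yourself. Here is a minimal sketch using the sentence-transformers library to compare the two sentences; the model name is an arbitrary choice and the exact score will vary, but the similarity typically comes out high despite the opposite meanings.

```python
# Minimal sketch: measuring embedding similarity of opposite-sentiment
# sentences. Requires `pip install sentence-transformers`; the model
# choice here is illustrative, not prescriptive.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

love = model.encode("I love my wife")
hate = model.encode("I hate my wife")

# Cosine similarity rewards shared topic, structure, and vocabulary,
# so near-opposite statements still land close together in vector space.
print(util.cos_sim(love, hate).item())
```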
This opacity becomes a fundamental problem when building AI agents that need to make important decisions or provide emotional support. How can we trust a system that can’t explain its reasoning?
The Knowledge Graph Limitation
Traditional knowledge graphs seem like a natural solution — they make relationships explicit and provide interpretable structure. But most implementations rely on arbitrary relationship types that don’t generalize across domains. Worse, they typically represent static facts rather than the dynamic, contextual nature of how humans actually understand and use knowledge.
The real world isn’t a collection of static facts. It’s a web of causal relationships, contextual similarities, and hierarchical structures that shift based on perspective and purpose. Current graph approaches capture snapshots but miss the motion picture.
Enter Semantic Spacetime
The solution lies in borrowing concepts from theoretical physics and applying them to knowledge representation. Just as physical spacetime unifies space and time into a single framework, semantic spacetime creates a unified framework in which knowledge exists in a multi-dimensional space where meaning emerges from relationships. That space is defined by four fundamental relationship types (a code sketch follows the list):
NEAR/SIMILAR TO — Weighted connections indicating semantic proximity, but unlike simple similarity measures, these distances can change based on context and purpose.
LEADS TO — Causal relationships that capture how events, decisions, and states flow into one another over time. This is the temporal backbone that most current systems completely miss.
CONTAINS — Hierarchical relationships that organize knowledge into nested structures, from organizational charts to conceptual taxonomies.
EXPRESSES PROPERTY — Attribute relationships that define what entities fundamentally are, going beyond simple key-value pairs to semantic characteristics.
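To make these four types concrete, here is a minimal sketch of how a semantic-spacetime node and its typed links might be represented. The class and field names are my own illustrative choices, not an API from the book.

```python
from dataclasses import dataclass, field
from enum import Enum

class LinkType(Enum):
    NEAR = "near/similar to"          # weighted semantic proximity
    LEADS_TO = "leads to"             # causal/temporal flow
    CONTAINS = "contains"             # hierarchical nesting
    EXPRESSES = "expresses property"  # defining attributes

@dataclass
class Link:
    kind: LinkType
    target: str
    weight: float = 1.0   # proximity or causal strength; context-dependent
    context: str = ""     # the situation in which this link holds

@dataclass
class Node:
    name: str
    links: list[Link] = field(default_factory=list)

# One memory expressed through all four relationship types:
career_change = Node("career change", links=[
    Link(LinkType.LEADS_TO, "new role"),
    Link(LinkType.NEAR, "relocation", weight=0.7, context="major life decisions"),
    Link(LinkType.CONTAINS, "salary negotiation"),
    Link(LinkType.EXPRESSES, "high risk tolerance"),
])
```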
The Power of Causality-Focused Memory
The breakthrough insight is shifting focus from “what happened when” to “how events connect to each other.” When an AI agent understands not just that you made a career change, but the causal chain that led to that decision — the dissatisfaction, the opportunity, the risk assessment, the outcome — it can provide genuinely helpful guidance for similar future situations.
This causality-focused approach creates memory systems that can (a traversal sketch in code follows the list):
Trace decision patterns across different life domains
Identify intervention points where different choices might lead to better outcomes
Understand emotional causality — what events trigger certain responses and why
Provide contextual relevance that adapts to current conversation flows
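Reusing the Node, Link, and LinkType definitions from the sketch above, tracing a causal chain reduces to following only LEADS_TO edges. The memory contents and the helper name here are invented for illustration.

```python
# Reusing Node, Link, and LinkType from the earlier sketch.
def trace_chain(memory: dict[str, Node], start: str, max_steps: int = 10) -> list[str]:
    """Follow LEADS_TO links forward from `start` until the chain ends or cycles."""
    chain = [start]
    while len(chain) <= max_steps:
        node = memory.get(chain[-1])
        step = None
        if node:
            step = next((link.target for link in node.links
                         if link.kind is LinkType.LEADS_TO), None)
        if step is None or step in chain:
            break
        chain.append(step)
    return chain

memory = {
    "job dissatisfaction": Node("job dissatisfaction",
                                [Link(LinkType.LEADS_TO, "exploring options")]),
    "exploring options":   Node("exploring options",
                                [Link(LinkType.LEADS_TO, "risk assessment")]),
    "risk assessment":     Node("risk assessment",
                                [Link(LinkType.LEADS_TO, "career change")]),
}

print(trace_chain(memory, "job dissatisfaction"))
# -> ['job dissatisfaction', 'exploring options', 'risk assessment', 'career change']
```

Run the same traversal backward over LEADS_TO edges and you answer “what led to this?”, which is one plausible basis for identifying intervention points.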
Practical Implementation: Layered Graph Architecture
The theoretical framework becomes practical through layered graph architectures borrowed from social network analysis. Instead of one monolithic graph trying to capture all possible relationships, we create multiple layers where the same entities exist but with different relationship structures depending on context.
A medical concept like “pressure” exists simultaneously in:
A clinical layer with relationships to diagnoses and treatments
A patient education layer with connections to symptoms and lifestyle factors
A research layer linked to methodologies and statistical findings
The system can dynamically switch between layers or blend information from multiple layers based on conversational context, user expertise, and immediate needs.
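Here is that layering in miniature: the same entity carries different relationship structures per layer. The layer names and contents follow the “pressure” example, and the selection logic is an assumption about one simple way such a system could choose a layer.

```python
# A layered graph in miniature: one entity, several relationship
# structures. Layer names and contents are illustrative.
layers: dict[str, dict[str, list[tuple[str, str]]]] = {
    "clinical": {
        "pressure": [("LEADS_TO", "hypertension diagnosis"),
                     ("NEAR", "antihypertensive treatment")],
    },
    "patient_education": {
        "pressure": [("NEAR", "headaches and dizziness"),
                     ("LEADS_TO", "reducing salt intake")],
    },
    "research": {
        "pressure": [("NEAR", "ambulatory monitoring methodology"),
                     ("EXPRESSES_PROPERTY", "population-level variance")],
    },
}

def neighbors(entity: str, context: str) -> list[tuple[str, str]]:
    """Return an entity's relationships as seen from a given layer."""
    return layers.get(context, {}).get(entity, [])

# The same concept answers differently depending on conversational context:
print(neighbors("pressure", "clinical"))
print(neighbors("pressure", "patient_education"))
```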
Why This Matters Now
We’re moving beyond AI that simply processes information toward AI that genuinely understands the structured nature of human knowledge and experience. The implications are profound:
Personal AI Assistants can provide guidance based on understanding your actual decision patterns rather than generic advice.
Emotional Support Systems can recognize the causal chains that lead to emotional states and provide targeted interventions.
Enterprise Knowledge Management can capture not just what the organization knows, but how that knowledge connects to outcomes and decisions.
Educational AI can understand prerequisite knowledge chains and adapt explanations to individual learning patterns.
The Path Forward
After 23 years in technology and seven years specifically focused on graph systems, I’ve written “Semantic Spacetime” to provide both the theoretical foundation and practical guidance for implementing these concepts. The book draws heavily on Mark Burgess’s groundbreaking work on semantic spacetime and promise theory, extending it specifically for AI agent memory systems.
This isn’t just another approach to knowledge representation — it’s a fundamental rethinking of how intelligent systems can model the world in ways that align with human reasoning about complex relationships and causality.
The future of AI lies not in processing more data faster, but in understanding the causal relationships that drive human experience and using that understanding to provide genuinely beneficial assistance. Semantic spacetime provides the theoretical and practical foundation for this transformation.
The question isn’t whether AI will become more sophisticated — it’s whether that sophistication will be grounded in genuine understanding or remain trapped in pattern matching. The choice we make now will determine whether AI becomes a true partner in human flourishing or simply a more elaborate form of automation.
More on Spacetime and Smart Spaces
https://www.linkedin.com/in/markburgessoslo/
Mark Burgess is the originator of semantic spacetime and promise theory. I highly recommend his books on both topics.