If you’ve been following me long enough, you know I have a couple of books about knowledge representation and memory. I started with Semantic Spacetime as a method for representing and organizing information for LLMs — and later for agents. Right now, I’m focused on context engineering, and you can read my book on it. It’s not very long, but it provides solid fundamentals for organizing knowledge.

Semantic Space Time for AI Agent Ready Graphs
This book introduces a revolutionary framework for knowledge representation and AI agent memory: Semantic Spacetime. Drawing from theoretical physics and graph theory, this framework offers a new way to understand how meaning, relationships, and causality can be structured in intelligent systems.
https://leanpub.com/sst-4-agenticai

In my next book, Temporal Aware AI memory: Why time is a key in a memory, I discuss how to actually apply Semantic Spacetime and how to treat time itself. I try to explain what kind of clock we need to rely on; the clock I use is event-based rather than tied to wall-clock time. I go deeper into the topic of memory: why we need memory, why memory is not RAG, and why we need different approaches. That book focuses mainly on conversational memory, though, because at the time I was focused on building agents that maintain conversations with users.

Temporal Aware AI memory: Why time is a key in a memory
So how do you make an AI agent and conversational agent understand time? How does time shape attention? How is time important for the context engine? You will learn how to add time to knowledge graphs, how time and causality drive context, and how to make the knowledge graphs that are used for AI memory time-aware.
https://leanpub.com/time-aware-ai-memory
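
To make “time-aware” concrete, here is a minimal sketch of an event-stamped knowledge-graph edge, assuming an event-based clock where time is ordered by events rather than wall-clock ticks; the names (Event, TemporalEdge, holds_at) are illustrative, not the book’s actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    event_id: int      # monotonically increasing counter: the "clock" is event order
    description: str

@dataclass
class TemporalEdge:
    source: str                              # node the relation starts from
    relation: str                            # e.g. "observed", "caused", "works_on"
    target: str                              # node the relation points to
    created_by: Event                        # the event that made this fact true
    invalidated_by: Optional[Event] = None   # the event that ended it, if any

    def holds_at(self, event_id: int) -> bool:
        """Does this edge hold at a given point in event time?"""
        started = self.created_by.event_id <= event_id
        ended = (self.invalidated_by is not None
                 and self.invalidated_by.event_id <= event_id)
        return started and not ended

e1 = Event(1, "user joins project Alpha")
e2 = Event(7, "user leaves project Alpha")
edge = TemporalEdge("user:anna", "works_on", "project:alpha",
                    created_by=e1, invalidated_by=e2)
print(edge.holds_at(3))   # True: between the creating and invalidating events
print(edge.holds_at(9))   # False: the edge was invalidated by event 7
```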

Now that agents have arrived, they face new challenges that go beyond conversational memory. These challenges introduce many new needs: operational memory, decision traces, action logs, and everything related to making the agent think about its actions and improve over time.

I’ve observed the huge hype around the $100 trillion rebranding of knowledge graphs as “context graphs.” I agree it’s mainly hype, but it raises important use cases and needs that we can’t ignore. Somehow, we need to understand decisions. Somehow, we need to make those decisions explainable. We also need to think about how an agent will operate in swarms of agents and massive agentic systems. All of this creates new challenges, especially for enterprise and company-level agents that make decisions about humans. Those decisions need to be explainable and understandable. A completely different set of challenges is arriving, but we still need memory.

Given these new challenges, while still building on the existing topics, I decided to write a book on context graphs. In practice, though, it goes beyond context graphs. I explain the need to use memory together with cognitive processes: causal and temporal analysis, perhaps some topological analysis, and tooling that doesn’t just store data properly but also builds pipelines and cognitive procedures to work with the data, analyze it, and recall and reconstruct it.
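
As a rough illustration of memory paired with cognitive procedures rather than bare storage, here is a minimal sketch of a staged pipeline; the MemoryPipeline class and the store/analyze/reconstruct stages are hypothetical, not an API from the book.

```python
from typing import Any, Callable

class MemoryPipeline:
    """Chains cognitive procedures over stored data instead of stopping at storage."""

    def __init__(self) -> None:
        self.stages: list[Callable[[Any], Any]] = []

    def add_stage(self, stage: Callable[[Any], Any]) -> "MemoryPipeline":
        self.stages.append(stage)
        return self

    def run(self, data: Any) -> Any:
        # Each stage transforms what the previous one produced.
        for stage in self.stages:
            data = stage(data)
        return data

# Example wiring: store raw observations, attach causal/temporal annotations,
# then reconstruct a compact context for the agent to reason over.
pipeline = (
    MemoryPipeline()
    .add_stage(lambda obs: {"stored": obs})               # store properly
    .add_stage(lambda mem: {**mem, "causal_links": []})   # causal/temporal analysis
    .add_stage(lambda mem: {"context": mem})              # recall and reconstruct
)
print(pipeline.run(["user asked about billing", "agent escalated ticket"]))
```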

I also focus on Promise Theory: how promises could be the basis of multi-agent systems, and why it’s important to understand decision traces, decision graphs, and promise graphs. How promises lead to actions, how they drive decisions, and how they, together with data signals, rules, and training data, shape the behavior of an agent is also a key topic.
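
To give a feel for what a promise graph with decision traces might look like, here is a minimal sketch in the spirit of Promise Theory, where an agent only promises its own behavior and each decision records the promises it relied on; the Promise and Decision types are illustrative assumptions, not the book’s schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Promise:
    promiser: str                 # the agent making the promise (about itself only)
    promisee: str                 # the agent the promise is made to
    body: str                     # what is promised, e.g. "can summarize documents"
    kept: Optional[bool] = None   # outcome, once it has been observed

@dataclass
class Decision:
    decided_by: str
    action: str
    based_on: list[Promise] = field(default_factory=list)   # the decision trace

# Example: an orchestrator routes a task because a worker promised a capability;
# keeping the trace makes the routing decision explainable after the fact.
capability = Promise(promiser="worker-1", promisee="orchestrator",
                     body="can summarize documents")
routing = Decision(decided_by="orchestrator",
                   action="route summarization task to worker-1",
                   based_on=[capability])
```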

Beyond Context Graphs: Agentic Memory, Cognitive Processes, and Promise Graphs
Agentic Memory: Beyond context graphs. Build enterprise AI with decision traces, promise theory, causality & explainable multi-agent systems that learn.
https://leanpub.com/beyondcontextgraphs

We’ll focus on four parts in the book:

1. What are context graphs originally?
2. Why do we need more?
3. How to make this “more” happen?
4. The basics of promise graphs and Promise Theory, together with scheduling and temporal analysis.

Scheduling and temporal analysis require us to revisit memory, time in memory, and causality. I’m happy to share parts of my memory book here, because its memory design actually supports this view, even though it wasn’t originally designed for this purpose.

I really recommend buying the bundle. At the same time, I understand that you might start with this book and work backwards through the earlier ones.

About the Cover

The cover presents the fundamental architecture of AI memory systems through the convergence of two ancient mythological frameworks: the Mesopotamian god Enki and the Norse Norns.

Enki: The Architect of Ordered Knowledge

In the center-left of the composition, Enki appears as the Babylonian deity who organized humanity’s access to cosmological knowledge and established the fundamental structures of civilization. In mythology, Enki was responsible for the _me_ — divine decrees that governed all aspects of existence from kingship to craftsmanship. This parallels the role of **context graphs** in AI systems: creating ordered structures that govern how agents access, organize, and apply knowledge.

Enki’s representation symbolizes:

- Structural organization: Just as Enki established the cosmic order, context graphs impose structure on the chaos of raw data
- Knowledge access: Enki granted humans the ability to comprehend divine wisdom; context graphs enable AI agents to navigate complex information spaces
- Systematic thinking: The Babylonian emphasis on enumeration and categorization mirrors the graph-based organization of agentic memory

The Norns: Weavers of Temporal Causality

To the right, the three Norns — Urðr (What-Was), Verðandi (What-Is-Becoming), and Skuld (What-Shall-Be) — sit at the base of Yggdrasil, weaving the threads of fate and time. Their presence directly embodies the book’s central thesis about **temporal causality** in AI memory systems.

The Norns represent:

- Bi-temporal tracking: Their division into past, present, and future mirrors the bi-temporal database concept (valid time vs. transaction time; see the sketch after this list)
- Causal chains: The weaving metaphor perfectly captures how decisions influence each other across time through causal edges
- Decision traces: Just as the Norns record every action in their weaving, context graphs preserve the provenance of decisions
- Temporal reasoning: The ability to query “what was known when” requires understanding time as the Norns did — not as linear progression but as interwoven threads
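
As a minimal sketch of the bi-temporal idea above, the snippet below separates valid time from transaction time and shows how an “as of” query answers “what was known when”; the BiTemporalFact type and helper functions are illustrative, not taken from the book.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class BiTemporalFact:
    statement: str
    valid_from: datetime              # when the fact became true in the world
    valid_to: Optional[datetime]      # when it stopped being true (None = still true)
    recorded_at: datetime             # transaction time: when the system learned it

def as_known_at(facts: list[BiTemporalFact], query_time: datetime) -> list[BiTemporalFact]:
    """'What was known when': keep only facts recorded on or before query_time."""
    return [f for f in facts if f.recorded_at <= query_time]

def true_at(facts: list[BiTemporalFact], moment: datetime) -> list[BiTemporalFact]:
    """'What was true when': filter on valid time instead of transaction time."""
    return [f for f in facts
            if f.valid_from <= moment and (f.valid_to is None or moment < f.valid_to)]
```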

The Apple of Knowledge: Harvesting Structured Understanding

The golden apple held between these mythological frameworks represents the harvested knowledge that emerges from properly structured agentic memory. Unlike the raw, chaotic “information around us,” this apple symbolizes:

- Refined understanding: Knowledge extracted, validated, and organized through systematic processes
- Epistemological grounding: The data traces that connect claims back to their sources
- Actionable intelligence: Information transformed into a form that enables genuine decision-making

The apple is not given freely — it must be **harvested** through the deliberate construction of temporal-causal structures. This agricultural metaphor emphasizes that sophisticated AI memory doesn’t emerge automatically from data accumulation; it requires architectural intentionality.

The Philosophical Synthesis

The juxtaposition of Mesopotamian and Norse mythology creates a philosophical dialogue between **order** (Enki) and **time** (the Norns) — the two fundamental dimensions of agentic memory:

1. Spatial/Structural Organization (Enki): How entities and relations are organized in semantic space, creating the graph topology that enables retrieval and reasoning

2. Temporal/Causal Flow (Norns): How knowledge evolves through time, how decisions influence future states, and how past context shapes present understanding

This synthesis directly addresses the book’s core argument: effective agentic memory requires both **context graphs** (structural organization of decision provenance) and **temporal causality** (understanding how knowledge and decisions evolve through time). Neither dimension alone is sufficient.