
Memory Patterns for a Personal AI Assistant

I've been running OpenClaw daily since February 2026. The built-in memory system works; these are the conventions and workflows I layered on top to make it handle a genuinely complex life.

What's OpenClaw vs what's custom: OpenClaw provides the memory search (qmd), the session management, the cron scheduler, and the MEMORY.md / AGENTS.md / SOUL.md file conventions. Everything below is how I use those features. Patterns you can adopt, not code you need to install.
Contents
1. The Use Case
2. Pattern: Trust-Scored Long-Term Memory
3. Pattern: Self-Correction Rules
4. Pattern: Obsidian Vault as Knowledge Store
5. Pattern: Shared Vault for Multi-Agent
6. Pattern: Overnight Research Sprints
7. qmd Under the Hood
8. Session Continuity — How /new Doesn't Mean Starting Over
9. What Doesn't Work

1. The Use Case

I'm planning an international relocation while managing a stock portfolio, an immigration timeline, a financial planning tool with 4 specialized agents, and client projects. The details are interconnected; getting one tax rate wrong costs five figures.

My assistant has been running continuously since February 2026 across 66 daily sessions. It needs to remember specific numbers, specific dates, specific rules, and specific mistakes it made that I never want repeated.

OpenClaw handles the basics out of the box. MEMORY.md for long-term facts, memory/*.md for daily logs, qmd for search. What follows is what I built on top when the basics weren't enough.

2. Pattern: Trust-Scored Long-Term Memory

What OpenClaw gives you: MEMORY.md — a file the agent reads on startup and can edit freely.

What I added: A metadata convention that tracks where a fact came from, how often it gets used, and how old it is. Facts that are never accessed get removed; facts that are constantly referenced survive.

### Special Tax Regime = 0% Capital Gains
[t:1.0 | src:direct+legal | hits:50+ | since:2025-12]

# t     = trust score
#         1.0 = direct from owner or primary legal source
#         0.7 = inferred from multiple sources
#         0.5 = external/unverified
# src   = provenance chain
# hits  = times this fact was decision-relevant in a session
# since = date first recorded

The pruning rule: any fact with hits=0 older than 60 days is a candidate for removal. The agent proposes deletions; I approve or reject. Current MEMORY.md sits at 216 lines, capped at 300. It's been pruned 3 times since February.

Without pruning, the file grows until the agent spends tokens reading things that no longer matter. The trust metadata makes the pruning principled instead of arbitrary; you keep what's actually used.

Caveat: The hits counter is manually maintained by the agent. It's approximate — the agent increments it when a fact is relevant to a conversation, but it doesn't always remember to. It's directionally useful, not precise.
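The metadata line and the 60-day pruning rule are simple enough to sketch. This is a minimal illustration, not part of OpenClaw: the function names, the regex, and the choice to treat `since:` as the first of the month are all my assumptions; `hits:50+` is read as a numeric floor of 50.

```python
import re
from datetime import date

# Parses the [t:... | src:... | hits:... | since:YYYY-MM] convention above.
# Illustrative only: field names match the convention, everything else is assumed.
META_RE = re.compile(
    r"\[t:(?P<t>[\d.]+)\s*\|\s*src:(?P<src>[^|]+?)\s*\|\s*"
    r"hits:(?P<hits>\S+)\s*\|\s*since:(?P<since>\d{4}-\d{2})\]"
)

def parse_meta(line):
    m = META_RE.search(line)
    if not m:
        return None
    return {
        "trust": float(m.group("t")),
        "src": m.group("src").strip(),
        # "50+" means "at least 50"; keep the numeric floor
        "hits": int(m.group("hits").rstrip("+")),
        "since": m.group("since"),
    }

def is_prune_candidate(meta, today, max_idle_days=60):
    """A fact with zero hits whose first-recorded month is older than 60 days."""
    year, month = map(int, meta["since"].split("-"))
    age_days = (today - date(year, month, 1)).days
    return meta["hits"] == 0 and age_days > max_idle_days
```

A real pass over MEMORY.md would collect candidates and let the agent propose them for deletion, as described above.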

3. Pattern: Self-Correction Rules

What OpenClaw gives you: SOUL.md — a persona/behavior file read every session.

What I added: A "Hard Rules" section where every correction gets written as a permanent behavioral constraint. Not a log entry, not a "lesson learned." A rule that fires every session. The full SOUL.md is downloadable below.

# SOUL.md — Hard Rules section (20 rules, 4 categories)

### Process
- **NEVER guess the date. USE THE MESSAGE TIMESTAMP.**
  This has broken briefings and trust multiple times.

- **Never start work >10 min or >$1 without stating the plan first.**
  State what you're about to do before doing it.

- **Stay visible: checkpoint every ~5 min, never go dark.**
  Whether complex reasoning or bulk file moves — silence = lost trust.
  Use background:true for long commands, never chain blocking polls.

- **Subagents: never delegate coupled files, always diff output.**
  Subagents lie about what they did. Verify before deploying.

### Knowledge & Verification
- **Search knowledge-library BEFORE answering domain questions.**
  For legal/financial claims: never fabricate citations.
  If you can't find the primary source, the citation doesn't exist.

### Quality
- **Readability is accessibility.**
  Spanish: euro AFTER (2.500 €). Period=thousands, comma=decimal.

- **Consistency is a requirement.**
  Never redesign a page in isolation — it's part of a larger system.

- **Never make things up.**
  Never estimate when actuals exist. Never fabricate sources.

### Operations
- **Never paste code in chat.**
  Write it to a file on Desktop. Applies to SQL, commands, configs.

Mechanics: When I correct the agent, it writes a rule to SOUL.md. Not an acknowledgment; not a log entry. The rule is the output. Currently 20 rules across 4 categories. The cap is 20; if a new rule needs to go in, two existing rules must merge.

The test: If I have to make the same correction twice, the system failed. Zero exact repeats so far.
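The cap-and-merge mechanic can be sketched as a few lines of logic. This is a model of the convention, not anything OpenClaw enforces — SOUL.md is just text the agent edits — and the data model (a flat list of rule strings) and function names are mine:

```python
RULE_CAP = 20  # the cap described above

def add_hard_rule(rules, new_rule, merged=None):
    """Add a rule; at the cap, two existing rules must merge to make room.

    `merged` is an optional (rule_a, rule_b, combined) tuple the agent
    proposes when the list is full. Illustrative sketch only.
    """
    if new_rule in rules:
        return rules  # an exact repeat means the system already failed
    if len(rules) < RULE_CAP:
        return rules + [new_rule]
    if merged is None:
        raise ValueError("at cap: propose two rules to merge first")
    rule_a, rule_b, combined = merged
    kept = [r for r in rules if r not in (rule_a, rule_b)]
    return kept + [combined, new_rule]
```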

4. Pattern: Obsidian Vault as Knowledge Store

What OpenClaw gives you: qmd indexes any folder of Markdown files for semantic search.

What I changed: Instead of keeping knowledge in the workspace (the default), I made an Obsidian vault the single source of truth. The vault is what qmd indexes, what agents write to, and what I browse in Obsidian. The workspace only holds operational files.

The folder structure mirrors Google Drive so the same mental model works in both places:

My-Vault/                            # 1,210 .md files
├── 00_Destination/                  # Relocation, tax regime, properties
├── 00_Current/                      # Taxes, portfolio, stocks, immigration
│   ├── Taxes/                       # 14 files — returns, projections, exit tax
│   ├── Wealth Management/           # 27 files — portfolio, analyses, risk
│   └── Jobs/                        # 3 files — stock schedule, trading windows
├── 01_Projects/                     # AI tools and system docs
├── 03_Career/                       # Work history, press, contacts
├── 04_Personal/                     # Pet, people, identity
├── 08_Research/                     # Deep reference (agent-generated)
│   ├── Tax/                         # 57 files
│   ├── FIRE/                        # 62 files
│   ├── Hardened Facts/              # 31 files — primary-source verified
│   └── ...
└── 09_Journal/                      # Session logs mirrored from workspace

The qmd config that makes this work:

// openclaw.json
{
  "memory": {
    "backend": "qmd",
    "qmd": {
      "paths": [
        { "path": "~/Documents/Obsidian/My-Vault", "name": "vault" }
      ],
      "update": { "interval": "5m" }
    }
  }
}

Why vault instead of workspace: My workspace has 379 files of agent operational data mixed with actual knowledge. The vault is curated. Only knowledge worth remembering. That means qmd searches return relevant results because there's less noise in the index.

The tradeoff: Agents need to write to a path outside the workspace. This works on a local machine; it wouldn't work in a sandboxed cloud environment.

5. Pattern: Shared Vault for Multi-Agent

I run 4 specialized agents (Researcher, Strategist, Portfolio, Tax) that all need access to the same knowledge. Message-passing between agents doesn't scale; per-agent stores mean knowledge gets duplicated and diverges.

The pattern: All agents search the same vault via memory_search. The Tax agent writes a verified withholding rate to 08_Research/Tax/. The Portfolio agent finds it when searching for RSU tax implications. No coordination, no duplication.

qmd doesn't care who wrote the file. An agent writes a new .md file, qmd picks it up within 5 minutes, and every other agent can find it.

Hardened facts

Some facts in 08_Research/Hardened Facts/ have been verified against primary sources (government tax sites, OECD treaty text). This is a convention, not a qmd feature. Each file includes the source URL and verification date. qmd tends to rank these highly because the content is dense and specific.

Currently 31 verified fact files covering tax code sections, treaty articles, and special regime eligibility. Each traces to a specific official source.
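The article doesn't reproduce a hardened-fact file, so here is a hypothetical template following the stated convention (source URL plus verification date, trust metadata at the top). Every field name and placeholder is illustrative:

```markdown
# <Specific claim, stated as the title>

[t:1.0 | src:primary-legal | hits:0 | since:2026-04]

**Source:** <URL of the government site or treaty text>
**Verified:** <date>, against the primary text.

One dense, specific paragraph stating the fact, quoting the operative
clause, and noting any eligibility conditions or effective dates.
```

Dense, specific paragraphs like this are also what qmd's reranker tends to score highly, which is why these files surface first.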

6. Pattern: Overnight Research Sprints

What OpenClaw gives you: Cron jobs that spawn isolated agent sessions on a schedule.

What I built: A pattern where research sessions run overnight, write directly to vault folders, and a morning assembly job summarizes the results to Discord.

# 10 one-shot cron jobs, staggered every 30 minutes
# Each uses OpenRouter's free Healer Alpha model (262K context)
# Each writes to the correct vault folder
# deleteAfterRun: true — jobs self-clean

12:00 AM  Tax regime 2026 updates          → 00_Destination/
12:30 AM  Rental market Q1 2026            → 00_Destination/Property/
 1:00 AM  Pet import requirements           → 04_Personal/Pet/
 1:30 AM  Credit card strategy abroad      → 00_Destination/
  ...
 6:30 AM  Assembly → Discord #briefing
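The stagger itself is trivial to generate. A sketch, assuming the real job definitions live in OpenClaw's cron config (topics, folders, and the helper below are placeholders; only `deleteAfterRun` is named in the article):

```python
from datetime import datetime, timedelta

def stagger(topics, start="00:00", step_minutes=30):
    """Assign each (topic, target_folder) pair a slot every 30 minutes."""
    t = datetime.strptime(start, "%H:%M")
    jobs = []
    for topic, folder in topics:
        jobs.append({
            "time": t.strftime("%I:%M %p"),  # e.g. "12:00 AM"
            "topic": topic,
            "folder": folder,
            "deleteAfterRun": True,  # one-shot jobs self-clean
        })
        t += timedelta(minutes=step_minutes)
    return jobs
```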

Why this works: An agent writes 00_Destination/Credit Card Strategy.md at 1:30 AM. qmd reindexes at 1:35. By 1:40, any other agent searching for "credit cards abroad" finds it. By morning, I open Obsidian and see 10 new files in the right folders, already searchable.

One write creates three access paths: qmd search, Obsidian browse, agent context. Zero sync steps. That's the payoff of making the vault the single store.

Free model caveat: OpenRouter's free-tier models (Healer Alpha, Hunter Alpha) use your inputs for training. Don't send PII, account numbers, or sensitive data through them. The overnight prompts are written to avoid personal details — they describe the use case generically ("someone with multiple brokerage accounts") rather than including real numbers.

7. qmd Under the Hood

qmd is maintained by Tobi Lütke and is an optional backend for OpenClaw's memory system. This section covers what it does under the hood.

It runs three local GGUF models via node-llama-cpp:

| Model | Size | Role |
|---|---|---|
| qmd-query-expansion-1.7B | 1.2 GB | Expands query into variants for better recall |
| embeddinggemma-300M | 313 MB | Vector embeddings for semantic similarity |
| qwen3-reranker-0.6B | 610 MB | Re-ranks combined results by relevance |

Total disk: 2.1 GB (models + 22 MB SQLite index for 1,210 files). Models auto-download on first run.

Three search modes, real latency

Measured on M2 Mac mini, 1,210 indexed files:

| Command | Method | Measured Latency |
|---|---|---|
| `qmd search` | BM25 full-text only | 0.5s |
| `qmd vsearch` | Vector similarity only | ~2s |
| `qmd query` | BM25 + vector + reranker (default) | 10.5s |

qmd query is the default OpenClaw mode. 10 seconds sounds slow, but the agent sends the search and continues composing its response; the results arrive before it needs them. In practice, the user never waits.

Each search returns the top 6 results, each capped at 700 characters. Total context injected: ~1,200 tokens. That's the efficiency win: ~1,200 tokens to search 1,210 files, versus RAG systems that often inject 3,000-5,000+.
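The budget checks out against the usual ~3.5-4 characters-per-token heuristic for English prose (the heuristic itself is an approximation, not something qmd guarantees):

```python
# Back-of-envelope check on the injected-context budget above.
results = 6
chars_per_result = 700           # qmd's per-result cap
chars = results * chars_per_result      # 4,200 characters max
tokens_low = chars // 4          # ~1,050 tokens at 4 chars/token
tokens_high = chars // 3         # ~1,400 tokens at denser tokenization
```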

Verbatim search result

qmd query "dog import EU microchip vaccination requirements" --json -n 6
🚨 CRITICAL: If the dog was vaccinated BEFORE being microchipped, the vaccination is INVALID for EU entry purposes. The dog must be revaccinated after microchipping, and the 21-day + titer test timelines restart from the new vaccination date.

Action: Verify the dog's microchip is ISO-compliant and was implanted before the current rabies vaccination.
Source: research/eu-dog-import.md · Score: 0.96 · ~10s (query mode)

This is a verbatim result — the text above is exactly what the file contains, written during a research session in February 2026 and verified against EU Regulation 576/2013.

8. Session Continuity — How /new Doesn't Mean Starting Over

The question I get most: how does the AI pick up where it left off after a reset?

Every session starts blank. The trick isn't avoiding resets; it's making them cheap. I /new multiple times a day, and the assistant picks up context in seconds because everything is written down in the right place.

The Startup Sequence

When a new session starts, the assistant follows a strict startup protocol defined in AGENTS.md:

## Every Session — Before doing anything else:

1. Read plans/current.md — what we're working on RIGHT NOW.
   If it exists, this tells you where we left off and what to do next.

2. Get today's date. Then read:
   memory/<today>.md and memory/<yesterday>.md
   for recent context.

3. Read the tail of MEMORY.md — long-term memory.
   The tail has the most recent updates; the top has stale background.

That's it. Three file reads and the AI knows: what project is active, what happened recently, and what the long-term context is. Takes about 5 seconds.
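The three-step read is mechanical enough to sketch. This is an illustration of the protocol, not OpenClaw code — the agent performs these reads itself by following AGENTS.md; the function name and the tail length are my assumptions:

```python
from datetime import date, timedelta
from pathlib import Path

def startup_context(workspace, today=None, tail_lines=40):
    """Gather the three startup reads: plan, recent logs, MEMORY.md tail."""
    ws = Path(workspace)
    today = today or date.today()
    yesterday = today - timedelta(days=1)
    parts = []
    # 1. Active plan: where we left off and what to do next
    plan = ws / "plans" / "current.md"
    if plan.exists():
        parts.append(plan.read_text())
    # 2. Today's and yesterday's session logs for recent context
    for d in (today, yesterday):
        log = ws / "memory" / f"{d.isoformat()}.md"
        if log.exists():
            parts.append(log.read_text())
    # 3. Tail of long-term memory: newest entries live at the bottom
    mem = ws / "MEMORY.md"
    if mem.exists():
        parts.append("\n".join(mem.read_text().splitlines()[-tail_lines:]))
    return "\n\n---\n\n".join(parts)
```

Missing files are simply skipped, which matches the protocol's spirit: a brand-new workspace still starts cleanly.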

The File System That Makes It Work

📋 plans/current.md

The single most important file. Describes the active project: what's done, what's in progress, what's next. Updated continuously during work. When a project completes, this file gets replaced with the next one.

📅 memory/YYYY-MM-DD.md

Daily session logs. Raw record of what happened, decisions made, files changed. Written incrementally during work — not after. If a session crashes, the log survives.

🧠 MEMORY.md

Curated long-term memory. Trust-scored facts, account details, project state, personal context. Capped at ~300 lines. Stale facts get pruned. The distilled essence of everything the AI needs to know about you.

👻 SOUL.md

The AI's personality, behavioral rules, and self-correction history. Injected into every session automatically by OpenClaw. Contains "Hard Rules" — mistakes the AI must never repeat, written as constraints not logs.

Why This Works

It mirrors how human teams work. You don't replay every conversation when onboarding someone. You hand them the project brief, the recent standup notes, and the team wiki. Same pattern.

Write-during, not write-after. The AI logs progress to daily memory files during work, not at the end. If a session dies mid-task, the next one can see exactly where things stopped.

The plan file is the handoff. plans/current.md isn't a to-do list; it's a full continuity document with context, decisions, phase status, and rules. A fresh session reads it and knows what to do without asking.

The test: If I can /new and the AI picks up the active project without asking "what are we working on?" — the system works. If it asks, something wasn't written down.

Self-Correction That Persists

When I correct the AI, the fix goes directly into SOUL.md as a behavioral constraint. Not a log entry. A hard rule that gets read every single session:

- NEVER guess the date. USE THE MESSAGE TIMESTAMP.
- ALWAYS run the briefing script before posting ANY briefing.
- Never claim subagent work is done without diffing the output.
- If you promise a notification, SEND IT IMMEDIATELY.
- NEVER chain blocking polls on long-running commands.

Cap at 20 rules. If two overlap, merge. If it grows past 20, compress. The test: if I have to tell it the same thing twice, the system failed.

9. What Doesn't Work

Honest assessment of where this falls short.

qmd query is slow. 10 seconds per search on 1,210 files. Fine for an assistant that searches before responding; not viable for a user-facing search bar. BM25-only mode is 0.5s but loses semantic understanding.

Scale ceiling is real. I haven't tested beyond ~1,500 files. qmd stores everything in a single SQLite file (22 MB). At 10K+ files, reindexing every 5 minutes might become expensive.

Single machine, single user. The SQLite index doesn't support concurrent writers. Local-first system only.

Trust scoring is a convention, not enforcement. The [t:1.0|hits:50+] metadata is maintained by the AI itself. There's no mechanism preventing the agent from writing t:1.0 on something it hallucinated. I review MEMORY.md periodically. The system requires human oversight.

Curation is ongoing work. Without active pruning, the vault grows. Without reviewing what agents write, quality degrades. This needs a human who cares about what's in the knowledge base.

Model costs are real. The search infrastructure is $0 (everything local). The AI models that use the results cost money; a typical session runs $2-5 in API costs. The overnight sprints use free-tier models that train on your data.

The vault isn't everything. The AI also reads workspace files, daily logs, and retains its own training knowledge. When there's a conflict, the vault is supposed to win. That's a convention, not a guarantee.

Downloads

- **AGENTS.md** (Session Continuity): teaches your AI to pick up where it left off across sessions. The startup sequence, memory protocol, and workspace conventions.
- **SOUL.md** (Self-Correction): persona, voice, hard rules, and behavioral constraints. 20 rules across 4 categories, the compressed result of every correction.

Works with OpenClaw · No install required · Just drop in your workspace