┌─ vault ──────┐     ┌─ claim graph ─────────────┐     ┌─ validator ─┐
│ roadmap.md   │  →  │ ●──●──● subject·predicate │  →  │ ✓ verified  │
│ meeting.md   │     │ ●──● fingerprint=blake3   │     │ ~ stale     │
│ retro.md     │     │ ●──●──● span=142..198     │     │ ✗ rejected  │
└──────────────┘     └───────────────────────────┘     └─────────────┘
Verifiable cognitive memory for personal vaults.
Cite, or it didn't happen.
Most memory tools retrieve note chunks and ask the model to quote them faithfully. Memora extracts atomic claims with byte-level pointers back to your markdown, fingerprints the source span at extraction, and re-checks the hash before any answer reaches you.
Prompt-level "please cite your sources" doesn't survive contact with a confident model. Memora makes unsupported citations impossible to surface, by construction.
Subject·predicate·object triples extracted from prose. The unit of memory is small enough to verify, large enough to reason over.
Every claim stores a blake3 hash of the exact source bytes. Edit the note and the hash changes; downstream uses re-validate or go stale.
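The claim-plus-fingerprint shape can be sketched as a small struct: the triple, a byte span into the source note, and a hash of exactly those bytes taken at extraction time. This is an illustrative sketch, not Memora's actual types, and it uses std's DefaultHasher as a dependency-free stand-in for blake3:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::ops::Range;

/// Hypothetical claim record; field names are illustrative.
struct Claim {
    subject: String,
    predicate: String,
    object: String,
    span: Range<usize>, // byte offsets into the source note
    fingerprint: u64,   // hash of source[span] at extraction time
}

/// Stand-in for blake3: hash the exact source bytes under a span.
fn fingerprint(source: &[u8], span: &Range<usize>) -> u64 {
    let mut h = DefaultHasher::new();
    source[span.clone()].hash(&mut h);
    h.finish()
}

/// True iff the note bytes under the span still match the stored hash.
fn is_fresh(claim: &Claim, source: &[u8]) -> bool {
    claim.span.end <= source.len()
        && fingerprint(source, &claim.span) == claim.fingerprint
}

fn main() {
    let note = b"Q3 roadmap: ship claim validation by June.";
    let span = 0..42;
    let claim = Claim {
        subject: "roadmap".into(),
        predicate: "ships".into(),
        object: "claim validation".into(),
        span: span.clone(),
        fingerprint: fingerprint(note, &span),
    };
    assert!(is_fresh(&claim, note));
    // Edit the note: the recomputed hash no longer matches, so the claim is stale.
    assert!(!is_fresh(&claim, b"Q3 roadmap: ship claim validation by July."));
}
```

The point of hashing the span rather than the whole file is that unrelated edits elsewhere in the note leave the claim fresh; only edits that touch the cited bytes invalidate it.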
LLM emits [claim:xxx] markers. The validator re-reads the span, recomputes the hash, and rejects mismatches before output ships.
If the model invents a claim id, it gets stripped and re-prompted with verified-only context. The contract is enforced in code, not prompt.
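The strip step can be sketched as a scan over the model's output: any [claim:ID] marker whose id is not in the verified set is dropped before the text ships. A minimal sketch under assumed marker syntax; the real protocol (and the re-prompt loop) is richer:

```rust
use std::collections::HashSet;

/// Remove any [claim:ID] marker whose ID the validator has not verified.
fn strip_unverified(text: &str, verified: &HashSet<&str>) -> String {
    let mut out = String::new();
    let mut rest = text;
    while let Some(start) = rest.find("[claim:") {
        out.push_str(&rest[..start]);
        let tail = &rest[start + 7..]; // skip past "[claim:"
        match tail.find(']') {
            Some(end) => {
                let id = &tail[..end];
                if verified.contains(id) {
                    // Keep markers whose span re-checked clean.
                    out.push_str(&rest[start..start + 7 + end + 1]);
                }
                rest = &tail[end + 1..];
            }
            None => {
                // Unterminated marker: keep the raw text and stop scanning.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let verified: HashSet<&str> = ["a1"].into_iter().collect();
    let answer = "Shipped in June [claim:a1], per the retro [claim:zz]";
    // The invented id "zz" is stripped; the verified citation survives.
    assert_eq!(
        strip_unverified(answer, &verified),
        "Shipped in June [claim:a1], per the retro "
    );
}
```

In a real turn, a stripped marker would trigger the re-prompt with verified-only context rather than silently shipping the bare sentence.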
Claims have valid_from / valid_until windows. Contradictions auto-supersede. "What was true in March" is queryable.
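A "true at time t" query over such windows is a filter: a claim holds at t if its window covers t, and superseding a contradicted claim just closes its window. A hypothetical sketch with unix-second timestamps; field and function names are illustrative:

```rust
/// Hypothetical temporal fields; real claim records carry more.
struct TemporalClaim {
    fact: String,
    valid_from: i64,          // unix seconds
    valid_until: Option<i64>, // None = still current
}

/// "What was true at t": claims whose validity window covers t.
fn true_at(claims: &[TemporalClaim], t: i64) -> Vec<&str> {
    claims
        .iter()
        .filter(|c| c.valid_from <= t && c.valid_until.map_or(true, |u| t < u))
        .map(|c| c.fact.as_str())
        .collect()
}

/// A contradiction arriving at `at` closes the old claim's window.
fn supersede(old: &mut TemporalClaim, at: i64) {
    old.valid_until = Some(at);
}

fn main() {
    let mut march_claim = TemporalClaim {
        fact: "launch is targeted for Q2".into(),
        valid_from: 100,
        valid_until: None,
    };
    // A May note contradicts it: close the window instead of deleting.
    supersede(&mut march_claim, 500);
    let claims = vec![march_claim];
    assert_eq!(true_at(&claims, 300), vec!["launch is targeted for Q2"]);
    assert!(true_at(&claims, 600).is_empty()); // no longer true, still queryable
}
```

Keeping the superseded claim with a closed window is what makes "what was true in March" answerable after the fact.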
Inline <!--privacy:secret--> markers narrow privacy to a sub-span. Secrets are redacted at the wire boundary, type-system enforced.
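"Type-system enforced" redaction can be sketched with a newtype that simply has no path from the raw bytes to the wire: the only serializable form is the redacted one, so leaking a secret to a cloud LLM fails at compile time, not at runtime. An illustrative sketch, not Memora's actual types:

```rust
/// Hypothetical wrapper for text under a privacy:secret marker.
/// There is deliberately no Display or Serialize impl: the raw
/// value cannot reach a wire payload by accident.
#[allow(dead_code)]
struct Secret(String);

impl Secret {
    fn new(s: impl Into<String>) -> Self {
        Secret(s.into())
    }

    /// The sole wire representation of a secret span.
    fn redacted(&self) -> &'static str {
        "[redacted]"
    }
}

/// Building a cloud-bound payload can only use the redacted form.
fn wire_payload(public: &str, secret: &Secret) -> String {
    format!("{} {}", public, secret.redacted())
}

fn main() {
    let s = Secret::new("acme acquisition price");
    assert_eq!(wire_payload("deal status:", &s), "deal status: [redacted]");
    // format!("{}", s) would not compile: Secret implements no Display.
}
```

The design choice is that redaction is a property of the type, not a filter step someone can forget to call.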
Synthesis claims point to the sources they derive from. Edit a leaf, downstream syntheses are auto-flagged stale until you re-confirm.
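Staleness propagation over the derivation graph is a reachability walk: editing a leaf flags every synthesis transitively derived from it. A minimal sketch, assuming a hypothetical `derives` adjacency map (source id to the synthesis ids built on it):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// BFS from an edited leaf: everything downstream goes stale
/// until the user re-confirms it.
fn flag_stale(derives: &HashMap<&str, Vec<&str>>, edited_leaf: &str) -> HashSet<String> {
    let mut stale = HashSet::new();
    let mut queue = VecDeque::from([edited_leaf]);
    while let Some(id) = queue.pop_front() {
        if let Some(children) = derives.get(id) {
            for &child in children {
                // insert returns false if already flagged, so cycles terminate.
                if stale.insert(child.to_string()) {
                    queue.push_back(child);
                }
            }
        }
    }
    stale
}

fn main() {
    let mut derives: HashMap<&str, Vec<&str>> = HashMap::new();
    derives.insert("leaf", vec!["summary"]);       // summary derives from leaf
    derives.insert("summary", vec!["quarterly"]);  // quarterly derives from summary
    let stale = flag_stale(&derives, "leaf");
    assert!(stale.contains("summary"));
    assert!(stale.contains("quarterly"));
    assert_eq!(stale.len(), 2); // the edited leaf itself is not "stale", just changed
}
```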
Rust + SQLite + HNSW. No Postgres, no Pinecone, no Node, no Python. Works offline with Ollama. Drop one binary into /usr/local/bin.
Four frames cycle through a typical turn: capture the note, extract the claim, query, and validate. The model never gets the last word. The validator does.
Step 1 - Capture a markdown note in your vault.
Each stage is a typed boundary in Rust. The privacy filter is compile-time enforced. The validator is the last gate before any text reaches you.
Markdown vault watcher detects edits, parses frontmatter, queues extraction jobs.
LLM extracts atomic claims with byte spans. Each claim gets a blake3 hash of the source.
BM25 + HNSW vector search + RRF fusion + spreading activation over wikilinks.
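The RRF fusion step can be sketched in a few lines: each ranked list (BM25, vector search) contributes 1/(k + rank) per document, and the fused order is by summed score. A sketch of standard reciprocal rank fusion with the common k = 60 default, not Memora's exact scoring:

```rust
use std::collections::HashMap;

/// Reciprocal rank fusion over ranked id lists.
/// Each list contributes 1 / (k + rank + 1) per id; sums decide the order.
fn rrf(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<&str, f64> = HashMap::new();
    for list in rankings {
        for (rank, id) in list.iter().enumerate() {
            *scores.entry(*id).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores
        .into_iter()
        .map(|(id, s)| (id.to_string(), s))
        .collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let bm25 = vec!["roadmap", "retro"];
    let vectors = vec!["retro", "meeting"];
    let fused = rrf(&[bm25, vectors], 60.0);
    // "retro" appears in both lists, so it fuses to the top.
    assert_eq!(fused[0].0, "retro");
}
```

RRF needs no score normalization across the two retrievers, which is why it is a common choice for fusing lexical and vector rankings.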
Secret claims redacted at the wire boundary on cloud LLMs. Type-system enforced.
Re-read each cited span, recompute fingerprint, reject mismatches. Strip + retry on failure.
14 tools over stdio. Drop into Claude Code, Cursor, or any MCP client.
Wondering how Memora differs from RAG, LLM Wiki, and other personal-memory tools? Architectural comparison - six systems, one matrix, no benchmark theater.
A single Rust binary. No Node, no Python, no daemon. The installer drops memora and memora-mcp on your path, so the CLI and the MCP server are ready immediately.
Memora is Apache 2.0, friendly to internal forks, downstream products, and agent experiments. Issues, docs fixes, and focused PRs are welcome.
Read the README, run cargo test --all, and open a PR with a clear description.
The book covers the quickstart, architecture, citation protocol, and every MCP tool. Help us keep examples sharp.
Apache 2.0 only. See license notes for why that fits personal-memory tooling.
The memory engine that respects your privacy, your markdown, and your right to verify. Single binary. No subscription. Every claim auditable.