┌─ vault ──────┐      ┌─ claim graph ──────────────┐      ┌─ validator ──┐
│ roadmap.md   │ ───▶ │ subject · predicate        │ ───▶ │ verified     │
│ meeting.md   │      │ fingerprint = blake3       │      │ ~ stale      │
│ retro.md     │      │ span = 142..198            │      │ rejected     │
└──────────────┘      └────────────────────────────┘      └──────────────┘

Memora

Verifiable cognitive memory for personal vaults.

Cite · or it didn't happen.

verified citations · claim graph · privacy-banded · local-first · single binary · obsidian-native · mcp-native
memora - ~/notes - claude-haiku - Anthropic
~/notes $ memora index --vault ~/notes

Indexed: 87 notes, 312 claims extracted.
Empty extractions: 4, Rate-limited: 0, Parse failures: 0, Invalid: 1.
Total errors: 1

A claim is the smallest thing you can verify.

Most memory tools retrieve note chunks and ask the model to quote them faithfully. Memora extracts atomic claims with byte-level pointers back to your markdown, fingerprints the source span at extraction, and re-checks the hash before any answer reaches you.

Source · markdown in your vault
~/notes/semantic/projects/drift/roadmap.md
---
title: drift Serialization Decision
region: projects/drift
privacy: private
updated: 2025-09-12
---
## serialization update

drift switched serialization from JSON to MessagePack after benchmark runs showed 3x throughput.

Source context appears in roadmap.md, retro-2025-q3.md, and meeting-2025-09-12.md.
extract → fingerprint
Claim · stored in SQLite + HNSW
claim:drf75a1c… verified
subject drift
predicate uses_serialization
object messagepack
source roadmap.md
span 184..281 (97 bytes)
fingerprint blake3:9a4e2f1c7b8d0a61...e3b92f7d
valid_from 2025-09-12
privacy private

Hallucinations are a data-model problem.

Prompt-level "please cite your sources" doesn't survive contact with a confident model. Memora makes unsupported citations impossible to surface, by construction.

Atomic claims

Subject·predicate·object triples extracted from prose. The unit of memory is small enough to verify, large enough to reason over.

Span fingerprint

Every claim stores a blake3 hash of the exact source bytes. Edit the note and the hash changes; downstream uses re-validate or go stale.
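The freshness contract fits in a few lines of Rust. This is an illustrative sketch, not Memora's actual code: std's `DefaultHasher` stands in for blake3 so the example is dependency-free, and the `Claim` shape is assumed from the card above.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::ops::Range;

// Stand-in for blake3: any stable content hash illustrates the contract.
fn fingerprint(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Hypothetical shape of a stored claim: a triple plus its source pointer.
struct Claim {
    subject: String,
    predicate: String,
    object: String,
    span: Range<usize>,
    fingerprint: u64,
}

// Fresh only if the bytes at the span still hash to the stored value.
fn is_fresh(claim: &Claim, source: &[u8]) -> bool {
    source
        .get(claim.span.clone())
        .map_or(false, |bytes| fingerprint(bytes) == claim.fingerprint)
}
```

Any edit inside the span changes the hash, so the claim drops out of the verified set until re-extraction confirms it.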

Citation validator

LLM emits [claim:xxx] markers. The validator re-reads the span, recomputes the hash, and rejects mismatches before output ships.

Verified-only retry

If the model invents a claim id, it gets stripped and re-prompted with verified-only context. The contract is enforced in code, not in the prompt.
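The strip step can be sketched directly. Assumptions: the marker grammar is `[claim:id]` as described above, and the function name is illustrative, not Memora's API.

```rust
use std::collections::HashSet;

// Walk the answer, keep markers whose id is in the verified set, strip
// the rest and report them so the model can be re-prompted.
fn strip_unverified(answer: &str, verified: &HashSet<&str>) -> (String, Vec<String>) {
    const OPEN: &str = "[claim:";
    let mut kept = String::new();
    let mut rejected = Vec::new();
    let mut rest = answer;
    while let Some(start) = rest.find(OPEN) {
        kept.push_str(&rest[..start]);
        let tail = &rest[start + OPEN.len()..];
        match tail.find(']') {
            Some(end) => {
                let id = &tail[..end];
                if verified.contains(id) {
                    kept.push_str(&rest[start..start + OPEN.len() + end + 1]);
                } else {
                    rejected.push(id.to_string());
                }
                rest = &tail[end + 1..];
            }
            None => {
                // Unterminated marker: pass the remainder through untouched.
                kept.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    kept.push_str(rest);
    (kept, rejected)
}
```

If `rejected` is non-empty, the stripped answer goes back to the model along with verified-only context.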

Time-aware reasoning

Claims have valid_from / valid_until windows. Contradictions auto-supersede. "What was true in March" is queryable.
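The window query itself is small. A dependency-free sketch (the struct shape is assumed; ISO-8601 date strings order correctly under plain string comparison, so no date library is needed):

```rust
// None in valid_until means the claim is still current.
struct Claim {
    object: &'static str,
    valid_from: &'static str,
    valid_until: Option<&'static str>,
}

// "What was true on `date`": the claim whose validity window contains it.
fn as_of<'a>(history: &'a [Claim], date: &str) -> Option<&'a Claim> {
    history.iter().find(|c| {
        c.valid_from <= date && c.valid_until.map_or(true, |until| date < until)
    })
}
```

Superseding a contradicted claim is then just closing its window: set `valid_until` to the new claim's `valid_from`.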

Privacy bands

Inline <!--privacy:secret--> markers narrow privacy to a sub-span. Secrets are redacted at the wire boundary, type-system enforced.
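"Type-system enforced" can look roughly like a newtype whose only public rendering is redacted. A sketch of the idea, not Memora's actual types:

```rust
use std::fmt;

// The inner text is private to the wrapper; Display is the only path to a
// wire format, and it always redacts.
pub struct Secret(String);

impl Secret {
    pub fn new(text: impl Into<String>) -> Self {
        Secret(text.into())
    }
    // Deliberate, local-only escape hatch; cloud-bound code never calls this.
    pub fn reveal_local(&self) -> &str {
        &self.0
    }
}

impl fmt::Display for Secret {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("[REDACTED]")
    }
}
```

Anything serialized through `Display` — prompts, logs, MCP responses — sees only the redacted form; leaking the plaintext requires an explicit, greppable call.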

Provenance DAG

Synthesis claims point to the sources they derive from. Edit a leaf, downstream syntheses are auto-flagged stale until you re-confirm.
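Staleness propagation over the provenance DAG is a reverse reachability walk. A sketch under an assumed edge representation (synthesis id → the source ids it derives from):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Editing a claim flags every synthesis that transitively depends on it.
fn flag_stale(derives: &HashMap<&str, Vec<&str>>, edited: &str) -> HashSet<String> {
    // Invert the DAG: source -> direct dependents.
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (&synth, sources) in derives {
        for &src in sources {
            dependents.entry(src).or_default().push(synth);
        }
    }
    // Breadth-first walk from the edited claim through its dependents.
    let mut stale: HashSet<String> = HashSet::new();
    let mut queue: VecDeque<&str> = VecDeque::new();
    queue.push_back(edited);
    while let Some(id) = queue.pop_front() {
        if let Some(deps) = dependents.get(id) {
            for &dep in deps {
                if stale.insert(dep.to_string()) {
                    queue.push_back(dep);
                }
            }
        }
    }
    stale
}
```

Every id in the returned set stays flagged until you re-confirm it against the edited source.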

Local-first, single binary

Rust + SQLite + HNSW. No Postgres, no Pinecone, no Node, no Python. Works offline with Ollama. Drop one binary into /usr/local/bin.

Live preview

Watch a citation get tested.

Four frames cycle through a typical turn: capture the note, extract the claim, query, and validate. The model never gets the last word. The validator does.

memora - live session preview

Step 1 - Capture a markdown note in your vault.

Six stages between your vault and an answer.

Each stage is a typed boundary in Rust. The privacy filter is compile-time enforced. The validator is the last gate before any text reaches you.

Layer 1
01

Watch + parse

Markdown vault watcher detects edits, parses frontmatter, queues extraction jobs.

Layer 2
02

Extract + fingerprint

LLM extracts atomic claims with byte spans. Each claim gets a blake3 hash of the source.

Layer 3
03

Hybrid retrieval

BM25 + HNSW vector search + RRF fusion + spreading activation over wikilinks.
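Of those pieces, RRF is the simplest to show. A sketch of reciprocal rank fusion with the conventional k = 60 (the signature is illustrative, not Memora's API):

```rust
use std::collections::HashMap;

// Reciprocal Rank Fusion: score(doc) = sum over rankers of 1 / (k + rank),
// where rank is the 1-based position in that ranker's list.
fn rrf_fuse(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<&str, f64> = HashMap::new();
    for ranking in rankings {
        for (i, &doc) in ranking.iter().enumerate() {
            *scores.entry(doc).or_insert(0.0) += 1.0 / (k + (i + 1) as f64);
        }
    }
    let mut fused: Vec<(String, f64)> =
        scores.into_iter().map(|(d, s)| (d.to_string(), s)).collect();
    // Highest fused score first.
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}
```

Rank positions, not raw scores, go into the sum, which is why BM25 and vector rankings can be fused without score normalization.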

Layer 3
04

Privacy filter

Secret claims redacted at the wire boundary on cloud LLMs. Type-system enforced.

Layer 3
05

Validate citations

Re-read each cited span, recompute fingerprint, reject mismatches. Strip + retry on failure.

Layer 4
06

MCP exposure

14 tools over stdio. Drop into Claude Code, Cursor, or any MCP client.

Wondering how Memora differs from RAG, LLM Wiki, and other personal-memory tools? Architectural comparison - six systems, one matrix, no benchmark theater.

Install in seconds.

A single Rust binary. No Node, no Python, no daemon. The installer drops memora and memora-mcp on your path so the CLI and the MCP server are ready immediately.

# Install memora + memora-mcp from the latest release
curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/radotsvetkov/memora/releases/latest/download/memora-cli-installer.sh | sh

# Initialize a vault
memora init --vault ~/brain

# Index it
memora index --vault ~/brain

# Ask something - every cited claim gets validated
memora query "What did the team decide about drift's serialization format?" --vault ~/brain
# Or build from source
git clone https://github.com/radotsvetkov/memora
cd memora

# Stable Rust, MSRV 1.75
cargo build --release

mkdir -p ~/bin
cp target/release/memora ~/bin/
cp target/release/memora-mcp ~/bin/
# Add to your Claude Code MCP config
{
  "mcpServers": {
    "memora": {
      "command": "/usr/local/bin/memora-mcp",
      "env": {
        "MEMORA_VAULT": "/absolute/path/to/your/vault"
      }
    }
  }
}

# 14 tools become available - query, capture, stale, contradictions, challenger, and more
# 1. Install Ollama and pull a small model
ollama pull llama3.2:3b
ollama pull nomic-embed-text # for embeddings

# 2. Point Memora at Ollama in .memora/config.toml
[llm]
provider = "ollama"
model = "llama3.2:3b"
endpoint = "http://127.0.0.1:11434"

# 3. Query - fully offline, fully private
memora query "summarize this week's decisions" --vault ~/brain

Built in the open.

Memora is Apache 2.0, friendly to internal forks, downstream products, and agent experiments. Issues, docs fixes, and focused PRs are welcome.

Contribute

Read the README, run cargo test --all, and open a PR with a clear description.

Documentation

The book covers the quickstart, architecture, citation protocol, and every MCP tool. Help us keep examples sharp.

License

Apache 2.0 only. See license notes for why that fits personal-memory tooling.

Cite, or it didn't happen.

The memory engine that respects your privacy, your markdown, and your right to verify. Single binary. No subscription. Every claim auditable.