Persistent memory for AI coding agents

Your AI agent doesn't know
where it learned that.

SAME is persistent memory — with receipts. Tracks where your agent's knowledge came from, flags when sources change, and works across Claude Code, Cursor, Windsurf, and every MCP tool.

~/web-app — new session
$ claude
 
◆ same session restored from handoff
Last session: Refactored auth middleware
Key decision: JWT → session tokens
Pending: Integration tests for /api/login
 
3 relevant notes surfaced
$ curl -fsSL statelessagent.com/install.sh | bash
PS> irm https://statelessagent.com/install.ps1 | iex
$ npm install -g @sgx-labs/same

Then run same demo to see it in action.

17 MCP tools: search, save, recall, graph query, all exposed to your agent automatically
6 Claude Code hooks: capture decisions and generate handoffs without you doing anything
17 Seed Vaults: 870+ curated notes you can install in one command
0 cloud dependencies: everything runs locally, your data never leaves your machine, no API keys required
Without SAME
Re-explain your project every session
Paste the same context you pasted yesterday
Lose decisions between sessions
Watch the AI repeat mistakes you already corrected
The AI is brilliant. And amnesiac.
With SAME
Agent already knows your architecture
Context surfaces automatically when relevant
Decisions persist and compound across sessions
Guardrails catch bad patterns before they ship
Stale knowledge flags itself — your agent knows when to doubt
The AI remembers. And knows what to trust.

Every project gets its own AI memory.

Per-project isolation. Automatic context surfacing. Memory with provenance — tracks where knowledge came from and flags when sources change. A knowledge graph that traces relationships across notes and decisions. 17 MCP tools. 6 Claude Code hooks. ~12MB binary. Your data never leaves your machine.

Memory You Can Trust

Tracks where knowledge came from. Flags when sources change. Stale notes rank lower automatically. Your AI knows what to trust — and what to doubt. Trust-aware retrieval, built in. Learn more →
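The ranking behavior described here can be pictured in a few lines. This is an illustrative model only, assuming a single exponential decay on source age; the field names (`score`, `source_checked_at`) and the `half_life_days` parameter are invented for the example and are not SAME's internal scoring.

```python
import time

def rank(notes, now=None, half_life_days=90.0):
    """Down-rank notes as their sources age.

    Each note carries a base relevance `score` and a `source_checked_at`
    UNIX timestamp (hypothetical field names). The adjusted score halves
    every `half_life_days`, so stale notes sink without being hidden.
    """
    now = now or time.time()

    def adjusted(note):
        age_days = (now - note["source_checked_at"]) / 86400
        return note["score"] * 0.5 ** (age_days / half_life_days)

    return sorted(notes, key=adjusted, reverse=True)

fresh = {"id": "a", "score": 0.8, "source_checked_at": time.time()}
stale = {"id": "b", "score": 0.9, "source_checked_at": time.time() - 365 * 86400}
# The stale note ranks lower despite its higher base score.
print([n["id"] for n in rank([fresh, stale])])
```

The point of the decay shape is that staleness demotes rather than deletes: an old note can still win if nothing fresher comes close.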

One vault per project

Each project gets its own isolated database at .same/, auto-detected by directory. Two sessions in different projects use completely different knowledge bases. Zero cross-pollination. Minimal config. Use same search --all when you want to search across projects.

Persistent memory

Decisions, handoffs, and context survive across sessions — even after crashes. Your agent captures decisions automatically and generates handoff notes when sessions end. Session 100 builds on session 1.

See all 17 tools

Knowledge graph

Related decisions and dependencies surface through graph connections. Query paths between concepts with 1-hop expansion. same graph stats, same graph query, same graph path.
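One-hop expansion is simple to picture: take the notes a search returned, then pull in everything one edge away in the graph. A minimal sketch with a hypothetical edge list — the note names are invented for illustration:

```python
from collections import defaultdict

# Hypothetical (note, related_note) pairs a knowledge graph might hold.
edges = [
    ("auth-middleware", "jwt-decision"),
    ("jwt-decision", "session-tokens"),
    ("auth-middleware", "login-tests"),
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)  # treat relationships as bidirectional

def expand_one_hop(seeds):
    """Return the seed notes plus every note one edge away — the
    1-hop expansion a graph-aware retriever applies to search hits."""
    out = set(seeds)
    for seed in seeds:
        out |= graph[seed]
    return out

# A hit on auth-middleware also surfaces jwt-decision and login-tests,
# but not session-tokens, which is two hops away.
print(sorted(expand_one_hop({"auth-middleware"})))
```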

Seed vaults

17 pre-built knowledge bases with 870+ curated notes. Install in one command: same seed install claude-code-power-user. From security audits to cooking techniques. Expert knowledge from day zero.

Provider flexibility

Run with Ollama, OpenAI, any OpenAI-compatible server, or keyword-only mode. No LLM required. No GPU required. 10 embedding models, one command to switch: same model use <name>. Works out of the box. Add Ollama for semantic search.

Safety by default

Checksum-verified updates. Path containment guards. Private path filtering. Prompt injection scanning. 19 diagnostic checks via same doctor. Zero telemetry — no telemetry code exists in the binary.

Your AI doesn't need to learn from scratch.

$ same seed install claude-code-power-user

17 pre-built knowledge vaults. 870+ curated notes. One command.

Seeds grow smarter with use — your decisions build on seed knowledge. Browse all seeds →

Browse, search, and inspect — in your browser.

$ same web --open

Read-only. Localhost only. No network calls.

Browse your vault

See every note, decision, and handoff in one place. Full-text rendering with metadata.

Search visually

Same semantic search as MCP, but with a UI. See scores, distances, and which gates fired.

Inspect the index

Chunk counts, embedding stats, staleness — everything same doctor checks, live in your browser.

The numbers.

~99% context relevance when surfaced
~90% relevant notes found
~80% prompts correctly left alone
<200ms added latency per prompt

When SAME surfaces context, it's almost always relevant. When it stays quiet, it's almost always right to. These numbers come from the creator's own vault — not a synthetic benchmark. Your mileage will vary, and that's the point: the eval methodology is published so you can run it on your own notes.

105 ground-truth test cases · 273-note vault · Eval methodology published — challenge it.
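The headline numbers above fall out of a simple split over ground-truth cases: of the prompts where context was surfaced, how often was it relevant, and of the prompts left alone, how often was that the right call. A sketch of that arithmetic, using a hypothetical case format rather than SAME's published eval schema:

```python
def score_eval(cases):
    """Compute two of the headline metrics from ground-truth cases.

    Each case (hypothetical shape) records whether the retriever
    surfaced context and whether a human marked context as relevant
    for that prompt.
    """
    surfaced = [c for c in cases if c["surfaced"]]
    quiet = [c for c in cases if not c["surfaced"]]
    # "Context relevance when surfaced": precision over surfaced prompts.
    precision = sum(c["relevant"] for c in surfaced) / len(surfaced)
    # "Prompts correctly left alone": quiet prompts with nothing relevant.
    quiet_ok = sum(not c["relevant"] for c in quiet) / len(quiet)
    return precision, quiet_ok

cases = [
    {"surfaced": True, "relevant": True},
    {"surfaced": True, "relevant": True},
    {"surfaced": False, "relevant": False},
    {"surfaced": False, "relevant": True},  # a miss: stayed quiet, but context existed
]
print(score_eval(cases))  # (1.0, 0.5)
```

Running the same arithmetic over your own labeled prompts is how you'd reproduce, or challenge, the published figures.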

Your agent gets smarter the longer you use it.

Session 1: You explain your project

Architecture, decisions, preferences. Fifteen minutes of pasting context before you can start working.

Session 5: Your agent remembers key decisions

Pagination strategy, auth patterns, testing conventions. It stops asking questions you've already answered.

Session 50: Your agent knows your project

It catches you reintroducing a bug you fixed in session 12. It remembers your pagination decision from January. It knows your test naming convention without being told.

Three steps. Minimal config.

01 · Install

One command. Works alongside your existing tools. Nothing to migrate. Works without Ollama via keyword search.

02 · Work normally

SAME hooks into your AI agent and captures decisions, patterns, and context as you work. You don't change anything about how you code.

03 · Context compounds

Each session builds on the last. Your agent starts with orientation, not a blank slate. Session 100 is dramatically better than session 1.

Go binary → SQLite (vectors + FTS5 + graph) → 6 embedding providers (optional) → Claude Code / Cursor / Windsurf / Codex CLI / Gemini CLI via Hooks + MCP

Architecture deep dive →

Started as a tool for my own scattered brain — I'd have ideas mid-session, forget them by the next one, and never do anything with them. Years of that. Every good idea just gone.

So I built something to catch all of it. Dump everything, let it organize itself, have it show back up right when I need it. Turns out it makes the AI better too — we both stopped losing the thread.

Sean Gleason, creator of SAME · Thirty3 Labs

Why not just use CLAUDE.md?

You should. CLAUDE.md is your starting knowledge — project rules, architecture decisions, coding conventions. SAME is the knowledge that accumulates as you work. Decisions you made in session 12. The bug you fixed last Tuesday. The pattern your team adopted in January. CLAUDE.md is what you write once. SAME is what builds up over 50 sessions without you thinking about it. They work together — SAME reads your CLAUDE.md too. And unlike built-in IDE memories, SAME works across every tool — switch from Claude Code to Cursor without losing a single decision.

Everything included. Nothing locked.

No tiers, no paywalls, no cloud lock-in. Every feature ships in the free binary.

Cross-tool memory
Works across Claude Code, Cursor, Windsurf, Codex CLI, Gemini CLI, and any MCP client
Fully local & private
Zero telemetry, zero cloud, zero API keys required. Your data never leaves your machine.
Memory integrity
Provenance tracking, staleness detection, trust-aware retrieval. Implements OWASP ASI06 recommendations for agent memory security. Learn more →
Knowledge graph
Built-in graph with LLM enrichment and 1-hop expansion. No add-on tier.
Semantic search
6-gate relevance chain with vector, FTS5, graph, and freshness signals
Auto-capture
6 Claude Code hooks capture decisions and generate handoffs. Zero manual effort.
Scales to 50K+ notes
No 200-line ceiling. SQLite-backed, tested at scale.
Free
BSL 1.1 license, converts to Apache 2.0. No account, no signup, no cloud bill.
See detailed comparison with alternatives
| | SAME | CLAUDE.md | Mem0 | Manual MCP |
| --- | --- | --- | --- | --- |
| Setup | 1 command | Create a file | pip + config + API key | Build per tool |
| Cross-tool | All MCP clients | Claude Code only | SDK-specific | Varies |
| Privacy | Fully local | Local file | Cloud default | Depends |
| Cost | Free | Free | Free tier + paid plans | Free (DIY) |
| Knowledge graph | Built-in | | Paid tier | |
| Memory integrity | Provenance + staleness | | | |
| Zero setup | Install + init | Create a file | Account + API key | Build from scratch |
| Managed / hosted | No — runs locally | | Yes | Varies |

Give your AI agent a memory it can trust.

$ curl -fsSL statelessagent.com/install.sh | bash
PS> irm https://statelessagent.com/install.ps1 | iex
$ npm install -g @sgx-labs/same
Already installed?
$ same demo
Try it in 30 seconds — then same init in your own project