
SAME Documentation

SAME (Stateless Agent Memory Engine) is a local-first persistent memory system for AI coding agents. It gives tools like Claude Code, Cursor, Windsurf, Codex CLI, and Gemini CLI the ability to remember decisions, context, and patterns across sessions. ~12MB Go binary. SQLite + vector embeddings (Ollama, OpenAI, or any compatible provider). Your data never leaves your machine.

Quickstart

Get running in minutes. No accounts, no API keys.

```bash
curl -fsSL statelessagent.com/install.sh | bash
```

Then explore the interactive demo:

```bash
same demo
```

Or start using it immediately with a project:

```bash
# Initialize SAME in your project directory
cd ~/my-project && same init

# Set up Claude Code integration (hooks + MCP)
same setup hooks
same setup mcp

# Start Claude Code — SAME activates automatically
claude
```

On your first session, SAME orients the agent with relevant context (~200 tokens). On your second session, it surfaces the handoff from the first. By session 10, your agent knows your architecture, your decisions, and your patterns.

Tip

No Ollama? SAME works without it. Semantic search requires Ollama, but keyword search (FTS5), session handoffs, decisions, and all core features work out of the box.

See the homepage for the interactive terminal demo showing SAME in action.

What's New in v0.12.0

Full changelog: CHANGELOG.md

Installation

macOS / Linux (recommended):

```bash
curl -fsSL statelessagent.com/install.sh | bash
```

Other install methods

npm

```bash
npm install -g @sgx-labs/same
```

Windows (PowerShell)

```powershell
irm statelessagent.com/install.ps1 | iex
```

Manual — macOS (Apple Silicon)

```bash
mkdir -p ~/.local/bin
curl -fsSL https://github.com/sgx-labs/statelessagent/releases/latest/download/same-darwin-arm64 -o ~/.local/bin/same
chmod +x ~/.local/bin/same
export PATH="$HOME/.local/bin:$PATH"
```

Manual — Linux (x86_64)

```bash
mkdir -p ~/.local/bin
curl -fsSL https://github.com/sgx-labs/statelessagent/releases/latest/download/same-linux-amd64 -o ~/.local/bin/same
chmod +x ~/.local/bin/same
export PATH="$HOME/.local/bin:$PATH"
```

Build from source

```bash
git clone --depth 1 https://github.com/sgx-labs/statelessagent.git
cd statelessagent && make install
```

Requires Go 1.25+ and CGO.

Note

For semantic search, install Ollama separately. SAME pulls the nomic-embed-text model (~270MB) on first use. Total footprint under 300MB.

How It Works

SAME sits between your AI tools and a local knowledge vault. It indexes your markdown notes, captures decisions as you work, and surfaces relevant context at session start.

```text
Markdown notes (your vault)
          │
          ▼
 SAME (SQLite + sqlite-vec) ◄─── Embeddings (Ollama, OpenAI, or compatible)
          │
          ▼
 AI tools (Claude, Cursor, Windsurf, Codex, Gemini)
```

Session lifecycle

Three automatic phases drive the memory loop:

  1. Session Start — Loads vault context, registers this instance, detects other active instances, injects relevant decisions (~200 tokens), surfaces last session handoff.
  2. During Session — Agent-driven search via MCP. Nothing is auto-injected per prompt; the agent queries SAME only when it decides it needs context, so per-turn token overhead is zero.
  3. Session End — Extracts new decisions, generates handoff summary, logs session to vault, deregisters instance. Graceful recovery if skipped.
Note

Close the terminal early? Not a problem. The next session recovers from data the IDE already persists; the stop hook is not on the critical path.

Hooks vs MCP — two integration points

SAME connects to your AI tools in two complementary ways:

  • Hooks — automatic lifecycle integration: load context at session start, extract decisions and generate a handoff at session end.
  • MCP — on-demand tools: the agent searches, reads, and writes vault knowledge mid-session whenever it needs to.

Claude Code gets both (hooks + MCP) for the richest experience. Cursor, Windsurf, Codex CLI, and Gemini CLI use MCP only. Use same init --mcp-only to skip hooks.

Ranking — the 6-gate evaluation chain

Not every prompt needs context. SAME evaluates whether to surface anything at all through a 6-gate chain:

  1. Relevance gate — Is the prompt related to indexed knowledge?
  2. Distance threshold — Are the nearest embeddings close enough to matter?
  3. Composite scoring — Blend of semantic similarity, keyword overlap, and recency
  4. Gap detection — Is the best result meaningfully better than the second?
  5. Token budget — Does the result fit within the configured budget?
  6. Staleness check — Is the note fresh enough to be trustworthy?

~80% of prompts are correctly skipped. When SAME speaks, it matters.
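Gates 2, 3, and 5 correspond to keys in `.same/config.toml` (see the Configuration section). A sketch of a stricter setup; the key names are taken from this document, the values here are purely illustrative:

```toml
# Illustrative values only; key names match the Configuration section.
[memory]
max_token_budget = 400       # gate 5: hard cap on injected tokens
max_results = 2              # notes surfaced per query
distance_threshold = 14.0    # gate 2: lower = stricter embedding-distance cutoff
composite_threshold = 0.75   # gate 3: higher = stricter blended score
```

Lowering the distance threshold and raising the composite threshold both shift SAME toward staying quiet more often.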

Keyword-only mode

No embedding provider? SAME works out of the box with SQLite FTS5 keyword search. All core features work — session handoffs, decisions, pinned notes, context surfacing. Add Ollama, OpenAI, or any OpenAI-compatible provider when you want semantic search.


Key Concepts

What is a vault?

A vault is simply a directory of markdown files. It can be your project root, an Obsidian vault, a Logseq graph, or any folder containing .md files. When you run same init, SAME creates a .same/ subdirectory for its database and config — your existing files are never modified.

SAME auto-generates a few files in your vault: by default, a decision log (decisions.md) and per-session handoff notes (under sessions/). Both paths are configurable.

You can also write your own notes — architecture docs, patterns, research — and SAME will index and surface them when relevant.

Example: auto-generated decision entry
`decisions.md`:

```markdown
## Use cursor-based pagination (not offset)

**Date:** 2026-01-15
**Context:** API pagination for transactions list
**Decision:** Cursor-based pagination using `created_at` + `id` composite key
**Rationale:** Offset pagination causes count(*) on large tables. Rejected after prod incident on 2026-01-12.
**Status:** Accepted
```

Example: auto-generated session handoff

`sessions/2026-01-15-auth-refactor.md`:

```markdown
# Session Handoff: Auth Refactor

**Left off at:** Refresh token migration — routes done, middleware needs update
**Key files:** src/middleware/auth.go, src/routes/refresh.go
**Blockers:** Integration tests need new token fixtures
**Decisions made:** Context-based auth pattern (replacing header injection)
**Next steps:** Update middleware, add test fixtures, deploy to staging
```

What is MCP?

MCP (Model Context Protocol) is an open standard that lets AI tools call external tools. Think of it like a plugin system. SAME registers itself as an MCP server so your AI agent can search your notes, save decisions, and create handoffs — without you doing anything manually. Transport is stdio (no network server, no HTTP ports).

What is Ollama?

Ollama runs AI models locally on your machine. SAME uses it to convert your notes into numerical representations (embeddings) so it can find semantically similar content — not just keyword matches. For example, searching for "throttling" can find a note about "rate limiting."

Ollama is optional. Without it, SAME uses keyword search (SQLite FTS5). With it, you get semantic search. Install: brew install ollama on macOS, or see ollama.ai/download. SAME pulls nomic-embed-text (~270MB) automatically on first use.


Features

| Feature | Description | Ollama Required? |
|---|---|---|
| Semantic search | Find notes by meaning, not just keywords | Yes |
| Per-project vault isolation | Each project gets its own `.same/` database, auto-detected | No |
| Knowledge graph | Relationship tracing across notes via `same graph` | No |
| Keyword search (FTS5) | Fast full-text search as fallback | No |
| `same ask` | RAG chat with cited answers from your vault | Yes (chat model) |
| Session handoffs | Auto-generated summaries for the next session | No |
| Session recovery | Graceful recovery if terminal is closed early | No |
| Decision extraction | Auto-captures structured decisions from sessions | No |
| Pinned notes | Always-included notes in every session | No |
| Context surfacing | 6-gate chain for intelligent injection | No* |
| Push protection | Guard against accidental git pushes | No |
| MCP server (17 tools) | Agent-callable tools via MCP protocol | No* |
| Privacy tiers | Three-tier directory privacy structure | No |
| Cross-vault search | Federated search across multiple vaults | No* |
| `same demo` | Interactive demo experience | No |
| `same tutorial` | 7 hands-on lessons | No |
| `same doctor` | Comprehensive diagnostic checks | No |
| `same web` | Local web dashboard — browse, search, inspect | No |

* Semantic mode requires Ollama; keyword fallback is automatic.

Memory integrity

SAME tracks where knowledge comes from and whether it's still trustworthy.

Trust-aware retrieval

Search results are weighted by trust state: stale notes rank 25% lower and contradicted notes rank 60% lower. For example, a result that would otherwise score 0.80 ranks at 0.60 when stale and 0.32 when contradicted. Trust state is returned on all search results so your agent can reason about knowledge quality, not just relevance.

Note

Vault health: Run same health for a vault-wide trust analysis with provenance statistics and actionable recommendations.


CLI Reference

All commands are run as same <command>. Run same --help for the full list.

Setup

Commands you run once during initial setup:

same init [--mcp-only]
Initialize SAME for your project. Creates .same/ directory and config. Use --mcp-only for Cursor/Windsurf/Codex CLI/Gemini CLI (skip hooks).
same demo
Interactive demo — see SAME in action without any setup
same tutorial
7 hands-on lessons covering core workflows
same doctor
Comprehensive diagnostic checks (Ollama, hooks, index, config)
same setup hooks
Install Claude Code hooks into .claude/settings.json
same setup mcp
Register SAME as an MCP server for your AI tool
same web [--open]
Launch a local web dashboard to browse, search, and inspect your vault. Read-only, localhost only.

Search & Query

Find and retrieve knowledge from your vault:

same ask <question>
RAG chat — ask a question, get answers with citations from your vault
same search <query>
Search notes (uses semantic search if Ollama is available, keyword otherwise)
same related <path>
Find notes similar to a given file

Note Management

Create, pin, and organize notes in your vault:

same pin <path>
Always include this note in session context
same pin list
Show all pinned notes
same pin remove <path>
Unpin a note

Vault Management

Manage multiple vaults for different projects. Each project gets its own isolated database, auto-detected by directory:

same vault list
List registered vaults
same vault add <name> <path>
Register a new vault
same vault default <name>
Set default vault
same vault remove <name>
Unregister a vault

Seed Vaults

Install pre-built knowledge vaults:

same seed list
Browse available seed vaults
same seed install <name>
Install a seed vault
same seed info <name>
Show seed details (notes, domain, size)
same seed remove <name>
Remove an installed seed vault

Knowledge Graph

Inspect and traverse relationships between notes, concepts, and decisions:

same graph stats
Node/edge counts and extraction mode indicator
same graph query <node>
Depth-limited traversal from a node
same graph path <from> <to>
Shortest path between two nodes
same graph rebuild
Full re-extraction of all graph relationships

Configuration

same config show
Display current configuration
same config edit
Open config in your editor
same display full|compact|quiet
Set output verbosity (full: box with titles/tokens, compact: one-line, quiet: silent injection)
same profile use <name>
Switch precision profile: precise (fewer results, strict matching), balanced (default), broad (~2x results)

Security & Guard

Push protection for multi-agent environments:

same guard settings set push-protect on
Enable push protection (blocks git push without a ticket)
same push-allow
Create a one-time 30-second authorization ticket before pushing
same guard status
Check guard status and active tickets
same guard settings set push-timeout N
Set ticket expiration in seconds (default: 30)
same feedback <path> up|down
Rate note helpfulness (improves future surfacing)
same claim <path>
Advisory file claim for multi-agent coordination

Maintenance & Utilities

same health
Vault health score with trust analysis, provenance stats, and recommendations
same kaizen
Continuous improvement tracking — log and review friction, bugs, and ideas
same reindex [--force]
Rebuild search index. --force drops and fully reconstructs from markdown files.
same repair
Back up current database and rebuild from scratch
same watch
Auto-reindex on file changes (filesystem watcher)
same status
See what’s being tracked (vault, hooks, MCP, Ollama)
same stats
Index statistics (notes, chunks, embeddings)
same log
Recent session activity and events
same hooks
Show installed hook status
same budget
Token utilization report (how much context budget is used)
same update
SHA256-verified self-update to latest version
same version [--check]
Show version info, --check queries latest release
same ci
CI workflow generation
same bench
Search performance benchmarks
same model
Switch embedding model
same vault feed
Copy notes between vaults with PII scanning
same completion
Generate shell completions (bash, zsh, fish)
same mcp [--vault <path>]
Launch MCP server (stdio transport)

MCP Tools

SAME exposes 17 tools via MCP (Model Context Protocol). Your AI agent calls these tools on-demand to search, read, and write to your vault.

```bash
same mcp --vault /path/to/notes
```

Read tools (9)

| Tool | Description |
|---|---|
| `search_notes` | Semantic search across your knowledge base |
| `search_notes_filtered` | Search with domain/workstream/tag filters |
| `search_across_vaults` | Federated search across multiple registered vaults |
| `get_note` | Read full note content by path |
| `find_similar_notes` | Discover related notes by similarity |
| `get_session_context` | Pinned notes + latest handoff + recent activity |
| `recent_activity` | Recently modified notes |
| `reindex` | Re-scan and re-index the vault |
| `index_stats` | Index health and statistics |

Write tools (4)

| Tool | Description |
|---|---|
| `save_note` | Create or update a markdown note (auto-indexed) |
| `save_decision` | Log a structured project decision |
| `create_handoff` | Write a session handoff for the next session |
| `save_kaizen` | Log improvement items (friction, bugs, ideas) with provenance tracking |

Memory management tools (4)

| Tool | Description |
|---|---|
| `mem_consolidate` | Consolidate related notes into knowledge summaries via LLM |
| `mem_brief` | Generate an orientation briefing from vault contents |
| `mem_health` | Vault health score with trust state and provenance analysis |
| `mem_forget` | Suppress a note from search results without deleting it |

MCP configuration examples

SAME registers itself automatically via same setup mcp. If you need to configure manually:

Claude Code — `.claude/settings.json`:

```json
{
  "mcpServers": {
    "same": {
      "command": "same",
      "args": ["mcp", "--vault", "/path/to/your/notes"]
    }
  }
}
```
Cursor — `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "same": {
      "command": "same",
      "args": ["mcp", "--vault", "/path/to/your/notes"]
    }
  }
}
```

Note

Write-side memory means your agent doesn't just consume context — it contributes back. Decisions, handoffs, and session notes are saved automatically.


Works With

| Tool | Integration | Experience |
|---|---|---|
| Claude Code | Hooks + MCP | Full automatic context + 17 MCP tools |
| Cursor | MCP | 17 MCP tools (use `same init --mcp-only`) |
| Windsurf | MCP | 17 MCP tools (use `same init --mcp-only`) |
| Codex CLI | MCP | 17 MCP tools (use `same init --mcp-only`) |
| Gemini CLI | MCP | 17 MCP tools (use `same init --mcp-only`) |
| Obsidian | Vault detection | Point SAME at your Obsidian vault — indexes directly |
| Logseq | Vault detection | Point SAME at your Logseq graph — indexes directly |
| Any MCP client | MCP server | 17 tools via stdio transport |
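For an MCP client not listed above, registration follows the same shape as the Claude Code and Cursor examples in the MCP section: a `command` plus `args` entry in whatever MCP config file your client reads (the file's name and location vary by client, so check its documentation):

```json
{
  "mcpServers": {
    "same": {
      "command": "same",
      "args": ["mcp", "--vault", "/path/to/your/notes"]
    }
  }
}
```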

Configuration

Configuration lives at .same/config.toml in your vault root. Edit with same config edit.

```toml
[vault]
path = "/home/user/notes"
handoff_dir = "sessions"          # where session handoffs are saved
decision_log = "decisions.md"     # where decisions are logged

[ollama]
url = "http://localhost:11434"    # must be localhost

[embedding]
provider = "ollama"               # "ollama", "openai", "openai-compatible", "llama.cpp", "vllm", "lm-studio"
model = "nomic-embed-text"        # embedding model name

[memory]
max_token_budget = 800            # max tokens injected at session start
max_results = 2                   # max notes surfaced per query
distance_threshold = 16.2         # max embedding distance (lower = stricter)
composite_threshold = 0.65        # min composite score (higher = stricter)

[hooks]
context_surfacing = true          # inject context at session start
decision_extractor = true         # auto-extract decisions at session end
handoff_generator = true          # auto-generate handoff at session end
feedback_loop = true              # track note usefulness
staleness_check = true            # flag outdated notes
```

Environment variables

Override config via environment. Priority order:

  1. CLI flags (--vault)
  2. Environment variables (VAULT_PATH, OLLAMA_URL, SAME_*)
  3. Config file (.same/config.toml)
  4. Built-in defaults
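To make the precedence concrete, here is a minimal shell sketch of the flag > environment > config file > default resolution described above. This is not SAME source; `resolve_vault` and the default path are hypothetical, used only to illustrate the ordering.

```shell
# Illustrative only: how "flag beats env beats config beats default" resolves.
# resolve_vault is a hypothetical helper, not part of SAME.
resolve_vault() {
  flag="$1"; config_value="$2"
  if [ -n "$flag" ]; then echo "$flag"                    # 1. CLI flag (--vault)
  elif [ -n "$VAULT_PATH" ]; then echo "$VAULT_PATH"      # 2. environment variable
  elif [ -n "$config_value" ]; then echo "$config_value"  # 3. .same/config.toml
  else echo "$HOME/notes"                                 # 4. built-in default (made up)
  fi
}

VAULT_PATH=/env/notes
resolve_vault /flag/notes /config/notes   # prints /flag/notes
resolve_vault ""          /config/notes   # prints /env/notes
```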
Full environment variable reference:

| Variable | Description | Default |
|---|---|---|
| `VAULT_PATH` | Path to markdown notes | Auto-detected |
| `OLLAMA_URL` | Ollama API endpoint (localhost only) | `http://localhost:11434` |
| `SAME_DATA_DIR` | Database location | `<vault>/.same/data` |
| `SAME_HANDOFF_DIR` | Handoff directory | `sessions` |
| `SAME_DECISION_LOG` | Decision log path | `decisions.md` |
| `SAME_EMBED_PROVIDER` | Embedding provider | `ollama` |
| `SAME_EMBED_MODEL` | Embedding model name | `nomic-embed-text` |
| `SAME_EMBED_API_KEY` | API key (for OpenAI provider) | (unset) |
| `SAME_SKIP_DIRS` | Extra directories to skip (comma-separated) | (unset) |
| `SAME_NOISE_PATHS` | Paths filtered from context | (unset) |

Precision profiles

Tune how aggressively SAME surfaces context:

| Profile | Max Results | Threshold | Use Case |
|---|---|---|---|
| `precise` | 2 | 0.75 | Fewer tokens, only highly relevant notes |
| `balanced` | 2 | 0.65 | Default — good coverage without noise |
| `broad` | 4 | 0.55 | ~2x tokens, thorough exploration |

Display modes

Control output verbosity during context surfacing:

| Mode | Command | Description |
|---|---|---|
| full | `same display full` | Box with titles, match terms, token counts (default) |
| compact | `same display compact` | One-line summary |
| quiet | `same display quiet` | Silent context injection (context still surfaces, just no output) |
Warning

Ollama URL is validated to localhost-only. SAME will reject any non-localhost URL to ensure embeddings never leave your machine.


Privacy & Security

Three-tier privacy structure

| Directory | Indexed? | Committed? | Purpose |
|---|---|---|---|
| Your notes | Yes | Your choice | Docs, decisions, research |
| `_PRIVATE/` | No | No | API keys, credentials, secrets |
| `research/` | Yes | No | Research, analysis — searchable but local-only |
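In git terms, the "Committed?" column usually comes down to ignore rules like the following sketch. Whether `same init` writes these for you is not specified here, so treat this as something to verify in your own repo:

```gitignore
# keep secrets and local-only research out of version control
_PRIVATE/
research/
# the derived index; your markdown files are the source of truth
.same/
```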

Security hardening

Tip

Clean uninstall: same guard uninstall removes the guard hooks. Then delete the `same` binary from your PATH and run rm -rf ~/.same/ to remove everything else. Your vault files (markdown) are never modified by SAME.


Troubleshooting

"No vault found"
  • Run same init from your notes folder
  • Set VAULT_PATH=/path/to/notes explicitly
  • Use same vault add myproject /path/to/notes
"Ollama not responding"
  • Check if Ollama is running: curl http://localhost:11434/api/tags
  • Set a non-default port: OLLAMA_URL=http://localhost:<port>
  • SAME falls back to keyword search automatically — this is not a blocker
Hooks not firing
  • Run same setup hooks to install
  • Verify with same status
  • Check that .claude/settings.json exists in your project root
  • Hooks are Claude Code only — Cursor/Windsurf/Codex CLI/Gemini CLI use MCP
Context not surfacing
  • Run same doctor for full diagnostics
  • Run same reindex to rebuild the index
  • Test with same search "your query" to verify search works
  • Check display mode: same display full (quiet mode hides output but still surfaces context)
"Cannot open SAME database"
  • Run same repair (backs up current DB automatically, then rebuilds)
  • Or same init for a fresh setup
  • Or same reindex --force to drop and rebuild the index
  • The database is a derived index — your markdown files are the source of truth and are never affected

FAQ

Do I need Obsidian?

No. Any directory of .md files works. Obsidian, Logseq, or just a folder of markdown.

Do I need Ollama?

Recommended but not required. Without Ollama, SAME falls back to FTS5 keyword search. You can also use OpenAI, llama.cpp, vLLM, LM Studio, OpenRouter, or any OpenAI-compatible server via SAME_EMBED_PROVIDER=openai-compatible.

Does it slow down my prompts?

50–200ms total, with embedding as the bottleneck. Search and scoring run under 5ms. Per-prompt overhead is zero tokens — the agent queries SAME via MCP only when it decides to.

Is my data sent anywhere?

No. Everything is fully local. Context surfaced to AI tools is sent as part of the normal conversation with your AI provider (same as if you pasted it manually). SAME itself makes zero network calls.

How much disk space?

5–15MB for a few hundred notes. The database is a derived index — delete it and run same reindex; your notes are untouched.

Multiple vaults?

Yes. Register and switch between vaults:

```bash
same vault add work ~/work-notes
same vault add personal ~/personal-notes
same vault default work
```

What are seeds?

Pre-built knowledge vaults. Install one and your AI has expert-level context immediately. same seed list to browse, same seed install <name> to install. 17 seeds available with 870+ curated notes across technical and lifestyle domains, all free. Seeds grow smarter as you use them — your decisions and handoffs build on top of the seed knowledge.

Can I create my own seed?

Yes. Seeds are a community project. The seed-vaults repo includes a template with everything you need. Write markdown notes, add a CLAUDE.md for governance, and submit a pull request.

Can I switch embedding models?

same model use <name> switches models. 10 models supported from 384 to 1,536 dimensions. Run same reindex --force after switching.

Does SAME work with providers other than Ollama?

Yes. SAME supports any OpenAI-compatible embedding API. Set provider = "openai-compatible" in your config and point it at llama.cpp, vLLM, LM Studio, or OpenRouter. The OpenAI API is also supported directly. Six providers, no vendor lock-in.
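As an alternative to editing the config file, the documented SAME_EMBED_* environment variables can select the provider for a single shell session. The variable for a custom server's endpoint URL is not listed in this document, so check `same config show` for it; the values below are placeholders.

```bash
# Point SAME at an OpenAI-compatible embedding server for this shell session.
# Variable names are from the environment-variable reference above.
export SAME_EMBED_PROVIDER=openai-compatible
export SAME_EMBED_MODEL=nomic-embed-text
export SAME_EMBED_API_KEY=changeme   # only if your server requires a key
```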

Is my data sent to the cloud?

By default, no. SAME uses Ollama for fully local embeddings. If you choose an API provider (OpenAI, OpenRouter), embedding vectors are sent to that provider. Your raw notes never leave your machine regardless of provider.

How is this different from CLAUDE.md?

CLAUDE.md is a notepad you maintain manually (~200 lines). SAME captures decisions automatically, provides semantic search across hundreds of notes, coordinates multiple instances, generates handoff notes, and scores knowledge by freshness and confidence. SAME is a searchable, growing knowledge base that builds itself.

What is same web?

A local web dashboard that lets you browse, search, and inspect your vault in the browser. Run same web --open to launch. It's read-only, localhost-only, and makes no network calls. You can see every note, search with the same semantic engine MCP uses, and inspect index health — all in a UI.

Can I use SAME at work on commercial projects?

Yes. The BSL 1.1 license allows free use for individual developers and small teams. Source is available. The license converts to Apache 2.0 on 2030-02-02. Same model as MariaDB, CockroachDB, and HashiCorp.


Eval Methodology

All metrics are measured against a ground-truth eval harness, not estimated.

| Metric | Value |
|---|---|
| Retrieval precision | 99.5% (105 synthetic test cases) |
| Mean Reciprocal Rank | 0.949 (105 test cases) |
| Coverage (recall) | 90.5% |
| Prompt overhead | <200ms |
| Binary size | ~12MB |
| Setup time | Minutes |

Harness: 105 ground-truth test cases against a 273-note vault. Tuning constants: maxDistance=16.3, minComposite=0.70, gapCap=0.65.

When SAME surfaces context, it's almost always relevant. When it stays quiet, it's almost always right to. The eval methodology is published on GitHub — challenge it.


Community & Support

License

BSL 1.1 (source-available). Free for personal, educational, hobby, research, evaluation, and commercial use by individuals and small teams. Converts to Apache 2.0 on 2030-02-02. Same model as MariaDB, CockroachDB, HashiCorp.

Built with

Go · SQLite + sqlite-vec · Ollama / OpenAI