v0.12.0

Memory integrity, trust-aware retrieval, kaizen tracking, five new MCP tools, and significant performance improvements.

Memory Integrity

  • Provenance tracking — new note_sources table records what files and notes each memory was derived from, with SHA-256 hashes at capture time
  • Trust state — every note carries a trust_state: validated, stale, contradicted, or unknown
  • Health score updated to a 5-factor model (added trust factor); same health now shows a Trust section with validated/stale/unknown counts
  • Source divergence detection — the staleness hook detects when source files change after a note was captured
  • MCP save_note accepts a sources parameter for explicit provenance
  • Graph extraction records discovered file references as provenance sources
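The provenance mechanics above can be sketched in a few lines: hash source contents at capture time, store the digest with the note, and re-hash later to detect divergence. This is an illustrative sketch only; `noteSource`, `captureSource`, and `diverged` are hypothetical names, not the shipped schema or API.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// noteSource mirrors the note_sources idea: which file a memory was
// derived from, and the content hash at capture time. (Hypothetical type.)
type noteSource struct {
	NoteID string
	Path   string
	SHA256 string // hex digest of file contents when the note was captured
}

// captureSource records provenance by hashing the file contents.
func captureSource(noteID, path string, contents []byte) noteSource {
	sum := sha256.Sum256(contents)
	return noteSource{NoteID: noteID, Path: path, SHA256: hex.EncodeToString(sum[:])}
}

// diverged reports whether a source file changed after the note was captured,
// which is what the staleness hook checks for.
func diverged(s noteSource, current []byte) bool {
	sum := sha256.Sum256(current)
	return hex.EncodeToString(sum[:]) != s.SHA256
}

func main() {
	src := captureSource("note-1", "docs/design.md", []byte("v1 contents"))
	fmt.Println(diverged(src, []byte("v1 contents"))) // unchanged source: false
	fmt.Println(diverged(src, []byte("v2 contents"))) // source edited after capture: true
}
```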

Trust-Aware Retrieval

  • Trust penalties in search scoring — stale notes rank 25% lower, contradicted notes 60% lower. Validated and unknown notes are unaffected.
  • trust_state returned on all search results (vector, keyword, hybrid, FTS5, federated) — MCP clients can caveat answers based on trust
  • Context surfacing tags stale/contradicted notes visibly so agents know when retrieved knowledge may be outdated
  • Graph 1-hop expansion — top vector results expand through graph edges to surface related notes (decisions, references, dependencies). Max 2 supplemental results at 60% dampened score.
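The scoring rules above reduce to two small multipliers. A minimal sketch, assuming "60% dampened score" means a supplemental result keeps 60% of its original score; function names are hypothetical and the real pipeline has more factors:

```go
package main

import "fmt"

// trustMultiplier encodes the stated penalties: stale notes rank 25% lower,
// contradicted notes 60% lower; validated and unknown are unaffected.
func trustMultiplier(trustState string) float64 {
	switch trustState {
	case "stale":
		return 0.75
	case "contradicted":
		return 0.40
	default: // "validated", "unknown"
		return 1.0
	}
}

// dampenSupplemental scores a graph-expanded 1-hop result at 60% of the
// score it arrived with.
func dampenSupplemental(score float64) float64 { return score * 0.60 }

func main() {
	fmt.Println(trustMultiplier("contradicted")) // 0.4
	fmt.Println(dampenSupplemental(1.0))         // 0.6
}
```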

Kaizen

  • same kaizen command — continuous improvement tracking: log friction, bugs, and ideas as you work
  • save_kaizen MCP tool for agent-driven improvement logging with provenance tracking
  • Kaizen items surface in same health recommendations
$ same kaizen "Reindex is slow on large vaults — investigate batch size"

New MCP Tools

  • save_kaizen — log improvement items with provenance
  • mem_consolidate — merge related notes into structured knowledge using LLM
  • mem_brief — generate an orientation briefing of what matters right now
  • mem_health — vault health score with actionable recommendations
  • mem_forget — mark notes as suppressed (hidden from search, not deleted)

Performance

  • Batch embeddings — Ollama switched from /api/embeddings to /api/embed; OpenAI batching added — 50 chunks per request instead of 1
  • SQLite pragmas — 64 MB page cache, 256 MB mmap, temp_store in memory
  • Covering index for incremental reindex hash comparison
  • Parallel reindex — ReindexLite parallelized with a 4-worker goroutine pool

Crash Resilience

  • PreCompact hook for session checkpointing — handoff notes saved before context compaction, not just on session stop
  • Separate debounce timers: 2 min for checkpoint, 5 min for full handoff

Added

  • same tips — vault hygiene, security, and model selection guidance
  • same graph enable / same graph disable to toggle graph mode without editing config
  • Automatic container detection (Docker, Kubernetes, Codespaces, Gitpod)
  • Graceful Ctrl+C cancellation during reindex and init — first press stops cleanly, second force-quits
  • Human-readable error messages for common embedding failures
  • Thinking model compatibility — strips <think>, <reasoning>, <reflection> tags from LLM responses
  • Graph extraction: [graph] model config key and SAME_GRAPH_MODEL env var
  • Graph extraction: --abort-on-error flag; Ollama structured output with JSON schema
  • Multi-stage Dockerfile with non-root user and OCI labels
  • Command aliases: same s (search), same st (status), same vault ls (vault list)
  • same consolidate, same brief, same health commands (experimental)
  • Note suppression and reconsolidation dynamics (frequently accessed notes rank higher)
  • Windows ARM64 release binary

Bug Fixes

  • Windows self-update no longer fails when a stale .old backup is locked
  • Migration failure upgrading from v0.9.1 to v0.10.0 — entry_kind index no longer created before the column exists
  • same graph stats reads from config.toml instead of only the environment variable
  • Graph extraction works with thinking/reasoning models (DeepSeek-R1, QwQ, etc.)
  • same ask, demo, and tutorial commands no longer display thinking tokens
  • Ollama and OpenAI response paths strip thinking tags at the transport layer
  • Open redirect vulnerability in the web dashboard fixed with a page whitelist
  • MCP SDK bumped to v1.4.0 (security fix)

Changed

  • Demo rewritten with 5 realistic sample notes and a narrative arc
  • Init onboarding redesigned: detects project language, AI tools, and git state; suggests seed vaults
  • install.sh messaging updated to honestly communicate Ollama's role in semantic search
  • Output consistency polish: standardized checkmarks, hint capitalization, footer formatting
  • Search results factor in access frequency (subtle log-scaled boost)
  • Schema migration v8: adds suppressed column to vault_notes
  • All search paths filter suppressed notes by default
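The "subtle log-scaled boost" for access frequency could take a shape like the following. The 0.05 weight and function name are assumptions for illustration, not the shipped constants:

```go
package main

import (
	"fmt"
	"math"
)

// accessBoost nudges a relevance score up with the log of access count,
// so frequently accessed notes rank a little higher without letting
// popularity dominate relevance. (Hypothetical weight of 0.05.)
func accessBoost(score float64, accesses int) float64 {
	return score * (1 + 0.05*math.Log1p(float64(accesses)))
}

func main() {
	fmt.Println(accessBoost(1.0, 0)) // never accessed: score unchanged
	fmt.Println(accessBoost(1.0, 100) > accessBoost(1.0, 10)) // more access, slightly higher rank
}
```

Log scaling is what keeps the boost "subtle": going from 10 to 100 accesses adds far less than going from 0 to 10.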