Open Source · Local-First · Rust

A structured memory layer
between your systems and your AI tools

Ingest context from any source. Index it locally. Retrieve it instantly. Give your AI agents the memory they deserve.

ctx — context harness
$ ctx init
Database initialized successfully.
$ ctx sync filesystem
sync filesystem
fetched: 2,847 items
upserted documents: 2,847
chunks written: 12,403
ok
$ ctx search "JWT signing key rotation"
1. [0.94] filesystem / auth-module.rs
excerpt: "JWT signing key loaded from AWS Secrets Manager..."
2. [0.81] filesystem / deployment-runbook.md
excerpt: "Key rotation procedure for production services..."

Everything you need to give AI tools memory

One binary. Zero cloud dependencies. Works with everything.

🔌

Connector-Driven

Plug in any source — filesystem, Git, S3, or write your own in Lua. Each connector normalizes data into a consistent Document model.

💾

Local-First Storage

SQLite with WAL mode. No cloud dependencies. Your data stays on your machine. Back up with a single file copy.

🔍

Hybrid Search

FTS5 keyword search, vector semantic search, and configurable hybrid merge with BM25 + cosine scoring. Deterministic results.
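
As a rough illustration of what an alpha-weighted hybrid merge can look like, here is a minimal Python sketch. Everything in it is an assumption for illustration — the function name, the min-max normalization of BM25 scores, and the convention that `alpha` weights the semantic side are not taken from Context Harness internals:

```python
def hybrid_merge(keyword_hits, vector_hits, alpha=0.6):
    """Blend BM25 keyword scores with cosine similarity scores.

    keyword_hits / vector_hits: dicts mapping doc_id -> raw score.
    Raw BM25 scores are min-max normalized to [0, 1] so the two
    scales are comparable before blending.
    """
    if keyword_hits:
        lo, hi = min(keyword_hits.values()), max(keyword_hits.values())
        span = (hi - lo) or 1.0
        keyword_norm = {d: (s - lo) / span for d, s in keyword_hits.items()}
    else:
        keyword_norm = {}

    merged = {}
    for doc_id in set(keyword_norm) | set(vector_hits):
        kw = keyword_norm.get(doc_id, 0.0)
        vec = vector_hits.get(doc_id, 0.0)
        merged[doc_id] = alpha * vec + (1 - alpha) * kw

    # Sort by score descending, then by doc_id, so ties break
    # deterministically across runs.
    return sorted(merged.items(), key=lambda kv: (-kv[1], kv[0]))
```

The secondary sort key is what makes results reproducible even when two documents tie on score.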

🤖

MCP Server

Expose context to Cursor, Claude, and any MCP-compatible AI tool. Extend with custom Lua tools that agents can discover and call.

CLI-First

Everything through the ctx command. Init, sync, search, get — composable Unix-style tooling. No GUI required.

📦

Extension Registry

Install community connectors for Jira, Confluence, Slack, RSS, and more from Git-backed registries. Write your own in Lua — no Rust compilation needed.

How it works

A clean pipeline from raw sources to AI-ready context

Sources
Filesystem, Git, S3, Lua, Community Registry
Connectors
Fetch & normalize into Documents
Chunking
Paragraph-aware splitting
SQLite Store
Documents + Chunks + FTS5 + Embeddings
Query Engine
Keyword · Semantic · Hybrid
Interfaces
CLI · MCP Server · Lua Tools
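
The chunking stage above can be sketched as a greedy, paragraph-aware splitter that packs whole paragraphs into a chunk until a token budget is spent. This is an illustrative sketch only — it assumes whitespace tokenization, which the real tokenizer may not use:

```python
def chunk_paragraphs(text, max_tokens=700):
    """Greedy paragraph-aware splitter.

    Packs whole paragraphs into chunks until the (whitespace-token)
    budget is reached; a single paragraph longer than max_tokens
    becomes its own chunk rather than being cut mid-paragraph.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current, used = [], [], 0
    for para in paragraphs:
        n = len(para.split())
        if current and used + n > max_tokens:
            chunks.append("\n\n".join(current))
            current, used = [], 0
        current.append(para)
        used += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Keeping paragraphs intact means each chunk stays a coherent unit of meaning, which matters for both FTS5 matches and embedding quality.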

Document Model

{
  "id": "uuid",
  "source": "filesystem",
  "source_id": "docs/auth.md",
  "title": "Auth Module",
  "body": "...",
  "chunks": [
    { "index": 0, "text": "..." }
  ]
}
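
The JSON above maps naturally onto typed records. As a hypothetical Python sketch of the shape every connector normalizes into (field names follow the JSON schema; the actual Rust types may differ):

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    index: int
    text: str

@dataclass
class Document:
    id: str          # UUID assigned at ingest
    source: str      # connector name, e.g. "filesystem"
    source_id: str   # stable identifier within the source
    title: str
    body: str
    chunks: list[Chunk] = field(default_factory=list)

doc = Document(
    id="uuid",
    source="filesystem",
    source_id="docs/auth.md",
    title="Auth Module",
    body="...",
    chunks=[Chunk(index=0, text="...")],
)
```

Because every connector emits this one shape, the chunking, storage, and query layers never need source-specific logic.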

Configuration

[db]
path = "./data/ctx.sqlite"

[chunking]
max_tokens = 700

[retrieval]
hybrid_alpha = 0.6
final_limit = 12

[connectors.filesystem.local]
root = "./my-project"
include_globs = ["**/*.md"]

Built for real workflows

From offline-first web apps to AI-powered development

📚

Documentation Site Search

Replace Algolia or DocSearch with self-hosted semantic search. Build the index at deploy time, serve from a CDN. No third-party dependencies.

Self-Hosted · CDN-Ready
🏗️

Engineering Onboarding

Index incident reports, ADRs, and runbooks. New engineers query naturally instead of searching Confluence for 45 minutes.

Knowledge Base · Natural Language
📝

Personal Knowledge Agent

Index your Obsidian vault, meeting notes, or research docs. Connect it to your AI tools and ask questions across your entire body of knowledge.

Obsidian · Personal AI

Read the Documentation →
Everything you need to set up Context Harness for your project

Get started in 60 seconds

1. Install

$ cargo install --git https://github.com/parallax-labs/context-harness

2. Configure

$ mkdir -p config
$ cat > config/ctx.toml << 'EOF'
[db]
path = "./data/ctx.sqlite"

[chunking]
max_tokens = 700

[connectors.filesystem.local]
root = "./my-project"
include_globs = ["**/*.md", "**/*.rs"]
EOF

3. Initialize & Sync

$ ctx init --config ./config/ctx.toml
$ ctx sync filesystem --config ./config/ctx.toml

4. Search

$ ctx search "authentication flow" --config ./config/ctx.toml

Ready to give your AI tools real memory?

Star on GitHub