Ingest context from any source. Index it locally. Retrieve it instantly. Give your AI agents the memory they deserve.
One binary. Zero cloud dependencies. Works with everything.
Plug in any source — filesystem, Git, S3, or write your own in Lua. Each connector normalizes data into a consistent Document model.
SQLite with WAL mode. No cloud dependencies. Your data stays on your machine. Back up with a single file copy.
FTS5 keyword search, vector semantic search, and configurable hybrid merge with BM25 + cosine scoring. Deterministic results.
Expose context to Cursor, Claude, and any MCP-compatible AI tool. Extend with custom Lua tools that agents can discover and call.
Everything through the ctx command. Init, sync, search, get — composable Unix-style tooling. No GUI required.
Install community connectors for Jira, Confluence, Slack, RSS, and more from Git-backed registries. Write your own in Lua — no Rust compilation needed.
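Under the hood, MCP tool calls are plain JSON-RPC 2.0 messages. A sketch of what an agent's call to a search tool might look like (the tool name and argument fields here are illustrative, not Context Harness's actual schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "authentication flow", "limit": 5 }
  }
}
```

Any MCP client that can emit messages in this shape can discover and call your custom Lua tools the same way.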
A clean pipeline from raw sources to AI-ready context
{
  "id": "uuid",
  "source": "filesystem",
  "source_id": "docs/auth.md",
  "title": "Auth Module",
  "body": "...",
  "chunks": [
    { "index": 0, "text": "..." }
  ]
}
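Connectors emit documents in the shape above; the chunking stage then splits the body into token-capped chunks. A rough sketch of the idea, using whitespace-separated words as a stand-in for real tokens (the actual tokenizer and any overlap rules may differ):

```python
def chunk_body(body: str, max_tokens: int = 700) -> list[dict]:
    """Split a document body into chunks of at most max_tokens,
    approximating tokens with whitespace-separated words."""
    words = body.split()
    chunks = []
    for i in range(0, len(words), max_tokens):
        chunks.append({
            "index": len(chunks),
            "text": " ".join(words[i:i + max_tokens]),
        })
    return chunks

# 1500 words with a 700-token cap -> chunks of 700, 700, and 100 words.
doc_chunks = chunk_body("auth module overview " * 500, max_tokens=700)
print(len(doc_chunks))         # 3
print(doc_chunks[0]["index"])  # 0
```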
[db]
path = "./data/ctx.sqlite"
[chunking]
max_tokens = 700
[retrieval]
hybrid_alpha = 0.6
final_limit = 12
[connectors.filesystem.local]
root = "./my-project"
include_globs = ["**/*.md"]
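The `hybrid_alpha` setting blends the two rankings: higher values weight vector similarity more, lower values weight BM25 keyword scores more. A minimal sketch of alpha-weighted merging (the min-max normalization and tie-breaking here are assumptions, not the exact implementation):

```python
def hybrid_merge(bm25: dict[str, float], cosine: dict[str, float],
                 alpha: float = 0.6, final_limit: int = 12) -> list[tuple[str, float]]:
    """Blend normalized BM25 and cosine scores; alpha weights the semantic side."""
    def normalize(scores: dict[str, float]) -> dict[str, float]:
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}

    b, c = normalize(bm25), normalize(cosine)
    merged = {
        doc_id: alpha * c.get(doc_id, 0.0) + (1 - alpha) * b.get(doc_id, 0.0)
        for doc_id in b.keys() | c.keys()
    }
    # Sort by score, then doc id, so results are deterministic.
    ranked = sorted(merged.items(), key=lambda kv: (-kv[1], kv[0]))
    return ranked[:final_limit]
```

With `hybrid_alpha = 0.6` as above, a document that tops the semantic ranking outscores one that only tops the keyword ranking.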
From offline-first web apps to AI-powered development
Pre-build your knowledge base at build time. Ship the SQLite database as a static asset. Your web app gets retrieval-augmented generation without a backend.
Point Context Harness at your docs and runbooks. Start the MCP server. Now Cursor, Claude, and any MCP-compatible agent can search your project knowledge.
Write custom MCP tools in Lua. Agents discover and call them dynamically. RAG-enriched Jira tickets, Slack posts, deploy triggers — without recompiling Rust.
Replace Algolia or DocSearch with self-hosted semantic search. Build the index at deploy time, serve from a CDN. No third-party dependencies.
Index incident reports, ADRs, and runbooks. New engineers query naturally instead of searching Confluence for 45 minutes.
Index your Obsidian vault, meeting notes, or research docs. Connect it to your AI tools and ask questions across your entire body of knowledge.
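In the static-asset and docs-search scenarios above, any runtime that can open SQLite can query the shipped index directly. A sketch using Python's built-in sqlite3 with an FTS5 table (the table name and schema are illustrative, not Context Harness's actual layout):

```python
import sqlite3

# Build a tiny FTS5 index in memory; a shipped ctx.sqlite file
# would be opened read-only instead of created here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE chunks USING fts5(title, text)")
conn.executemany(
    "INSERT INTO chunks (title, text) VALUES (?, ?)",
    [
        ("Auth Module", "OAuth tokens refresh before the authentication flow expires."),
        ("Deploy Runbook", "Roll back by redeploying the previous tagged release."),
    ],
)

# MATCH runs a BM25-ranked keyword query with no server involved.
rows = conn.execute(
    "SELECT title FROM chunks WHERE chunks MATCH ? ORDER BY rank",
    ("authentication",),
).fetchall()
print(rows)  # [('Auth Module',)]
```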
$ cargo install --git https://github.com/parallax-labs/context-harness
$ mkdir -p config
$ cat > config/ctx.toml << 'EOF'
[db]
path = "./data/ctx.sqlite"
[chunking]
max_tokens = 700
[connectors.filesystem.local]
root = "./my-project"
include_globs = ["**/*.md", "**/*.rs"]
EOF
$ ctx init --config ./config/ctx.toml
$ ctx sync filesystem --config ./config/ctx.toml
$ ctx search "authentication flow" --config ./config/ctx.toml
Ready to give your AI tools real memory?
Star on GitHub