
Cursor & IDE Integration

Set up Context Harness as workspace-level context for Cursor, VS Code, and JetBrains IDEs.

This guide shows how to turn Context Harness into a personal knowledge layer for your IDE. Every AI interaction — chat, inline completions, code generation — gets grounded in your actual codebase, docs, and internal knowledge.

The idea

Most AI coding assistants only see the files currently open in your editor. Context Harness gives them access to your entire knowledge base — across repos, wikis, and internal tools — so they can answer questions about architecture, find relevant code patterns, and reference documentation you’d otherwise have to search for manually.

┌─────────────────┐     ┌─────────────────────┐
│  Cursor / IDE   │────▶│  Context Harness     │
│  Agent          │     │  MCP Server (:7331)  │
│                 │◀────│                      │
│  "How does auth │     │  SQLite + FTS5       │
│   work in our   │     │  + Vector Search     │
│   platform?"    │     │                      │
└─────────────────┘     │  Git repos + Jira    │
                        │  + Confluence + S3   │
                        └─────────────────────┘

Cursor setup

Start the server first: ctx serve mcp --config ./config/ctx.toml

Option 1: Project-level MCP

Create .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "context-harness": {
      "url": "http://127.0.0.1:7331/mcp"
    }
  }
}

Commit this file so your whole team gets the same context.
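If you prefer the command line, the same file can be created in one step from the repo root:

```shell
# Create the project-level Cursor MCP config shown above.
mkdir -p .cursor
cat > .cursor/mcp.json << 'EOF'
{
  "mcpServers": {
    "context-harness": {
      "url": "http://127.0.0.1:7331/mcp"
    }
  }
}
EOF
cat .cursor/mcp.json  # verify the file was written
```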

Option 2: Global MCP

Open Cursor Settings → MCP → Add Server:

Field   Value
-----   -----
Name    context-harness
URL     http://127.0.0.1:7331/mcp

This makes Context Harness available in every Cursor workspace.
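If you'd rather skip the settings UI, Cursor also reads a global MCP config from ~/.cursor/mcp.json, using the same schema as the project-level file:

```json
{
  "mcpServers": {
    "context-harness": {
      "url": "http://127.0.0.1:7331/mcp"
    }
  }
}
```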

What Cursor can do with Context Harness

Once connected, Cursor’s agent automatically discovers all available tools. Try these prompts:

Search & understand:

"How does authentication work in our platform?"
"Where is rate limiting implemented, and which docs describe it?"

Cross-repo context:

"Which services call auth-service, according to the infrastructure docs?"

Write with context:

"Add retry logic around the token refresh call, following the error-handling patterns already in the codebase."

Custom tool actions:

"List open ENG Jira tickets that mention the failover runbook."

Multi-repo workspace setup

If you work across multiple repos, set up a shared Context Harness instance:

1. Create a shared context directory:

$ mkdir -p ~/ctx-workspace/config
$ cat > ~/ctx-workspace/config/ctx.toml << 'EOF'
[db]
path = "./data/ctx.sqlite"

[embedding]
provider = "openai"
model = "text-embedding-3-small"
dims = 1536

[retrieval]
final_limit = 12
hybrid_alpha = 0.6

[server]
bind = "127.0.0.1:7331"

[connectors.git.platform]
url = "https://github.com/your-org/main-platform.git"
branch = "main"
include_globs = ["docs/**/*.md", "src/**/*.rs"]
shallow = true
cache_dir = "./data/.cache/platform"

[connectors.script.auth]
path = "connectors/git-repo.lua"
url = "https://github.com/your-org/auth-service.git"
branch = "main"
include_patterns = "src/,docs/,README.md"

[connectors.script.infra]
path = "connectors/git-repo.lua"
url = "https://github.com/your-org/infrastructure.git"
branch = "main"
include_patterns = "docs/,runbooks/"

[connectors.script.jira]
path = "connectors/jira.lua"
url = "https://your-org.atlassian.net"
project = "ENG"
api_token = "${JIRA_API_TOKEN}"
EOF

2. Start the server once:

$ cd ~/ctx-workspace
$ ctx init && ctx sync all
$ ctx embed pending
$ ctx serve mcp
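To keep the server running in the background across logins, one option on Linux is a systemd user unit. This is a sketch, not part of Context Harness: the unit name is ours, and it assumes ctx is on your PATH.

```ini
# ~/.config/systemd/user/ctx-mcp.service  (hypothetical unit name)
[Unit]
Description=Context Harness MCP server

[Service]
WorkingDirectory=%h/ctx-workspace
ExecStart=/usr/bin/env ctx serve mcp --config ./config/ctx.toml
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now ctx-mcp.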

3. Point all your Cursor workspaces at it:

In each repo’s .cursor/mcp.json:

{
  "mcpServers": {
    "org-context": {
      "url": "http://127.0.0.1:7331/mcp"
    }
  }
}

Now every Cursor window has access to the full org knowledge base.

Keep the index fresh

Manual sync (ad-hoc):

$ ctx sync all && ctx embed pending

Cron job (automatic):

# Sync every 2 hours
0 */2 * * * cd ~/ctx-workspace && ctx sync all && ctx embed pending

Git hook (runs before each push):

#!/bin/bash
# .git/hooks/pre-push (git has no post-push hook; pre-push is the closest built-in)
cd ~/ctx-workspace && ctx sync all --config ./config/ctx.toml &

Make the hook executable: chmod +x .git/hooks/pre-push

Claude Desktop

Claude Desktop supports MCP servers through its config file. Start the server first (ctx serve mcp), then point Claude at the /mcp endpoint.

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

{
  "mcpServers": {
    "context-harness": {
      "url": "http://127.0.0.1:7331/mcp"
    }
  }
}
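Older Claude Desktop builds only launch stdio MCP servers. If the URL form above isn't picked up, the community mcp-remote bridge (a Node package run via npx, not part of Context Harness) can proxy the HTTP endpoint:

```json
{
  "mcpServers": {
    "context-harness": {
      "command": "npx",
      "args": ["mcp-remote", "http://127.0.0.1:7331/mcp"]
    }
  }
}
```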

Continue.dev (VS Code / JetBrains)

Continue.dev supports MCP servers experimentally:

// ~/.continue/config.json
{
  "experimental": {
    "mcpServers": [
      {
        "name": "context-harness",
        "url": "http://127.0.0.1:7331/mcp"
      }
    ]
  }
}

Or use Continue’s context provider API for tighter integration (this uses the REST API, not MCP):

{
  "contextProviders": [
    {
      "name": "http",
      "params": {
        "url": "http://127.0.0.1:7331/tools/search",
        "title": "Knowledge Base",
        "displayTitle": "⚡ Context Harness",
        "description": "Search across all indexed repos and docs"
      }
    }
  ]
}

Windsurf / Codeium

For Windsurf (Codeium’s IDE), add Context Harness as an MCP server:

{
  "mcpServers": {
    "context-harness": {
      "serverUrl": "http://127.0.0.1:7331/mcp"
    }
  }
}

Zed

Zed supports context servers through its extensions system:

// settings.json
{
  "context_servers": {
    "context-harness": {
      "url": "http://127.0.0.1:7331/mcp"
    }
  }
}

Tips for better results

  1. Index ADRs and design docs — these give the agent architectural context
  2. Index CHANGELOG and commit messages — helps the agent understand project history
  3. Include README files — project overviews help agents understand codebases
  4. Use hybrid search — set hybrid_alpha = 0.6 for the best mix of keyword + semantic
  5. Keep chunk sizes moderate: max_tokens = 700 gives enough context per chunk
  6. Add Lua connectors for tribal knowledge — Jira, Confluence, Slack threads
  7. Filter by source — if a question is about infra, the agent can filter with "source": "script:infra"
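Source filtering also works outside the IDE. A minimal sketch of querying the REST search endpoint directly, assuming the /tools/search path from the Continue example above; the query/source/limit field names are our guesses, not a documented schema:

```python
import json
import urllib.request

def build_search_request(query, source=None, limit=12,
                         base="http://127.0.0.1:7331"):
    """Build a POST request for the (assumed) /tools/search endpoint."""
    payload = {"query": query, "limit": limit}
    if source:
        # e.g. "script:infra" to scope results to the infra connector
        payload["source"] = source
    return urllib.request.Request(
        f"{base}/tools/search",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("runbook for database failover", source="script:infra")
print(req.full_url)       # http://127.0.0.1:7331/tools/search
print(req.data.decode())  # JSON payload sent to the server
```

Send it with urllib.request.urlopen(req) once the server is running.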

What’s next?