
Using the Proxy

Get started with automatic context injection in under 2 minutes

The proxy is the easiest way to add Recurse to existing applications. Change one line of code—your base URL—and your AI requests automatically get enriched with context from your knowledge graph.

Prerequisites: Make sure you've completed the setup steps (AI provider key, account signup, key configuration) before starting.


Quick Start

  1. Generate your API key on the API keys settings page
  2. Change your base URL to point to the Recurse proxy
  3. Start making requests with automatic context injection

Configure Your Client

Point your OpenAI SDK at the Recurse proxy instead of directly at your AI provider. In TypeScript:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,  // Your OpenAI/Anthropic/etc key
  baseURL: 'https://api.recurse.cc/proxy/https://api.openai.com/v1/',
  defaultHeaders: {
    'X-API-Key': process.env.RECURSE_API_KEY,  // Your Recurse key
    'X-Recurse-Scope': 'my_project'
  }
});

// Use the client normally—context gets injected automatically
const completion = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'What did we decide about the API design?' }
  ]
});

The same configuration in Python:

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://api.recurse.cc/proxy/https://api.openai.com/v1/",
    default_headers={
        "X-API-Key": os.environ["RECURSE_API_KEY"],
        "X-Recurse-Scope": "my_project"
    }
)

# Use the client normally—context gets injected automatically
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "What did we decide about the API design?"}
    ]
)

The equivalent request with cURL:

curl https://api.recurse.cc/proxy/https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-API-Key: $RECURSE_API_KEY" \
  -H "X-Recurse-Scope: my_project" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "What did we decide about the API design?"}
    ]
  }'

What happens: When you send a request through the proxy, Recurse retrieves relevant frames from your knowledge graph, assembles them into context bundles, enriches your request, forwards everything to your AI provider, and returns the response. Your code sees a standard OpenAI-compatible response—the context injection happens transparently.
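The proxy URL pattern above is simply the Recurse proxy prefix followed by your provider's full base URL. If you configure several providers, a small helper keeps that consistent. This is an illustrative sketch only; `proxy_base_url` is a hypothetical helper, not part of any SDK:

```python
RECURSE_PROXY = "https://api.recurse.cc/proxy/"

def proxy_base_url(provider_base_url: str) -> str:
    """Prefix a provider's base URL with the Recurse proxy endpoint.

    The proxy forwards requests to the URL that follows the prefix,
    so the provider URL is appended verbatim, trailing slash and all.
    """
    return RECURSE_PROXY + provider_base_url

# e.g. for OpenAI:
openai_url = proxy_base_url("https://api.openai.com/v1/")
```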


Organize with Scopes

Scopes organize your knowledge like folders or tags. Set scopes in the request header to focus context retrieval:

defaultHeaders: {
  'X-Recurse-Scope': 'research-papers'  // Single scope
}

// Or multiple scopes (comma-separated)
defaultHeaders: {
  'X-Recurse-Scope': 'meeting-notes,team:engineering'
}

Scopes determine which parts of your knowledge graph get searched for relevant context, making responses more focused and relevant.
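If you manage scopes programmatically, the comma-separated header value shown above can be built with a small helper rather than string literals. A sketch, assuming the header format described here; `scope_header` is a hypothetical name:

```python
def scope_header(*scopes: str) -> str:
    """Join one or more scope names into an X-Recurse-Scope header value."""
    if not scopes:
        raise ValueError("at least one scope is required")
    return ",".join(scopes)

# Single scope or several, same call:
headers = {"X-Recurse-Scope": scope_header("meeting-notes", "team:engineering")}
```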


Enable Persistence

Add 'X-Recurse-Persist': 'true' to your headers to automatically save useful outputs back into your knowledge graph. The assistant's responses become queryable content for future requests.

defaultHeaders: {
  'X-API-Key': process.env.RECURSE_API_KEY,
  'X-Recurse-Scope': 'my_project',
  'X-Recurse-Persist': 'true'  // Enable persistence
}

When to use persistence:

  • ✅ Summarizing meetings or documents
  • ✅ Creating knowledge base entries
  • ✅ Generating reference material
  • ❌ Temporary/session-specific content
  • ❌ Every single response (consumes storage)
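Since persistence should be a deliberate opt-in rather than a default, one way to enforce that is a small header factory. This is a sketch under the header names documented above; `recurse_headers` itself is a hypothetical helper:

```python
def recurse_headers(api_key: str, scope: str, persist: bool = False) -> dict:
    """Build default headers for a Recurse-proxied client.

    Persistence is off unless explicitly requested, so only outputs
    worth keeping (summaries, knowledge base entries) get saved.
    """
    headers = {"X-API-Key": api_key, "X-Recurse-Scope": scope}
    if persist:
        headers["X-Recurse-Persist"] = "true"
    return headers

# Persist a meeting summary, but not ad-hoc chat:
summary_headers = recurse_headers("rc_key", "meeting-notes", persist=True)
chat_headers = recurse_headers("rc_key", "my_project")
```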

Learn More