IDE setup

Private open-weight models in your editor.

Cursor, Continue, Cline, Aider. All accept a custom OpenAI-compatible base URL.

Marigold exposes a POST /v1/chat/completions endpoint that any tool built on the OpenAI SDK can call. Set the base URL, add your API key, choose a model. Your code and prompts go to private AWS infrastructure in London, not to OpenAI.

Two values to set in every tool below: base URL https://api.marigold.run/v1 and your Marigold API key. The only value whose format varies between tools is the model name; Aider, covered below, needs its own prefix.
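Those two values are also all a raw HTTP client needs. A minimal sketch with the Python standard library; the placeholder key and the build_request helper name are illustrative, not part of any SDK:

```python
import json
import urllib.request

BASE_URL = "https://api.marigold.run/v1"
API_KEY = "your-marigold-api-key"

def build_request(model, messages):
    """Build a POST /v1/chat/completions request with a Bearer token."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    "qwen/qwen2.5-7b-instruct",
    [{"role": "user", "content": "Hello"}],
)
# Send with urllib.request.urlopen(req) and parse the JSON response.
```

This is exactly what the tools below do for you: same URL, same Bearer header, same JSON body.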

Use these exact strings in the model field of each tool's configuration. For general coding assistance, the 7B instruct model is a reasonable starting point.

Model                                Notes
qwen/qwen2.5-7b-instruct             General purpose. Good balance of speed and capability.
qwen/qwen2.5-14b-instruct            Stronger reasoning.
qwen/qwen2.5-1.5b-instruct           Fast and lightweight. Suited to autocomplete-style tasks.
mistralai/mistral-7b-instruct-v0.3   Alternative instruct model.

Add Marigold as a custom model

Cursor supports any OpenAI-compatible provider via its model settings. Open Settings → Models and add a new model entry.

Settings → Models → Add model

API Provider:  OpenAI-compatible
Base URL:      https://api.marigold.run/v1
API Key:       your-marigold-api-key
Model name:    qwen/qwen2.5-7b-instruct

Alternatively, set these keys in ~/.cursor/settings.json and restart Cursor.

~/.cursor/settings.json

{
  "openai.apiKey":  "your-marigold-api-key",
  "openai.baseURL": "https://api.marigold.run/v1"
}

Add Marigold to config.json

Continue stores its model configuration in ~/.continue/config.json on macOS and Linux, or %USERPROFILE%\.continue\config.json on Windows. Add one or more entries to the models array.

~/.continue/config.json

{
  "models": [
    {
      "title":    "Qwen 7B (Marigold)",
      "provider": "openai",
      "model":    "qwen/qwen2.5-7b-instruct",
      "apiBase":  "https://api.marigold.run/v1",
      "apiKey":   "your-marigold-api-key"
    },
    {
      "title":    "Qwen 14B (Marigold)",
      "provider": "openai",
      "model":    "qwen/qwen2.5-14b-instruct",
      "apiBase":  "https://api.marigold.run/v1",
      "apiKey":   "your-marigold-api-key"
    }
  ]
}

Multiple models can coexist in the array. Continue lets you switch between them from the chat panel. The title field is what appears in the dropdown.

Set the API provider to OpenAI Compatible

In VS Code, open the Cline extension settings panel. Set the API provider to OpenAI Compatible and fill in the three fields below.

Cline extension settings

API Provider:  OpenAI Compatible
Base URL:      https://api.marigold.run/v1
API Key:       your-marigold-api-key
Model ID:      qwen/qwen2.5-7b-instruct

The model ID field accepts the full Marigold model name including the namespace prefix. Cline stores this in VS Code user or workspace settings.

Pass the base URL and key on the command line or in config

Aider requires the openai/ prefix on the model name when using an OpenAI-compatible provider. Note this is Aider's own provider prefix, not the model namespace -- use openai/qwen2.5-7b-instruct, not qwen/qwen2.5-7b-instruct.

Command line

aider \
  --openai-api-base https://api.marigold.run/v1 \
  --openai-api-key  your-marigold-api-key \
  --model           openai/qwen2.5-7b-instruct

.aider.conf.yml (project root or home directory)

openai-api-base: https://api.marigold.run/v1
openai-api-key:  your-marigold-api-key
model:           openai/qwen2.5-7b-instruct

Environment variables

export OPENAI_API_BASE=https://api.marigold.run/v1
export OPENAI_API_KEY=your-marigold-api-key

aider --model openai/qwen2.5-7b-instruct

Building your own tool or script

Any SDK or library that wraps the OpenAI API accepts a custom base URL. Pass base_url (or the equivalent parameter) and your Marigold key. The model name is the full registry name with namespace prefix.

Python -- openai SDK

from openai import OpenAI

client = OpenAI(
    base_url="https://api.marigold.run/v1",
    api_key="your-marigold-api-key"
)

response = client.chat.completions.create(
    model="qwen/qwen2.5-7b-instruct",
    messages=[{"role": "user", "content": "..."}]
)

Python -- LangChain

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="qwen/qwen2.5-7b-instruct",
    openai_api_base="https://api.marigold.run/v1",
    openai_api_key="your-marigold-api-key"
)

TypeScript / Node

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.marigold.run/v1",
  apiKey:  "your-marigold-api-key",
});

const response = await client.chat.completions.create({
  model:    "qwen/qwen2.5-7b-instruct",
  messages: [{ role: "user", content: "..." }],
});

The tool says the model name is invalid

The model name must include the namespace prefix exactly as listed: qwen/qwen2.5-7b-instruct, not qwen2.5-7b-instruct. For Aider specifically, replace the qwen/ namespace prefix with openai/ -- this is Aider's provider prefix and is required regardless of which model you are using.
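If a script drives both Aider and other tools, the renaming can be done mechanically. A hypothetical helper, not part of any tool, that maps a Marigold model name to Aider's form:

```python
def to_aider_model(name: str) -> str:
    """Swap the registry namespace (qwen/, mistralai/, ...) for Aider's
    openai/ provider prefix. Illustrative helper, not a tool API."""
    return "openai/" + name.split("/", 1)[-1]

print(to_aider_model("qwen/qwen2.5-7b-instruct"))  # openai/qwen2.5-7b-instruct
print(to_aider_model("mistralai/mistral-7b-instruct-v0.3"))
```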

I get a 401 Unauthorized response

The API key must be sent as a Bearer token in the Authorization header. Most tools handle this automatically when you set the API key field. Verify the key has not been truncated by your tool's settings panel, and that you are pointing at the correct base URL.

The first response takes a long time

The first request after a period of inactivity may take a few seconds longer while the model container initialises. Subsequent requests within the same session are faster. This applies to the Developer tier; Team and Pro tiers have higher baseline capacity.
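If your own scripts hit this cold start, a generous timeout plus a simple retry covers it. A stdlib sketch; the attempt count and timeout are illustrative values, not Marigold-specified limits:

```python
import time
import urllib.error
import urllib.request

def send_with_retry(req, attempts=3, timeout=60):
    """Open a request with a generous timeout, retrying with backoff
    while a cold model container initialises (illustrative values)."""
    for attempt in range(attempts):
        try:
            return urllib.request.urlopen(req, timeout=timeout)
        except urllib.error.HTTPError:
            raise  # a real HTTP status (401, 404, ...) will not improve on retry
        except urllib.error.URLError:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s between retries
```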

My tool does not have a base URL field

Set the OPENAI_API_BASE environment variable to https://api.marigold.run/v1 before launching the tool, and set OPENAI_API_KEY to your Marigold key. Many tools built on the OpenAI SDK read these automatically; newer versions of the SDK read OPENAI_BASE_URL instead of OPENAI_API_BASE, so set both if the base URL is not picked up.
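The same fallback can be scripted when launching a tool from Python. A sketch in which the child command is a stand-in for whatever tool you actually launch:

```python
import os
import subprocess
import sys

env = os.environ.copy()
env["OPENAI_API_BASE"] = "https://api.marigold.run/v1"
env["OPENAI_BASE_URL"] = env["OPENAI_API_BASE"]  # newer OpenAI SDKs read this name
env["OPENAI_API_KEY"] = "your-marigold-api-key"

# Demo child process; replace with the tool you are launching.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OPENAI_API_BASE'])"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # https://api.marigold.run/v1
```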

Private models in your editor, done properly.

Paid plans are in limited release. Leave your email and we will reach out when developer access opens.

Join the waitlist

No spam. One email when access opens.
