# System Prompts

Generate and customize prompt instructions from your OpenUI library.
The system prompt tells the LLM how to output valid OpenUI Lang. There are two main ways to generate it, plus a frontend shorthand.
## 1. CLI (recommended for most setups)
The fastest way to generate a system prompt — works with any backend language:

```bash
npx @openuidev/cli@latest generate ./src/library.ts
```

Write to a file:

```bash
npx @openuidev/cli@latest generate ./src/library.ts --out system-prompt.txt
```

Generate the component spec as JSON (for use with `generatePrompt`):

```bash
npx @openuidev/cli@latest generate ./src/library.ts --json-schema
```

The CLI auto-detects exported `PromptOptions` (examples, rules) alongside your library. Use `--prompt-options <name>` to pick a specific export.
## 2. `generatePrompt` (programmatic)
For backends that need dynamic prompts (different tools, preambles, or feature flags per request), use `generatePrompt` from `@openuidev/lang-core`. It has no React dependency, so it works in any Node/Edge/serverless backend.
First, generate the component spec JSON via the CLI:

```bash
npx @openuidev/cli generate ./src/library.ts --json-schema --out generated/component-spec.json
```

Then build the prompt at runtime:
```ts
import { generatePrompt, type PromptSpec } from "@openuidev/lang-core";
import componentSpec from "./generated/component-spec.json";

const systemPrompt = generatePrompt({
  ...componentSpec,

  // Tool descriptions — so the LLM knows what tools exist
  tools: myToolSpecs,

  // Examples showing how to use your tools with Query/Mutation
  toolExamples: [`tickets = Query("list_tickets", {}, {rows: []})\n...`],

  // Feature flags
  toolCalls: true,  // Enable Query(), Mutation(), @Run (default: true if tools provided)
  bindings: true,   // Enable $variables, @Set, @Reset (default: true if toolCalls)
  editMode: true,   // Enable incremental editing (LLM outputs patches, not full regen)
  inlineMode: true, // Enable text + code responses (LLM can answer questions without code)

  // Custom instructions
  preamble: "You build dashboards using openui-lang.",
  additionalRules: ['Use @Reset after form submit, not @Set($var, "")'],
});
```

| Flag | What it enables | Default |
|---|---|---|
| `toolCalls` | `Query()`, `Mutation()`, `@Run`, built-in functions, tool workflow rules | `true` if `tools` provided |
| `bindings` | `$variables`, `@Set`, `@Reset`, built-in functions, reactive filters | `true` if `toolCalls` is `true` |
| `editMode` | Incremental editing: LLM outputs only changed statements | `false` |
| `inlineMode` | Text + fenced code responses: LLM can answer questions without generating UI | `false` |
Built-in functions (`@Count`, `@Filter`, `@Sort`, `@Each`, etc.) are automatically included when either `toolCalls` or `bindings` is enabled. For static UI libraries without data fetching, they are omitted to keep the prompt concise.
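The flag defaults in the table above can be sketched as a small resolution step. Note that `resolveFlags` and its input shape are illustrative assumptions for this sketch, not part of the library API:

```typescript
// Sketch of the documented defaults: toolCalls defaults to true when tools
// are provided, bindings follows toolCalls, and the built-in function
// reference is included when either data feature is on.
// `resolveFlags` is a hypothetical helper, not a library export.
interface FlagInput {
  tools?: unknown[];
  toolCalls?: boolean;
  bindings?: boolean;
  editMode?: boolean;
  inlineMode?: boolean;
}

function resolveFlags(input: FlagInput) {
  const toolCalls =
    input.toolCalls ?? (input.tools !== undefined && input.tools.length > 0);
  const bindings = input.bindings ?? toolCalls;
  return {
    toolCalls,
    bindings,
    editMode: input.editMode ?? false,
    inlineMode: input.inlineMode ?? false,
    // Built-in functions ship when either toolCalls or bindings is enabled.
    builtins: toolCalls || bindings,
  };
}

// Providing tools enables toolCalls, bindings, and built-ins by default.
resolveFlags({ tools: [{ name: "list_tickets" }] });
```

Explicit flags always win over the defaults, so you can, for example, pass `toolCalls: false` while still enabling `bindings` for static reactive UIs.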
## `library.prompt()` (frontend shorthand)
If you're generating prompts client-side or in a Next.js route that already imports your library:
```ts
import { openuiLibrary, openuiPromptOptions } from "@openuidev/react-ui";

const systemPrompt = openuiLibrary.prompt(openuiPromptOptions);
```

This is convenient but imports React components, so use `generatePrompt` for pure backend routes.
## What gets generated
The generated prompt includes:
- Syntax rules and expression types
- Component signatures (from your registered components)
- Built-in function reference (`@Count`, `@Filter`, `@Sort`, etc.) — only when `toolCalls` or `bindings` enabled
- Query/Mutation/Action workflow (if `toolCalls` enabled)
- `$variable` and reactive binding rules (if `bindings` enabled)
- Tool descriptions and tool examples (if `tools` provided)
- Edit mode instructions (if `editMode` enabled)
- Inline mode instructions (if `inlineMode` enabled)
- Hoisting/streaming rules
- Your optional examples and rules
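As a rough mental model, the generator concatenates these sections conditionally. The sketch below is illustrative only; the `assemblePrompt` helper and the section labels are assumptions, not the generator's internals:

```typescript
// Illustrative sketch of the conditional assembly described above.
// Helper name and section labels are assumptions, not library internals.
interface PromptConfig {
  tools?: string[];
  toolCalls?: boolean;
  bindings?: boolean;
  editMode?: boolean;
  inlineMode?: boolean;
  preamble?: string;
  additionalRules?: string[];
}

function assemblePrompt(cfg: PromptConfig): string[] {
  // Same defaults as documented: toolCalls follows tools, bindings follows toolCalls.
  const toolCalls = cfg.toolCalls ?? (cfg.tools?.length ?? 0) > 0;
  const bindings = cfg.bindings ?? toolCalls;

  const sections: string[] = [];
  if (cfg.preamble) sections.push(cfg.preamble);
  sections.push("syntax rules", "component signatures");
  if (toolCalls || bindings) sections.push("built-in function reference");
  if (toolCalls) sections.push("Query/Mutation/Action workflow");
  if (bindings) sections.push("$variable and binding rules");
  if (cfg.tools?.length) sections.push("tool descriptions and examples");
  if (cfg.editMode) sections.push("edit mode instructions");
  if (cfg.inlineMode) sections.push("inline mode instructions");
  sections.push("hoisting/streaming rules");
  if (cfg.additionalRules?.length) sections.push(...cfg.additionalRules);
  return sections;
}
```

This is why a static UI library with no `tools` produces a much shorter prompt: the tool workflow, binding rules, and built-in function reference all drop out.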
## Backend usage example
```ts
import OpenAI from "openai";
import { generatePrompt, type PromptSpec } from "@openuidev/lang-core";
import componentSpec from "./generated/component-spec.json";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const systemPrompt = generatePrompt({
  ...componentSpec,
  preamble: "You are a helpful assistant.",
});

export async function POST(req: Request) {
  const { messages } = await req.json();
  const completion = await client.chat.completions.create({
    model: "gpt-5.4-mini",
    stream: true,
    messages: [{ role: "system", content: systemPrompt }, ...messages],
  });
  return new Response(completion.toReadableStream(), {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```