System Prompts

Generate and customize prompt instructions from your OpenUI library.

The system prompt tells the LLM how to output valid OpenUI Lang. There are two ways to generate it.

1. The CLI

The fastest way to generate a system prompt, and it works with any backend language:

npx @openuidev/cli@latest generate ./src/library.ts

Write to a file:

npx @openuidev/cli@latest generate ./src/library.ts --out system-prompt.txt

Generate the component spec as JSON (for use with generatePrompt):

npx @openuidev/cli@latest generate ./src/library.ts --json-schema

The CLI auto-detects exported PromptOptions (examples, rules) alongside your library. Use --prompt-options <name> to pick a specific export.
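As a rough illustration of what such an export might contain, here is a hedged sketch. The `examples` and `rules` fields come from the description above; everything else (the values, the exact typing) is an assumption, not the library's actual API:

```typescript
// Hypothetical PromptOptions export the CLI could auto-detect alongside
// your library. Field contents are illustrative assumptions.
const openuiPromptOptions = {
  // Few-shot examples of valid OpenUI Lang output.
  examples: ['Card { title: "Revenue" }'],
  // Extra rules appended to the generated prompt.
  rules: ["Prefer Card over Text for metric values."],
};
```

With an export like this next to your library, the CLI would pick it up automatically, or you could select it explicitly with --prompt-options openuiPromptOptions.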

2. generatePrompt (programmatic)

For backends that need dynamic prompts (different tools, preambles, or feature flags per request), use generatePrompt from @openuidev/lang-core. It has no React dependency, so it works in any Node, Edge, or serverless backend.

First, generate the component spec JSON via the CLI:

npx @openuidev/cli generate ./src/library.ts --json-schema --out generated/component-spec.json
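The exact schema of this file is whatever the CLI emits; the fragment below is only a guess at the general shape (a component list with prop types), not the actual format:

```json
{
  "components": [
    { "name": "Card", "props": { "title": "string", "value": "string" } },
    { "name": "Table", "props": { "rows": "array" } }
  ]
}
```

Whatever its exact shape, the file is spread into generatePrompt unchanged, so you never need to read or edit it by hand.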

Then build the prompt at runtime:

import { generatePrompt, type PromptSpec } from "@openuidev/lang-core";
import componentSpec from "./generated/component-spec.json";

const systemPrompt = generatePrompt({
  ...componentSpec,

  // Tool descriptions — so the LLM knows what tools exist
  tools: myToolSpecs,

  // Examples showing how to use your tools with Query/Mutation
  toolExamples: [`tickets = Query("list_tickets", {}, {rows: []})\n...`],

  // Feature flags
  toolCalls: true, // Enable Query(), Mutation(), @Run (default: true if tools provided)
  bindings: true, // Enable $variables, @Set, @Reset (default: true if toolCalls)
  editMode: true, // Enable incremental editing (LLM outputs patches, not full regen)
  inlineMode: true, // Enable text + code responses (LLM can answer questions without code)

  // Custom instructions
  preamble: "You build dashboards using openui-lang.",
  additionalRules: ['Use @Reset after form submit, not @Set($var, "")'],
});
| Flag | What it enables | Default |
| --- | --- | --- |
| toolCalls | Query(), Mutation(), @Run, built-in functions, tool workflow rules | true if tools provided |
| bindings | $variables, @Set, @Reset, built-in functions, reactive filters | true if toolCalls is true |
| editMode | Incremental editing: LLM outputs only changed statements | false |
| inlineMode | Text + fenced code responses: LLM can answer questions without generating UI | false |
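The defaults in the table compose with one another. The function below is a hedged sketch of that resolution logic (the name `resolveFlags` is hypothetical, not a library export):

```typescript
// Sketch of the documented flag defaults: toolCalls defaults to true
// when tool specs are provided, bindings follows toolCalls, and the
// two mode flags default to false.
interface PromptFlags {
  tools?: unknown[];
  toolCalls?: boolean;
  bindings?: boolean;
  editMode?: boolean;
  inlineMode?: boolean;
}

function resolveFlags(spec: PromptFlags) {
  const toolCalls = spec.toolCalls ?? (spec.tools?.length ?? 0) > 0;
  return {
    toolCalls,
    bindings: spec.bindings ?? toolCalls,
    editMode: spec.editMode ?? false,
    inlineMode: spec.inlineMode ?? false,
  };
}
```

Note that an explicit value always wins: passing toolCalls: false with tools provided also disables the bindings default.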

Built-in functions (@Count, @Filter, @Sort, @Each, etc.) are automatically included when either toolCalls or bindings is enabled. For static UI libraries without data fetching, they are omitted to keep the prompt concise.

library.prompt() (frontend shorthand)

If you're generating prompts client-side or in a Next.js route that already imports your library:

import { openuiLibrary, openuiPromptOptions } from "@openuidev/react-ui";

const systemPrompt = openuiLibrary.prompt(openuiPromptOptions);

This is convenient, but it imports your React components; use generatePrompt for pure backend routes.

What gets generated

The generated prompt includes:

  • Syntax rules and expression types
  • Component signatures (from your registered components)
  • Built-in function reference (@Count, @Filter, @Sort, etc.) — only when toolCalls or bindings enabled
  • Query/Mutation/Action workflow (if toolCalls enabled)
  • $variable and reactive binding rules (if bindings enabled)
  • Tool descriptions and tool examples (if tools provided)
  • Edit mode instructions (if editMode enabled)
  • Inline mode instructions (if inlineMode enabled)
  • Hoisting/streaming rules
  • Your optional examples and rules
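The conditional sections above can be sketched as a simple mapping from resolved flags to included sections (illustrative only, with a hypothetical `promptSections` helper; this mirrors the documented behavior, not the library's internals):

```typescript
// Illustrative mapping from resolved feature flags to the prompt
// sections listed above.
interface ResolvedFlags {
  toolCalls: boolean;
  bindings: boolean;
  editMode: boolean;
  inlineMode: boolean;
  hasTools: boolean; // whether tool specs were passed
}

function promptSections(f: ResolvedFlags): string[] {
  const sections = ["syntax rules and expression types", "component signatures"];
  if (f.toolCalls || f.bindings) sections.push("built-in function reference");
  if (f.toolCalls) sections.push("Query/Mutation/Action workflow");
  if (f.bindings) sections.push("$variable and reactive binding rules");
  if (f.hasTools) sections.push("tool descriptions and examples");
  if (f.editMode) sections.push("edit mode instructions");
  if (f.inlineMode) sections.push("inline mode instructions");
  sections.push("hoisting/streaming rules");
  return sections;
}
```

A static UI library with every flag off still gets the syntax, component, and hoisting/streaming sections; everything else is opt-in.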

Backend usage example

import OpenAI from "openai";
import { generatePrompt, type PromptSpec } from "@openuidev/lang-core";
import componentSpec from "./generated/component-spec.json";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const systemPrompt = generatePrompt({ ...componentSpec, preamble: "You are a helpful assistant." });

export async function POST(req: Request) {
  const { messages } = await req.json();

  const completion = await client.chat.completions.create({
    model: "gpt-5.4-mini",
    stream: true,
    messages: [{ role: "system", content: systemPrompt }, ...messages],
  });

  return new Response(completion.toReadableStream(), {
    headers: { "Content-Type": "text/event-stream" },
  });
}
