# End-to-End Guide

Build a complete OpenUI Chat setup in an existing Next.js App Router project.

This path covers:
- a built-in chat layout
- an OpenAI-backed route handler
- frontend request wiring with `processMessage`
- the correct stream adapter and message format
- optional thread history
- optional headless customization
## Prerequisites
Complete Installation first, then return here to wire the chat flow.
## 1. Generate the system prompt
If you want Generative UI, generate a system prompt from the component library. The backend loads this prompt and sends it to the model with each request.
If you only want plain text chat, you can skip this step and omit `componentLibrary` in the next examples.
```sh
npx @openuidev/cli generate ./src/library.ts --out src/generated/system-prompt.txt
```

Where `src/library.ts` exports your library:
```ts
export {
  openuiLibrary as library,
  openuiPromptOptions as promptOptions,
} from "@openuidev/react-ui/genui-lib";
```

Add this as a prebuild step in `package.json`:
```json
"scripts": {
  "generate:prompt": "openui generate src/library.ts --out src/generated/system-prompt.txt",
  "dev": "pnpm generate:prompt && next dev",
  "build": "pnpm generate:prompt && next build"
}
```

This prompt tells the model which UI components it is allowed to emit.
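If you skipped prompt generation because you only want plain text chat, the backend in the next step still needs a system prompt to send. One way to handle both cases is a small loader with a fallback. This is a sketch; `loadSystemPrompt` and `DEFAULT_PROMPT` are illustrative names, not part of OpenUI:

```typescript
import { existsSync, readFileSync } from "fs";
import { join } from "path";

// Fallback used when no component-library prompt was generated
// (i.e. you skipped step 1 for a plain text chat).
const DEFAULT_PROMPT = "You are a helpful assistant.";

export function loadSystemPrompt(root: string = process.cwd()): string {
  const path = join(root, "src/generated/system-prompt.txt");
  return existsSync(path) ? readFileSync(path, "utf-8") : DEFAULT_PROMPT;
}
```

With this in place, the route in step 2 can call `loadSystemPrompt()` instead of reading the file unconditionally.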
## 2. Create the streaming backend route

Create `app/api/chat/route.ts`:
```ts
import { readFileSync } from "fs";
import { join } from "path";
import { NextRequest } from "next/server";
import OpenAI from "openai";

const client = new OpenAI();
const systemPrompt = readFileSync(
  join(process.cwd(), "src/generated/system-prompt.txt"),
  "utf-8",
);

export async function POST(req: NextRequest) {
  try {
    const { messages } = await req.json();
    const response = await client.chat.completions.create({
      model: "gpt-5.2",
      messages: [{ role: "system", content: systemPrompt }, ...messages],
      stream: true,
    });
    return new Response(response.toReadableStream(), {
      headers: {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache, no-transform",
        Connection: "keep-alive",
      },
    });
  } catch (err) {
    console.error(err);
    const message = err instanceof Error ? err.message : "Unknown error";
    return new Response(JSON.stringify({ error: message }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
}
```

The system prompt is loaded from the file generated by the CLI. The route only receives messages from the frontend; the prompt never leaves the server.
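The adapter you wire up in step 3 consumes this stream for you, but it helps to know its shape: the OpenAI Node SDK's `toReadableStream()` emits each completion chunk as one JSON-stringified value per line. As a sketch of what parsing that format looks like by hand (the function names here are illustrative, not part of OpenUI):

```typescript
// Extract text deltas from a buffer of newline-delimited JSON chunks,
// the format produced by the OpenAI SDK's stream.toReadableStream().
export function extractDeltas(buffer: string): string[] {
  const deltas: string[] = [];
  for (const line of buffer.split("\n")) {
    if (!line.trim()) continue; // skip blank lines between chunks
    const chunk = JSON.parse(line);
    const content = chunk.choices?.[0]?.delta?.content;
    if (typeof content === "string") deltas.push(content);
  }
  return deltas;
}

// Reading the route's response body chunk by chunk on the client:
export async function readChat(response: Response): Promise<string> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let text = "";
  let pending = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    pending += decoder.decode(value, { stream: true });
    const lastNewline = pending.lastIndexOf("\n");
    if (lastNewline === -1) continue; // wait for a complete line
    text += extractDeltas(pending.slice(0, lastNewline + 1)).join("");
    pending = pending.slice(lastNewline + 1);
  }
  return text;
}
```

You should not need this in practice, since `openAIReadableStreamAdapter()` handles it, but it is useful when debugging the raw response in your browser's network tab.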
## 3. Render a layout and connect it to the route

`FullScreen` is a good baseline because it includes both the thread list and the main chat surface.

This guide uses `processMessage` instead of `apiUrl` so the request body stays explicit.
```tsx
import {
  openAIMessageFormat,
  openAIReadableStreamAdapter,
} from "@openuidev/react-headless";
import { FullScreen } from "@openuidev/react-ui";
import { openuiLibrary } from "@openuidev/react-ui/genui-lib";

export default function Page() {
  return (
    <div className="h-screen">
      <FullScreen
        processMessage={async ({ messages, abortController }) => {
          return fetch("/api/chat", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({
              messages: openAIMessageFormat.toApi(messages),
            }),
            signal: abortController.signal,
          });
        }}
        streamProtocol={openAIReadableStreamAdapter()}
        componentLibrary={openuiLibrary}
        agentName="Assistant"
      />
    </div>
  );
}
```

Why this setup matters:

- `processMessage` gives you control over the request body
- `openAIMessageFormat.toApi(messages)` converts messages to OpenAI format before sending
- `openAIReadableStreamAdapter()` matches `response.toReadableStream()`
- `componentLibrary={openuiLibrary}` lets the UI render structured responses
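To make the conversion step concrete, here is a hypothetical illustration of the kind of transformation `openAIMessageFormat.toApi` performs. This is not the library's actual implementation, and the internal message shape shown is an assumption:

```typescript
// Assumed internal message shape (illustrative only).
interface UiMessage {
  id: string;         // client-side identifier, dropped on send
  role: "user" | "assistant" | "system";
  content: string;
  createdAt?: number; // UI metadata, also dropped
}

// A toApi-style conversion: strip UI-only fields so the request body
// matches what the OpenAI chat completions endpoint expects.
export function toOpenAIMessages(messages: UiMessage[]) {
  return messages.map(({ role, content }) => ({ role, content }));
}
```

The point is that the UI keeps richer message objects than the API accepts, so the conversion must happen somewhere before the fetch; with `processMessage`, that somewhere is your code.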
### Checkpoint

At this point, you should be able to send a message and receive streamed responses in the UI.

Guides: Connecting to LLM, Next.js Implementation, Providers
## 4. Connect Thread History (optional)
Stop here if you only need a working streamed chat UI. Continue only if your app also needs saved threads and message history from the backend.

To have the UI load saved threads and previous messages, add `threadApiUrl` and implement the default thread contract described in Connect Thread History.
```tsx
<FullScreen
  processMessage={async ({ messages, abortController }) => {
    return fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        messages: openAIMessageFormat.toApi(messages),
      }),
      signal: abortController.signal,
    });
  }}
  threadApiUrl="/api/threads"
  streamProtocol={openAIReadableStreamAdapter()}
  messageFormat={openAIMessageFormat}
  componentLibrary={openuiLibrary}
  agentName="Assistant"
/>
```

When using `processMessage`, you must call `openAIMessageFormat.toApi(messages)` explicitly in the request body; the `messageFormat` prop does not transform messages for `processMessage`. The `messageFormat={openAIMessageFormat}` prop here is for `threadApiUrl`: it tells the UI how to convert messages when loading saved thread history from the backend.
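The actual thread contract is defined in Connect Thread History. Purely as orientation, a `/api/threads` handler might start like the sketch below; the response shape, `listThreads`, and the in-memory store are assumptions for illustration, not OpenUI's real contract:

```typescript
// app/api/threads/route.ts (hypothetical sketch)
// In-memory store; a real app would back this with a database.
const threads = new Map<string, { id: string; title: string }>([
  ["t1", { id: "t1", title: "First conversation" }],
]);

export function listThreads() {
  return [...threads.values()];
}

export async function GET() {
  // Assumed shape: the UI fetches a JSON array of threads.
  return new Response(JSON.stringify(listThreads()), {
    headers: { "Content-Type": "application/json" },
  });
}
```

Check the documented contract before building on this; the field names and endpoints the UI expects may differ.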
## 5. Switch layouts or go headless (optional)

This step does not change your backend contract. Once the contract is working, you can keep the same chat and thread wiring and swap only the UI layer on top:
- Use `Copilot` for a sidebar layout
- Use `BottomTray` for a floating widget
- Use the Headless Intro and Custom UI Guide for full UI control
## You now have

- a streaming `/api/chat` route
- a connected chat layout
- the correct OpenAI message conversion and stream adapter
- optional GenUI support
- a clear path to thread history and headless customization