A line-oriented language designed for streaming, token efficiency, and type safety
An alternative to Vercel's JSON renderer and A2UI that uses ~40% fewer tokens than equivalent JSON structures. Define your component library with Zod schemas, then parse LLM responses into renderable components.
Line-oriented syntax means the UI renders line-by-line. No waiting for valid JSON closing braces.
Uses ~40% fewer tokens than equivalent JSON structures, significantly reducing inference cost and latency.
Strictly typed against your Zod schemas. If the generated code does not match your definition, it does not render.
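To see why line-orientation matters for streaming, consider what happens mid-generation. The sketch below is a rough illustration of the claim, not the library's parser: a partial JSON payload is unusable until its closing braces arrive, while a line-oriented payload yields one complete, renderable definition per newline.

```typescript
// A partial JSON payload cannot be parsed until the final brace arrives:
const partialJson = '{"type":"Card","props":{"title":"Wel';
let jsonParses = true;
try {
  JSON.parse(partialJson);
} catch {
  jsonParses = false; // nothing can render yet
}

// A line-oriented payload completes one definition per newline,
// so each finished line can render while the model is still generating:
const partialStream =
  'header = CardHeader("Welcome Back", "Continue your journey")\n' +
  'emailField = FormControl(';
const renderableLines = partialStream.split('\n').slice(0, -1);
console.log(jsonParses, renderableLines.length); // false 1
```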
Compare the same UI component in both formats
JSON

```json
{
  "type": "Card",
  "props": {
    "title": "Welcome Back",
    "description": "Continue your journey"
  },
  "children": [
    {
      "type": "Input",
      "props": {
        "label": "Email",
        "placeholder": "Enter your email",
        "type": "email"
      }
    },
    {
      "type": "Input",
      "props": {
        "label": "Password",
        "placeholder": "Enter password",
        "type": "password"
      }
    },
    {
      "type": "Button",
      "props": {
        "label": "Sign In",
        "variant": "primary"
      }
    }
  ]
}
```

OpenUI Lang

```
root = Card([header, emailField, passwordField, signInButton])
header = CardHeader("Welcome Back", "Continue your journey")
emailField = FormControl("Email", Input("email", "Enter your email", "email"))
passwordField = FormControl("Password", Input("password", "Enter password", "password"))
signInButton = Button("Sign In", "action:signIn", "primary")
```
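Because each line is a complete `name = Component(args)` statement, parsing can proceed one line at a time. A minimal sketch of the idea, not the library's actual implementation (it handles only flat calls, skipping nested `Input(...)` arguments and `[...]` child lists):

```typescript
interface Node {
  component: string;
  args: string[];
}

// Parse one flat `name = Component("a", "b", bareRef)` line.
// Quoted args become string values (quotes stripped); bare args
// are left as-is and would be resolved as references to other lines.
function parseLine(line: string): [string, Node] {
  const m = line.match(/^(\w+)\s*=\s*(\w+)\((.*)\)\s*$/);
  if (m === null) throw new Error(`Invalid line: ${line}`);
  const [, name, component, rawArgs] = m;
  const args =
    rawArgs.trim() === ''
      ? []
      : rawArgs.split(',').map((a) => a.trim().replace(/^"(.*)"$/, '$1'));
  return [name, { component, args }];
}

const [name, node] = parseLine(
  'signInButton = Button("Sign In", "action:signIn", "primary")'
);
console.log(name, node.component, node.args);
// signInButton Button [ 'Sign In', 'action:signIn', 'primary' ]
```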
Create your component library with Zod schemas and generate the system prompt
```tsx
import { defineComponent, createLibrary } from '@openuidev/lang-react';
import { z } from 'zod';

const MyCard = defineComponent({
  name: 'MyCard',
  description: 'Displays a titled content card.',
  props: z.object({
    title: z.string(),
  }),
  component: ({ props }) => <div>{props.title}</div>,
});

export const myLibrary = createLibrary({
  // otherComponents: the rest of your component definitions
  components: [MyCard, ...otherComponents],
});

export const systemPrompt = myLibrary.prompt(); // Generated system prompt
```
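Once a response is parsed, rendering is a lookup from component name to implementation, and unknown names simply do not render. A hand-rolled sketch of that idea in plain TypeScript (the real library resolves names to your React components and validates props against the Zod schemas; the names below are illustrative, not the package's API):

```typescript
type Renderer = (args: string[]) => string;

// Illustrative registry: component name -> render function.
// The real library maps names to React components, not HTML strings.
const registry: Record<string, Renderer> = {
  Button: ([label, action, variant]) =>
    `<button class="${variant}" data-action="${action}">${label}</button>`,
};

// Names missing from the library fail fast instead of rendering.
function render(component: string, args: string[]): string {
  const fn = registry[component];
  if (fn === undefined) throw new Error(`Unknown component: ${component}`);
  return fn(args);
}

console.log(render('Button', ['Sign In', 'action:signIn', 'primary']));
// <button class="primary" data-action="action:signIn">Sign In</button>
```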
Real-world applications where OpenUI Lang excels
Generate complex data visualizations and metric cards from natural language queries.
Stream UI components in real-time as the LLM generates responses.
Build adaptive forms that change based on user input or context.