
Benchmarks

Token efficiency and latency comparison of OpenUI Lang vs JSON-based streaming formats.

OpenUI Lang is designed to be token-efficient and streaming-first. This page presents a reproducible benchmark comparing it against two JSON-based alternatives across seven real-world UI scenarios.

Formats Compared

| Format | Description |
| --- | --- |
| OpenUI Lang | Line-oriented DSL streamed directly by the LLM |
| Thesys C1 JSON | Normalized component tree JSON (component + props) |
| Vercel JSON-Render | JSONL stream of JSON Patch (RFC 6902) operations |

Same output, different representations

All three formats encode exactly the same UI. Here is the same simple table in each:

OpenUI Lang (148 tokens)

root = Stack([title, tbl])
title = TextContent("Employees (Sample)", "large-heavy")
tbl = Table(cols, rows)
cols = [Col("Name", "string"), Col("Department", "string"), Col("Salary", "number"), Col("YoY change (%)", "number")]
rows = [["Ava Patel", "Engineering", 128000, 6.5], ["Noah Kim", "Sales", 94000, 3.2], ["Mia Rodriguez", "Marketing", 88000, 4.1], ["Ethan Chen", "Finance", 102000, 2.4], ["Sophia Johnson", "HR", 79000, 5.0]]

Vercel JSON-Render (340 tokens)

{"op":"add","path":"/root","value":"stack-1"}
{"op":"add","path":"/elements/textcontent-2","value":{"type":"TextContent","props":{"text":"Employees (Sample)","size":"large-heavy"},"children":[]}}
{"op":"add","path":"/elements/col-4","value":{"type":"Col","props":{"label":"Name","type":"string"},"children":[]}}
{"op":"add","path":"/elements/col-5","value":{"type":"Col","props":{"label":"Department","type":"string"},"children":[]}}
{"op":"add","path":"/elements/col-6","value":{"type":"Col","props":{"label":"Salary","type":"number"},"children":[]}}
{"op":"add","path":"/elements/col-7","value":{"type":"Col","props":{"label":"YoY change (%)","type":"number"},"children":[]}}
{"op":"add","path":"/elements/table-3","value":{"type":"Table","props":{"rows":[...]},"children":["col-4","col-5","col-6","col-7"]}}
{"op":"add","path":"/elements/stack-1","value":{"type":"Stack","props":{},"children":["textcontent-2","table-3"]}}

Thesys C1 JSON (357 tokens)

{
  "component": {
    "component": "Stack",
    "props": {
      "children": [
        { "component": "TextContent", "props": { "text": "Employees (Sample)", "size": "large-heavy" } },
        { "component": "Table", "props": { "columns": [...], "rows": [...] } }
      ]
    }
  },
  "error": null
}

Token Count Results

Generated by GPT-5.2 at temperature 0. Token counts measured with tiktoken using the gpt-5 model encoder.

| Scenario | Vercel JSON-Render | Thesys C1 JSON | OpenUI Lang | vs Vercel | vs C1 |
| --- | --- | --- | --- | --- | --- |
| simple-table | 340 | 357 | 148 | -56.5% | -58.5% |
| chart-with-data | 520 | 516 | 231 | -55.6% | -55.2% |
| contact-form | 893 | 849 | 294 | -67.1% | -65.4% |
| dashboard | 2,247 | 2,261 | 1,226 | -45.4% | -45.8% |
| pricing-page | 2,487 | 2,379 | 1,195 | -52.0% | -49.8% |
| settings-panel | 1,244 | 1,205 | 540 | -56.6% | -55.2% |
| e-commerce-product | 2,449 | 2,381 | 1,166 | -52.4% | -51.0% |
| TOTAL | 10,180 | 9,948 | 4,800 | -52.8% | -51.7% |

OpenUI Lang uses roughly half the tokens of both JSON alternatives across all scenarios.


Estimated Latency

Latency scales linearly with output token count at a given generation speed. At 60 tokens/second (typical for hosted frontier models):

| Scenario | Vercel JSON-Render | Thesys C1 JSON | OpenUI Lang | Speedup vs Vercel |
| --- | --- | --- | --- | --- |
| simple-table | 5.67s | 5.95s | 2.47s | 2.3x faster |
| chart-with-data | 8.67s | 8.60s | 3.85s | 2.3x faster |
| contact-form | 14.88s | 14.15s | 4.90s | 3.0x faster |
| dashboard | 37.45s | 37.68s | 20.43s | 1.8x faster |
| pricing-page | 41.45s | 39.65s | 19.92s | 2.1x faster |
| settings-panel | 20.73s | 20.08s | 9.00s | 2.3x faster |
| e-commerce-product | 40.82s | 39.68s | 19.43s | 2.1x faster |
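The estimates above follow directly from the token counts: latency is simply output tokens divided by throughput. A minimal sketch of that model (the 60 tok/s throughput is the assumption stated above; token counts are taken from the results table):

```typescript
// Latency model: latency_seconds = output_tokens / tokens_per_second.
// 60 tok/s is the assumed generation speed for hosted frontier models.
const TOKENS_PER_SECOND = 60;

function estimatedLatencySeconds(
  outputTokens: number,
  tokPerSec: number = TOKENS_PER_SECOND
): number {
  return outputTokens / tokPerSec;
}

// simple-table scenario: 340 tokens (Vercel JSON-Render) vs 148 (OpenUI Lang).
const vercel = estimatedLatencySeconds(340); // ≈ 5.67 s
const openui = estimatedLatencySeconds(148); // ≈ 2.47 s
const speedup = vercel / openui;             // ≈ 2.3× faster
```

Because the model is linear, the speedup for any scenario reduces to the ratio of token counts, which is why the percentage savings and the latency speedups track each other.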

The latency advantage compounds with UI complexity. A pricing page or dashboard — the kinds of UIs where Generative UI delivers the most value — render 2–3× faster with OpenUI Lang.


Methodology

Model

GPT-5.2, temperature 0. Same system prompt and user prompt for every scenario. Each format is derived from the same LLM output, not independently generated.

Conversion

The LLM generates OpenUI Lang. Thesys C1 JSON is a normalized AST projection (component + props) that drops parser metadata (type, typeName, partial, __typename). Vercel JSONL is produced by an RFC 6902-compliant converter that walks the same AST.
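To illustrate the JSONL projection, here is a hypothetical sketch of a converter walking a simplified AST. The `UINode` shape is assumed for this example only; the actual converters are `thesys-c1-converter.ts` and `vercel-jsonl-converter.ts` in the repo:

```typescript
// Assumed minimal AST node shape for this sketch.
interface UINode {
  id: string;
  type: string;
  props: Record<string, unknown>;
  children: UINode[];
}

// Emit one RFC 6902 "add" operation per element, children referenced by id.
// Leaves are emitted first so every referenced id already exists in the patch.
function toVercelJsonl(node: UINode, lines: string[] = []): string[] {
  for (const child of node.children) toVercelJsonl(child, lines);
  lines.push(
    JSON.stringify({
      op: "add",
      path: `/elements/${node.id}`,
      value: {
        type: node.type,
        props: node.props,
        children: node.children.map((c) => c.id),
      },
    })
  );
  return lines;
}

// Two patch lines: the TextContent leaf, then the Stack referencing it by id.
const lines = toVercelJsonl({
  id: "stack-1",
  type: "Stack",
  props: {},
  children: [
    {
      id: "textcontent-2",
      type: "TextContent",
      props: { text: "Employees (Sample)" },
      children: [],
    },
  ],
});
```

This bottom-up emission order mirrors the example stream shown earlier, where the `stack-1` element arrives last and references its already-streamed children.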

Token Counting

All formats are measured with tiktoken using the gpt-5 model encoder — the same tokenizer family as GPT-5.2. Whitespace and formatting are included as-is in the count.

Latency Model

Assumes constant throughput (60 tok/s). Real latency also depends on time to first token (TTFT) and network overhead. The streaming advantage is most visible in how quickly the last element renders, not just in total generation time.

Why is JSON-Render heavier than expected?

Vercel JSON-Render encodes each element as a separate {"op":"add","path":"/elements/id","value":{...}} line. The op, path, value, type, props, and children keys repeat for every node. For deeply nested UIs (dashboards, pricing pages), the structural repetition accumulates significantly — averaging 2.1× the tokens of OpenUI Lang across our scenarios.
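The 2.1× average falls straight out of the totals in the results table; a quick check:

```typescript
// Totals from the token count table above.
const vercelTotal = 10_180;
const c1Total = 9_948;
const openuiTotal = 4_800;

// Average token overhead of the JSON formats relative to OpenUI Lang.
const vsVercel = vercelTotal / openuiTotal; // ≈ 2.12×
const vsC1 = c1Total / openuiTotal;         // ≈ 2.07×
```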


Reproducing the Benchmark

The benchmark scripts live in js/benchmarks/. To regenerate:

# 1. Generate samples (calls OpenAI — requires OPENAI_API_KEY in your shell)
cd js/benchmarks
pnpm generate

# 2. Run the token/latency report (offline, no API calls)
pnpm bench

Source files:

  • generate-samples.ts — calls OpenAI, converts output to all three formats, saves to samples/
  • run-benchmark.ts — reads saved samples, counts tokens, prints the tables
  • thesys-c1-converter.ts — AST → normalized Thesys C1 JSON converter
  • vercel-jsonl-converter.ts — RFC 6902-compliant AST → JSONL converter
  • schema.json — full JSON Schema for the default component library (auto-generated by library.toJSONSchema())
  • system-prompt.txt — system prompt for the default component library (auto-generated by library.prompt())
