Benchmarks

Token efficiency and latency comparison of OpenUI Lang vs YAML, Vercel JSON-Render, and Thesys C1 JSON.

OpenUI Lang is designed to be token-efficient and streaming-first. This page presents a reproducible benchmark comparing it against three structured alternatives (YAML, Vercel JSON-Render, and Thesys C1 JSON) across seven real-world UI scenarios.

Formats Compared

| Format | Description |
| --- | --- |
| OpenUI Lang | Line-oriented DSL streamed directly by the LLM |
| YAML | YAML root / elements spec payload |
| Vercel JSON-Render | JSONL stream of JSON Patch (RFC 6902) operations |
| Thesys C1 JSON | Normalized component tree JSON (component + props) |

Same output, different representations

All four formats encode exactly the same UI. Here is the same simple table in each:

OpenUI Lang (148 tokens)

root = Stack([title, tbl])
title = TextContent("Employees (Sample)", "large-heavy")
tbl = Table(cols, rows)
cols = [Col("Name", "string"), Col("Department", "string"), Col("Salary", "number"), Col("YoY change (%)", "number")]
rows = [["Ava Patel", "Engineering", 132000, 6.5], ["Marcus Lee", "Sales", 98000, 4.2], ["Sofia Ramirez", "Marketing", 105000, 3.1], ["Ethan Brooks", "Finance", 118500, 5.0], ["Nina Chen", "HR", 89000, 2.4]]

YAML (316 tokens)

root: stack-1
elements:
  textcontent-2:
    type: TextContent
    props:
      text: Employees (Sample)
      size: large-heavy
  table-3:
    type: Table
    props:
      rows:
        - [...]
    children:
      - col-4
      - col-5
      - col-6
      - col-7
  stack-1:
    type: Stack
    props: {}
    children:
      - textcontent-2
      - table-3

Vercel JSON-Render (340 tokens)

{"op":"add","path":"/root","value":"stack-1"}
{"op":"add","path":"/elements/textcontent-2","value":{"type":"TextContent","props":{"text":"Employees (Sample)","size":"large-heavy"},"children":[]}}
{"op":"add","path":"/elements/col-4","value":{"type":"Col","props":{"label":"Name","type":"string"},"children":[]}}
{"op":"add","path":"/elements/col-5","value":{"type":"Col","props":{"label":"Department","type":"string"},"children":[]}}
{"op":"add","path":"/elements/col-6","value":{"type":"Col","props":{"label":"Salary","type":"number"},"children":[]}}
{"op":"add","path":"/elements/col-7","value":{"type":"Col","props":{"label":"YoY change (%)","type":"number"},"children":[]}}
{"op":"add","path":"/elements/table-3","value":{"type":"Table","props":{"rows":[...]},"children":["col-4","col-5","col-6","col-7"]}}
{"op":"add","path":"/elements/stack-1","value":{"type":"Stack","props":{},"children":["textcontent-2","table-3"]}}

Thesys C1 JSON (357 tokens)

{
  "component": {
    "component": "Stack",
    "props": {
      "children": [
        { "component": "TextContent", "props": { "text": "Employees (Sample)", "size": "large-heavy" } },
        { "component": "Table", "props": { "columns": [...], "rows": [...] } }
      ]
    }
  },
  "error": null
}

Token Count Results

Generated by GPT-5.2 at temperature 0. Token counts measured with tiktoken using the gpt-5 model encoder.

| Scenario | YAML | Vercel JSON-Render | Thesys C1 JSON | OpenUI Lang | vs YAML | vs Vercel | vs C1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| simple-table | 316 | 340 | 357 | 148 | -53.2% | -56.5% | -58.5% |
| chart-with-data | 464 | 520 | 516 | 231 | -50.2% | -55.6% | -55.2% |
| contact-form | 762 | 893 | 849 | 294 | -61.4% | -67.1% | -65.4% |
| dashboard | 2,128 | 2,247 | 2,261 | 1,226 | -42.4% | -45.4% | -45.8% |
| pricing-page | 2,230 | 2,487 | 2,379 | 1,195 | -46.4% | -52.0% | -49.8% |
| settings-panel | 1,077 | 1,244 | 1,205 | 540 | -49.9% | -56.6% | -55.2% |
| e-commerce-product | 2,145 | 2,449 | 2,381 | 1,166 | -45.6% | -52.4% | -51.0% |
| TOTAL | 9,122 | 10,180 | 9,948 | 4,800 | -47.4% | -52.8% | -51.7% |

Across all seven scenarios, OpenUI Lang uses 47.4% fewer tokens than YAML, 52.8% fewer than Vercel JSON-Render, and 51.7% fewer than Thesys C1 JSON. The savings peak on the contact-form scenario at 61.4%, 67.1%, and 65.4% respectively.


Estimated Latency

Latency scales linearly with output token count at a given generation speed. At 60 tokens/second (typical for hosted frontier models):

| Scenario | YAML | Vercel JSON-Render | Thesys C1 JSON | OpenUI Lang | Speedup vs YAML | Speedup vs Vercel |
| --- | --- | --- | --- | --- | --- | --- |
| simple-table | 5.27s | 5.67s | 5.95s | 2.47s | 2.14x | 2.30x |
| chart-with-data | 7.73s | 8.67s | 8.60s | 3.85s | 2.01x | 2.25x |
| contact-form | 12.70s | 14.88s | 14.15s | 4.90s | 2.59x | 3.04x |
| dashboard | 35.47s | 37.45s | 37.68s | 20.43s | 1.74x | 1.83x |
| pricing-page | 37.17s | 41.45s | 39.65s | 19.92s | 1.87x | 2.08x |
| settings-panel | 17.95s | 20.73s | 20.08s | 9.00s | 1.99x | 2.30x |
| e-commerce-product | 35.75s | 40.82s | 39.68s | 19.43s | 1.84x | 2.10x |

The absolute latency savings grow with UI complexity. A contact form streams up to 3.0× faster, and even complex dashboards and pricing pages, the kinds of UIs where Generative UI delivers the most value, render roughly 1.7–2.1× faster with OpenUI Lang, finishing 15 seconds or more sooner at 60 tok/s.
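
Each figure in the table above is simply the scenario's token count divided by throughput. A minimal sketch of that model (illustrative only; the actual report is produced by run-benchmark.ts):

```ts
// Estimated streaming latency at a fixed decode rate.
// 60 tok/s is the throughput assumed in the table above.
function estimateLatencySeconds(tokens: number, tokensPerSecond = 60): number {
  return tokens / tokensPerSecond;
}

estimateLatencySeconds(316); // simple-table as YAML        -> ~5.27 s
estimateLatencySeconds(148); // simple-table as OpenUI Lang -> ~2.47 s
```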


Methodology

Model

GPT-5.2 at temperature 0, using the same system prompt and user prompt for every scenario. All four formats are derived from the same LLM output, not generated independently.

Conversion

The LLM generates OpenUI Lang. Thesys C1 JSON is a normalized AST projection (component + props) that drops parser metadata (type, typeName, partial, __typename). The YAML payload and Vercel JSON-Render output are two serializations of the same json-render spec projection (root, elements, optional state): JSONL emits RFC 6902 patches, while YAML is serialized with yaml.stringify(..., { indent: 2 }).
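
For concreteness, here is a sketch of that C1 projection. The node shape below is an assumption (the real types live in thesys-c1-converter.ts), and using typeName as the component name is illustrative rather than confirmed:

```ts
// Hypothetical AST node shape; the benchmark's real types differ.
interface AstNode {
  type: string;          // parser metadata, dropped
  typeName: string;      // treated as the component name in this sketch
  partial?: boolean;     // parser metadata, dropped
  __typename?: string;   // parser metadata, dropped
  props?: Record<string, unknown>;
  children?: AstNode[];
}

interface C1Node {
  component: string;
  props: Record<string, unknown>;
}

// Keep only component + props; children are recursed into and nested
// under props, matching the Thesys C1 sample earlier on this page.
function toC1(node: AstNode): C1Node {
  const props: Record<string, unknown> = { ...(node.props ?? {}) };
  if (node.children?.length) props.children = node.children.map(toC1);
  return { component: node.typeName, props };
}
```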

Token Counting

All formats are measured with tiktoken using the gpt-5 model encoder, the same tokenizer family as GPT-5.2. Whitespace and formatting are included as-is in the count. For YAML, the benchmark counts the document payload only and excludes the outer yaml-spec fence.
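
A minimal counting sketch, assuming the js-tiktoken package; o200k_base is an assumption here for the encoding behind tiktoken's gpt-5 model entry:

```ts
import { getEncoding } from "js-tiktoken";

// Assumption: the gpt-5 model entry resolves to the o200k_base encoding.
// The benchmark resolves the encoder from the model name instead.
const enc = getEncoding("o200k_base");

// Count a payload exactly as written: whitespace and formatting included.
function countTokens(payload: string): number {
  return enc.encode(payload).length;
}
```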

Latency Model

Assumes constant throughput (60 tok/s). Real latency also depends on time to first token (TTFT) and network conditions. The streaming advantage is most visible in how soon the last element finishes rendering, not just in overall generation time.

Why is JSON-Render heavier than expected?

Vercel JSON-Render encodes each element as a separate {"op":"add","path":"/elements/id","value":{...}} line. The op, path, value, type, props, and children keys repeat for every node. For deeply nested UIs (dashboards, pricing pages), the structural repetition accumulates significantly — up to 3.0× the tokens of OpenUI Lang across our scenarios.
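
To make that repetition concrete, here is a sketch of the per-element patch emission, assuming the shared spec shape described under Conversion (the real converter is vercel-jsonl-converter.ts):

```ts
interface JsonRenderSpec {
  root: string;
  elements: Record<
    string,
    { type: string; props: Record<string, unknown>; children: string[] }
  >;
}

// One RFC 6902 "add" line per element: op, path, and value wrap every
// node, so the structural keys are re-serialized again and again.
function toJsonlPatches(spec: JsonRenderSpec): string {
  const lines = [JSON.stringify({ op: "add", path: "/root", value: spec.root })];
  for (const [id, element] of Object.entries(spec.elements)) {
    lines.push(JSON.stringify({ op: "add", path: `/elements/${id}`, value: element }));
  }
  return lines.join("\n");
}
```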


Reproducing the Benchmark

The benchmark scripts live in benchmarks/. To regenerate:

# 1. Generate samples (calls OpenAI — requires OPENAI_API_KEY in your shell)
cd benchmarks
pnpm generate

# 2. Run the token/latency report (offline, no API calls)
pnpm bench

Source files:

  • generate-samples.ts — calls OpenAI, converts output to all four formats, saves to samples/
  • run-benchmark.ts — reads saved samples, counts tokens, prints the tables
  • thesys-c1-converter.ts — AST → normalized Thesys C1 JSON converter
  • vercel-spec-converter.ts — AST → shared json-render spec projection (root / elements)
  • vercel-jsonl-converter.ts — shared spec → RFC 6902 JSONL converter
  • yaml-converter.ts — shared spec → YAML document converter
  • schema.json — full JSON Schema for the default component library (auto-generated by library.toJSONSchema())
  • system-prompt.txt — system prompt for the default component library (auto-generated by library.prompt())
