The Layout Control Language for Human-AI Collaboration
Current focus: slides
See how different AI models render the same mockup
┌────────────────────────────────────────────────────────────────┐
│                                                                │
│   ┌─┐                                                          │
│   │M│ockup                                                     │
│   └─┘                                                          │
│   The Markdown for Slides                                      │
│                                                                │
│   ┌──────────┐       ┌──────────┐       ┌──────────┐           │
│   │    📝    │       │    🤖    │       │    🎨    │           │
│   │  Draft   │  ──▶  │ Generate │  ──▶  │  Image   │           │
│   └──────────┘       └──────────┘       └──────────┘           │
│                                                                │
└────────────────────────────────────────────────────────────────┘
> Cover slide with workflow
When humans collaborate with AI on visual outputs, there is a control problem: pure text prompts yield inconsistent, unpredictable layouts, while precise formats (JSON, XML) are hard for humans to read and edit.
The missing layer is a human-readable, AI-parseable intermediate representation for layout: what ControlNet's edge maps are to image generation, a mockup is to structured visual content.
ASCII art is visual and intuitive: anyone can sketch and edit it
Clear structure that any LLM can understand and generate
Plain-text diffs make changes easy to track
Define what goes where, and let the AI handle the aesthetics
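Because a mockup is plain text, its structure can be recovered by simple tooling as well as by an LLM. Here is a minimal sketch of that idea; the function name, box-detection strategy, and output shape are illustrative assumptions, not the project's actual implementation:

```python
# Hypothetical sketch: recover labeled boxes from an ASCII mockup.
# Scans for ┌ corners, walks the top border to ┐ and the left
# border to └, then collects the text inside the rectangle.

def find_boxes(mockup: str) -> list[dict]:
    grid = mockup.splitlines()
    boxes = []
    for top, row in enumerate(grid):
        for left, ch in enumerate(row):
            if ch != "┌":
                continue
            # Matching top-right corner on the same row.
            right = row.find("┐", left + 1)
            if right == -1:
                continue
            # Matching bottom-left corner in the same column.
            bottom = next(
                (r for r in range(top + 1, len(grid))
                 if left < len(grid[r]) and grid[r][left] == "└"),
                None,
            )
            if bottom is None:
                continue
            # Interior text between the borders.
            text = " ".join(
                grid[r][left + 1 : right].strip()
                for r in range(top + 1, bottom)
            ).strip()
            boxes.append({"row": top, "col": left, "text": text})
    return boxes

demo = """\
┌──────────┐  ┌──────────┐
│  Draft   │  │ Generate │
└──────────┘  └──────────┘"""

print([b["text"] for b in find_boxes(demo)])  # → ['Draft', 'Generate']
```

The same property is what makes the format diff-friendly: both the human's sketch and the AI's parse operate on the same grid of characters.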
Design principles: WYSIWYG, Intent-Driven, AI-Native
Real-world slides converted to mockup format