Stop wasting tokens. Feed your LLM exactly what it needs.

One command. Every file. Any AI model.


One tool. Zero overhead.

No servers. No databases. No API keys. Just your code — packaged and ready for any AI.

Zero Infrastructure

No servers, no vector databases, no API keys. Run it once, get a file. That’s it. Zero cost, zero setup, zero maintenance.

$0

Infrastructure cost

Optimized for LLMs

Strips repetitive code, adds metadata, and generates token-optimized reports. Split huge codebases into chunks with --chunk-size.

~68%

Token reduction
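For very large codebases, the report can be split into model-sized pieces. Only the --chunk-size flag comes from the feature list above; the value and path below are illustrative, so check zigzag --help for the exact units:

```shell
# Scan a repo and split the output into chunks that fit a model's
# context window. The chunk value and path are illustrative.
zigzag --chunk-size 100000 ./my-monorepo
```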

Live Context with Watch Mode

OS-level file events update your report the moment you save. Your LLM always has the latest code.
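A minimal sketch of watch mode. The exact flag name is an assumption (the page only describes the behavior), so verify it with zigzag --help:

```shell
# Rebuild reports on every save via OS-level file events.
# --watch is assumed here; the path is illustrative.
zigzag --watch ./src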

Works with Any LLM

ChatGPT, Claude, Gemini, local models. One file, every AI.

Multi-Root Projects

Frontend, backend, docs — scan them all in one run. Passing multiple paths generates an individual report per root, plus a unified combined.html dashboard.
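A multi-root invocation might look like this. The directory names are hypothetical; the per-root reports and the combined.html dashboard are the documented behavior:

```shell
# Scan several project roots at once: one report per root,
# plus a unified combined.html dashboard.
zigzag ./frontend ./backend ./docs
```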

HTML Dashboard

Interactive single-file dashboard with charts, virtual-scroll source viewer, and syntax highlighting. No server required.

Upload to ZagForge

Push your scan result to ZagForge with --upload. Authenticates via ZAGFORGE_API_KEY or ~/.zagforge/credentials.
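A sketch of the upload flow, assuming --upload accepts the same arguments as a normal scan (the key value is a placeholder; ZAGFORGE_API_KEY and the ~/.zagforge/credentials fallback are from the description above):

```shell
# Authenticate via environment variable (falls back to ~/.zagforge/credentials),
# then push the scan result to ZagForge.
export ZAGFORGE_API_KEY="<your-key>"
zigzag --upload
```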

Built with Zig

Extreme performance for massive codebases. Run zigzag bench for a per-phase timing breakdown.
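The built-in benchmark mentioned above is a single subcommand:

```shell
# Print a per-phase timing breakdown of a scan.
zigzag bench
```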

What happens when you run ZigZag.

Live Preview
Interactive Dashboard
Single-file HTML with charts, virtual-scroll source viewer, and PrismJS syntax highlighting. No server required.

0.8s

3,763 files scanned

Lightning-Fast Scanning
Built with Zig for extreme native performance. Scans your entire codebase in under a second.
.md
.llm.md
.html
.json

4 formats, one pass

Multiple Output Formats
Markdown, token-optimized LLM reports, JSON, and an interactive HTML dashboard — all in one pass.
Live Watch Mode
OS-level file events rebuild your reports the instant you save. Always-fresh context.
Works with Any AI
One file, every model. Drop into ChatGPT, Claude, Gemini, Llama, or any local LLM.

Why not just copy-paste?

You could. But here's what you'd be missing.

Compared to copy-paste and RAG / vector-DB setups, ZigZag checks every box:

Full codebase context
Zero infrastructure
Token-optimized output
Stays in sync on save
Works offline

Setup time: copy-paste takes minutes, a RAG / vector DB takes hours, ZigZag takes seconds.
Cost: copy-paste is free, RAG / vector DB is $$$, ZigZag is free.

Get started in 10 seconds.

Works on macOS, Linux, and Windows.

macOS / Linux

Via the Homebrew package manager

Recommended

Step 1 — Tap the repository

brew tap LegationPro/zigzag

Step 2 — Install

brew install zigzag

Questions? Answers.

Everything you need to know before getting started.