Bun’s New --cpu-prof-md Flag Generates AI‑Friendly Markdown Profiling, Prompting a Node.js Response

Bun introduces the --cpu-prof-md flag that outputs CPU profiling data as structured Markdown for large language models, earning praise from Vue creator Evan You and inspiring Node.js core contributor Matteo Collina to release a pprof‑to‑md converter, highlighting a shift toward AI‑oriented CLI tools.


Bun founder Jarred Sumner announced a seemingly modest yet powerful new runtime flag, --cpu-prof-md, which earned an explicit endorsement from Vue.js creator Evan You, who said “this is very good and Node should have this too.” The announcement also prompted Node.js core contributor Matteo Collina to respond overnight.

Traditionally, backend developers profile CPU usage by generating a .cpuprofile file, loading it into Chrome DevTools or a dedicated viewer, and manually inspecting the resulting flame graph to locate hot functions. This multi‑step process is time‑consuming and relies entirely on visual inspection.

With the new flag, developers can simply run:

```shell
bun --cpu-prof-md script.js
```

The command produces a Markdown report designed for large language models (LLMs) rather than a binary file. The report includes three sections:

Top 10 Hotspots – the most time‑consuming functions.

Call Tree – a hierarchical view of function calls.

Function Details – in‑depth analysis of each hotspot.
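The article does not reproduce an actual report. Purely as an illustration of the shape such output could take (the section names come from the article; every function name, number, and layout detail below is invented), a report might look like:

```markdown
## Top 10 Hotspots

| # | Function               | Self time | % of total |
| - | ---------------------- | --------- | ---------- |
| 1 | parseRows (csv.js:120) | 412 ms    | 48.1%      |
| 2 | JSON.stringify         | 140 ms    | 16.3%      |

## Call Tree

- (root) 856 ms
  - main (app.js:3) 854 ms
    - parseRows (csv.js:120) 412 ms

## Function Details

### parseRows (csv.js:120)
Self time 412 ms across 1,203 samples; time dominated by string splitting.
```

Plain headings and tables like these are trivially greppable and cheap in tokens, which is exactly what makes the format LLM-friendly.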

Jarred noted that this format lets LLMs such as Claude read and grep the data easily. Consequently, a developer can copy the Markdown into Claude and ask, “Where is my code slow? How can I improve it?” and receive concrete optimization suggestions based on the precise profiling numbers.

Matteo Collina quickly built pprof-to-md, a tool that converts Node.js's native .cpuprofile (pprof) format into the same LLM‑friendly Markdown. The utility is open‑sourced at platformatic/pprof-to-md on GitHub: https://github.com/platformatic/pprof-to-md

This episode illustrates a broader trend: CLI tools are evolving from outputs aimed solely at humans (colored text, progress bars) or at scripts (JSON, plain text) toward a third category: outputs optimized for AI consumption. Binary profiling files are opaque to LLMs, whereas Markdown serves as a "native language" for these models, enabling token‑efficient, semantically clear data exchange.
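The actual implementation of pprof-to-md is not shown in the article. As a sketch of the general idea, though, converting a V8 .cpuprofile (plain JSON) into a Markdown hotspot table takes only a few lines; the field names (`nodes`, `samples`, `timeDeltas`, `callFrame`) follow the V8 CPU profile format, while the report layout here is invented:

```javascript
// Sketch of the conversion idea: turn a V8 .cpuprofile (plain JSON)
// into a Markdown hotspot table that an LLM can read directly.
function cpuProfileToMarkdown(profile, topN = 10) {
  const totalMicros = profile.endTime - profile.startTime;

  // Each sample attributes one timeDelta (microseconds) of self time to
  // the node that was on top of the stack when the sample was taken.
  const selfTime = new Map();
  profile.samples.forEach((nodeId, i) => {
    selfTime.set(nodeId, (selfTime.get(nodeId) ?? 0) + (profile.timeDeltas[i] ?? 0));
  });

  // Rank functions by self time and keep the top N.
  const rows = profile.nodes
    .map((node) => ({
      name: node.callFrame.functionName || "(anonymous)",
      micros: selfTime.get(node.id) ?? 0,
    }))
    .filter((r) => r.micros > 0)
    .sort((a, b) => b.micros - a.micros)
    .slice(0, topN);

  return [
    `## Top ${topN} Hotspots`,
    "",
    "| Function | Self time (ms) | % of total |",
    "| --- | ---: | ---: |",
    ...rows.map(
      (r) =>
        `| ${r.name} | ${(r.micros / 1000).toFixed(2)} | ${((r.micros / totalMicros) * 100).toFixed(1)}% |`
    ),
  ].join("\n");
}
```

Pasting a table like this into a chat, alongside the relevant source file, gives the model exact numbers to reason about instead of a binary blob it cannot open.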

The article outlines a reimagined debugging workflow:

Your service slows down.

Run node --cpu-prof-md app.js (assuming Node adopts Matteo’s approach).

The terminal emits a Markdown report.

An IDE‑integrated AI assistant (e.g., Cursor, Copilot, Windsurf) parses the report automatically.

The AI replies with actionable advice, such as “the regex on line 45 causes an 80% CPU spike; rewrite it as …”.

This shift moves debugging from manual flame‑graph analysis toward AI‑powered "automatic diagnosis." While Bun is often caricatured as a "fast‑only" runtime, this focus on developer experience (DX) reflects a real change in behavior: developers increasingly rely on AI to read logs and performance data for them.

The rapid community response also demonstrates the vitality of the open‑source ecosystem—no single vendor dominates, and contributions flow freely between projects.

For ordinary developers, the practical takeaway is that future performance optimization may be reduced to a single prompt to an LLM, thanks to AI‑friendly profiling outputs.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: LLM, Node.js, Bun, CPU profiling, AI debugging, CLI tools, pprof-to-md
Written by Node.js Tech Stack, focused on sharing AI, programming, and overseas expansion.