My AI Adoption Journey: Lessons from the Terraform and Ghostty Creator

The author, Mitchell Hashimoto—co‑founder of HashiCorp and creator of Terraform and Ghostty—shares a step‑by‑step, candid account of adopting AI agents, detailing six phases from abandoning chatbots to continuously running agents, the concept of “harness engineering,” and practical insights on when and how to integrate AI into a developer workflow.


Background

Mitchell Hashimoto, co‑founder of HashiCorp and one of the core authors of Terraform, has shifted his focus to Ghostty, a GPU‑accelerated terminal emulator written in Zig. He draws on this experience to introduce the term harness engineering, a phrase later popularized by OpenAI and other industry voices.

Step 1: Abandon Chatbots

Hashimoto advises stopping the use of chat‑based AI (e.g., ChatGPT, Gemini) for serious coding tasks because the interaction is inefficient: the model often produces incorrect output that must be manually corrected. He illustrates this with a personal anecdote where Gemini reproduced a SwiftUI command‑panel UI for Ghostty, but subsequent attempts on larger brownfield projects resulted in poor, copy‑paste‑heavy workflows.

He defines an agent as an LLM that runs in a loop and can invoke external actions such as file I/O, program execution, and HTTP requests.
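That definition can be made concrete with a minimal sketch of such a loop. The model is stubbed out here and all names (`stub_model`, `TOOLS`, the message shape) are illustrative, not drawn from any specific agent product:

```python
# Minimal sketch of an agent loop: an LLM (stubbed below) either answers
# directly or requests a tool call; the loop runs the tool and feeds the
# result back into the conversation until the model says it is done.
import json

# Stand-ins for real external actions (file I/O, program execution, HTTP).
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
}

def stub_model(messages):
    """Stand-in for an LLM API call: asks for one tool, then finishes."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "README.md"}}
    return {"answer": "done: summarized README.md"}

def run_agent(prompt, model=stub_model, max_steps=10):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if "answer" in reply:            # model chose to stop looping
            return reply["answer"]
        tool = TOOLS[reply["tool"]]      # model requested an external action
        result = tool(**reply["args"])
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    raise RuntimeError("agent did not finish within max_steps")
```

The loop, not the model, is what distinguishes an agent from a chatbot: the model's output drives further actions instead of terminating the exchange.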

Step 2: Replicate Your Own Work

He had an agent redo tasks he had already completed manually, comparing the agent’s output against his own without showing it the reference solution. This painful iteration revealed the agent’s strengths and weaknesses, emphasizing the importance of breaking conversations into clear, actionable tasks, separating planning from execution, and giving the agent a self‑validation mechanism.

Split conversations into discrete, executable tasks.

Separate “planning” and “execution” sessions for vague requirements.

Provide agents with a way to verify their own results to avoid regressions.
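The third point, self-verification, amounts to giving the agent one command it can run after every change and act on the result. A hypothetical sketch of such a hook (the `verify` name and output shape are illustrative, not from the original post):

```python
# Hypothetical self-validation hook: instead of trusting the agent's claim
# that its change works, give it a single check command to run; it gets back
# a pass/fail flag plus the tail of the output to reason about failures.
import subprocess
import sys

def verify(cmd):
    """Run a check command; return (ok, summary) the agent can act on."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    ok = proc.returncode == 0
    tail = (proc.stdout + proc.stderr).strip().splitlines()[-3:]  # keep last lines only
    return ok, "\n".join(tail)
```

In practice `cmd` would be the project's test or build invocation; truncating the output keeps the feedback small enough to feed back into the agent's context.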

He notes that real efficiency gains come from recognizing when not to use an agent.

Step 3: End‑of‑Day Agents

He allocated the last 30 minutes of his workday to launch one or more agents, using them for deep‑research sessions, exploratory ideas, and issue/PR triage. For triage, he wrote scripts that invoked gh (GitHub CLI) to let agents sort issues, but he disabled direct replies, requiring a daily report instead.
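The triage setup described above can be sketched as a thin wrapper around the gh CLI: the agent may list and inspect issues, but reply commands are blocked, and findings accumulate into a report for the next day. The function names and allow-list are assumptions for illustration, not Hashimoto's actual scripts:

```python
# Illustrative read-only triage harness: the agent can call gh to list and
# view issues, replying is disabled, and results go into a daily report
# instead of live comments.
import subprocess

READ_ONLY = {"list", "view"}  # gh issue subcommands the agent may use

def gh_issue_cmd(subcommand, *args):
    """Build a gh invocation, refusing anything that would post a reply."""
    if subcommand not in READ_ONLY:
        raise PermissionError(f"gh issue {subcommand} is disabled for the agent")
    return ["gh", "issue", subcommand, *args]

def run(cmd):
    """Execute a prepared gh command and return its stdout."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def daily_report(findings):
    """Collect triage findings into a report for human review."""
    return "Triage report\n" + "\n".join(f"- {item}" for item in findings)
```

Separating command construction from execution makes the allow-list easy to test and keeps the human as the only party who ever posts to the issue tracker.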

He stresses avoiding continuous agent loops to prevent disruptive notifications and to keep the human in control of when to interrupt the agent.

Step 4: Outsource the “Sure‑Things”

Having identified tasks where agents reliably produce near‑perfect solutions, he began delegating those tasks entirely, while he focused on higher‑value work. He also turned off desktop notifications to reduce context‑switch costs.

Step 5: Polish Your Harness

He formalizes the practice of harness engineering: whenever an agent repeats a mistake, he designs a fix that prevents the error from recurring. This takes two forms:

Improved implicit prompts (e.g., updating AGENTS.md with corrective instructions).

Concrete tools such as custom scripts for screenshots or filtered test runs, referenced in AGENTS.md so the agent knows they exist.
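A hypothetical example combining both forms: a filtered test runner the agent can call, plus the AGENTS.md entry that tells it the tool exists. The script name, the snippet wording, and the use of pytest are assumptions for illustration:

```python
# Hypothetical "concrete tool" harness: a filtered test runner so the agent
# re-runs only tests matching a keyword instead of the whole (slow) suite.
# AGENTS.md references the script so the agent knows it exists.
import sys

def filtered_test_cmd(keyword):
    """Build a pytest invocation limited to tests matching `keyword`."""
    return [sys.executable, "-m", "pytest", "-k", keyword, "-q"]

# Companion instruction placed in AGENTS.md (wording is illustrative):
AGENTS_MD_SNIPPET = """\
## Testing
- Do NOT run the full test suite; it is slow.
- Use `scripts/test-filter.py <keyword>` to run only relevant tests.
"""
```

The two pieces work together: the script removes the expensive behavior, and the AGENTS.md line makes the agent reach for it without being told each session.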

He continuously refines these harnesses as he observes “stupid” agent behavior.

Step 6: Keep an Agent Running

His goal is to have at least one agent active at all times, asking himself, “Is there something the agent can do right now?” He prefers a single, well‑managed agent over multiple concurrent ones, balancing deep, slower models (e.g., Amp’s deep mode) with practical productivity.

Conclusion

Through six iterative steps, Hashimoto demonstrates measurable efficiency improvements, a clearer understanding of AI’s capabilities and limits, and a disciplined workflow that treats AI as a tool rather than a magical solution. He emphasizes that the real value lies in knowing when to involve an agent, continuously refining harnesses, and maintaining human oversight.

Tags: Terraform, AI adoption, software productivity, Agent Engineering, Harness Engineering, Ghostty
Written by Architecture Musings

When the AI wave arrives, it feels like we've reached the frontier of technology. Here, an architect records observations and reflections on technology, industry, and the future amid the upheaval.
