[Image: terminal with an AI prompt showing ai@dev:~$]

A student asked the master: “What is the most powerful interface?” The master opened a terminal and typed nothing. “But you have done nothing,” said the student. “Exactly,” replied the master. “The interface that requires nothing is infinitely powerful.”

What does a good interface look like when AI does most of the work?

Last month, I explored what “good code” might look like if AI becomes the primary reader and writer. The argument was straightforward: if we’re optimizing code for whoever reads it most, and AI increasingly handles that role, our definition of quality might shift.

But there’s a related question worth exploring. If AI becomes the primary actor in development, what happens to the interfaces we’ve built for humans?

Something interesting happened in 2025 that might offer some clues.


The 2025 Convergence

Between February and September 2025, every major AI lab released a command-line coding agent:

  • Anthropic shipped Claude Code in February
  • OpenAI followed with Codex CLI in April
  • Google launched Gemini CLI in June
  • Even Cursor, the company built around an AI-native IDE, added a CLI mode in August

The open source community followed suit. OpenCode emerged as a provider-agnostic alternative, and Charmbracelet released Crush, bringing their signature terminal aesthetics to AI coding agents.

That Cursor example is the tell. Cursor built its entire business around being a visual AI-native IDE. When an IDE company adds a CLI mode, something has shifted in how we think about interfaces.

It’s a terminal renaissance. As TechCrunch reported, “AI coding tools are shifting to a surprising place: The terminal.” The numbers support it. By July 2025, Claude Code alone had attracted 115,000 developers processing 195 million lines of code per week. By November, it reached $1 billion in annualized revenue.

Either all these companies copied each other blindly, or they independently discovered the same constraint. The pattern suggests something about what interfaces work best when AI is doing the heavy lifting.


The Traditional Assumption

A young programmer boasted: “My interface has a thousand buttons.” The old sysadmin replied: “My interface has one prompt. It can do a thousand things.” The young programmer added more buttons. The old sysadmin deleted his prompt and let the machine decide.

We’ve lived by a simple rule for decades: powerful tools require elaborate interfaces.

More capabilities meant more buttons, more menus, more visual scaffolding. IDEs sprouted panels for debugging, refactoring, version control, testing. Each feature demanded screen real estate. The logic seemed sound: human cognition needs help managing complexity, and visual interfaces offload that burden onto the screen.

This framework shaped software design: more power required more visual structure.

But what if the audience changes?


The Thesis

Master Foo was asked: “How do I make my program more intelligent?” Master Foo deleted half the code. The student protested: “But now it does less!” Master Foo smiled: “No. Now it understands more.”

Here’s the pattern I’m observing: as AI becomes more capable of handling complex tasks, the interfaces wrapping those capabilities tend to simplify.

The sophistication didn’t disappear. It migrated.

When AI handles the complex work, the interface can become simpler because the intelligence no longer resides primarily in the user’s head. You don’t need elaborate visual menus when you can describe your intent in plain language and have AI figure out the execution details.

This might seem counterintuitive. We built GUIs specifically to make technology more accessible. But maybe that accessibility was solving for a different constraint: the gap between human intent and machine capability. If AI can bridge that gap through natural language, the elaborate visual scaffolding we built to help humans navigate complexity becomes less essential.

At least, that’s the theory.


Why CLI Might Work Better for Agents

Several technical arguments suggest why command-line interfaces might be well-suited for AI coding agents.

Text is the native modality

For current language models, text remains the most efficient way to communicate. Every GUI interaction requires translation: button clicks map to functions, visual states encode into data structures, spatial arrangements convert to logical operations.

The terminal skips that translation layer. Natural language intent flows directly into text commands. When you describe what you want to an AI agent, that description is already in the format the model works best with.

Low-level stack access

Warp founder Zach Lloyd put it well: “The terminal occupies a very low level in the developer stack, so it’s the most versatile place to be running agents.”

A shell agent has direct access to the entire system. File operations, process management, network calls, package installation. The terminal doesn’t impose constraints on what operations are possible. IDE-based coding agents, by contrast, work within the boundaries their extension APIs define.

Unix philosophy alignment

A pipe dreams of water. A Unix pipe dreams of text. Both carry what flows through them without judgment. The wise programmer builds pipes, not dams.

Fifty years ago, Unix established principles that turn out to be well-suited for AI agents:

  • Programs should do one thing well
  • The output of every program can become input to another
  • Prefer text streams as the universal interface

An AI agent in a terminal can pipe outputs between tools, chain utilities together, and leverage decades of battle-tested commands (grep, awk, sed, git, curl) without reinventing them. This composability is harder to achieve through GUI-based toolchains where each integration is a separate island.
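As a minimal illustration of that composability (the file names are invented for the example), an agent can chain standard utilities instead of reimplementing any of them:

```shell
# Text streams as the universal interface: each tool does one job,
# and the output of one becomes the input of the next.
# Filter a file listing to the entries mentioning "auth", sort them,
# and count the two matches:
printf 'auth.py\nmain.py\nauth_test.py\n' | grep 'auth' | sort | wc -l
```

Swap any stage and the rest of the pipeline is unaffected; that substitutability is exactly what GUI toolchains struggle to offer.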

End-to-end development coverage

Writing code is only part of software development. Deployment, testing, infrastructure, version control all happen through command-line tools. A terminal-based agent can handle the entire workflow without context-switching between visual interfaces.

Scriptable and embeddable

This might be the most underappreciated advantage. The same CLI agent you use interactively can be embedded into scripts, pipelines, and automations. Just as you use sed, awk, or grep both ad-hoc and programmatically, CLI coding agents fit naturally into CI/CD pipelines, pre-commit hooks, or batch processing workflows.

Many of these tools support non-interactive mode for exactly this reason. Pass a prompt as a command-line argument, get structured output back. Your coding agent becomes another composable utility in your toolchain, not a separate application requiring human interaction.
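A sketch of what that embedding can look like, with the agent command left as a configurable placeholder. `claude -p` (Claude Code's non-interactive print mode) is one real example, but any agent that accepts a prompt and writes to stdout would fit:

```shell
# Sketch: treat a coding agent as just another pipeline utility.
# AGENT_CMD is a placeholder so the wrapper isn't tied to one vendor;
# it defaults to Claude Code's print mode if nothing else is set.
run_agent() {
  # $1 is the prompt; any piped-in text becomes the agent's context.
  ${AGENT_CMD:-claude -p} "$1"
}

# Example embedding, e.g. inside a pre-commit hook (illustrative only):
# git diff --cached | run_agent "Review this diff; reply PASS or FAIL."
```

The point is the shape, not the flags: once the agent reads text and writes text, it composes with everything else in the Unix toolbox.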


The Intent Clarity Argument

This is the part I find most compelling for my own workflow.

GUI interfaces communicate intent through indirect means: clicking buttons, selecting menu items, dragging elements, navigating visual hierarchies. Each interaction requires the system to interpret what the user meant based on where they clicked and what state they were in.

CLI interfaces communicate intent through explicit text. There’s no ambiguity about what git commit -m "fix auth bug" means. The intent is the command.

When AI interprets your intent, explicit text is unambiguous. Describing what you want in natural language maps directly to CLI interactions because both use text as the medium. There’s no translation from “I clicked this button” to “I want this action.”

The terminal becomes what one developer called a “universal interface” - a common language that can interact with any tool, any system, any workflow. When AI is mediating between human intent and system action, that universality matters.


When Non-Coders Use the Terminal

A lawyer approached the terminal with fear. “I am not a programmer,” she said. The terminal replied: “Neither am I. Tell me what you need.” She typed her request in plain English. The code wrote itself. Who was the programmer?

Here’s something that would have seemed absurd five years ago: non-technical professionals are building their own automation systems. No engineers. No tickets filed with IT. Just knowledge workers armed with CLI agents, solving their own problems.

This became concrete for me last week at the inaugural Claude Code meetup in my city. I expected a room full of developers. Instead, I found a surprising number of non-technical people sharing their experiences using Claude Code.

The pattern is emerging across industries. At Anthropic itself, surprising use cases emerged: lawyers built phone tree systems to help team members connect with the right attorney, marketers generated hundreds of ad variations in seconds, and data scientists created complex visualizations without knowing JavaScript. As Anthropic observed, “agentic coding isn’t just accelerating traditional development. It’s dissolving the boundary between technical and non-technical work.”

Guides now exist for non-technical professionals - product managers, designers, and executives - using Claude Code for tasks that have nothing to do with software development. One product manager describes it simply: “The key is to forget that it’s called Claude Code and instead think of it as Claude Local or Claude Agent.”

These aren’t technical people dabbling in low-code platforms. They’re professionals working directly in terminals, describing what they want in plain English and watching the code write itself.

If this pattern holds more broadly, it suggests something about the relationship between capability and convenience. The terminal’s friction didn’t disappear. Those permission dialogs weren’t false obstacles. The learning curve remained real. But the results mattered more than interface comfort.

The traditional wisdom held that simpler interfaces democratize technology. But capability trumps convenience. When AI removes the problem-solving barrier, terminal friction becomes an acceptable trade-off. The hard part was never the interface. It was understanding the domain, architecting solutions, debugging edge cases. AI handles that complexity. What remains is a simple text exchange: you describe what you want, the AI executes, you see the results.

The terminal barrier doesn’t dissolve because interfaces got easier. It dissolves because the real barrier disappeared.


Connecting to the November Thesis

In my previous post, I argued that code readability might matter less if AI becomes the primary reader and writer. The same logic applies one layer up.

When AI is the primary actor in development:

  • Human-oriented interface complexity becomes friction, not assistance
  • The elaborate visual metaphors we built to help humans navigate complexity may not help AI at all
  • Optimizing for whoever uses the interface most frequently starts to favor machine-friendly formats

The CLI could be thought of as another “intermediate representation” - like minified JavaScript or compiled binaries. Something we don’t need to fully understand because we trust the toolchain handling it.

This might be the same pattern at a different layer of the stack. In November, I speculated about code. Here, I’m speculating about the interfaces around code. The underlying question is identical: who are we optimizing for?


The Transparency Trade

The novice hides his work, fearing judgment. The journeyman shows his work, seeking approval. The master’s work shows itself, requiring neither. When AI works in silence, who watches the watcher?

There’s another argument worth considering. When AI has agency, when it can modify files, run commands, install packages, you need visibility into what it’s doing.

Terminal output provides that visibility naturally. Every command appears on screen. Every operation leaves a trace. You can watch the agent work: “searching for authentication logic… found three matches… reading auth.py… modifying line 47…”

IDE copilots often hide this process. Suggestions appear. Code materializes. You might see a diff, but you don’t see the reasoning that produced it. The AI’s decision-making stays behind the suggestion.

Terminal verbosity might be a feature when AI has agency, not a bug. Every action is visible, every command recorded. The transparency is built into the medium itself.


The Risks

A student declared: “The command line has won!” The master said: “Show me this ‘command line’ you speak of.” The student pointed to the terminal. The master closed it. “Where is your victory now?” The student was enlightened: tools do not win. Problems get solved.

I should be clear about what could go wrong if this pattern continues.

Transition period danger

The most concerning scenario is AI that’s reliable enough to trust but not reliable enough to verify. If we shift to CLI-based workflows where humans aren’t reading every line of AI-generated code, we need that AI to be genuinely trustworthy. We’re not there yet.

Premature trust in terminal-based agents, combined with reduced human oversight, could lead to accumulated errors that are hard to catch and expensive to fix.

Skill concentration

If CLI agents become the dominant way to build software, the broad human ability to work with sophisticated visual development tools could shrink. Debugging expertise becomes more specialized. When AI fails or produces novel bugs, organizations need people who can work from first principles, and there might be fewer of them.

Platform dependence

If 2-3 providers dominate CLI coding agents, they effectively control a critical part of the software development toolchain. Pricing power, feature decisions, and policy choices become centralized. This is a concentration risk worth watching.


Where This Might Lead

If this pattern continues, we might see something like this:

Short term: CLI agents remain power user tools. Most developers stick with IDE-based workflows. The terminal renaissance stays an enthusiast phenomenon.

Medium term: CLI-first workflows become standard for AI-assisted development. Organizations build processes around agents that work through text interfaces. Visual IDEs remain for tasks where spatial reasoning matters (design, visualization, complex debugging).

Long term: Mike Merrill, co-creator of Terminal-Bench, made a bold prediction: “Our big bet is that there’s a future in which 95% of LLM-computer interaction is through a terminal-like interface.”

I don’t know if that’s right. But it’s a hypothesis worth tracking.


The Bottom Line

Before enlightenment: chop wood, carry water, click buttons. After enlightenment: chop wood, carry water, type commands. The work remains. Only the interface changes. Or does it?

Maybe I’m wrong about all of this.

Maybe IDE-based coding agents will adapt, adding capabilities that make visual workflows just as efficient. Maybe multimodal models will get good enough at interpreting visual interfaces that the text advantage disappears. Maybe the domains where GUIs remain necessary (design, data visualization, collaborative work) will prove to be the majority of what developers actually do.

But the convergence is hard to ignore. Every major AI lab arrived at the same interface architecture within six months. They did so despite having different corporate cultures, different technical stacks, different strategic priorities.

The pattern suggests something about what works when AI becomes the primary actor in a workflow. And it might be the same principle from November, applied to a different layer: when we ask “what’s a good interface?”, the answer depends on who we’re optimizing for.

If AI is increasingly the one doing the work, interfaces that communicate intent clearly to AI might matter more than interfaces that visualize state beautifully for humans.

Same question as before. Different layer of the stack. Who is the audience?