
Intent Alignment: The Overlooked Core Engineering Capability in the AI Coding Era

Examines the critical importance of intent alignment between humans and AI in the AI coding era. Analyzes how ambiguous natural language, insufficient context, lack of verification loops, and excessive context lead to the gap between intent and results under the Vibe Coding and Harness Engineering paradigms, and provides practical recommendations.

Introduction: When “Code” No Longer Equals “Intent”

For decades of software engineering practice, “our code” was the direct embodiment of “our intent.” We designed architectures, defined interfaces, wrote logic, and debugged edge cases ourselves. Every line of code was a concrete manifestation of thought. Even when miscommunication occurred, it happened primarily between people. Once coding began, intent was firmly anchored in the code itself.

Traditional “intent alignment” typically stopped at ensuring shared understanding of requirements. But today, with emerging paradigms like Vibe Coding and Harness Engineering sweeping through development workflows, that equation has been completely broken.

Code is no longer the carrier of intent. It is the AI’s “translation” of your intent.

This shifts the challenge of intent alignment from “person to person” to “person to AI.” And the bar is higher, with far less room for error. You can no longer rely on the fuzzy tacit understanding of “figure it out as you go” or “code as documentation.” You must convey the following to the AI with near 100% precision:

  • Execution order: What comes first? What comes next? Which steps cannot run in parallel?
  • Technology choices: Which framework? Which dependency versions? Are new libraries allowed?
  • Architecture constraints: Where are the module boundaries? What areas are off-limits? How is data flow restricted?
  • Development approach: Standard development, or test-driven development (TDD)?
  • Context assets: Where is the knowledge base? What are the existing interface contracts?
  • Environment limits: Runtime resource caps? Network policies? Security compliance red lines?
  • Verification criteria: What counts as “done”? Which paths must tests cover? How are failure semantics defined? Is E2E required?

If any of these is missing, the AI will fill the gap with “common patterns” from its training data and make inferences that are reasonable but wrong. This is exactly what drew the harshest criticism of early Vibe Coding: acting on assumptions and making decisions without authorization. These missing details are the root cause of the gap between intent and results.

The Intent-Result Gap: Many Factors Lead to Poor Output

In Vibe Coding practice, we often fall into an illusion: as long as we “say” what we want, the AI will “understand” and “get it right.” But the reality is: intent ≠ instruction, instruction ≠ implementation, implementation ≠ correctness.

The Ambiguity of Natural Language

Natural language excels at conveying emotion and direction, but it severely lacks the precision engineering requires. After all, natural language evolved for efficient “person-to-person communication.” Nobody wants to describe every detail.

“Optimize performance,” “make the code cleaner,” “handle it more robustly.” These phrases are common in human collaboration, but they are disaster starting points in human-machine interaction. The AI cannot sense your subtext. It can only probabilistically select the “most likely” interpretation from its training data. And that interpretation often diverges sharply from your actual intent.

Insufficient Context

The AI knows nothing about your system unless you explicitly tell it:

  • Which services are on the critical path, and which are peripheral features?
  • What do fields mean? Which ones cannot be modified? Where are the forbidden zones?
  • What is the RPC call path for this interface?
  • After cache expiration, should we degrade gracefully or return an error?
  • Should cache TTL include a 20% random jitter to prevent cache stampedes?

If these pieces of “organizational common sense” are not explicitly injected into the context, the AI will fill them in with generic patterns from its training data. The result is often wrong degradation strategies, unauthorized data access, or interface changes that conflict with existing contracts. This pollutes the architecture and plants hidden bugs.
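The last bullet is exactly the kind of detail worth spelling out, because “add caching” alone will usually produce a fixed TTL. A minimal sketch of what “TTL with 20% random jitter” means in practice (the function name and the ±20% interpretation are my assumptions, not from this post):

```python
import random

def ttl_with_jitter(base_ttl_seconds: float, jitter_ratio: float = 0.2) -> float:
    """Return the base TTL with up to ±jitter_ratio random variation.

    Spreading expiration times out keeps keys that were written together
    from all expiring at the same moment (a cache stampede).
    """
    return base_ttl_seconds * (1 + random.uniform(-jitter_ratio, jitter_ratio))

# A 300-second base TTL lands somewhere in [240, 360].
print(ttl_with_jitter(300))
```

A rule this small still has to live in the context, or in a team convention document the AI is told to read; it cannot be inferred from the code alone.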

No Feedback, No Clarification, No Investigation, No Verification

“Ambiguous natural language” and “insufficient context” are not the most dangerous problems. What is truly risky is the lack of a human-AI collaborative verification loop. By default, current AI does not proactively report uncertainty, nor does it independently browse knowledge bases. All of this requires the developer to actively design the interaction flow:

  • Mandatory clarification: When something is ambiguous or there are multiple valid approaches, the AI must ask us instead of deciding on its own.
  • Context completion: The information we provide is inevitably incomplete. The AI must be able to investigate designated locations on its own.
  • Self-verification: With sufficient information, can the AI connect the dots and form a coherent solution? It needs the ability to self-verify.
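The first rule, mandatory clarification, can be made mechanical. As a sketch only (the names and required fields below are hypothetical, chosen to mirror the constraints listed earlier), the harness refuses to proceed until the spec is complete, returning questions instead of guesses:

```python
from dataclasses import dataclass, field

# Hypothetical required fields, standing in for the kinds of constraints
# discussed above (technology choices, verification criteria, etc.).
REQUIRED_FIELDS = ["framework", "test_strategy", "done_criteria"]

@dataclass
class TaskSpec:
    description: str
    details: dict = field(default_factory=dict)

def clarify_or_proceed(spec: TaskSpec) -> dict:
    """Return clarification questions if the spec is incomplete;
    otherwise signal that implementation may start."""
    missing = [f for f in REQUIRED_FIELDS if f not in spec.details]
    if missing:
        # Mandatory clarification: ask instead of deciding on the user's behalf.
        return {"action": "ask",
                "questions": [f"Please specify: {f}" for f in missing]}
    return {"action": "proceed"}

vague = TaskSpec("optimize performance")
print(clarify_or_proceed(vague)["action"])  # prints "ask"
```

The point is not this particular check, but the shape of the loop: incompleteness is detected before any code is written, not discovered in a wrong implementation afterward.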

Context Length and the “Corruption” That Produces Hallucination

We said earlier that insufficient context leads to misaligned understanding. But excessively long context can cause the same problem. Large models have finite context windows and finite information processing capacity. As information volume grows, the AI’s attention mechanism starts to lose focus. Key constraints from early in the conversation get diluted. Later additions get over-emphasized. You will notice a clear pattern: the rules you told it first are “forgotten” by the end. Meanwhile, excess context forces the model to compress, which inevitably introduces further information distortion.

Control the signal-to-noise ratio. What we need to do:

  • Context information: precision > quantity
  • Structured descriptions, not free-form dumps
  • Keep the focus on what matters for the task at hand

Practice: A Baseline Specification

We should first establish a baseline specification in our context. You can even add this to your skill configuration:

  1. If requirements are ambiguous, ask first. Do not add tasks the user never requested based on “common practice.”
  2. If a description allows multiple interpretations, clarify before proceeding. Do not guess on the user’s behalf.
  3. If 50 lines solve the problem, do not write 200. Avoid over-encapsulation and over-abstraction.
  4. Make localized changes only. Do not start with sweeping refactoring. Do not turn a small task into major surgery.
  5. If a task is highly complex, implement it in steps. Keep total context volume under control to avoid “corruption.”

For more on the paradigm shift from prompting to engineering constraints, see my post on Vibe Coding: Harness Engineering. To understand how context rot affects AI coding sessions, check out my experience with GSD. For practical Vibe Coding workflows, see Git Worktree in the Vibe Coding Era. And for spec-driven development in practice, read my OpenSpec experience.