
My First Experience with GSD: Solving the Context Rot Problem in AI Coding

A deep dive into the GSD (get-shit-done) specification-driven development system. Learn how it tackles the context rot problem through fine-grained phased workflows and subagent mechanisms. A backend developer's perspective on why this approach resonates with modular design principles.

Introduction: The Context Rot Dilemma

Recently, while exploring AI-assisted coding tools, I discovered GSD (get-shit-done), a specification-driven development system. The name itself is refreshingly direct—no pretense, just getting things done. But what really caught my attention was its approach to tackling context rot.

If you’ve been using AI coding assistants for a while, you’ve probably noticed this phenomenon: conversations start off great—the AI’s responses are precise and helpful. But as the session progresses and context accumulates, quality starts degrading noticeably. This isn’t a model limitation; it’s what researchers call “context rot.”

Reports from practitioners and long-context benchmarks suggest that once a single session's context fills roughly half the model's window, output quality declines noticeably. It's like a developer juggling too many details in their "working memory": information overload inevitably affects judgment and creativity.

Traditional AI coding tools often use a single large context approach, stuffing all requirements, designs, and code into one long conversation. When facing complex projects, this quickly hits the context bottleneck. GSD was designed specifically to address this problem.
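As a rough illustration, the heuristic amounts to watching a context budget. A minimal sketch follows; the 200k-token window and the 50% threshold here are illustrative assumptions, not GSD parameters:

```python
def context_usage(tokens_used: int, window_size: int = 200_000) -> float:
    """Fraction of the model's context window currently consumed."""
    return tokens_used / window_size

def should_hand_off(tokens_used: int, threshold: float = 0.5) -> bool:
    """Past roughly half the window, quality reportedly degrades, so a
    phased workflow would hand off to a fresh, narrowly scoped session."""
    return context_usage(tokens_used) > threshold

print(should_hand_off(120_000))  # True: a long session well past the halfway mark
print(should_hand_off(40_000))   # False: plenty of headroom left
```

This is the intuition behind splitting work across isolated sessions rather than letting one conversation grow without bound.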

GSD’s Core Feature: Fine-Grained Workflows

As a backend developer, I’ve always valued “granularity” as a concept. Good system design should be modular—each module with clear responsibilities and well-defined boundaries. GSD’s design philosophy aligns perfectly with this: fine-grained phased workflows and dedicated subagent execution.

GSD decomposes the entire development process into multiple phases, each with clear goals and boundaries. Each phase is handled by specialized subagents that focus on their specific domain—research, planning, execution, and verification.

This design offers several clear advantages:

Context Isolation. Each subagent only handles context relevant to its phase, avoiding interference from other stages. It’s similar to service isolation in microservices architecture—each service focuses on its own business logic.

Clear Responsibilities. The researcher doesn’t need execution details; the verifier doesn’t need planning decisions. This separation of concerns ensures high-quality output at every step.

Traceability. Each phase produces clear artifacts—research documents, plan documents, verification reports. These aren’t just development records—they provide clear references for future maintenance and iteration.

For someone accustomed to backend thinking, this controllable granularity is highly appealing. It allows me to “design” the development process like designing a system, rather than passively following the AI’s pace.
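To make the microservices analogy concrete, here is a minimal sketch of what context isolation between subagents looks like. The naming and structure are my own for illustration, not GSD's actual API: each phase function receives only the artifact produced by the previous phase, never the full conversation.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    """The only thing a phase hands to the next one."""
    phase: str
    content: dict

def research(goal: str) -> Artifact:
    # The researcher sees the goal and nothing else.
    return Artifact("research", {"requirements": ["secure RNG", "length option"]})

def plan(research_doc: Artifact) -> Artifact:
    # The planner sees the research artifact, not how it was produced.
    steps = [f"implement: {r}" for r in research_doc.content["requirements"]]
    return Artifact("plan", {"steps": steps})

def execute(plan_doc: Artifact) -> Artifact:
    # The executor sees the plan, not the planning discussion.
    return Artifact("build", {"completed": plan_doc.content["steps"]})

def verify(build_doc: Artifact) -> Artifact:
    # The verifier sees the build output, not the execution details.
    return Artifact("verify", {"passed": bool(build_doc.content["completed"])})

report = verify(execute(plan(research("password generator"))))
print(report.content["passed"])  # True
```

Because every handoff is a small, explicit artifact, no phase ever needs the accumulated history of all the others, which is exactly what keeps each subagent's context small.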

Hands-on Experience: Building a Password Generator

After understanding GSD’s basic philosophy, I decided to test it with a simple project—a password generator. This choice was deliberate: small enough to validate GSD’s workflow quickly, yet with enough functionality to demonstrate how different phases collaborate.

The development followed GSD’s standard flow. First, the research phase—a research agent explored common password generator requirements, security standards, and implementation approaches. It produced detailed documentation covering password strength definitions, UI design considerations, and more.

Next, the planning phase. A planning agent used the research documents to create a development plan, breaking the project into phases: UI design, password generation logic, security validation. Each phase had defined inputs, outputs, and acceptance criteria.

The execution phase was where things got practical. GSD’s execution agent implemented features step by step, triggering a verification agent after each phase completion. It felt like a CI/CD pipeline—automated testing at every checkpoint.

What impressed me most: I didn’t need much intervention. I stated my requirements at the start, and GSD automatically drove the entire process. It felt like directing a well-trained development team—state the goal, and the team handles the rest.

The result was a fully functional password generator—clean UI, clear logic. But more importantly, I clearly saw how GSD’s workflow transforms a vague idea into a concrete product.
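GSD's actual output isn't reproduced here, but the core generation logic of such a project is small enough to sketch. The function name and character set below are my own choices, assuming the security-standards research would point at a cryptographically secure random source:

```python
import secrets
import string

def generate_password(length: int = 16, use_symbols: bool = True) -> str:
    """Build a password from a CSPRNG-backed character pool."""
    alphabet = string.ascii_letters + string.digits
    if use_symbols:
        alphabet += "!@#$%^&*"
    # secrets.choice draws from the OS entropy pool; the ordinary random
    # module would not be suitable for credentials.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(generate_password()))  # 16
```

Even for a toy like this, the research phase earns its keep: "use `secrets`, not `random`" is precisely the kind of requirement a dedicated research step surfaces before any code is written.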

GSD vs. OpenSpec: Nuances in Specification Development

During my exploration, I also reflected on my previous experience with OpenSpec. Both are specification-driven approaches, but GSD left a deeper impression regarding specification rigor and granularity.

If you’re interested in spec-driven development, check out my previous post on OpenSpec in OpenCode.

OpenSpec's philosophy is that developers write detailed specification documents and then have the AI generate code from them. This gives the specification writer control, but writing good specifications itself takes significant time and expertise, and any gaps or ambiguities in the spec propagate straight into the generated code.

GSD takes a different strategy. Rather than requiring developers to pre-write complete specifications, it uses multiple collaborating agents to progressively derive comprehensive specifications. Research agents gather requirements and technical context, planning agents create implementation plans, execution agents implement, and verification agents ensure quality. Each agent supplements and refines specifications while completing its work.

The benefit: specifications are generated dynamically and iteratively. You don't need to anticipate every detail upfront; the system discovers and fills specification gaps as execution proceeds. It's closer to the "emergent design" of agile development than to the up-front design of waterfall.

Additionally, GSD is stricter on specification verification. Each phase’s output becomes input for subsequent phases, creating an automatic verification mechanism. If specifications are incomplete or erroneous, downstream agents encounter problems, exposing specification issues early.
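A sketch of what that handoff discipline might look like follows. This is again my own illustrative code, not GSD internals: each agent returns an enriched copy of the spec, and a downstream agent validates what it receives before doing any work, so gaps surface at the handoff rather than in the finished code.

```python
def require(spec: dict, key: str) -> None:
    # Fail fast: an incomplete spec stops the pipeline at the handoff.
    if not spec.get(key):
        raise ValueError(f"spec is missing '{key}'")

def research(spec: dict) -> dict:
    # Each agent returns an enriched copy rather than mutating shared state.
    return {**spec, "requirements": ["secure RNG", "configurable length"]}

def plan(spec: dict) -> dict:
    require(spec, "requirements")  # refuse to plan against an empty spec
    return {**spec, "steps": [f"implement: {r}" for r in spec["requirements"]]}

spec = plan(research({"goal": "password generator"}))
print(len(spec["steps"]))  # 2

# A broken handoff is caught immediately instead of producing bad code:
try:
    plan({"goal": "password generator"})  # research phase was skipped
except ValueError as e:
    print(e)  # spec is missing 'requirements'
```

The validation gate is the interesting part: it turns "the spec was incomplete" from a bug you debug in the final product into an error raised at the exact phase boundary where the gap appeared.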

This rigor and granularity are a major plus for backend developers who care about code quality. They build confidence that GSD-developed projects are not just reliable at the code level, but traceable and verifiable throughout the development process.

Conclusion

My first experience with GSD confirms it genuinely addresses AI coding’s core pain point. Context rot isn’t a minor issue to ignore—it directly impacts project quality. Through controllable granularity workflows and specialized subagent mechanisms, GSD effectively manages context scale, ensuring quality at every step.

As a backend developer, I appreciate GSD’s design philosophy—it applies classic software engineering principles (modularity, separation of concerns, traceability) to AI-assisted development. This fusion of traditional wisdom and new technology makes GSD not just a tool, but a methodology.

Of course, GSD isn’t a silver bullet. For very simple projects, using GSD might feel like overkill. But for moderately complex projects—especially those requiring team collaboration and long-term maintenance—GSD’s value becomes apparent.

For more on AI-assisted development paradigms, see my post on Vibe Coding: Harness Engineering.

Looking ahead, I plan to explore GSD in more projects, pushing its boundaries further. AI-assisted coding is evolving rapidly, and tools like GSD show a promising direction: not replacing developers, but making AI a capable assistant that helps us do our work better.


Have you tried GSD or other spec-driven development tools? I’d love to hear your experience in the comments.