A Team's Guide to Structured Prompt-Driven Development (SPDD)


Introduction

Large language model (LLM) programming assistants have proven invaluable for individual developers, but scaling their use to entire teams introduces unique challenges. Thoughtworks’ internal IT organization tackled this by developing a method called Structured Prompt-Driven Development (SPDD), which treats prompts as first-class artifacts stored alongside code in version control. This approach aligns development with business needs and fosters collaboration. In this guide, we’ll walk through the SPDD workflow, breaking it into clear steps that emphasize the three key skills required: alignment, abstraction-first thinking, and iterative review. By following this method, your team can harness LLMs effectively while maintaining code quality and consistency.

Source: martinfowler.com

Step 1: Align with Business Needs

Before any prompt or code is written, ensure every team member understands the core business objective. This alignment step prevents wasted effort and keeps the LLM focused on solving real problems. Sit with stakeholders to define the desired outcome, acceptance criteria, and edge cases. Document these in a shared space (e.g., a wiki or ticket) that will inform your prompts later.
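
For the email-validation feature used as a running example below, that shared document might capture something like the following sketch (the specific criteria here are illustrative, not part of the original method):

```text
Business need: reject malformed email addresses at signup to reduce bounced onboarding mail.

Acceptance criteria:
- validate_email("alice@example.com") returns True
- Addresses with no "@", no domain, or empty input return False
- The function never raises; it always returns a bool

Open edge cases to settle with stakeholders:
- Quoted local parts and comments allowed by RFC 5322
- Internationalized domain names
- Maximum length limits
```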

Step 2: Abstract First – Design High-Level Prompts

Instead of diving into details, start with a high-level abstraction of the task. Write a prompt that describes the function or feature in general terms without specifying every implementation nuance. This abstraction-first approach mirrors good software design—it encourages you to think about the “what” before the “how.” For instance, a prompt might say: “Generate a function that validates user email addresses according to RFC 5322.” Leave low-level logic to the LLM during iteration.

Step 3: Write the Initial Prompt

Now expand the abstract prompt into a concrete, self-contained instruction. Include context from Step 1 (business need) and any constraints (e.g., “use Python 3.11”, “avoid external libraries”). Treat this prompt as a draft—it will evolve. A good practice is to specify the desired output format, such as code with comments. For example:

“Write a Python function `validate_email(email: str) -> bool` that returns True if the email matches RFC 5322. Include inline comments explaining each validation step. No external libraries.”

Step 4: Generate Code with the LLM

Feed the prompt to your LLM assistant and review the generated code critically; do not accept it blindly. Check for correctness, security, and style. If the output misses edge cases or is overly verbose, refine the prompt and regenerate. This is where the iterative review skill begins: keep regenerating until the code meets the acceptance criteria from Step 1.
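
To make the review concrete, here is the kind of code the Step 3 prompt might produce. This is an illustrative sketch, not the output of any particular model, and it only approximates RFC 5322 with a simplified pattern; spotting that gap is exactly what the critical review is for:

```python
import re

# Simplified RFC 5322-style pattern: a dot-atom local part, an "@",
# and a dotted domain. Quoted local parts, comments, and IP-literal
# domains from the full RFC are intentionally not handled here.
_EMAIL_PATTERN = re.compile(
    r"^[A-Za-z0-9!#$%&'*+/=?^_`{|}~.-]+"   # local part (simplified dot-atom)
    r"@"
    r"[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$"    # domain: two or more dot-separated labels
)

def validate_email(email: str) -> bool:
    """Return True if `email` looks like a valid address (simplified RFC 5322)."""
    # Reject non-strings and empty input rather than raising.
    if not isinstance(email, str) or not email:
        return False
    return _EMAIL_PATTERN.match(email) is not None
```

A reviewer comparing this against the Step 1 criteria would note, for instance, that a local part may not begin or end with a dot under the RFC, which this pattern does not enforce; that observation feeds straight back into the next prompt revision.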

Step 5: Iterative Review and Refinement

Once an acceptable version is generated, subject it to peer code review just as you would human-written code, with the prompt included as part of the review context. The prompt should live under version control as a sibling artifact of the code. Discuss with teammates: Is the prompt precise enough? Did the LLM misinterpret anything? Record any modifications to the prompt in its own file (e.g., prompts/email_validation_v1.md). This step embodies the iterative review skill.
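
During review, it also helps to turn the Step 1 acceptance criteria into quick checks that travel with the generated code. A minimal sketch, assuming the `validate_email` function from Step 4 lives in src/validators.py (the test cases are illustrative):

```python
from validators import validate_email

def test_acceptance_criteria():
    # Cases lifted directly from the Step 1 acceptance criteria.
    assert validate_email("alice@example.com") is True
    assert validate_email("missing-at-sign.example.com") is False
    assert validate_email("no-domain@") is False
    assert validate_email("") is False
```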

Step 6: Store Prompt as a First-Class Artifact

Commit the final prompt alongside the generated code in your version control system. Ensure each prompt file captures the version of the LLM used (if possible) and any parameters (temperature, max tokens). This practice makes prompts traceable and reproducible. For example, your repository might look like:

src/
  validators.py (generated code)
prompts/
  email_validation_prompt_v2.md
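
The prompt file itself is a natural place for that metadata. Here is a minimal sketch of what prompts/email_validation_prompt_v2.md could contain; the field names and values are assumptions rather than a prescribed SPDD format:

```markdown
# Prompt: email validation (v2)

- Model: <record the exact model/version used>
- Parameters: temperature 0.2, max tokens 512
- Generated artifact: src/validators.py
- Business context: <link to the Step 1 ticket or wiki page>

Write a Python function `validate_email(email: str) -> bool` that returns True if the
email matches RFC 5322. Include inline comments explaining each validation step.
No external libraries. Return False (do not raise) for malformed or empty input.
```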

Step 7: Collaborate and Maintain

When business requirements change, update the prompt first, regenerate code, and repeat the review cycle. Because prompts are versioned, you can revert or diff changes. Encourage team members to contribute improvements to prompts during code reviews. SPDD scales well because the prompt becomes a shared template that can be reused for similar functions—just adjust parameters or examples.
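
For example, if a new requirement caps address length, the change shows up first as a small, reviewable diff to the prompt (the specific constraint below is hypothetical):

```diff
--- a/prompts/email_validation_prompt_v2.md
+++ b/prompts/email_validation_prompt_v2.md
@@
 No external libraries. Return False (do not raise) for malformed or empty input.
+Reject addresses longer than 254 characters.
```

The regenerated code and the updated prompt then go through the same review cycle together.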

Tips for Success

For more details on the SPDD method, see the original example by Wei Zhang and Jessie Jie Xia on GitHub (included in the source).
