
WiderWings Recursive Development Protocol (RDP)

Draft v0.1 — March 3, 2026
Author: Mark Wings (Research)
Status: Draft for Bob's review


1. Overview

This document defines how WiderWings builds software using a recursive, multi-agent development process inspired by the BMAD Method but tailored to our team structure, tooling, and culture.

Core Principles:

  • Same agents work across all projects, with project-level context isolation
  • Second Brain is the single source of truth for tasks, knowledge, and handoffs
  • Every feature flows through defined pipeline phases with quality gates
  • Agents improve their own processes over time (recursive improvement)
  • Human (Henry) sets vision; agents execute autonomously within guardrails

2. Team Structure

Org Chart

Henry Kim (CEO / Vision)
  ├── Bob Wings (Chief of Staff / Architect) — Own OpenClaw instance
  │     ├── Kevin (PM: MedSchools.ai)
  │     ├── Liz (PM: Hedge)
  │     ├── Kai (Frontend Dev)
  │     ├── Atlas (Backend Dev)
  │     ├── Maya (Content / SEO)
  │     ├── Designer [NEW] (UI/UX Design)
  │     └── Sage (QA / Reviewer) [Repurposed from Research]
  └── Mark Wings (Research / Scraping) — Own OpenClaw instance

Infrastructure

| Agent    | Machine       | Wake Triggers                 | Model (suggested)              |
|----------|---------------|-------------------------------|--------------------------------|
| Bob      | Own instance  | Discord, Telegram, heartbeat  | Opus (main), Haiku (heartbeat) |
| Mark     | Own instance  | Discord, Telegram, heartbeat  | Opus (main), Haiku (heartbeat) |
| Kevin    | Bob's machine | Discord, heartbeat            | Sonnet (main), Haiku (heartbeat) |
| Liz      | Bob's machine | Discord, heartbeat            | Sonnet (main), Haiku (heartbeat) |
| Kai      | Bob's machine | Discord, heartbeat            | Sonnet (main), Haiku (heartbeat) |
| Atlas    | Bob's machine | Discord, heartbeat            | Sonnet (main), Haiku (heartbeat) |
| Maya     | Bob's machine | Discord, heartbeat            | Sonnet (main), Haiku (heartbeat) |
| Designer | Bob's machine | Discord, heartbeat            | Sonnet (main), Haiku (heartbeat) |
| Sage     | Bob's machine | Discord, heartbeat            | Sonnet (main), Haiku (heartbeat) |

Role Definitions

Bob — Chief of Staff / Architect

  • Cross-project priority setting and conflict resolution
  • Architecture reviews and technical decisions (ADRs)
  • Routes Henry's ideas to the right PM/agent
  • Reviews and approves major architectural changes
  • Does NOT do day-to-day task management (that's the PMs)

Kevin — PM: MedSchools.ai

  • Owns the MedSchools.ai backlog and sprint planning
  • Creates and sequences tasks for the dev team
  • Coordinates handoffs between agents for MedSchools work
  • Reports progress to Bob and Henry
  • Runs retrospectives after each milestone

Liz — PM: Hedge

  • Same as Kevin but for the Hedge platform
  • Owns Hedge backlog, sprints, and coordination

Kai — Frontend Dev

  • UI/UX implementation (Svelte, Tailwind, shadcn-svelte)
  • Consumes design specs from Designer
  • Component development and responsive implementation
  • Works from DESIGN.md and story files

Atlas — Backend Dev

  • APIs, databases, infrastructure
  • Supabase, Node.js, Python services
  • Deployment and DevOps
  • Performance optimization

Maya — Content / SEO

  • Blog posts, landing page copy, meta descriptions
  • SEO strategy and keyword research
  • Content calendar management
  • GEO (Generative Engine Optimization)

Designer [NEW] — UI/UX Design

  • Reference/inspiration research (Dribbble, Godly, Awwwards, 21st.dev)
  • Design system creation and maintenance (DESIGN.md per project)
  • Visual mockups and component specs
  • Screenshot review loops (visual QA on design fidelity)
  • Does NOT write production code — produces specs that Kai consumes

Sage — QA / Reviewer [Repurposed]

  • Code review (logic, security, patterns)
  • Visual regression testing (screenshot comparison)
  • Accessibility and performance audits
  • Eval criteria enforcement — nothing ships without Sage's sign-off
  • Runs automated test suites where available

Mark — Research / Scraping (Cross-Project)

  • Market research, competitive analysis
  • Web scraping and data extraction
  • Technology research and tool evaluation
  • Supports any project on-demand (not sprint-bound)

3. Pipeline Phases

Every feature/task flows through a subset of these phases. Not every task hits every phase — a bug fix jumps from Brief straight to Build, while a new product starts at Brief and runs the full pipeline.

Brief → Research → Design → Architecture → Build → Review → QA → Deploy

Owners: Brief (Henry/PM), Research (Mark), Design (Designer), Architecture (Bob), Build (Kai/Atlas), Review (Sage), QA (Sage), Deploy (Atlas)

Phase Details

| Phase        | Owner                 | Input                                        | Output                                                    | Skip When                                         |
|--------------|-----------------------|----------------------------------------------|-----------------------------------------------------------|---------------------------------------------------|
| Brief        | Henry or PM           | Idea / request                               | Structured project brief                                  | Never (every task needs a brief, even one-liners) |
| Research     | Mark                  | Brief                                        | Market data, competitor analysis, reference materials     | Well-understood domain, bug fixes                 |
| Design       | Designer              | Brief + research + brand assets              | DESIGN.md, mockups, component specs, reference screenshots | Backend-only changes, API work                   |
| Architecture | Bob                   | Brief + research + design specs              | Architecture doc, ADRs, tech stack decisions              | Small changes within existing patterns            |
| Build        | Kai (FE) / Atlas (BE) | Story file with all context from prior phases | Working code + tests                                     | Never                                             |
| Review       | Sage                  | Completed code + story file                  | Approved or changes requested                             | Trivial fixes (typos, config)                     |
| QA           | Sage                  | Approved code on staging                     | QA report (pass/fail with details)                        | Non-user-facing changes                           |
| Deploy       | Atlas                 | QA-approved code                             | Production deployment                                     | N/A                                               |

Phase Entry Criteria

Each phase has a gate — you can't enter without the prior phase's output:

  • Research requires: a written brief (not just a verbal idea)
  • Design requires: brief + any relevant research saved to Second Brain
  • Architecture requires: brief + design specs (if applicable)
  • Build requires: a story file with acceptance criteria, referencing all prior artifacts
  • Review requires: code committed with tests passing
  • QA requires: review approval from Sage
  • Deploy requires: QA pass
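The entry criteria above can be sketched as a simple gate check. This is a minimal illustration, not the real implementation: the phase order matches Section 3, but the requirement strings and the function itself are assumptions.

```python
# Pipeline phases in order, per Section 3 of this protocol.
PHASE_ORDER = ["brief", "research", "design", "architecture",
               "build", "review", "qa", "deploy"]

# What each phase requires before entry, per the gate list above.
# "brief" has no gate. Requirement strings here are illustrative labels.
ENTRY_REQUIREMENTS = {
    "research": "written brief",
    "design": "brief + research saved to Second Brain",
    "architecture": "brief + design specs (if applicable)",
    "build": "story file with acceptance criteria",
    "review": "code committed with tests passing",
    "qa": "review approval from Sage",
    "deploy": "QA pass",
}

def can_enter(phase: str, completed_artifacts: set[str]) -> bool:
    """Return True if the prior phase's required output exists."""
    requirement = ENTRY_REQUIREMENTS.get(phase)
    if requirement is None:  # Brief has no gate
        return True
    return requirement in completed_artifacts
```

A PM or agent script could run this check before flipping a task into a new phase, rejecting the transition if the prior artifact is missing.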

Shortcut Paths

| Task Type           | Path                                                                    |
|---------------------|-------------------------------------------------------------------------|
| New product/feature | Brief → Research → Design → Architecture → Build → Review → QA → Deploy |
| New page/UI feature | Brief → Design → Build → Review → QA → Deploy                           |
| API/backend feature | Brief → Architecture → Build → Review → QA → Deploy                     |
| Bug fix             | Brief → Build → Review → Deploy                                         |
| Content update      | Brief → Build (Maya) → Review → Deploy                                  |
| Hotfix (P0)         | Brief → Build → Deploy (review post-deploy)                             |

4. Task Board Protocol

Where: Second Brain Task Board

All tasks live in Second Brain (/api/tasks). No task files in repos. No side-channel task tracking.

Task Schema

Each task should include:

{
  "title": "Clear, actionable title",
  "description": "What needs to be done + acceptance criteria",
  "priority": "P0 | P1 | P2 | P3",
  "status": "todo | in_progress | review | done | blocked",
  "agent_id": "who owns this task right now",
  "project_id": "which project (MedSchools, Hedge, WiderWings, Operations)",
  "phase": "brief | research | design | architecture | build | review | qa | deploy",
  "depends_on": "task ID of prerequisite (if any)",
  "created_by": "who created this task",
  "artifacts": "links to Second Brain memories or repo paths produced by this task"
}
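Creating a task against this schema might look like the following sketch. The `/api/tasks` path comes from this document; the base URL is a placeholder and any auth headers are omitted because they depend on the actual Second Brain deployment.

```python
import json
from urllib import request

# Example task payload following the schema above. Field values are
# illustrative; the base URL below is a placeholder, not the real host.
task = {
    "title": "Build: MedSchools landing page hero section",
    "description": "Implement hero per DESIGN.md. AC: responsive, matches mockup.",
    "priority": "P1",
    "status": "todo",
    "agent_id": "kai",
    "project_id": "MedSchools",
    "phase": "build",
    "depends_on": None,
    "created_by": "kevin",
    "artifacts": [],
}

req = request.Request(
    "https://secondbrain.example/api/tasks",  # placeholder base URL
    data=json.dumps(task).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req)  # uncomment once the real base URL and auth are known
```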

New fields needed (for Bob/Atlas to add):

  • phase — pipeline phase
  • depends_on — prerequisite task ID
  • created_by — originating agent
  • artifacts — output references

Task Lifecycle

1. PM creates task (status: todo, assigned to agent, phase set)
2. Agent picks up on heartbeat (status: todo → in_progress)
3. Agent does the work, saves output to Second Brain
4. Agent marks done (status: in_progress → done)
5. Agent creates next-phase task OR PM sequences the next step
6. Repeat until deploy

Pickup Rules

  • Agents check for assigned tasks on every heartbeat
  • P0 tasks: Pick up immediately, notify PM
  • P1 tasks: Pick up on next heartbeat
  • P2/P3 tasks: Pick up when no P0/P1 work is pending
  • If a task is blocked, set status to blocked with a note explaining why

Handoff Protocol

When completing a task:

  1. Save all output/artifacts to Second Brain (type: spec, research, log, etc.)
  2. Update task status to done
  3. Update task artifacts field with memory IDs or repo paths
  4. If the next phase has a clear owner, create the next task assigned to them
  5. If unclear who's next, notify the PM (Kevin or Liz)

5. Second Brain Schema Improvements

Memory Types (Proposed)

| Type     | Use For                                         | Example                                            |
|----------|-------------------------------------------------|----------------------------------------------------|
| spec     | PRDs, architecture docs, design systems         | "PRD: MedSchools Interview Prep Feature"           |
| decision | Why we chose X over Y (ADRs)                    | "Decision: Supabase over Firebase for Hedge"       |
| research | Market intel, competitive analysis              | "Top 5 MCAT Prep Resources 2026"                   |
| log      | Build notes, deployment records, what happened  | "Deployed Hedge chart v2 with SMA indicators"      |
| lesson   | Mistakes, patterns, things we learned           | "Screenshot loops fail on animated components"     |
| process  | Workflow definitions, team protocols            | This document                                      |
| eval     | Quality criteria, test results, review outcomes | "QA Report: MedSchools Landing Page v3"            |
| idea     | Brainstorms, future possibilities, parking lot  | "Could we add AI mock interviews?"                 |

Tagging Discipline

Every memory saved to Second Brain must include:

  • type_id — one of the types above (not just context)
  • project_id — which project this belongs to
  • importance — 1 (low) to 5 (critical)
  • Title format: {Type}: {Descriptive Title} (e.g., "Decision: PostgreSQL for user data")
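A pre-save lint for these rules could look like the sketch below. The type list matches Section 5; the title regex is an assumption (note that examples like "PRD: ..." use a label other than the type name, so the real convention may be looser).

```python
import re

# Valid memory types per Section 5 of this protocol.
VALID_TYPES = {"spec", "decision", "research", "log",
               "lesson", "process", "eval", "idea"}

# Assumed interpretation of the "{Type}: {Descriptive Title}" convention.
TITLE_RE = re.compile(
    r"^(Spec|Decision|Research|Log|Lesson|Process|Eval|Idea|PRD|QA Report): .+"
)

def validate_memory(memory: dict) -> list[str]:
    """Return a list of tagging problems; empty list means compliant."""
    problems = []
    if memory.get("type_id") not in VALID_TYPES:
        problems.append("type_id must be a defined memory type, not 'context'")
    if not memory.get("project_id"):
        problems.append("project_id is required")
    if not 1 <= memory.get("importance", 0) <= 5:
        problems.append("importance must be 1 (low) to 5 (critical)")
    if not TITLE_RE.match(memory.get("title", "")):
        problems.append("title must follow '{Type}: {Descriptive Title}'")
    return problems
```

Agents could run this before every Second Brain save and refuse to write memories that come back with a non-empty problem list.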

Cleanup Task

Bob/Atlas should:

  1. Add phase, depends_on, created_by, artifacts fields to task schema
  2. Add log, process, eval, idea as valid memory types
  3. Re-tag existing context memories to proper types (one-time cleanup)
  4. Add agent_id field to memories (who created this)

6. Project Context Isolation

Each project repo gets these standard files:

project-repo/
├── PROJECT.md          # Tech stack, conventions, architecture summary
├── DESIGN.md           # Brand colors, typography, component rules, visual guidelines
├── _pipeline/
│   └── sprint-status.md  # Current sprint state, active stories
└── docs/
    └── architecture.md   # Detailed architecture decisions (ADRs)

When an agent picks up a task for a project, they read that project's context files first. The agent is the same; the project context is different.

Second Brain holds cross-project and long-term knowledge.
Repo files hold project-specific working context.


7. Communication Protocol

Channels

| Channel                  | Use For                                            | Who              |
|--------------------------|----------------------------------------------------|------------------|
| Discord                  | Real-time messages, status updates, quick questions | Everyone        |
| Second Brain Task Board  | Task assignment, handoffs, work tracking           | Agents           |
| Second Brain Memories    | Knowledge, research, specs, decisions              | Everyone         |
| Telegram                 | Henry ↔ Bob, Henry ↔ Mark (existing setup)         | Henry, Bob, Mark |
| Direct gateway dispatch  | Urgent agent-to-agent (Bob ↔ Mark)                 | Bob, Mark        |

Henry → Team Communication

Henry primarily talks to Bob (Chief of Staff). Bob routes to the right PM or agent.

Prefix convention for messages to Bob:

  • 🎯 = Action item — create a task
  • 💭 = Thinking out loud — file as idea, no action needed
  • ❓ = Need a decision or recommendation
  • 📋 = Pass this to the right person

Henry can message any agent directly on Discord for urgent items, but the default flow goes through Bob.

Agent → Agent Communication

Same machine (Bob's gateway): sessions_send between agents — instant, free.
Cross-machine (Bob ↔ Mark): Direct gateway dispatch via /tools/invoke.
Default handoff: Task board (async, audit trail, no latency pressure).
Urgent only: Direct sessions_send or Discord ping.

Status Updates

  • PMs (Kevin/Liz) post daily sprint status to Discord project channel
  • Agents update task status in Second Brain as they work
  • Bob posts weekly cross-project summary to Henry

8. Eval Criteria (Quality Gates)

Universal Evals (All Projects)

Code Quality:

  • Tests exist and pass
  • No console errors or warnings
  • No hardcoded secrets or API keys
  • Follows project's PROJECT.md conventions
  • Functions/components are reasonably sized (not 500-line monsters)

Frontend (when applicable):

  • Responsive on mobile, tablet, desktop
  • Matches DESIGN.md brand guidelines
  • Screenshot comparison passes (2 rounds)
  • Accessibility basics (alt text, keyboard nav, contrast)
  • Loading performance acceptable (< 3s on 3G)

Backend (when applicable):

  • API endpoints documented
  • Error handling covers edge cases
  • Database queries are indexed/optimized
  • Rate limiting on public endpoints
  • Input validation/sanitization

Content (when applicable):

  • SEO meta tags present (title, description, OG)
  • No placeholder text remaining
  • Links work
  • Grammar/spelling check passed

Per-Project Evals

Each project can add project-specific criteria in their PROJECT.md. Examples:

  • MedSchools.ai: FERPA compliance considerations, medical accuracy disclaimer present
  • Hedge: Financial data accuracy, real-time update performance, disclaimer present

QA Process (Sage)

  1. Sage receives review task with link to code and story file
  2. Runs through universal eval checklist
  3. Runs through project-specific eval checklist
  4. Produces QA report saved to Second Brain (type: eval)
  5. Verdict: PASS (proceed to deploy) or CHANGES REQUESTED (back to developer with specific items)
  6. Developer fixes → Sage re-reviews (max 2 rounds, then escalate to Bob)

9. Recursive Improvement Loop

The process itself improves over time:

Sprint Retrospectives (Per Project)

  • After each milestone/epic, PM runs a retrospective
  • What worked? What didn't? What should change?
  • Lessons saved to Second Brain (type: lesson)
  • Process changes proposed → Bob reviews → updates this protocol

Agent Self-Improvement

  • Agents update their own SOUL.md and skills as they learn
  • New automation scripts written for repetitive tasks
  • Tool/skill gaps identified and filled (install from ClawHub or build custom)

Monthly Process Review

  • Bob reviews all lesson type memories from the month
  • Updates this protocol with improvements
  • Shares changes with the team

10. Design Pipeline Integration

The design pipeline tool (design.widerwings.com, currently P2) fits into the Design phase:

Brief → Research → [Design Pipeline Tool] → Architecture → Build → ...
                    ├── Step 1: Project Brief (Q&A)
                    ├── Step 2: Inspiration Gallery
                    ├── Step 3: Design System Generation
                    ├── Step 4: Build Preview
                    ├── Step 5: Screenshot Review Loop
                    ├── Step 6: Component Polish
                    └── Step 7: Handoff to Kai

The code development equivalent is the Build → Review → QA pipeline, which doesn't need a separate tool — it runs through Second Brain task board + agent workflows.


11. Getting Started — Implementation Plan

Phase 1: Foundation (This Week)

  • Bob reviews and finalizes this protocol
  • Create Designer agent on Bob's machine
  • Repurpose Sage from Research → QA (new SOUL.md, new skills)
  • Add phase, depends_on, created_by, artifacts to Second Brain task schema
  • Add new memory types (log, process, eval, idea)
  • Re-tag existing context memories to proper types

Phase 2: First Run (Next Week)

  • Pick one MedSchools.ai feature to run through the full pipeline
  • Kevin creates the task chain (Brief → Build → Review → QA → Deploy)
  • Team executes with the new protocol
  • Run retrospective after completion

Phase 3: Refinement (Ongoing)

  • Fix what broke during Phase 2
  • Establish heartbeat stagger schedule for Bob's agents
  • Build project context templates (PROJECT.md, DESIGN.md) for all active projects
  • Begin design pipeline tool development (P2 → P1)

Appendix: Key Resources


This is a living document. Bob owns updates. Last modified: 2026-03-03.
