# Kendo HQ — Full Knowledge Base

> Concatenated full content of every published doc. The source of truth is the kendo-HQ repo.

---

# Mission

*Source: `company/mission.md` · [Raw](https://kendo-hq.pages.dev/raw/company/mission.md)*

# Mission & Vision

## Vision

A world where development teams spend their time building, not managing tools. Where AI handles the routine so humans can focus on the creative.

## Mission

Build the [issue tracker that lives in your terminal](https://kendo-hq.pages.dev/raw/company/product-overview.md) — where your AI assistant manages your board, your time tracking is built in, and your data stays in Europe.

## Beliefs

- **Tools should disappear.** The best project management happens when you barely notice the tool. If you're spending time configuring workflows, the tool has failed.
- **Simplicity is a feature.** Every feature we don't add is time saved for every user. We add things when the pain of not having them is obvious.
- **Developers deserve better.** The people building software shouldn't have to fight their project management tool. It should feel like it was built by someone who gets it — because it was.
- **Everything included.** Time tracking, audit logging, compliance — these aren't enterprise upsells. They ship with every tier.

## Strategic Priorities

1. **Nail the core** — Issue tracking, board, backlog, sprints, and epics must be fast, reliable, and pleasant to use before anything else matters. See [product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) for the full feature set.
2. **MCP ecosystem** — Be the project board that lives inside AI assistants. 29 MCP tools. If a developer uses Claude, Cursor, or Codex, they should manage Kendo without opening a browser. This is currently our strongest differentiator — see [competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) for the competitive landscape.
3. **European trust** — Amsterdam-hosted, Dutch-owned, database-per-tenant, hash-chained audit logs. GDPR by architecture, not by policy. 54% of EU IT decision-makers prioritize data sovereignty; no competitor in the developer PM segment offers EU hosting by default at this price point. Regulatory tailwinds (NIS2, EU Data Act, Schrems III threat) only strengthen this position. Resonates most with the [CTO "Christiaan" persona](https://kendo-hq.pages.dev/raw/shared/user-personas.md#tertiary-cto-christiaan). *(See [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md))*
4. **Everything-included pricing** — Time tracking, audit logging, and compliance at every tier. Competitors gate these behind enterprise plans. We don't. See [go-to-market.md](https://kendo-hq.pages.dev/raw/company/go-to-market.md#6-pricing-strategy-recommendation) for the pricing rationale.
5. **Community-first growth** — Developers buy tools that other developers recommend. Build in public, share decisions, earn trust. See [go-to-market.md](https://kendo-hq.pages.dev/raw/company/go-to-market.md#phase-4-community--recurring-growth-month-2-6) for the community playbook.
## Related

- [product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Full feature set and tech stack
- [brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — How we communicate this mission
- [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Who we're up against
- [../shared/user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — Who we're building for
- [marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — Positioning, SWOT, USPs, messaging
- [go-to-market.md](https://kendo-hq.pages.dev/raw/company/go-to-market.md) — Pricing, channels, launch roadmap
- [decision-principles.md](https://kendo-hq.pages.dev/raw/company/decision-principles.md) — How agents decide when goals conflict

---

# Brand Guide

*Source: `company/brand-guide.md` · [Raw](https://kendo-hq.pages.dev/raw/company/brand-guide.md)*

# Brand Guide

*Internal governance document — authored and maintained by the CEO. Visual identity tokens derived from the Kendo app codebase (`~/Code/kendo/frontend/src/shared/styles/kendo.css`). Voice and tone principles established during initial brand definition.*

## Brand Identity

- **Name:** Kendo
- **Tagline:** "The issue tracker that lives in your terminal."
- **Status:** In active development

**Other tagline candidates:**

- "Project management, sharpened."
- "Ship software, not status updates."

## Logo

Clean typographic wordmark — "kendo." in lowercase with a red period. No illustration, no icon. The red accent does the heavy lifting. The red period (`.`) is non-negotiable — no version exists without it.

### Variants

| Tier | Mark | Font | Weight | Size | Context |
|------|------|------|--------|------|---------|
| **Full** | `kendo.dev` | Sora | 500 | 42px | Landing page, external marketing. ".dev" uses weight 300 at ~60% font-size in `--text-muted` |
| **Standard** | `kendo.` | Sora | 500 | 16px | In-app sidebar, nav header. Most common variant |
| **Compact** | `k.` | Sora | 600 | 18px | Collapsed sidebar, favicon. Heavier weight for readability at small size |

In all variants the period is rendered in `--kendo-red`.

## Voice & Tone

### Principles

1. **Direct** — Say it in fewer words. No filler, no corporate speak, no "leverage synergies."
2. **Technical but accessible** — We talk to developers, but we don't gatekeep. A PM should understand our copy too.
3. **Confident, not arrogant** — We know we're good. We don't need to trash competitors to prove it.
4. **Dry wit welcome** — A bit of personality is good. Forced humor is not.

### Do

- Use short sentences
- Use active voice
- Be specific ("saves 2 hours per sprint" not "increases productivity")
- Reference real developer pain points
- Speak like a technical coworker, not a marketing department

### Don't

- Use buzzwords: "revolutionary", "game-changing", "next-gen", "leverage"
- Overpromise AI capabilities — be factual about what agents can and can't do
- Use exclamation marks in product copy (one per page max)
- Talk down to the reader
- Use "we're excited to announce" — just announce it

### Tone by Channel

| Channel | Tone |
|---------|------|
| Twitter/X | Punchy, opinionated, slightly provocative. Short threads ok. |
| Bluesky | Conversational, authentic, community-oriented. Share insights, invite discussion. Less promotional, more human. |
| LinkedIn | Professional but not corporate. Show the builder journey. |
| Changelog | Matter-of-fact. What changed, why, how it affects you. |
| Blog | Thoughtful, in-depth. Share real decisions and trade-offs. |
| Docs | Clear, scannable. No personality needed — just accuracy. |
| Support | Empathetic, fast, solution-oriented. |

## Visual Identity

*(Design values derived from the Kendo app codebase — see `kendo/frontend/src/shared/styles/kendo.css` and Figma source files.)*

### Brand Color

Kendo Red (`#c8553a`) is the **only** primary accent color.
It's used for CTAs, active states, the logo period, and navigation highlights. No other brand colors exist — the rest of the palette is neutral surfaces and text.

| Token | Value | Usage |
|-------|-------|-------|
| `--kendo-red` | `#c8553a` | Primary accent — buttons, active nav, logo period |
| `--kendo-red-light` | `#e8734f` | Hover state |
| `--kendo-red-dark` | `#a3412b` | Pressed/active state |
| `--kendo-red-subtle` | `rgba(200,85,58,0.1)` | Background tint for active elements |

### Surface & Text Colors (Dark Mode — Default)

| Token | Value | Usage |
|-------|-------|-------|
| `--bg` | `#09090e` | Page background (darkest) |
| `--surface-1` | `#111116` | Subtle raised layers, same value as `--surface-raised` |
| `--surface` | `#1c1c22` | Cards, nav, panels. Same value as `--surface-header` |
| `--surface-2` | `#262630` | Board cards, hover states |
| `--surface-3` | `#30303b` | Inputs, dropdowns |
| `--border` | `#363640` | Default borders |
| `--border-light` | `#46464f` | Hover borders |
| `--text` | `#e8e8ec` | Primary text |
| `--text-dim` | `#a0a0aa` | Secondary text |
| `--text-muted` | `#8e8e98` | Tertiary/placeholder text (light mode: `#706a64`) |

### Semantic Colors

Used for status indicators, alerts, and badges. Never used as brand colors.

| Token | Value | Usage |
|-------|-------|-------|
| `--success` | `#2ea043` | Done, completed, green indicators |
| `--warning` | `#d29922` | In progress, caution, yellow indicators |
| `--danger` | `#da3633` | Errors, cancelled, destructive actions |
| `--info` | `#388bfd` | Informational, review states |

Each has a `*-subtle` variant at `rgba(..., 0.1)` for background tints.

### Tint System

Used for enum badges (status, type, priority). Auto-adapts to light/dark mode.
| Tint | Color | Background | Border |
|------|-------|------------|--------|
| Gray | `#9ca3af` | `rgba(156, 163, 175, 0.15)` | `rgba(156, 163, 175, 0.3)` |
| Green | `#4ade80` | `rgba(74, 222, 128, 0.15)` | `rgba(74, 222, 128, 0.3)` |
| Blue | `#60a5fa` | `rgba(96, 165, 250, 0.15)` | `rgba(96, 165, 250, 0.3)` |
| Red | `#f87171` | `rgba(248, 113, 113, 0.15)` | `rgba(248, 113, 113, 0.3)` |
| Purple | `#c084fc` | `rgba(192, 132, 252, 0.15)` | `rgba(192, 132, 252, 0.3)` |
| Yellow | `#facc15` | `rgba(250, 204, 21, 0.15)` | `rgba(250, 204, 21, 0.3)` |
| Orange | `#fb923c` | `rgba(251, 146, 60, 0.15)` | `rgba(251, 146, 60, 0.3)` |
| Teal | `#2dd4bf` | `rgba(45, 212, 191, 0.15)` | `rgba(45, 212, 191, 0.3)` |
| Pink | `#f472b6` | `rgba(244, 114, 182, 0.15)` | `rgba(244, 114, 182, 0.3)` |
| Indigo | `#818cf8` | `rgba(129, 140, 248, 0.15)` | `rgba(129, 140, 248, 0.3)` |

### Status & Priority Colors

Default lanes are configurable per project. These are the shipped defaults:

| Status | Color | Visual |
|--------|-------|--------|
| To Do | Red tint | Red dot |
| In Progress | Yellow tint | Yellow dot |
| In Review | Blue tint | Blue dot |
| Done | Green tint | Green dot |

| Priority | Icon | Icon Color | Badge Tint |
|----------|------|------------|------------|
| Highest | ChevronsUp | Red | Purple |
| High | ChevronUp | Red (75% opacity) | Red |
| Medium | Equals | Orange | Yellow |
| Low | ChevronDown | Blue | Blue |
| Lowest | ChevronsDown | Blue (60% opacity) | Green |

### Typography

| Use | Font | Weight | Size |
|-----|------|--------|------|
| All UI text & headings | Sora | 300–700 | 0.6875–2.5rem |
| Code, counters, labels | JetBrains Mono | 400–600 | 0.58–0.875rem |

**Google Fonts:** `Sora:wght@300;400;500;600;700` + `JetBrains Mono:wght@400;500;600`

#### Weight scale (Sora)

| Weight | Name | Usage |
|--------|------|-------|
| 300 | Light | Decorative subtitles, logo ".dev" suffix |
| 400 | Regular | Body text, descriptions, badge content |
| 500 | Medium | Labels, active nav items, emphasis |
| 600 | Semibold | Page titles, buttons, section headers |
| 700 | Bold | Large brand headings (rare) |

#### Size scale

| Token | Size | Usage |
|-------|------|-------|
| `--text-xs` | 0.6875rem (11px) | Badges, metadata, fine print |
| `--text-sm` | 0.75rem (12px) | Labels, descriptions, form errors |
| `--text-base` | 0.8125rem (13px) | Body text, inputs, select entries |
| `--text-nav` | 0.82rem (~13.1px) | Navigation, toast titles |
| `--text-md` | 0.875rem (14px) | Section labels, prominent body text |
| `--text-lg` | 1rem (16px) | Subheadings, emphasis |
| `--text-xl` | 1.25rem (20px) | Page section titles |
| `--text-2xl` | 1.5rem (24px) | Page titles |
| `--text-3xl` | 2rem (32px) | Hero/display text |
| `--text-4xl` | 2.5rem (40px) | Pricing display numbers |

### Icons

- **Library:** Font Awesome (Pro + Free — duotone, light, regular, solid styles)
- **Wrapper:** `@fortawesome/vue-fontawesome` component
- **Sizes:** 14px (inline), 16px (nav/sidebar), 20px (page headers)
- **Color:** Inherits from parent via `color: currentColor`
- **Rule:** No emojis as icons — always use proper Font Awesome icons

### Spacing

| Value | Name | Usage |
|-------|------|-------|
| 2px | Micro | Inline icon gaps |
| 4px | Tight | Icon-to-text, badge padding-y |
| 6px | Compact | Pill padding, small gaps |
| 8px | Base | Nav item gaps, list gaps |
| 12px | Comfortable | Input padding, card internal |
| 16px | Standard | Card padding, modal sections |
| 20px | Large | Modal body, spacious sections |
| 24px | XL | Page horizontal padding |
| 32px | XXL | Section vertical spacing |

### Border Radius

| Token | Value | Usage |
|-------|-------|-------|
| `--radius-sm` | 6px | Inputs, buttons, small elements |
| `--radius-md` | 10px | Cards, toasts, panels |
| `--radius-lg` | 14px | Modals, table containers |

### Shadows

Shadows are only used on floating elements (modals, dropdowns, popovers). Never on cards or buttons.
| Token | Value | Usage |
|-------|-------|-------|
| `--shadow-sm` | `0 2px 8px rgba(0,0,0,0.2)` | Toasts, tooltips |
| `--shadow-md` | `0 8px 32px rgba(0,0,0,0.3)` | Dropdowns, popovers |
| `--shadow-lg` | `0 20px 80px rgba(0,0,0,0.4)` | Modals |

### Design Principles

- **Dense and compact** — Information-rich, not spacious. Every pixel earns its place.
- **Dark-first** — Dark mode is the primary experience. Light mode is supported but secondary.
- **Surface layering over borders** — Use surface contrast to create depth (e.g., board cards on lanes). Prefer fewer borders, not more.
- **No shadows on cards** — Shadows are reserved for floating layers (modals, dropdowns). Cards use surface contrast and borders.
- **Subtle, not flashy** — Animations are functional (transitions, not celebrations). Red accent is used sparingly for emphasis.
- **One accent color** — Kendo Red is the only brand color. Everything else is neutral.
- **Always use CSS custom properties** — Never hardcode hex values in components.

## Brand Language — Kendo Martial Art Theme

The product is named after a Japanese martial art. The landing page uses this as a subtle structural metaphor. This gives Kendo a distinctive voice that no other PM tool has — lean into it, but don't overdo it.

### How it's used on the landing page

| Generic term | Kendo term | Where it's used |
|-------------|------------|-----------------|
| Section | **Form** | Page sections are numbered "First Form", "Second Form", etc. |
| FAQ | **"Before you step into the dojo"** | FAQ section header |
| Features | **"The full arsenal"** | Features section subtitle |
| Workflow | **"One flow"** | Final CTA: "Your board, your terminal, one flow" |

### Extended vocabulary (available for future use)

| Kendo term | Meaning | Potential use |
|-----------|---------|---------------|
| Dojo | Training hall | Onboarding, getting started |
| Kata | Practiced forms | Structured workflows, templates |
| Stance | Ready position | Getting started guide, setup |
| Waza | Technique | New features, changelog entries |
| Keiko | Practice/training | Documentation, tutorials |

### Rules

- **Use for structure and headers, not body copy.** "First Form" as a section label works. "Execute your waza against the backlog" does not.
- **Never explain the metaphor.** If someone doesn't know kendo, the terms should still read as clean labels. "The full arsenal" works whether you know kendo or not.
- **Keep body copy developer-friendly.** The martial art theme is flavor, not the main course. Feature descriptions should be clear and technical.
- **Don't use it in docs.** Documentation should be plain and accurate. No personality needed.

## Competitor References

When discussing competitors, follow these rules. For detailed competitive analysis, see [competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) and the [marketing strategy](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md#2-competitive-landscape).

- **Never disparage by name in public content** — Compare on features and philosophy, not insults
- **Jira** — Position against complexity and bloat. "You shouldn't need a certification to manage tasks."
- **Linear** — Respect the design, differentiate on AI-native and MCP integration. "Beautiful, but we go further."
- **Asana** — Position against enterprise focus. "Built for developers, not project managers."
- **GitHub Issues** — Respect simplicity, differentiate on structure.
  "When issues alone aren't enough."

## Related

- [mission.md](https://kendo-hq.pages.dev/raw/company/mission.md) — Vision and beliefs that shape this brand
- [product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — What we're actually building
- [marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — Messaging framework, SWOT, competitive positioning
- [go-to-market.md](https://kendo-hq.pages.dev/raw/company/go-to-market.md) — Channel strategy, launch roadmap, pricing
- [../shared/glossary.md](https://kendo-hq.pages.dev/raw/shared/glossary.md) — Kendo-specific terminology (includes kendo-red, MCP server, etc.)
- [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Full competitive landscape for positioning
- [quality-standards.md](https://kendo-hq.pages.dev/raw/company/quality-standards.md) — Output quality checklists for non-code work

---

# Product Overview

*Source: `company/product-overview.md` · [Raw](https://kendo-hq.pages.dev/raw/company/product-overview.md)*

# Product Overview

## What is Kendo?

Kendo is a streamlined project management tool for development teams. It's a modern alternative to [Jira, Linear, and Asana](https://kendo-hq.pages.dev/raw/shared/competitors.md) — the issue tracker that lives in your terminal.

**Website:** [kendo.dev](https://kendo.dev)

## Core Proposition

Project management tools are either bloated (Jira) or pretty but limited (Linear). Kendo sits in the sweet spot: powerful enough for real teams, simple enough that you don't need a Jira admin. With 29 MCP tools, your AI coding assistant can read issues, log time, move cards, and plan sprints — all from the terminal.
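To make the terminal workflow concrete, here is a minimal sketch of the JSON-RPC 2.0 `tools/call` request shape that MCP clients send to a server. The tool name `move_issue` and its arguments are hypothetical; Kendo's actual 29 tool names are not listed in this document.

```typescript
// Sketch of an MCP tools/call request (JSON-RPC 2.0 envelope per the MCP spec).
// The tool name "move_issue" and its arguments are hypothetical examples.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

// Build the request an assistant would send to move an issue to another lane.
function moveIssueRequest(issueKey: string, lane: string, id = 1): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: "move_issue", arguments: { issue: issueKey, lane } },
  };
}
```

An assistant like Claude Code would send this over stdio or HTTP to the MCP server and surface the tool result back in the conversation, which is why no browser tab is involved.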
## Key Features

- **Issue tracking** — Board, backlog, and overview views with statuses, priorities, epics, blocking relations, and assignees
- **Sprint planning** — Scope sprints, assign work, unfinished issues carry over automatically
- **Epics** — Group related issues, set dates, track progress across sprints. Epics have Open, In Progress, and Completed status tracking.
- **GitHub integration** — Branch linking, PR sync, webhook-driven lane transitions. Code merges — card moves to Done.
- **OAuth login** — Sign in with Google or GitHub. Link/unlink providers in profile settings.
- **Two-factor authentication** — TOTP-based 2FA with QR code setup and recovery codes. Tenants can enforce 2FA for admins, members, or both.
- **BYOK AI keys** — Tenants and projects can bring their own Anthropic or OpenAI API keys instead of using shared keys.
- **Issue templates** — Reusable templates for issue creation, managed per project in settings.
- **Real-time updates** — WebSocket-powered live updates via Laravel Reverb. Issues and announcements update in real time without page refresh.
- **MCP server** — 29 MCP tools that connect to Claude Code, Cursor, and Codex. Read issues, log time, move cards, plan sprints — without leaving the terminal. An interactive MCP App widget for Claude Desktop is in pilot scope (read-only "Today" card first, interactive board later). *(See `research/mcp-apps-kendo-pilot.md` and the 2026 improvement roadmap in `research/mcp-tools-improvement-roadmap-2026.md`)*
- **Time tracking** — Built-in, not a plugin. Log per issue, filter by project or date range, export to XLSX.
- **Audit logging** — Hash-chained, tamper-proof logs with point-in-time user snapshots. Every action recorded.
- **Teams & roles** — Multi-project support with custom role-based access. Tenants define their own roles with granular per-permission scopes (None, Own, All). Admin and Member are the default system roles. Two-tier authorization. See `ADR-002` for the design rationale.
- **CLI** — Go-based command-line tool for managing issues, sprints, and time entries outside the browser.
- **Reports** — Triage inbox for incoming feedback, requests, and bugs. Team members submit lightweight reports; team leads review, reorder (drag-drop), and filter by status (Pending, Promoted, Dismissed). Pending reports can be promoted into real issues — optionally using AI story generation to draft the title, description, type, and priority from the report content — or dismissed. *(AI story generation is internal-only — not for external messaging yet.)*
- **Notifications** — In-app notification system with per-user preferences, mark read/unread, and unsubscribe options.
- **File attachments** — Upload files to issues and comments.
- **EU hosting** — Amsterdam-hosted, database-per-tenant isolation. GDPR-compliant by architecture.

## Tech Stack

- Dark-themed UI with light mode support
- Design system with kendo-red (#c8553a) as accent color
- Sora for all UI text and headings, JetBrains Mono for code/labels
- Compact, dense interface — collapsed sidebar at 64px, expanded at 230px
- Vue 3.5 + TypeScript 5.9 + UnoCSS frontend
- Laravel 12 + PHP 8.4 backend, MySQL database
- Multi-tenant (database-per-tenant, subdomain routing)

## What Makes Kendo Different

1. **Terminal-first** — The MCP server means your AI assistant manages your board directly. No browser tab, no context switch. Developers lose 4.5 hours/day to context switching — Kendo eliminates the PM tool switch entirely. *(See [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md))*
2. **Developer-first** — Built by a developer, for developers. No enterprise bloat, no certification programs, no consultants needed. Fast, keyboard-driven, dark-themed.
3. **Nothing extra** — Time tracking, audit logging, AI, MCP — built into every tier. No Toggl subscription, no Tempo add-on, no enterprise upsell for compliance features.
4. **European by default** — Amsterdam-hosted, Dutch-owned, database-per-tenant. Your data stays in the EU — no upgrade required. No competitor in the developer PM segment offers EU hosting at this price point. *(See [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md))*
5. **Opinionated simplicity** — Fewer features, better defaults. No custom workflows with 47 status types. Ship software, don't configure tools.

## Target Users

See [user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) for detailed profiles.

- Small to mid-size development teams (3–20 developers)
- Solo developers and indie hackers who want structure without overhead
- Teams already using AI coding tools who want their project management to keep up
- Teams migrating away from Jira who want something lighter but not too minimal

## Current Stage

In active development. Core project management is functional: issues, board, backlog, sprints, epics, time tracking, GitHub integration, MCP server, audit logging, multi-tenancy, teams & roles. Built in the Netherlands, hosted in Amsterdam.
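The hash-chained audit logging mentioned under Key Features can be illustrated with a minimal sketch: each entry stores the hash of the previous entry, so editing any record invalidates every hash after it. The entry fields, SHA-256 scheme, and genesis value below are illustrative assumptions, not Kendo's actual implementation.

```typescript
import { createHash } from "node:crypto";

// Illustrative audit entry shape (fields are assumptions, not Kendo's schema).
interface AuditEntry {
  action: string;
  actor: string; // a point-in-time user snapshot would live here
  timestamp: string;
  prevHash: string; // hash of the previous entry; the first entry uses GENESIS
  hash: string;
}

const GENESIS = "0".repeat(64);

// Hash everything except the entry's own hash field.
function entryHash(e: Omit<AuditEntry, "hash">): string {
  return createHash("sha256")
    .update(JSON.stringify([e.action, e.actor, e.timestamp, e.prevHash]))
    .digest("hex");
}

// Append-only: each new entry links to the hash of the last one.
function append(log: AuditEntry[], action: string, actor: string, timestamp: string): AuditEntry[] {
  const prevHash = log.length ? log[log.length - 1].hash : GENESIS;
  const partial = { action, actor, timestamp, prevHash };
  return [...log, { ...partial, hash: entryHash(partial) }];
}

// Verification walks the chain; any edited entry breaks it.
function verify(log: AuditEntry[]): boolean {
  return log.every((e, i) => {
    const expectedPrev = i === 0 ? GENESIS : log[i - 1].hash;
    return e.prevHash === expectedPrev && e.hash === entryHash(e);
  });
}
```

This is what "tamper-proof" means in practice: an attacker who rewrites one record would have to recompute every subsequent hash, which an external anchor or append-only store makes detectable.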
## Related

- [mission.md](https://kendo-hq.pages.dev/raw/company/mission.md) — Vision, beliefs, and strategic priorities driving this product
- [brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — Visual identity, design tokens, and voice guidelines
- [marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — SWOT analysis, competitive landscape, USPs, messaging
- [go-to-market.md](https://kendo-hq.pages.dev/raw/company/go-to-market.md) — Pricing, channels, launch roadmap
- [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — How we compare to Jira, Linear, Asana, Plane, and others
- [../shared/user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — Target users and their pain points
- [../shared/glossary.md](https://kendo-hq.pages.dev/raw/shared/glossary.md) — Kendo-specific terminology

---

# Decision Principles

*Source: `company/decision-principles.md` · [Raw](https://kendo-hq.pages.dev/raw/company/decision-principles.md)*

# Decision Principles

*Internal governance document — authored and maintained by the CEO. Priority stack informed by lessons from `RETRO-001` and ongoing agent operations.*

Trade-off hierarchy for agent decision-making. When goals conflict during execution, this stack resolves the tension. `company/mission.md` tells agents *what to build*. This document tells agents *how to decide when goals conflict*.

## Priority Stack

When two goals compete, find both in the stack. The one higher in the stack (the lower number) wins.

| # | Principle | Means | Example |
|---|-----------|-------|---------|
| 1 | **Correct over fast** | Never publish or ship something wrong to save time. Wrong output creates more work than slow output. | Marketing draft with an unverifiable competitor claim → don't publish, flag the claim. A slow but accurate report beats a fast but speculative one. |
| 2 | **CEO control over agent autonomy** | When uncertain, surface the decision. The cost of asking is lower than the cost of wrong work. | Engineer unsure if a plan's design decision is wrong → escalate, don't improvise a "better" approach. Librarian finds source that contradicts the wiki → present both versions, don't silently overwrite. |
| 3 | **Simple over complete** | A focused output beats a comprehensive one. Answer the question asked, not every adjacent question. | Research task about pricing → answer the pricing question, don't also produce a feature comparison unless asked. Comms asked for release notes → write release notes, don't also draft a blog post. |
| 4 | **Convention over novelty** | Follow existing patterns unless the CEO explicitly asks for deviation. The codebase and docs represent accumulated decisions — don't reinvent without reason. | Plan uses int-backed enums because the codebase does, even if string enums feel "cleaner." Marketing follows the brand guide tone even if a punchier voice might perform better. *(Lesson from RETRO-001.)* |
| 5 | **Shipped over perfect** | Good enough and reviewed beats perfect and stuck. Don't gold-plate. | A PR with 95% coverage on a trivial edge case → ship it with a note, don't spend 3 hours on mock plumbing. A tweet that scores 7/10 → present it, don't iterate to 9/10 when the CEO hasn't seen it yet. |

## How to Use

When you face a trade-off, identify the two competing principles. The lower-numbered one takes priority.

**Example tension:** #1 (correct over fast) vs. #5 (shipped over perfect). Resolution: ship things that are *correct* even if not *perfect*. Don't ship things that are *wrong* to be fast. The line between "imperfect" and "wrong" is the judgment call — when you can't tell, apply #2 (surface it to the CEO).

## CEO Override

These principles are defaults. When the CEO gives an explicit instruction that conflicts with the stack, the CEO's instruction takes priority. See "CEO Overrides" in the root `CLAUDE.md`.
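As a sketch only (the strings mirror the priority stack table; nothing here is a real Kendo module), the resolution rule reduces to "lower number wins":

```typescript
// Sketch of the priority-stack rule: when two principles compete,
// the lower-numbered (higher-priority) one takes precedence.
const PRIORITY_STACK = [
  "Correct over fast",
  "CEO control over agent autonomy",
  "Simple over complete",
  "Convention over novelty",
  "Shipped over perfect",
] as const;

// Principles are 1-indexed, matching the table's # column.
function resolve(a: number, b: number): string {
  return PRIORITY_STACK[Math.min(a, b) - 1];
}
```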
## Related

- [mission.md](https://kendo-hq.pages.dev/raw/company/mission.md) — What to build and in what order
- [quality-standards.md](https://kendo-hq.pages.dev/raw/company/quality-standards.md) — Output quality checklists for non-code work
- [brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — Voice, tone, and copy standards

---

# Quality Standards

*Source: `company/quality-standards.md` · [Raw](https://kendo-hq.pages.dev/raw/company/quality-standards.md)*

# Quality Standards

*Internal governance document — authored and maintained by the CEO. Thresholds and checklists established during agent fleet setup and refined through operational experience.*

Output quality checklists for non-code work. Architecture tests define good code. This document defines good output for everything else.

Agents self-score their work (1-10). This document adds two things: a **minimum-present threshold** (don't show the CEO work that scores below 6) and **pass/fail checklists** per output type.

## External-Facing Content

Tweets, landing page copy, directory listings, announcements, social posts.

- [ ] Every factual claim is verifiable from `company/` or `shared/` docs
- [ ] No off-limits features mentioned (see CLAUDE.md "Off-Limits for External Messaging")
- [ ] Tone matches the channel guide in [brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md)
- [ ] No buzzwords from the brand guide's Don't list
- [ ] Specific over generic — numbers and concrete outcomes, not adjectives
- [ ] Within character/length limits for the channel
- [ ] Self-score is 6 or above

If any item fails, fix it before presenting. If you can't fix it (e.g., missing source data), flag the specific gap to the CEO.

## Internal Reports

Audits, research reports, finance reports, competitive analyses.
- [ ] Leads with the finding or recommendation — evidence follows, not the other way around
- [ ] Every claim cites a specific file path, URL, or data source
- [ ] Includes a "what this means for Kendo" section — not just raw data
- [ ] Flags uncertainty explicitly rather than hedging with weak language
- [ ] Self-score is 6 or above

If your self-score is below 6, do not present the report. Instead, state what's blocking quality (missing data, unclear scope, conflicting sources) and ask the CEO how to proceed.

## Knowledge Base Changes

Wiki updates, research filings, compiled sources.

- [ ] Source attribution present per the provenance rule (two hops max from claim to origin)
- [ ] `INDEX.md` updated with new or modified entries
- [ ] Cross-references added to related wiki files where relevant
- [ ] No structural changes to existing documents without CEO approval

Structural changes = renaming sections, reorganizing a file's layout, splitting or merging documents. Data updates within existing structure are fine.

## Minimum-Present Threshold

All agents that self-score should apply this rule:

| Score | Action |
|-------|--------|
| 7-10 | Present to CEO |
| 6 | Present to CEO with a note on what would make it better |
| 4-5 | Do not present. State what's blocking quality and ask for guidance |
| 1-3 | Do not present. Something is fundamentally wrong — escalate the blocker |

## Related

- [decision-principles.md](https://kendo-hq.pages.dev/raw/company/decision-principles.md) — Trade-off hierarchy when goals conflict
- [brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — Voice, tone, and copy standards
- [mission.md](https://kendo-hq.pages.dev/raw/company/mission.md) — Strategic priorities

---

# Marketing Strategy

*Source: `company/marketing-strategy.md` · [Raw](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md)*

# Kendo.dev — Marketing & Positioning

> SWOT analysis, competitive landscape, USPs, target personas, and messaging framework for kendo.dev.
>
> For pricing, channels, launch roadmap, and metrics: see [go-to-market.md](https://kendo-hq.pages.dev/raw/company/go-to-market.md).

---

## 1. SWOT Analysis

### Strengths

| # | Strength | Detail |
|---|----------|--------|
| S1 | **AI-native architecture** | Claude integration isn't bolted on — it's core. MCP server, AI audit trails. Most competitors added AI as an afterthought. |
| S2 | **Deepest MCP integration for developer workflows** | 29 MCP tools designed for the Claude Code / Cursor terminal workflow. MCP is now table stakes (13+ PM tools have servers), but Kendo's focused, developer-terminal-native integration and native time tracking via MCP remain unusual. *(Updated 2026-04-04 based on [research/mcp-ecosystem-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/mcp-ecosystem-landscape-2026-q2.md))* |
| S3 | **Native time tracking** | Surprisingly rare. Linear doesn't have it. Asana doesn't. GitHub Projects doesn't. Kendo has per-issue time logging with sprint/team/project aggregation. |
| S4 | **Deep GitHub integration** | Branch linking, PR status sync, lane-based webhook triggers (auto-move issues on PR events), OAuth connection per user. On par with Linear/Shortcut. |
| S5 | **True multi-tenancy** | Database-per-tenant isolation — real data separation, not row-level filtering. Important for security-conscious teams and compliance. |
| S6 | **Hash-chained audit logging** | Tamper-proof, append-only logs with user snapshots. Unique in this market segment. Compliance-ready out of the box. |
| S7 | **Mattermost integration** | Bot with AI-assisted issue creation, approval workflows. No competitor has native Mattermost support. *(Internal tooling for the dev team — not a public-facing feature.)* |
| S8 | **Developer-built for developers** | Vue 3 + Laravel stack, 100% unit test coverage, TypeScript strict, PHPStan level max. The product reflects the engineering standards it serves. |
| S9 | **European hosting** | Fly.io Amsterdam. GDPR-friendly by default. Competitors are mostly US-hosted. |
| S10 | **All-in-one without bloat** | Board + sprints + epics + time tracking + GitHub + AI — but not trying to be a CRM/Wiki/Chat/Whiteboard/Everything. Focused scope. |

### Weaknesses

| # | Weakness | Impact |
|---|----------|--------|
| W1 | **No brand recognition** | kendo.dev is unknown. Linear, Shortcut, Plane all have established communities. |
| W2 | **Small team** | Slower feature velocity than VC-backed competitors. |
| W3 | **No mobile app** | Linear, Jira, Asana all have native mobile apps. |
| W4 | **No self-hosting option** | Plane's open-source model (46.5k GitHub stars) is a strong pull for data-sovereignty-conscious teams. |
| ~~W5~~ | ~~**Landing page still 404**~~ | **Resolved.** kendo.dev is live with landing page + docs. |
| ~~W6~~ | ~~**No public API docs**~~ | **Resolved.** Public docs live at kendo.dev/docs (API, MCP, guides). |
| W7 | **Limited integrations** | Only GitHub + Mattermost. No Slack, GitLab, Bitbucket, Figma, Sentry. |
| ~~W8~~ | ~~**Dark theme only**~~ | **Resolved.** Light mode is shipped. |

### Opportunities

| # | Opportunity | Why now |
|---|-------------|---------|
| O1 | **MCP depth is the next frontier** | MCP is now ubiquitous (13+ PM tools have servers, 97M SDK downloads/month) but most implementations are shallow. Kendo's 11 structured resources for agent context, native time tracking via MCP, and terminal-first design give it an edge in developer workflow integration that broader tools (Jira, Monday, Asana) won't match. The opportunity is depth, not presence. *(Updated 2026-04-04 — the original ~12 month first-mover window closed in ~3-6 months; based on [research/mcp-ecosystem-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/mcp-ecosystem-landscape-2026-q2.md))* |
| O2 | **Jira fatigue** | Developers are actively seeking Jira alternatives. "Jira fatigue" is a documented trend. Linear captured this wave; Kendo can ride the next one with AI-native positioning. |
| O3 | **European market** | EU teams need GDPR-compliant tools. Amsterdam hosting + database-per-tenant = strong story for EU data residency. |
| O4 | **AI story generation as differentiator** | Reports can be promoted to issues with AI-generated titles, descriptions, types, and priorities — drafted from the report content. No competitor offers this triage-to-story workflow. *(Internal only — not for external messaging yet.)* |
| O5 | **Time tracking bundled** | Most competitors charge extra or require integrations. Bundling it as a core feature is a pricing advantage. |
| O6 | **Mattermost niche** | Open-source chat users (Mattermost, Matrix) are underserved by PM tools that only integrate with Slack. *(Currently internal tooling only — potential public feature later.)* |
| O7 | **AI discoverability** | KD-0179's SEO/AI strategy (llms.txt, structured data, robots.txt) can establish kendo.dev as a reference in AI search results before competitors optimize for it. |
| | O8 | **MCP Apps (interactive UI in Claude Desktop)** | MCP Apps became an official MCP extension on 2026-01-26 (SEP-1865), letting servers render sandboxed UI widgets inside Claude Desktop, Claude.ai, VS Code Copilot, and ChatGPT. Asana, Monday.com, Canva, Figma, Box, and Slack were launch partners — direct and adjacent competitors. Kendo's existing 29 tools + 11 resources already cover the data layer; a read-only "Today" card pilot is estimated at 3-5 days with zero new backend endpoints. The window to ship before it becomes table stakes (like MCP servers did in ~6 months) is now. *(See *research/mcp-apps-kendo-pilot.md*)* | ### Threats | # | Threat | Severity | |---|--------|----------| | T1 | **Linear's AI execution** | Linear Agent in public beta, 21+ MCP tools, GitHub Copilot integration for coding agent dispatch (75% enterprise adoption), "self-driving SaaS" vision. Furthest along in agentic PM. Well-funded, strong brand. High. | | T2 | **GitHub Projects is free** | For teams already on GitHub, "good enough" project management at $0 is hard to beat. Medium. | | T3 | **Plane's open-source momentum** | 46.5k stars, $6/seat, self-hostable, largest MCP tool surface (55+ tools), time tracking on Pro+, AI sidecar, agent assignment. Competes on the same "AI-native" positioning — and uses those exact words. High. | | T4 | **Feature expectations inflation** | ClickUp/Monday.com trained the market to expect everything-apps. Kendo's focused scope could be perceived as "missing features." Medium. | | T5 | **AI commoditization** | Basic AI (summarization, triage, generation) is now table stakes — every PM tool in the developer segment has it. MCP support is near-universal. The moat must be depth (29 MCP tools, audit trails, time-tracking via MCP) not presence. 
**High.** *(Updated 2026-04-04 based on [research/ai-native-dev-tools-landscape-2026.md](https://kendo-hq.pages.dev/raw/research/ai-native-dev-tools-landscape-2026.md))* | | T6 | **MCP Apps adoption by competitors** | Asana and Monday.com were MCP Apps launch partners (Jan 2026) and already ship interactive UI widgets inside Claude Desktop. If Linear, Shortcut, or Plane follow, the "MCP depth" differentiator narrows further. Medium — mitigated by shipping our own pilot quickly. *(See *research/mcp-apps-kendo-pilot.md*)* | --- ## 2. Competitive Landscape ### Competitor Overview *(Updated 2026-04-04 based on [research/mcp-ecosystem-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/mcp-ecosystem-landscape-2026-q2.md), [research/dev-tool-pricing-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/dev-tool-pricing-landscape-2026-q2.md), [research/ai-native-dev-tools-landscape-2026.md](https://kendo-hq.pages.dev/raw/research/ai-native-dev-tools-landscape-2026.md))* | Tool | Price (per user/mo) | MCP Support | Native Time Tracking | AI Features | Target | |------|---------------------|-------------|---------------------|-------------|--------| | **Linear** | $0-16 | Yes (21+ tools) | No | Agent (beta), Triage Intelligence, Copilot dispatch | Engineering teams | | **Jira** | $0-16+ | Yes (46+ tools, Rovo MCP) | Via add-ons | Rovo AI, agents-in-Jira (open beta) | Enterprise | | **Asana** | $0-24.99 | Yes (30+ tools, V2 GA) | No | AI Studio, 12 AI Teammates | Cross-functional teams | | **Monday.com** | $0-19 | Yes (18+ tools, all plans) | Yes (Standard+) | Sidekick (GA), Agent Builder (beta) | Broad/enterprise | | **Shortcut** | $0-16 | Yes (24 tools) | No | Korey AI agent (GA), Shortcut for Agents | Engineering teams | | **ClickUp** | $0-24+ | Yes (20+ tools, public beta) | Yes (Unlimited+) | Brain ($9 add-on), Super Agents, AI Autopilot ($28) | All-in-one seekers | | **Plane** | $0-15 | Yes (55+ tools) | Yes (Pro+) | Plane Intelligence, AI sidecar, 
agent assignment | OSS/self-hosted teams | | **GitHub Projects** | $0-21 | Yes (51 tools) | No | Copilot cloud agent | Teams already on GitHub | | **Notion** | $0-19.50 | Yes (18 tools, Business+) | No | Custom Agents (multi-model), MCP orchestration | Knowledge workers | | **Trello** | $0-17.50 | Community only | No | Butler (rule-based) | Simple Kanban users | | **Wrike** | Custom | Yes (GA, full API) | Yes | AI features | Enterprise | | **Kendo** | TBD | **Yes (29 tools)** | **Yes (built-in, all tiers)** | **MCP server, GitHub sync, audit logging** | Dev teams | ### Key Takeaways - **MCP support is table stakes**: All 10 tools in our competitive set now have MCP servers — official or community-built. This is no longer a differentiator by existence; it's a question of depth and developer-workflow fit. - **Native time tracking + sprints + AI at every tier** is a combination very few competitors offer at a single price point without add-ons. Plane comes closest at $6/seat but gates audit logging behind $13. - **Linear** is the most direct competitor — similar positioning, strong execution, well-funded, and the furthest along on the "agentic PM" vision with coding agent dispatch. - **Plane** competes on open-source + price + the largest MCP tool surface (55+ tools). Kendo competes on MCP resources, audit logging, EU hosting, and the no-add-on bundle. - **Shortcut is no longer stagnating** — Korey AI agent and the Shortcut for Agents platform represent serious AI investment. --- ## 3. Unique Selling Propositions (USPs) ### Primary USP > **"The issue tracker that lives in your terminal."** > > Developers lose 4.5 hours of deep focus daily to context switching — 12-15 major switches per day at 23 minutes recovery each. Kendo's MCP server eliminates the PM tool switch entirely: your AI assistant reads issues, creates tasks, logs time, and plans sprints without leaving your editor. No browser tab. No context loss. 
*Why it's defensible:* Every major PM tool now has an MCP server — but most are enterprise retrofits spanning dozens of products (Jira: 46+ tools across 5 Atlassian products) or shallow wrappers. Kendo's MCP was designed from day one for the developer terminal workflow: 29 focused tools that give agents the context they need without noise.

*(Context switching data from [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md))*

### Secondary USPs

| USP | One-liner | Proof point |
|-----|-----------|-------------|
| **Nothing extra** | "No Toggl subscription. No Tempo add-on. No enterprise tier for audit logs. One tool, one price." | Time tracking, audit logging, AI, MCP — all included in every tier. Competitors charge extra for each. |
| **AI writes your stories** | "Submit a report. Kendo's AI turns it into a fully formed issue." | AI story generation from reports — drafts title, description, type, and priority from report content. Select one or more reports and generate a ready-to-go issue. *(Internal only — not for external messaging yet.)* |
| **Time tracking included** | "Log time per issue, per sprint, per team — built in, not bolted on." | Native time entries with aggregation by user/project/team/issue/day. Linear, Asana, GitHub Projects, and Shortcut don't have it. |
| **Audit-grade compliance** | "Hash-chained logs, user snapshots on every action, AI call auditing. The only PM tool with MCP audit trails." | Tamper-proof append-only audit logs with hash chains. MCP automation is booming — audit trails are missing everywhere else. *(See [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md) §6)* |
| **European by default** | "Amsterdam-hosted, database-per-tenant isolation. Your data stays in Europe — no upgrade required." | Fly.io Amsterdam region, Dutch-owned (Back to code B.V.), true database-per-tenant isolation. No competitor in the developer PM segment matches this at this price point. 54% of EU IT decision-makers prioritize data sovereignty in purchasing. *(See [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md))* |
| **GitHub-native workflow** | "Branch linking, PR status sync, webhook-driven lane transitions. Your board reflects your code, automatically." | Lane-based GitHub triggers, branch-to-issue linking, PR webhooks |

---

## 4. Target Audience & Personas

### Primary: Small-to-medium dev teams (3-20 developers)

| Persona | Description | Pain points | Kendo solves |
|---------|-------------|-------------|--------------|
| **Tech Lead "Tessa"** | Leads a team of 5-8 developers. Uses Claude Code daily. Tracks work in Linear/GitHub but hates context switching. | Switching between IDE, browser, PM tool. Context loss between tools. | MCP integration, terminal-first workflow |
| **Freelance Dev "Floris"** | Solo developer or small agency. Needs time tracking for client billing. Uses GitHub for everything. | Paying for Jira + Toggl. Overkill tools for a small team. | All-in-one with time tracking. Simple, focused. |
| **CTO "Christiaan"** | Responsible for team productivity + compliance. EU-based company. | GDPR compliance, data residency requirements, audit trail needs. | European hosting, audit logging, multi-tenancy |

### Secondary: EU compliance-focused teams

EU-based companies that need GDPR-compliant project management with data residency guarantees, audit trails, and database-per-tenant isolation. Most competitors are US-hosted or gate compliance features behind enterprise tiers.

---

## 5. Messaging Framework

### Tagline Options

1. **"The issue tracker that lives in your terminal."** (Terminal-first, used on landing page)
2. **"Your board, your terminal, one flow."** (MCP-focused)
3. **"Stop switching tabs. Manage issues from your IDE."** (Pain-point-first — targets context switching)
4. **"Linear + time tracking + EU hosting. No add-ons."** (Comparison-anchor — instant comprehension for Linear users)

*Positioning guidance: Supabase grew from 8 to 800 databases in 72 hours after changing their tagline from "realtime Postgres" to "open-source Firebase alternative." "AI-native issue tracker with MCP support" is accurate but requires understanding MCP. Test simpler frames that trigger "I need that" rather than "what's MCP?" (See [research/dev-tool-growth-playbooks.md §Supabase](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md))*

### Elevator Pitch (30 seconds)

> Kendo is a project management tool built for developers who live in the terminal. Claude Code and Cursor can read your issues, create tasks, log time, and plan sprints — without opening a browser. You get board, backlog, sprints, time tracking, GitHub sync, and audit logging in one tool. No add-ons, no enterprise upsells. Amsterdam-hosted, database-per-tenant, GDPR-ready.

### Feature Messaging Matrix

| Feature | Headline | Supporting copy | Competitor weakness it targets |
|---------|----------|-----------------|-------------------------------|
| MCP integration | "Manage your board from the terminal" | "29 MCP tools let Claude Code read, create, and update your issues without leaving your editor. Zero context switches. Designed for the terminal workflow, not bolted onto an enterprise platform." | Most competitors have MCP now, but their integrations span multiple products (Jira: 46+ tools across 5 Atlassian products) or lack time tracking (Linear's 21 tools). Kendo's integration is focused on the developer terminal workflow. |
| AI story generation | "Submit a report. Get an issue." | "AI generates fully formed issues from report content — title, description, type, priority. Triage feedback in seconds, not minutes." *(Internal only — not for external messaging yet.)* | No competitor has report-to-issue AI generation. |
| Time tracking | "No Toggl subscription needed" | "Log time per issue, view by sprint, team, or project. Built in, not bolted on. No add-ons, no third-party integrations." | Linear/Asana/GitHub Projects/Shortcut: no native time tracking. Plane has it but only on Pro+ ($6). ClickUp gates it behind Unlimited+ ($7). Kendo includes it in every tier. |
| Audit logging | "Every AI action, audited" | "Hash-chained, tamper-proof logs with user snapshots. MCP automation is booming — audit trails are missing. Kendo fills the gap." | No competitor has hash-chained audit logs. No competitor audits MCP/AI agent actions. |
| EU hosting | "Your data stays in Europe" | "Amsterdam-hosted, database-per-tenant isolation. Not a checkbox — the default. No US jurisdiction, no CLOUD Act exposure." | No developer PM tool at this price point offers EU hosting by default. Linear, Shortcut: US-only. Plane: US cloud, self-host for EU. Jira: EU on Premium ($14.54). |
| GitHub integration | "Your board reflects your code" | "Automatic lane transitions on PR events, branch linking, status sync. No manual updates." | Comparable to Linear/Shortcut, better than Asana/Monday/ClickUp. |

---

## Research Sources

### Competitors

- Linear: linear.app (pricing, features, MCP, AI agent platform)
- Jira: atlassian.com/software/jira (pricing, Atlassian Intelligence, Rovo)
- Asana: asana.com (pricing, AI Studio, tiered AI access)
- Monday.com: monday.com (pricing, Sidekick, Agents)
- Shortcut: shortcut.com (pricing, Korey AI, MCP server)
- ClickUp: clickup.com (pricing, Brain, Super Agents)
- Plane: plane.so (pricing, open-source, MCP, Intelligence)
- GitHub Projects: github.com/features/issues (pricing, Copilot)
- Notion: notion.com (pricing, Notion Agent, Enterprise Search)
- Trello: trello.com (pricing, Butler, Atlassian Intelligence)

### Market trends

- MCP adoption across PM tools (March–April 2026)
- Jira fatigue and developer PM tool preferences
- AI-native project management trends 2024-2026
- European GDPR-compliant SaaS demand
- llms.txt adoption curve (844k+ websites)

### Kendo HQ research reports (updated 2026-04-04)

- [research/mcp-ecosystem-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/mcp-ecosystem-landscape-2026-q2.md) — MCP server adoption across all competitors
- [research/dev-tool-pricing-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/dev-tool-pricing-landscape-2026-q2.md) — Pricing, feature gating, AI pricing models
- [research/ai-native-dev-tools-landscape-2026.md](https://kendo-hq.pages.dev/raw/research/ai-native-dev-tools-landscape-2026.md) — AI-native positioning, agentic PM, commoditization risk
- [research/dev-tool-growth-playbooks.md](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md) — How Linear, Vercel, Supabase, Railway, Resend, PostHog grew from zero; channel validation
- [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md) — Developer PM tool frustrations, context switching data, persona validation
- [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md) — EU data sovereignty as purchase driver, regulatory landscape, competitive positioning

## Related

- [go-to-market.md](https://kendo-hq.pages.dev/raw/company/go-to-market.md) — Pricing, channels, launch roadmap, metrics, strategic priorities
- [product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Full feature set powering this strategy
- [brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — Voice, tone, and visual identity for all marketing output
- [mission.md](https://kendo-hq.pages.dev/raw/company/mission.md) — Strategic priorities that guide what we emphasize
- [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Detailed competitive landscape (strengths, weaknesses, positioning angles)
- [../shared/user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — Persona details for targeted messaging
- *growth/directory-launch-plan.md* — Startup directory submission strategy

---

# Go-to-Market

*Source: `company/go-to-market.md` · [Raw](https://kendo-hq.pages.dev/raw/company/go-to-market.md)*

# Kendo.dev — Go-to-Market Plan

> Pricing, marketing channels, launch roadmap, metrics, and strategic priorities for kendo.dev.
>
> For positioning, SWOT, competitive landscape, and messaging: see [marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md).

---

## 6. Pricing Strategy Recommendation

| Tier | Price | Target | Includes |
|------|-------|--------|----------|
| **Free** | $0 | Solo devs, evaluation | 1 project, up to 3 seats, all core features, MCP, AI (limited) |
| **Team** | $8/seat/month | Small teams (3-15) | Unlimited projects, full AI, time tracking, GitHub integration, audit logs, BYOK AI keys |

### Market positioning

*(See [research/dev-tool-pricing-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/dev-tool-pricing-landscape-2026-q2.md) for the full pricing matrix.)*

- Cheaper than Linear ($10) with time tracking and audit logging included — Linear gates both behind higher tiers
- Cheaper than Shortcut ($8.50) with time tracking and audit logging included — Shortcut has Korey AI but no time tracking or audit logs below Enterprise
- More feature-rich than Plane ($6) at the compliance layer — Plane gates audit logging behind Business ($13), Kendo includes hash-chained logs at $8
- Massively simpler and cheaper than Jira ($7.91 + add-ons) — Jira gates AI behind Premium ($14.54) and audit logs behind the same tier

### Key pricing principles

- **No per-feature add-on pricing** (unlike ClickUp's $9/user AI add-on)
- **MCP and AI in every tier** — this is the differentiator, don't gate it
- **Time tracking in every tier** — competitive advantage, don't hide it

---

## 7. Marketing Channels & Strategy

*(Updated 2026-04-04 based on [research/dev-tool-growth-playbooks.md](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md), [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md), [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md))*

### Phase 1: Foundation ✅ COMPLETE

Landing page and docs are live at kendo.dev. SEO and AI discoverability (llms.txt, structured data) are shipped.
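For context on the discoverability work above: llms.txt is a plain Markdown file served at a site's root that gives AI crawlers a curated map of the most useful pages (an H1 title, a blockquote summary, then sections of links). A minimal sketch of what kendo.dev's file could look like — the section names, descriptions, and exact paths here are illustrative, not the shipped file:

```
# Kendo

> AI-native issue tracker for developer teams: MCP server, built-in time
> tracking, hash-chained audit logs, EU hosting.

## Docs

- [MCP server guide](https://kendo.dev/docs/mcp): connect Claude Code or Cursor
- [API reference](https://kendo.dev/docs/api): REST endpoints and authentication

## Optional

- [Blog](https://kendo.dev/blog): build-in-public notes and technical deep-dives
```

The format deliberately mirrors ordinary Markdown so the same file is readable by humans, crawlers, and LLMs alike.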
### Phase 1.5: Pre-Launch (Weeks 1-8 before public launch)

**Goal:** Build demand, refine product with real users, establish founder brand.

*Every breakout dev tool studied ran a private beta before public launch. Linear had 1,000 paying customers before going public. Resend had 20K on a waitlist. Going straight to public launch leaves signal on the table.*

| Action | Channel | Priority | Notes |
|--------|---------|----------|-------|
| **Waitlist / invite-only beta** | kendo.dev | High | 4-8 week invite-only period. Goal: 50-200 users providing feedback before public launch. Even 200 email signups from strangers validates positioning. |
| **Founder brand on Twitter/X** | @jasperboerhof | High | Jasper's personal account is the primary channel, not @kendo. Build-in-public tweets 3-5x/week: screenshots, technical decisions, architecture choices. #buildinpublic. Founder brand vastly outperforms company accounts — Linear's founders generated a 10K waitlist through their Twitter following alone; Resend built a 20K waitlist purely from Zeno Rocha's personal brand. *(See [research/dev-tool-growth-playbooks.md](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md) §Linear, §Resend)* |
| **Open-source wedge** | GitHub | High | Ship a useful standalone open-source tool adjacent to Kendo. Every breakout dev tool had one (Resend → React Email, Vercel → Next.js, Supabase → entire platform, PostHog → self-hosted analytics). Candidates: open-source MCP integration library, time tracking CLI, or board template format. This is the biggest gap in the current strategy. |
| **GitHub README** | GitHub | High | Compelling README with screenshots, feature list, link to kendo.dev. |
| **Manual onboarding** | Direct | High | DM the first 10-50 beta users. Set up accounts personally. Watch them use the product. PostHog targeted 30-second response times — speed of support is a solo founder's unfair advantage. |

### Phase 2: Public Launch (Week 8-10)

**Goal:** Maximum visibility from a coordinated launch across channels.

| Action | Channel | Priority | Notes |
|--------|---------|----------|-------|
| **Hacker News "Show HN"** | news.ycombinator.com | Critical | Single highest-leverage launch action for dev tools. Post Tue-Thu, 8-10 AM PT. Write a first comment explaining who you are and why you built this. Engage every comment for 18 hours. Supabase got 1,120 upvotes and 100x growth; PostHog got 300 deployments in days. |
| **Product Hunt launch** | producthunt.com | High | Same week as HN. Prepare assets in advance: demo video (90s), screenshots, tagline. Riding momentum from HN into PH newsletters maximizes impact. |
| **Cross-post the technical story** | Reddit, dev.to, Hashnode | Medium | Same HN post adapted for r/programming, r/webdev, dev.to. Each platform has different norms — adapt the format, not the substance. |

### Phase 3: Content Marketing (Month 2-4)

**Goal:** Organic traffic through developer-focused content.

Content marketing supports growth in Month 2-6 but was never the primary 0-to-1000 driver for any studied company. Focus on authentic, honest writing.

| Content Type | Topic Ideas | Channel | Frequency |
|-------------|-------------|---------|-----------|
| **Founder blog** | Honest writing about the building journey — decisions, trade-offs, mistakes. PostHog's most successful content was transparent founder writing. This naturally hits the HN frontpage. | kendo.dev/blog | 2x/month |
| **Technical deep-dives** | "Building an MCP server with Laravel" / "Hash-chained audit logs in PHP" / "The $78K context-switching tax and how MCP fixes it" | dev.to, Hashnode | 1x/month |
| **Comparison pages** | "Kendo vs Linear" / "Kendo vs Jira" / "Kendo vs GitHub Projects" — honest, feature-by-feature | kendo.dev/compare | One-time, update quarterly |
| **Pain-point content** | "Jira fatigue: what 49,000 developers actually say" (use sentiment research data) / "Database-per-tenant: real multi-tenancy" | kendo.dev/blog | As needed |

### SEO keyword targets

| Keyword cluster | Intent | Content |
|----------------|--------|---------|
| "issue tracker with MCP" / "MCP project management" | Discovery | Landing page, blog post |
| "Jira alternative for small teams" | Comparison | Comparison page |
| "project management with time tracking" | Feature | Feature page, docs |
| "AI issue tracker" / "AI project management" | Discovery | Landing page, blog |
| "GDPR project management EU" / "European Jira alternative" | Compliance | Landing page, blog. The "European alternatives" search segment is real (europealternatives.com catalogs 354+ EU companies). |
| "issue tracker terminal" / "manage issues from terminal" | Developer workflow | Landing page, blog |

### Phase 4: Community & Recurring Growth (Month 2-6)

**Goal:** Build community, establish recurring visibility through Launch Weeks.

| Channel | Strategy | Effort |
|---------|----------|--------|
| **Twitter/X (Jasper personal)** | Primary channel. Opinionated dev insights, build-in-public, MCP ecosystem commentary. Founder voice, not company voice. | Daily |
| **Hacker News** | Ongoing: submit technical blog posts, engage in PM/dev tool threads. Not just the launch post. | Ongoing |
| **Reddit** | r/programming, r/webdev, r/ExperiencedDevs — publish technical posts that showcase Kendo's approach, not product mentions. "Here's how we built hash-chained audit logs" > "check out our tool." | Ongoing |
| **Discord community** | Launch a Kendo Discord for users, feedback, support. Railway grew to 45K Discord members with zero marketing spend. Be there daily. | Ongoing |
| **Claude Code / MCP community** | Share Kendo + Claude Code workflow demos. Showcase MCP integration patterns. | Ongoing |
| **Launch Weeks** | Quarterly mini-launch-weeks: 3 days of feature announcements. Supabase's most repeatable growth tactic — they even open-sourced the concept at launchweek.dev. First one at Month 2-3. | Quarterly |
| **Mastodon/Fediverse** | Deprioritized. No studied company cited it as a growth channel. Cross-post from Twitter when relevant, but don't invest equal effort. | Low priority |

### Phase 5: Partnerships & Integrations (Month 3-6)

| Partnership | Value | Approach |
|-------------|-------|----------|
| **Anthropic/Claude Code** | Featured in MCP ecosystem, Claude Code docs | Submit to MCP registry, write integration guide |
| **Mattermost** | Listed in Mattermost marketplace | Submit integration |
| **Laravel community** | Laravel-built tool, credibility in the ecosystem | Laravel News feature, Laracasts sponsorship |
| **Vue.js ecosystem** | Built with Vue, credibility | Vue.js conference sponsorship/talk |
| **European alternatives directories** | Listed on europealternatives.com and similar EU-SaaS directories | Submit listings — validated as a real discovery channel for EU-conscious buyers |

---

## 8. Launch Roadmap

*(Updated 2026-04-04 — resequenced based on [research/dev-tool-growth-playbooks.md](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md). Key change: added a pre-launch beta phase before public launch.)*

| Status | Action | Owner | Phase |
|--------|--------|-------|-------|
| ✅ | Landing page shipped (kendo.dev) | Dev | Foundation |
| ✅ | Documentation shipped (kendo.dev/docs) | Dev | Foundation |
| ✅ | SEO + AI discoverability (llms.txt, structured data) | Dev | Foundation |
| **Next** | Start building in public on Twitter/X (Jasper personal) | Founder | Pre-launch |
| **Next** | Ship open-source wedge project (MCP library or time tracking CLI) | Dev | Pre-launch |
| **Next** | Set up waitlist / invite-only beta on kendo.dev | Dev | Pre-launch |
| **Next** | Onboard first 10-50 beta users manually | Founder | Pre-launch |
| **Next** | Prepare Product Hunt assets (demo video, screenshots, copy) | Marketing | Pre-launch |
| **Launch** | Hacker News "Show HN" post (Tue-Thu, 8-10 AM PT) | Founder | Public launch |
| **Launch** | Product Hunt launch (same week as HN) | All hands | Public launch |
| **Planned** | Launch Discord community | Founder | Post-launch |
| **Planned** | First blog posts (context switching article, MCP deep-dive) | Content | Post-launch |
| **Planned** | Comparison pages (vs Linear, vs Jira, vs GitHub Projects) | Content | Post-launch |
| **Planned** | Submit to MCP registry, Mattermost marketplace, EU directories | Dev | Post-launch |
| **Planned** | First mini-Launch Week (3 days of announcements) | All | Month 2-3 |
| **Ongoing** | 2x/month blog posts, daily Twitter, community engagement | Content | Ongoing |

---

## 9. Key Metrics to Track

*Note: Month 3 targets are aspirational. PostHog took ~6 months to reach 1,000 users. Linear had 1,000 after a year of invite-only beta. Calibrate expectations — the 0-to-1000 phase depends heavily on HN/PH launch success. (See [research/dev-tool-growth-playbooks.md](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md))*

| Metric | Tool | Target (Month 3) |
|--------|------|------------------|
| Waitlist signups (pre-launch) | Internal | 200+ signups |
| Beta users (pre-launch) | Internal | 50 active users |
| Organic traffic to kendo.dev | Google Search Console | 1,000 visits/month |
| AI citations (ChatGPT, Perplexity mentions) | Manual monitoring | Appearing in "issue tracker" queries |
| Sign-ups (post-launch) | Internal analytics | 100 sign-ups |
| Product Hunt upvotes | Product Hunt | Top 5 of the day |
| Hacker News upvotes | HN | 100+ upvotes |
| GitHub stars (open-source wedge) | GitHub | 500 stars |
| Blog post views | Analytics | 500/post average |
| Discord members | Discord | 100 members |

---

## 10. Strategic Priorities

*(Updated 2026-04-04 based on [research/dev-tool-growth-playbooks.md](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md), [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md), [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md))*

### Five things that matter most

1. **Lead with context switching, not "Jira alternative."** Context switching costs developers $78K/year and 4.5 hours of focus daily. "Your board lives in your terminal — zero context switches" is a stronger and less saturated message than "Jira alternative" (at least 15 articles already compete for that phrase). *(See [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md))*
2. **Ship an open-source wedge before public launch.** Every breakout dev tool had an open-source component driving awareness (Next.js, React Email, PostHog self-hosted). This is the biggest gap in the strategy. Candidates: MCP integration library, time tracking CLI, board template format. *(See [research/dev-tool-growth-playbooks.md](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md))*
3. **Run a private beta before going public.** Linear had 1,000 customers before public launch. Resend had 20K on a waitlist. A 4-8 week invite-only beta builds demand, refines the product, and ensures the public launch converts. HN + Product Hunt remains the launch sequence — but after beta, not before.
4. **MCP depth + audit trails is the moat.** MCP is now table stakes (13+ PM tools have servers), but nobody is solving audit trails for AI agent actions via MCP. Kendo's hash-chained audit logs uniquely fill this gap. Position around trust and transparency: "AI manages your board — and every action is audited." *(See [research/developer-pm-sentiment-2026.md §6](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md))*
5. **"Nothing extra" > "everything included."** Reframe the bundle message to avoid sounding like ClickUp. "No Toggl subscription. No Tempo add-on. No enterprise tier for audit logs." is more compelling than "all-in-one." Developers who hear "everything included" think feature bloat; "nothing extra" communicates lean focus.
---

## Related

- [marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — SWOT, competitive landscape, USPs, messaging framework
- [product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Full feature set powering this strategy
- [brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — Voice, tone, and visual identity for all marketing output
- [mission.md](https://kendo-hq.pages.dev/raw/company/mission.md) — Strategic priorities that guide what we emphasize
- *growth/directory-launch-plan.md* — Startup directory submission strategy
- [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Detailed competitive landscape
- [../shared/user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — Persona details for targeted messaging

---

# Glossary

*Source: `shared/glossary.md` · [Raw](https://kendo-hq.pages.dev/raw/shared/glossary.md)*

# Glossary

Kendo-specific terminology for consistent communication across all agents.

| Term | Definition |
|------|-----------|
| **Agent** | An AI entity (Claude Code, custom bot, etc.) that can be assigned to issues and autonomously work on them |
| **Agent assignee** | An AI agent assigned to an issue, treated the same as a human assignee on the board |
| **Board** | Kanban-style view of issues organized by status columns |
| **Backlog** | List of issues not yet in active development — the "to be prioritized" queue |
| **Epic** | A grouping of related issues that together form a larger feature or initiative |
| **Issue** | The atomic unit of work in Kendo — a bug, feature, or task |
| **Issue key** | The unique identifier for an issue, formatted as `PROJECT-0001` |
| **MCP server** | Kendo's Model Context Protocol server that lets AI assistants read and manage issues |
| **Project** | A container for issues, with its own board, backlog, and team members |
| **Project code** | Short uppercase prefix for a project (e.g., "KEN" for Kendo), used in issue keys |
| **Tenant** | An isolated instance of Kendo with its own database, subdomain, projects, and users |
| **Worktree** | Git worktree used by the Dev agent to work on issues in isolation without switching branches |
| **Kendo HQ** | This repo — the company-level orchestration layer containing agent profiles and shared context |
| **Report** | A lightweight feedback item (bug, request, idea) submitted to a project's triage inbox. Reports can be reviewed, reordered, promoted to issues (with optional AI story generation), or dismissed. Not an issue until promoted. |
| **Lane** | A configurable status column on the board (e.g., To Do, In Progress, In Review, Done). Each lane has a name, color, and order. Projects can customize their lanes. |
| **Priority** | Issue urgency level — Highest, High, Medium, Low, or Lowest |
| **Role** | A custom permission set within a tenant. Each role defines per-action scopes (None, Own, All). Admin and Member are the default system roles; tenants can create additional roles. |
| **Notification** | An in-app alert triggered by issue changes, comments, assignments, or other events. Users can configure notification preferences and mark notifications as read. |
| **Attachment** | A file uploaded to an issue or comment |
| **Sprint status** | The lifecycle state of a sprint — planned, active, or completed. When completing a sprint, unfinished issues migrate to a target sprint or the backlog. |
| **Blocking relation** | A dependency between issues — an issue can block or be blocked by other issues. Displayed on the issue detail and managed via the board or MCP. |
| **Issue template** | A reusable template for issue creation within a project. Templates define default content that pre-populates new issues. Managed in project settings. (Renamed from "system prompts" internally.) |
| **Two-factor authentication (2FA)** | Optional TOTP-based second factor for user accounts. Tenants can enforce 2FA for admins, members, or both. Setup includes QR code and recovery codes. |
| **BYOK AI keys** | "Bring your own key" — tenants and projects can configure their own Anthropic or OpenAI API keys instead of using Kendo's shared keys. Managed at tenant and project level. |
| **OAuth** | Third-party login via Google or GitHub. Users can link/unlink OAuth providers in their profile settings. Also used for invite acceptance. |
| **Overview (issue view)** | A dedicated issue overview page alongside Board and Backlog, showing issue summaries and key metrics for a project. |
| **Epic status** | The lifecycle state of an epic — Open, In Progress, or Completed. |
| **kendo-red** | `#c8553a` — the brand accent color used throughout the UI. See [brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md#brand-color) for the full color system. |

## Related

- [../company/product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Features referenced in this glossary
- [../company/brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — Visual identity and design tokens
- [user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — Who uses these terms and in what context

---

# Competitors

*Source: `shared/competitors.md` · [Raw](https://kendo-hq.pages.dev/raw/shared/competitors.md)*

# Competitive Landscape

*(Updated 2026-04-04 based on [research/mcp-ecosystem-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/mcp-ecosystem-landscape-2026-q2.md), [research/dev-tool-pricing-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/dev-tool-pricing-landscape-2026-q2.md), [research/ai-native-dev-tools-landscape-2026.md](https://kendo-hq.pages.dev/raw/research/ai-native-dev-tools-landscape-2026.md), [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md), [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md))*

## Direct Competitors

### Jira (Atlassian)

- **Market position:** Dominant in enterprise, hated by developers
- **Strengths:** Deep integrations, mature ecosystem, enterprise compliance, Rovo MCP server (46+ tools across Jira/Confluence/Compass/JSM), Rovo AI agents assignable to issues
- **Weaknesses:** Slow, bloated, requires admin overhead, ugly UI, complex workflows, AI gated behind Premium+ ($14.54)
- **Our angle:** "You shouldn't need a Jira admin. You shouldn't need a certification. You should just track issues and ship."

### Linear

- **Market position:** The "developer-loved" alternative. Strong brand, great design. Pursuing a "self-driving SaaS" vision.
- **Strengths:** Beautiful UI, fast, opinionated, strong keyboard shortcuts, MCP support (21+ tools), AI triage, Linear Agent (public beta), GitHub Copilot integration for coding agent dispatch *(See [research/ai-native-dev-tools-landscape-2026.md](https://kendo-hq.pages.dev/raw/research/ai-native-dev-tools-landscape-2026.md))*
- **Weaknesses:** No native time tracking, closed ecosystem, annual-only billing on paid plans, audit logging gated behind Enterprise
- **Our angle:** "Linear proved developers want better tools. We go further — time tracking included, deeper MCP integration, EU-hosted."

### Asana

- **Market position:** Popular with PMs and non-technical teams
- **Strengths:** Flexible views, good for cross-functional teams, strong automation, MCP server (30+ tools, V2 GA), AI Studio with 12 pre-built AI Teammates, **MCP Apps launch partner (Jan 2026)** — ships an interactive UI widget inside Claude Desktop
- **Weaknesses:** Not developer-focused, can feel like a PM tool forced on devs, AI Teammates geared toward cross-functional workflows
- **Our angle:** "Built for developers, not for the people who manage developers."

### GitHub Issues + Projects

- **Market position:** Free, integrated with where code lives
- **Strengths:** Zero friction if you're already on GitHub, tight PR/commit links, MCP server (51 tools including Projects), Copilot cloud agent integration
- **Weaknesses:** Limited views, no proper board experience, no AI agent assignment, basic at scale, no time tracking
- **Our angle:** "When GitHub Issues isn't enough, but Jira is too much."
### Plane

- **Market position:** Open-source, AI-native alternative, growing fast (46.5k GitHub stars)
- **Strengths:** Self-hostable, open source, clean UI, MCP server (55+ tools — largest in PM category), AI sidecar in every view, agent assignment via @mention, time tracking on Pro+ ($6)
- **Weaknesses:** Less polished than Linear, time tracking gated behind Pro tier, smaller ecosystem, audit logging gated behind Business ($13)
- **Our angle:** "Open source is great for trust. But audit logging and EU hosting aren't add-ons — they're built in from day one."

### Shortcut (formerly Clubhouse)

- **Market position:** Mid-market, developer-friendly, actively investing in AI
- **Strengths:** Good API, reasonable UI, milestones/epics, Korey AI agent (GA, shipped Q1 2026) with spec generation and task breakdown, MCP server (24 tools), "Shortcut for Agents" platform
- **Weaknesses:** Low brand awareness, no native time tracking, no audit logging below Enterprise
- **Our angle:** "Shortcut built a good PM agent. We built the terminal-native platform that agents work through — time tracking, audit logs, and EU hosting included."

## Additional Competitors

### Monday.com

- **Market position:** Broad/enterprise work management, not developer-focused
- **Strengths:** MCP server (18 standard + dynamic API tools, all plans), Sidekick AI (GA), Agent Builder (beta), flexible views, **MCP Apps launch partner (Jan 2026)** — ships an interactive UI widget inside Claude Desktop
- **Weaknesses:** Not developer-focused, minimum 3-seat purchase on paid plans, AI pricing tiered (Lite/Plus/Super Sidekick)
- **Our angle:** "Built for developers, not for everyone. Monday.com charges extra for AI depth — we include it."
### ClickUp

- **Market position:** All-in-one for everyone, heavy AI investment
- **Strengths:** MCP server (20+ official tools, public beta), AI Super Agents with persistent memory, time tracking on Unlimited+ ($7), broad feature set
- **Weaknesses:** Bloated everything-app, Brain add-on at $9/user/month, AI Autopilot at $28/user/month — the most expensive AI pricing in the market
- **Our angle:** "ClickUp charges $9/user for AI as an add-on. We include AI in every tier — no hidden costs."

### Notion

- **Market position:** Knowledge tool adding PM as a side feature
- **Strengths:** MCP server (18 tools, GA), Custom Agents with multi-model support (Claude/GPT/Gemini), MCP connections to Linear/Figma/Slack
- **Weaknesses:** Not a real PM tool — developers who use Notion for project management usually outgrow it
- **Our angle:** "Notion is where you write docs. Kendo is where you ship software."

### Wrike

- **Market position:** Enterprise work management
- **Strengths:** MCP server (GA, full API coverage), time tracking via MCP, OAuth 2.0
- **Weaknesses:** Enterprise-focused, not developer-targeted, less known in dev circles
- **Our angle:** Not a primary competitor for developer teams.

## Adjacent / Complementary

### Cursor, Windsurf, Claude Code (AI coding tools)

Not competitors but complementary. Every major coding tool now supports MCP natively. Kendo's MCP server makes it the project layer for AI-assisted development. These tools write code — Kendo tells them what to write.

## Our Unique Position

MCP support is now table stakes — at least 13 major PM tools have official MCP servers. The differentiator is no longer "we have MCP" but the depth and developer-workflow fit of our integration, combined with capabilities competitors gate behind higher tiers.

The next frontier is **MCP Apps** (SEP-1865, GA 2026-01-26) — interactive UI widgets rendered inside Claude Desktop, Claude.ai, VS Code Copilot, and ChatGPT. Asana, Monday.com, Canva, Figma, Box, and Slack were launch partners. Kendo can ship a pilot MCP App (a read-only "Today" card) in 3-5 days with zero new backend work — our 29 tools and 11 resources already cover the data layer. *(See *../research/mcp-apps-kendo-pilot.md*)*

Our key differentiators:

1. **Zero context switches** — 29 MCP tools designed for the Claude Code / Cursor terminal workflow. Developers lose 4.5 hours/day to context switching; Kendo eliminates the PM tool switch entirely. *(See [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md))*
2. **MCP audit trails** — the only PM tool with hash-chained audit logs for AI agent actions. MCP automation is booming, but nobody else is solving audit trails. Included at $8/user; competitors gate audit logs behind Enterprise tiers ($13-21+) or don't offer them.
3. **Nothing extra** — time tracking, audit logging, AI, and MCP all included in every tier. No add-on pricing. Competitors charge extra for each.
4. **Only EU-hosted developer PM tool at this price** — Amsterdam-hosted, Dutch-owned, database-per-tenant. Linear and Shortcut are US-only. Plane is US cloud (self-host for EU). Jira offers EU on Premium ($14.54). 54% of EU IT decision-makers prioritize data sovereignty. *(See [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md))*
5. **Developer-first UI** — dense, dark theme, JetBrains Mono, keyboard-driven. Developers consistently praise PM tools that exhibit sub-second speed, keyboard-first nav, and developer aesthetics. *(See [research/developer-pm-sentiment-2026.md §4](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md))*

No single competitor matches all five. That combination is the moat — not any one feature in isolation.
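Hash-chained audit logs are the load-bearing claim in differentiator #2, so a minimal sketch of the mechanism helps make it concrete: each entry's hash covers the previous entry's hash, so tampering with any past entry invalidates every hash that follows. This is an illustrative sketch only; the field names and the choice of SHA-256 are assumptions, not Kendo's actual audit schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def entry_hash(prev_hash: str, entry: dict) -> str:
    """Hash this entry's canonical JSON together with the previous hash."""
    payload = json.dumps(entry, sort_keys=True)
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append(log: list, entry: dict) -> None:
    """Append an entry whose hash chains to the tail of the log."""
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"entry": entry, "hash": entry_hash(prev, entry)})

def verify(log: list) -> bool:
    """Recompute the chain; any edited entry breaks every hash after it."""
    prev = GENESIS
    for item in log:
        if entry_hash(prev, item["entry"]) != item["hash"]:
            return False
        prev = item["hash"]
    return True

log = []
append(log, {"actor": "mcp:claude-code", "action": "issue.move", "issue": "KEN-0042"})
append(log, {"actor": "mcp:claude-code", "action": "comment.create", "issue": "KEN-0042"})
assert verify(log)

log[0]["entry"]["action"] = "issue.delete"  # tamper with history...
assert not verify(log)                      # ...and the chain no longer verifies
```

Verification is a linear recompute with no external dependency; anchoring the latest hash outside the tenant (for example in a customer-facing report) commits the entire history up to that point.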
## Related

- [../company/product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Our full feature set and differentiators
- [../company/marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — SWOT analysis, competitive positioning, and feature messaging matrix
- [../company/brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — Guidelines for how we reference competitors in content
- [user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — Who's evaluating us against these competitors

---

# User Personas

*Source: `shared/user-personas.md` · [Raw](https://kendo-hq.pages.dev/raw/shared/user-personas.md)*

# User Personas

*(Validated 2026-04-04 against [research/developer-pm-sentiment-2026.md](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md) — all three personas strongly confirmed by survey data and community sentiment.)*

## Primary: Freelance Dev "Floris"

**Role:** Solo developer or small agency
**Team size:** 1-3

**Situation:**

- Building a product alone or with a tiny team
- Uses GitHub for code, but needs more structure for planning
- Already using AI tools (Claude Code, Cursor, Copilot) for coding
- Doesn't want to pay for or configure Jira
- Needs time tracking for client billing

**Needs:**

- Quick issue creation without ceremony
- Reports inbox to dump feedback and ideas without writing a full issue
- Board view to see what's in flight
- MCP integration so they can manage issues from the terminal
- Built-in time tracking — no Toggl subscription needed

**Pain points:**

- Paying for Jira + Toggl is overkill for a small team
- Switching between coding and project management breaks flow
- Existing tools are either too simple (GitHub Issues) or too complex (Jira)

**Research validation:** Strongly confirmed. "Paying for Jira + Toggl is overkill" appears repeatedly in community discussions. Small teams use only 20% of enterprise tool features but pay for 100%. Many have tried Linear, Shortcut, and GitHub Projects and found each lacking — this is the "missing middle ground" segment. *(See [research/developer-pm-sentiment-2026.md §5](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md))*

**What they'd say:**

- "I just want to dump tasks somewhere and have them organized"
- "I don't need sprints or story points, I need a board and a backlog"
- "Why do I need three subscriptions to track issues and log time?"

---

## Secondary: Tech Lead "Tessa"

**Role:** Leads a team of 5-8 developers
**Team size:** 5-15

**Situation:**

- Managing a small team that moves fast
- Uses Claude Code daily, tracks work in Linear or GitHub
- Currently on Jira (reluctantly) or Linear
- Hates context switching between IDE, browser, and PM tool

**Needs:**

- Role-based access with teams
- Sprint/cycle planning without overhead
- Report triage — review incoming feedback, promote to issues or dismiss
- MCP integration so the team can manage issues from their AI coding tools
- GitHub PR and branch linking
- Time tracking for reporting

**Pain points:**

- Jira is slow and the team hates it
- Too much time in standups discussing status that a board should show
- Switching between IDE, browser, and PM tool kills flow

**Research validation:** Strongly confirmed. Context switching is the #1 developer productivity killer: 12-15 major switches/day, 23 min recovery each, $78K/year lost per developer. Atlassian's own DevEx report found developers save 10 hrs/week with AI but lose 10 hrs/week to tool friction — the gains cancel out. JetBrains publicly stated the fix is to "bring tools into the IDE." This is Kendo's strongest validated positioning angle. *(See [research/developer-pm-sentiment-2026.md §2](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md))*

**What they'd say:**

- "I want to see at a glance what everyone is working on"
- "We don't need 90% of Jira's features but we need the 10% to be solid"
- "Why can't I manage my board from the terminal where I already work?"

---

## Tertiary: CTO "Christiaan"

**Role:** Responsible for team productivity and compliance
**Team size:** 10-20, EU-based company

**Situation:**

- Needs GDPR-compliant tooling with EU data residency
- Wants audit trails for compliance reporting
- Evaluating alternatives to US-hosted PM tools

**Needs:**

- EU hosting with database-per-tenant isolation
- Hash-chained audit logging for compliance
- Role-based access control
- Time tracking for client reporting and invoicing

**Pain points:**

- GDPR compliance with US-hosted tools requires extra legal work
- Audit logging is an enterprise upsell at every competitor
- Wants one tool that covers PM + time tracking + compliance

**Research validation:** Validated with growing strength. 54% of EU IT decision-makers prioritize data sovereignty in purchasing. 73% of European business decision-makers weight privacy as significant in vendor selection. 61% of Western European CIOs intend to shift workloads to local/regional providers. GDPR fines up 38% YoY in 2025. NIS2 enforcement (October 2026) will require customers to evaluate vendor compliance. No competitor in Kendo's segment matches EU-hosted + database-per-tenant + audit logs at every tier. Kendo should lead with trust for this persona specifically. *(See [research/eu-data-sovereignty-demand-2026.md](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md))*

**What they'd say:**

- "Where is my data stored? Can you prove it?"
- "I need an audit trail that holds up, not a CSV export"
- "Why is compliance always the enterprise tier?"

## Related

- [end-user-roles.md](https://kendo-hq.pages.dev/raw/shared/end-user-roles.md) — Day-to-day user archetypes within a team (companion doc)
- [../company/product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Features that solve each persona's pain points
- [../company/marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md#4-target-audience--personas) — Messaging tailored to each persona
- [competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — The alternatives these personas are currently evaluating
- [../company/mission.md](https://kendo-hq.pages.dev/raw/company/mission.md) — Strategic priorities mapped to persona needs

---

# End-User Roles

*Source: `shared/end-user-roles.md` · [Raw](https://kendo-hq.pages.dev/raw/shared/end-user-roles.md)*

# End-User Roles

*(Derived from [PR#783](https://github.com/script-development/kendo/pull/783) by Sven Kouwenberg — KD-0389)*

> **Context:** Kendo's buyer personas ([user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md)) describe who purchases Kendo. This companion doc describes the different types of people who use Kendo day-to-day **within a team**. These are not Kendo roles (which are customizable per tenant) — they're archetypes for making product decisions.
## Primary: Software Developer

**Name:** Robin — Mid-level Developer

| | |
|---|---|
| **Role** | Software developer in a small-to-medium team (3-10 people) |
| **Technical proficiency** | High — comfortable with Git, CI/CD, APIs, and CLI tools |
| **Goals** | Stay focused on coding with minimal context-switching; quickly find assigned work, log progress, and link branches to issues |
| **Pain points** | Too many clicks to get to "my work"; notifications that aren't relevant; tools that force process overhead instead of supporting flow |
| **Key tasks** | View assigned issues, move issues across the board, link branches, log time, comment on issues, review PR-linked issues |
| **Success metric** | "I spend less than 5 minutes a day on project management overhead" |

> *"Just show me what I need to work on and get out of my way."*

---

## Team Lead

**Name:** Dana — Technical Team Lead

| | |
|---|---|
| **Role** | Leads a development team; splits time between coding and coordination |
| **Technical proficiency** | High — also develops, but needs oversight tooling |
| **Goals** | Keep the team on track without micromanaging; spot blockers early; plan sprints efficiently |
| **Pain points** | No quick way to see who's stuck or what's overdue; sprint planning takes too long; unclear workload distribution |
| **Key tasks** | Sprint planning and review, assign/reassign issues, monitor board health, review time logs, manage epics |
| **Success metric** | "I can assess team status in under 2 minutes without asking anyone" |

> *"I need the big picture at a glance so I can unblock my team before they have to ask."*

---

## Product Owner

**Name:** Farid — Product Owner

| | |
|---|---|
| **Role** | Defines what gets built and in what order; bridges business and development |
| **Technical proficiency** | Moderate — understands software concepts but doesn't write code |
| **Goals** | Prioritize the backlog based on business value; track progress toward milestones; communicate status to stakeholders |
| **Pain points** | Hard to tell what's actually done vs. "in progress"; epics don't clearly show delivery progress; writing user stories that developers find actionable |
| **Key tasks** | Create and prioritize issues, manage epics, review sprint outcomes, triage reports/feedback, generate progress summaries |
| **Success metric** | "I can confidently tell stakeholders where we stand on any feature" |

> *"I need to know what's shipping this sprint and whether we're on track — without reading every ticket."*

---

## Admin

**Name:** Priya — Organization Admin

| | |
|---|---|
| **Role** | Manages the Kendo workspace: users, projects, permissions, integrations |
| **Technical proficiency** | High — often also a developer or team lead |
| **Goals** | Keep the workspace secure, organized, and correctly configured; onboard/offboard users smoothly |
| **Pain points** | Permission changes are tedious; hard to audit who has access to what; setup for new projects involves too many manual steps |
| **Key tasks** | Manage users and roles, configure projects and lanes, manage GitHub integrations, review audit logs |
| **Success metric** | "New team members are productive on day one; I never discover a permissions gap from an incident" |

> *"I want to set it up once and trust that it works — and know immediately when it doesn't."*

---

## QA / Tester

**Name:** Jesse — QA Engineer

| | |
|---|---|
| **Role** | Validates features, reports bugs, triages incoming reports and feedback |
| **Technical proficiency** | Moderate-to-high — reads code, runs tests, but primary focus is verification |
| **Goals** | Efficiently triage reports; ensure bugs are well-documented and actionable; track fix verification |
| **Pain points** | Reports lack reproduction steps; hard to see which bugs are blocked or blocking; no clear feedback loop when a fix is deployed |
| **Key tasks** | Triage reports (promote to issue or dismiss), create bug issues, verify fixes, track bug lifecycle, comment with test results |
| **Success metric** | "Every bug that reaches Done has been verified, and no report sits untriaged for more than a day" |

> *"I'm the quality gate — give me the tools to triage fast and verify with confidence."*

---

## Stakeholder

**Name:** Lena — Client / Project Stakeholder

| | |
|---|---|
| **Role** | External or non-technical stakeholder who needs visibility into project progress |
| **Technical proficiency** | Low — not familiar with development workflows |
| **Goals** | Understand project status without learning a complex tool; provide feedback that actually reaches the team |
| **Pain points** | Dashboards are too technical; doesn't know where to leave feedback; never sure if feedback was seen |
| **Key tasks** | View project progress, submit feedback or bug reports, review delivered features |
| **Success metric** | "I can check how things are going and give feedback without needing a walkthrough every time" |

> *"I don't want to learn your tool — I just want to see progress and be heard."*

---

## Overlap Matrix

Shows which goals are shared across archetypes — useful for prioritizing features with broad impact.

| Goal | Developer | Team Lead | Product Owner | Admin | QA | Stakeholder |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| Quick access to "my work" | **X** | **X** | | | **X** | |
| Board/sprint overview | | **X** | **X** | | | |
| Progress visibility | | **X** | **X** | | | **X** |
| Report/feedback triage | | | **X** | | **X** | |
| Low friction issue management | **X** | **X** | **X** | | **X** | |
| User/permission management | | | | **X** | | |
| Minimal learning curve | **X** | | | | | **X** |
| Notification relevance | **X** | **X** | | | **X** | |
| Time tracking | **X** | **X** | | | | |
| Audit log access | | | | **X** | | |

## How to Use This Document

1. **Feature prioritization** — Check which archetypes benefit. Features serving 3+ archetypes rank higher.
2. **Design decisions** — When debating UX trade-offs, ask: "Does this serve the primary archetype (Developer)? Does it harm any other archetype?"
3. **User stories** — Reference an archetype by name: *"As Robin (Developer), I want to..."* — this keeps stories grounded in real needs.
4. **Scope control** — If a feature only serves one non-primary archetype and adds complexity for others, question whether it belongs in the current iteration.

## Related

- [user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — Buyer personas (who purchases Kendo)
- [../company/product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Features that address each archetype's needs
- [competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — The alternatives these users have tried
- *../plans/PLAN-020-watch-system/* — Watch system plan (notification strategy driven by these archetypes)

---

# AI Browser Automation & Scraping Tools 2026

*Source: `research/ai-browser-automation-scraping-tools-2026.md` · [Raw](https://kendo-hq.pages.dev/raw/research/ai-browser-automation-scraping-tools-2026.md)*

---
title: "AI Browser Automation for Scraping: agent-browser, browser-use, Stagehand — Patterns for Structured Data Extraction"
date: 2026-04-04
type: investigation
tags: [browser-automation, scraping, AI-agents, data-extraction, Stagehand, browser-use, agent-browser, Node.js, Twitter, X]
sources:
  - https://github.com/vercel-labs/agent-browser
  - https://github.com/vercel-labs/agent-browser/blob/main/skills/agent-browser/SKILL.md
  - https://github.com/browser-use/browser-use
  - https://github.com/browserbase/stagehand
  - https://docs.stagehand.dev/v3/basics/extract
  - https://docs.stagehand.dev/v3/basics/observe
  - https://browser-use.com/posts/web-scraping-guide-2026
  - https://scrapfly.io/blog/posts/stagehand-vs-browser-use
  - https://scrapfly.io/docs/cloud-browser-api/agent-browser
  - https://www.morphllm.com/ai-web-scraping
  -
https://medium.com/@astro_official/performance-of-ai-based-solutions-for-web-scraping-6a32c3799b2f
  - https://okhlopkov.com/web-scraping-ai-agents-2026/
  - https://www.nxcode.io/resources/news/stagehand-vs-browser-use-vs-playwright-ai-browser-automation-2026
  - https://www.skyvern.com/blog/browser-use-vs-stagehand-which-is-better/
  - https://github.com/the-convocation/twitter-scraper
  - https://github.com/nirholas/XActions
  - https://github.com/hireshBrem/X-ai-agent
  - https://github.com/browser-use/browser-use/issues/2167
  - https://help.apiyi.com/en/agent-browser-ai-browser-automation-cli-guide-en.html
  - https://www.browserbase.com/blog/stagehand-v3
---

# AI Browser Automation for Scraping: agent-browser, browser-use, Stagehand

## Summary

Three AI-driven browser automation tools dominate the 2026 landscape: **agent-browser** (Vercel, CLI-first, ref-based element selection via the accessibility tree), **browser-use** (Python, full LLM autonomy, 50k+ GitHub stars), and **Stagehand** (TypeScript, hybrid AI+code, Zod schema extraction). For a Node.js server scraping X search results every 15 minutes, **Stagehand is the strongest fit**: it's TypeScript-native, has a built-in `extract()` method that returns Zod-validated structured data, supports selector caching to reduce LLM costs on repeated runs, and connects to cloud browsers via CDP. However, **none of these tools reliably handle X login automation** due to X's anti-bot detection — the practical pattern is to use a persistent authenticated session (cookies/`user_data_dir`) rather than automating login. Cost runs $0.01-0.05 per page in LLM fees, meaning a 15-minute scrape cycle costs roughly $0.50-2.00/day depending on pages scraped and model chosen. For pure X scraping without AI navigation, the Node.js `twitter-scraper` library (reverse-engineered frontend API, no browser needed) is cheaper and faster but fragile.

---

## 1. Tool-by-Tool Analysis

### 1.1 agent-browser (Vercel Labs)

**What it is:** A native Rust CLI for browser automation, designed as an MCP server for AI coding agents (Claude Code, Cursor). 12.1k GitHub stars.

**Architecture:** CLI commands executed via `npx agent-browser`. Connects to local Chromium or remote browsers via CDP (Chrome DevTools Protocol). Not a library you import — it's a CLI tool your AI agent shells out to.

**Data extraction model — no `extract` or `observe` commands exist.** Extraction is done through:

```bash
# 1. Get accessibility tree with element refs
agent-browser snapshot -i               # Returns @e1, @e2, etc.
agent-browser snapshot -s "#selector"   # Scope to CSS selector

# 2. Extract content from specific elements
agent-browser get text @e1        # Text content
agent-browser get html @e1        # HTML markup
agent-browser get attr @e1 href   # Attribute value
agent-browser get value @e1       # Form field value

# 3. Run arbitrary JS for complex extraction
agent-browser eval "document.querySelectorAll('.tweet').length"
```

**Batch mode for efficiency:**

```bash
echo '[
  ["open", "https://x.com/search?q=kendo"],
  ["snapshot", "-i"],
  ["get", "text", "@e5"],
  ["screenshot", "result.png"]
]' | agent-browser batch --json
```

**Structured output:** The `--json` flag on most commands returns machine-parseable output. The LLM (Claude, GPT) interprets the accessibility tree snapshot and decides which `get text` calls to make. There is no built-in schema validation — the AI agent is responsible for structuring the data.

**Session persistence:** Supports a `--session` flag and `state save/load` commands to maintain browser state across runs. Can connect to cloud browsers (Scrapfly, Browserbase) via `--cdp` for persistent sessions:

```bash
BROWSER_WS="wss://browser.scrapfly.io?api_key=KEY&session=my-task"
agent-browser --cdp "$BROWSER_WS" open "https://x.com/search?q=kendo"
```

**Key strength:** 93% less context consumption vs Playwright MCP because ref-based selection (`@e1`) is compact.
Designed for AI agents that need to reason about page structure.

**Key weakness for scraping:** No structured extraction primitive. You're relying on the LLM to parse snapshots and compose `get text` calls. This works for one-off exploration but is expensive and unpredictable for recurring automated scraping.

**Verdict for X scraping:** Poor fit for recurring automated scraping. agent-browser is designed for AI-in-the-loop workflows where a human or parent LLM is driving. It has no way to define "extract these fields from every tweet on the page" without an LLM interpreting each run.

### 1.2 browser-use (Python)

**What it is:** Python-first AI browser automation framework. 50k+ GitHub stars, fastest-growing in the category. Fully autonomous — describe a goal in natural language and the agent handles everything.

**Architecture:** The agent receives a task string, uses an LLM to reason about each step, and controls a browser (Playwright-based) via screenshots + accessibility tree. It re-reasons at every step.

```python
from browser_use import Agent, Browser, ChatBrowserUse
import asyncio

async def main():
    browser = Browser()
    agent = Agent(
        task="Go to x.com/search?q=kendo, extract the first 10 tweets with author, text, timestamp, and engagement metrics. Return as JSON.",
        llm=ChatBrowserUse(),
        browser=browser,
    )
    result = await agent.run()
    print(result)

asyncio.run(main())
```

**LLM options and cost:**

- `ChatBrowserUse()` (proprietary, optimized): $0.20/M input, $2.00/M output tokens
- Claude 3.5 Sonnet via Anthropic
- Gemini Flash via Google
- Local models via Ollama (zero API cost)

**Cloud service (browser-use SDK):**

```python
from browser_use_sdk.v3 import AsyncBrowserUse

client = AsyncBrowserUse(api_key="YOUR_API_KEY")
result = await client.run(
    "Go to x.com/search?q=kendo and extract the first 10 tweets "
    "with text, author, timestamp, likes, retweets"
)
```

The cloud service includes: a custom Chromium fork with OS-level stealth, residential proxies in 195+ countries, free CAPTCHA solving, and an 81% success rate on a stealth benchmark (71 websites tested).

**Anti-detection:** Custom Chromium fork with C++/OS-level stealth modifications, fingerprint management, proxy rotation. The cloud version handles Cloudflare, reCAPTCHA, and PerimeterX automatically.

**X/Twitter specific findings:**

- X login automation **does not work reliably** ([GitHub issue #2167](https://github.com/browser-use/browser-use/issues/2167)). X's anti-bot detection blocks automated login attempts.
- **Workaround:** Log in manually once, then use the `user_data_dir` parameter to persist the authenticated session. Or inject cookies from a real browser session.
- The browser-use team has a demo showing "scraping my personal Twitter" with results piped to Google Sheets.

**Cost benchmark:** $0.33 for extracting 20 Hacker News articles + comments (~60 seconds) vs $1.46 for the same task on Browserbase (~401 seconds). That's roughly $0.01-0.05 per page depending on complexity.

**Key strength:** Fully autonomous — describe what you want and it figures out navigation, scrolling, waiting. Handles dynamic content and SPAs naturally. Ollama support enables zero-API-cost runs.

**Key weakness for Node.js:** It's Python.
Integrating it into a Node.js server requires either a Python subprocess, a microservice, or the cloud SDK (HTTP API).

### 1.3 Stagehand (Browserbase)

**What it is:** TypeScript/Node.js AI browser automation SDK. 10k+ GitHub stars. Hybrid approach — mix natural language AI with deterministic code.

**Architecture:** CDP-native (v3 removed the Playwright dependency), connects directly to browsers. Three core primitives: `act()` (do something), `extract()` (get structured data), and `observe()` (discover what's on the page).

#### extract() — The Killer Feature for Scraping

Returns Zod-validated structured data from the current page:

```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

const stagehand = new Stagehand({
  env: "BROWSERBASE", // or "LOCAL"
  modelName: "anthropic/claude-sonnet-4-5",
});
await stagehand.init();

// Simple extraction
const { author, title } = await stagehand.extract(
  "extract the author and title of the PR",
  z.object({
    author: z.string().describe("The username of the PR author"),
    title: z.string().describe("The title of the PR"),
  }),
);

// Array extraction (ideal for tweet lists)
const tweets = await stagehand.extract(
  "extract all visible tweets",
  z.array(z.object({
    author: z.string().describe("The @username of the tweet author"),
    text: z.string().describe("The full tweet text"),
    timestamp: z.string().describe("The tweet timestamp"),
    likes: z.number().describe("Number of likes"),
    retweets: z.number().describe("Number of retweets"),
    replies: z.number().describe("Number of replies"),
  })),
);

// Primitive extraction
const price = await stagehand.extract("extract the price", z.number());
const url = await stagehand.extract("extract the contact page link", z.string().url());

// No-parameter extraction (returns accessibility tree)
const raw = await stagehand.extract();
```

**Advanced extract() options:**

```typescript
const result = await stagehand.extract("extract the repo name", {
  model: "anthropic/claude-sonnet-4-5",   // Override model per call
  timeout: 30000,                         // Custom timeout
  selector: "xpath=/html/body/div/table", // Target specific DOM region
  serverCache: false,                     // Disable caching for this call
});

// Cache status check
console.log(result.cacheStatus); // "HIT" or "MISS"
```

#### observe() — Page Discovery

Returns available actions on the page as structured objects:

```typescript
const actions = await stagehand.observe("find the search input and submit button");
// Returns:
// [
//   {
//     description: "Search input field",
//     method: "fill",
//     arguments: ["search text"],
//     selector: "xpath=/html[1]/body[1]/div[1]/input[1]"
//   },
//   {
//     description: "Submit button",
//     method: "click",
//     arguments: [],
//     selector: "xpath=/html[1]/body[1]/div[1]/button[1]"
//   }
// ]
```

**observe() use cases:**

- Exploration: map interactive elements before building automation
- Planning: discover all actions for multi-step workflows upfront
- Caching: store discovered selectors to skip LLM calls on subsequent runs
- Validation: verify elements exist before acting

#### act() — Browser Interaction

```typescript
await stagehand.act("click the search button");
await stagehand.act("scroll down to load more tweets");
await stagehand.act("type 'kendo project management' into the search box");
```

#### Selector Caching (Cost Reduction)

Stagehand records successful element paths and replays them without LLM calls on subsequent runs. This is critical for recurring scraping — the first run uses AI to find elements; subsequent runs replay deterministically. This changes the cost model from $0.01-0.05/page (every run) to near-zero for repeat visits to the same page structure.

**v3 performance:** 44% faster on average across iframes and shadow-root interactions. A modular driver system supports Puppeteer, Playwright, or any CDP-based driver.
**Key strength:** TypeScript-native, Zod schema validation, selector caching for recurring jobs, and a clean `extract()` API that returns exactly the shape you define.

**Key weakness:** Still requires LLM calls for first-time extraction. Each `act()` or `extract()` call consumes tokens. Browserbase cloud hosting adds cost ($0.10/session-minute on the Scale plan).

---

## 2. Head-to-Head Comparison

| Dimension | agent-browser | browser-use | Stagehand |
|-----------|--------------|-------------|-----------|
| **Language** | CLI (any language) | Python | TypeScript/Node.js |
| **GitHub stars** | 12.1k | 50k+ | 10k+ |
| **Extraction model** | `snapshot` + `get text` (manual) | Natural language task (autonomous) | `extract()` with Zod schemas |
| **Structured output** | JSON flag on commands | LLM-generated text/JSON | Zod-validated typed objects |
| **Schema validation** | None built-in | None built-in | Zod (compile-time + runtime) |
| **LLM in the loop** | Required for every run | Required for every run | First run only (selector caching) |
| **Cost per page** | ~$0.01-0.05 (LLM interprets) | ~$0.01-0.05 ($0.33/20 pages) | ~$0.01-0.05 (first run), ~$0 (cached) |
| **Anti-detection** | Via cloud browser providers | Built-in stealth (cloud SDK) | Via Browserbase |
| **Session persistence** | `state save/load`, `--session` | `user_data_dir` | Browserbase sessions |
| **Recurring scraping** | Expensive (LLM every time) | Expensive (LLM every time) | Cheap after first run (caching) |
| **Best for** | AI agent exploration | Autonomous one-shot tasks | Production TypeScript scraping |
| **X login** | Manual + session persistence | Manual + `user_data_dir` | Manual + Browserbase sessions |

---

## 3. Patterns for X/Twitter Scraping

### 3.1 The Login Problem

All three tools fail at automating X login. X's anti-bot detection is too sophisticated. The universal pattern is:

1. **Log in manually** in a real browser
2. **Export cookies** or use a persistent browser profile (`user_data_dir`)
3. **Load the authenticated session** into the automation tool
4. **Keep the session alive** — X sessions last days/weeks if the profile is realistic

For Stagehand with Browserbase:

```typescript
const stagehand = new Stagehand({
  env: "BROWSERBASE",
  browserbaseSessionCreateParams: {
    projectId: "YOUR_PROJECT",
    // Reuse a persistent session with stored cookies
    browserSettings: { context: { id: "persistent-x-session" } },
  },
});
```

### 3.2 Navigation: Hardcoded URLs vs Dynamic

**Recommendation: hardcode the search URL.** For recurring scraping of X search results, there's no reason to let the LLM navigate. The URL pattern is deterministic:

```
https://x.com/search?q=kendo%20project%20management&src=typed_query&f=live
```

Use the AI only for **extraction**, not navigation. This cuts LLM costs by 50-70% and eliminates navigation failures.

```typescript
// Good: hardcode navigation, AI for extraction only
await stagehand.page.goto("https://x.com/search?q=kendo&f=live");
await stagehand.page.waitForSelector('[data-testid="tweet"]');
const tweets = await stagehand.extract("extract all visible tweets", tweetSchema);

// Bad: let AI handle everything (expensive, unreliable)
await stagehand.act("go to X and search for kendo");
```

### 3.3 Structured Data Extraction Pattern

Define a strict schema upfront and reuse it:

```typescript
const tweetSchema = z.array(z.object({
  author: z.string().describe("The @handle of the tweet author"),
  displayName: z.string().describe("The display name of the tweet author"),
  text: z.string().describe("The full tweet text content"),
  timestamp: z.string().describe("When the tweet was posted (relative or absolute)"),
  likes: z.number().describe("Number of likes shown"),
  retweets: z.number().describe("Number of retweets/reposts shown"),
  replies: z.number().describe("Number of replies shown"),
  url: z.string().url().optional().describe("Direct link to the tweet if visible"),
}));
```
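The point of the schema is that malformed LLM output fails validation instead of flowing downstream. A dependency-free sketch of the equivalent runtime check — a hypothetical `isTweet` guard, shown only to make the enforced shape explicit (Zod does this validation for you):

```typescript
// Hypothetical runtime guard mirroring the tweet schema above.
// Illustration only: Zod performs this check internally when Stagehand
// validates extract() output against the schema.
interface Tweet {
  author: string;
  displayName: string;
  text: string;
  timestamp: string;
  likes: number;
  retweets: number;
  replies: number;
  url?: string;
}

function isTweet(value: unknown): value is Tweet {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.author === "string" &&
    typeof v.displayName === "string" &&
    typeof v.text === "string" &&
    typeof v.timestamp === "string" &&
    typeof v.likes === "number" &&
    typeof v.retweets === "number" &&
    typeof v.replies === "number" &&
    (v.url === undefined || typeof v.url === "string")
  );
}

// A record with likes as a string (a classic LLM extraction error)
// fails the guard instead of corrupting the downstream pipeline.
```

This is also why numeric fields are declared `z.number()` rather than `z.string()`: engagement counts arrive ready for arithmetic, not as text to re-parse.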
### 3.4 Scrolling for More Results

X search results load dynamically. Pattern for getting more tweets:

```typescript
const allTweets = [];
for (let i = 0; i < 3; i++) {
  const batch = await stagehand.extract("extract all visible tweets", tweetSchema);
  allTweets.push(...batch);
  await stagehand.act("scroll down to load more tweets");
  await stagehand.page.waitForTimeout(2000); // Wait for new tweets to render
}
// Deduplicate by author + text
```

### 3.5 Anti-Detection for Recurring 15-Minute Scrapes

- **Randomize timing:** Don't scrape at exactly :00, :15, :30, :45. Add 1-3 minutes of jitter.
- **Use residential proxies:** Cloud browser providers (Browserbase, Scrapfly) offer these.
- **Maintain a realistic session:** Don't open → scrape → close. Keep the browser session alive and scroll like a human would.
- **Respect rate limits:** X's frontend rate limits are dynamic. The `twitter-scraper` Node.js library reports that limits can pause requests for up to 13 minutes.
- **Rotate user agents sparingly:** X fingerprints aggressively. Changing the UA too often triggers detection.

---

## 4. Alternative: Skip Browser Automation Entirely

For pure X search result scraping, browser automation may be overkill. Two lighter alternatives:

### 4.1 twitter-scraper (Node.js)

A Node.js library that reverse-engineers X's frontend JavaScript API. No browser needed — direct HTTP requests.

```typescript
import { Scraper } from "@the-convocation/twitter-scraper";

const scraper = new Scraper();
await scraper.login("username", "password", "email");
// Or: await scraper.setCookies(cookiesFromBrowser);

// Search tweets
for await (const tweet of scraper.searchTweets("kendo project management", 20)) {
  console.log({
    author: tweet.username,
    text: tweet.text,
    timestamp: tweet.timeParsed,
    likes: tweet.likes,
    retweets: tweet.retweets,
  });
}
```

**Pros:** Fast, no LLM costs, no browser overhead, returns structured data natively.

**Cons:** The reverse-engineered API breaks when X changes their frontend. Account ban risk. Rate limits can pause requests for 13 minutes. No CAPTCHA handling.

### 4.2 XActions (MCP + Puppeteer)

140+ MCP tools for X automation. Puppeteer-based scrapers with Socket.IO real-time streaming.

**Pros:** MCP-native (works with Claude Code), real-time events via Socket.IO, cross-platform (X + BlueSky + Mastodon), plugin architecture.

**Cons:** Heavy setup, and primarily designed for engagement automation rather than scraping.

---

## 5. Cost Analysis for 15-Minute Recurring Scrape

Scenario: scrape X search results every 15 minutes, extracting ~20 tweets per run.

| Approach | Cost/Run | Cost/Day (96 runs) | Notes |
|----------|----------|---------------------|-------|
| **Stagehand (cached)** | ~$0.005 | ~$0.48 | First run ~$0.05, subsequent near-zero |
| **Stagehand (uncached)** | ~$0.03-0.05 | ~$2.88-4.80 | Every run uses LLM |
| **browser-use (cloud SDK)** | ~$0.02-0.05 | ~$1.92-4.80 | Plus cloud fees |
| **browser-use (Ollama)** | ~$0 | ~$0 | Local GPU required |
| **agent-browser** | ~$0.03-0.05 | ~$2.88-4.80 | Plus LLM API costs |
| **twitter-scraper (Node.js)** | ~$0 | ~$0 | No LLM, no browser, but fragile |
| **X API (pay-per-use)** | ~$0.20 | ~$19.20 | $0.01/tweet × 20 tweets |
| **Browserbase hosting** | — | ~$4-14 | $0.10/min, depends on session time |

**Bottom line:** For 96 runs/day, Stagehand with selector caching is the most cost-effective AI-driven option at ~$0.50/day after warmup. The `twitter-scraper` Node.js library is the cheapest overall ($0/day) but carries higher maintenance and ban risk.

---

## 6. Recommended Architecture for Kendo

For a Node.js server scraping X search results every 15 minutes:

### Option A: Stagehand + Browserbase (Recommended)

```
Node.js Server (cron every 15min ± jitter)
  → Stagehand SDK (TypeScript-native)
  → Browserbase cloud browser (persistent session, residential proxy)
  → X.com search page (hardcoded URL)
  → extract() with Zod schema → structured tweet data
  → Store in database / pass to content pipeline
```

**Why:** TypeScript-native, Zod validation, selector caching, cloud browser handles anti-detection, persistent sessions handle auth.

**Estimated cost:** ~$0.50/day (Stagehand LLM) + ~$5-10/day (Browserbase) = **~$6-11/day**.

### Option B: Hybrid Playwright + Stagehand

```
Node.js Server (cron every 15min ± jitter)
  → Playwright for navigation (deterministic, no LLM)
  → Load persistent profile with X cookies
  → Navigate to hardcoded search URL
  → Wait for tweets to render
  → Stagehand extract() for data extraction only
  → Zod schema → structured tweet data
```

**Why:** Playwright handles the predictable 80% (navigation, waiting). Stagehand handles the volatile 20% (parsing tweet content from a dynamic DOM). Minimizes LLM calls.

**Estimated cost:** ~$0.50/day (Stagehand extract only) + local browser costs.

### Option C: twitter-scraper (Cheapest, Riskiest)

```
Node.js Server (cron every 15min)
  → twitter-scraper library (direct HTTP, no browser)
  → Login with cookies
  → searchTweets() → structured data natively
```

**Why:** Zero LLM cost, zero browser cost, fastest execution. But the reverse-engineered API breaks unpredictably, and account bans are a real risk.

**Estimated cost:** $0/day, but high maintenance cost.

---

## 7. Open Questions

- **Stagehand selector caching durability:** How well does caching survive X's frequent DOM changes? If X changes `data-testid` attributes, does the cache invalidate gracefully?
- **Browserbase session limits:** What's the maximum session duration? Can a session stay alive for days/weeks for persistent X auth?
- **X detection of Browserbase:** Has X specifically fingerprinted Browserbase's cloud browser infrastructure? Residential proxies help, but X may detect the browser environment itself.
- **browser-use Node.js SDK:** browser-use is Python-first but offers a cloud SDK via HTTP. How stable is the cloud API? Is there a native Node.js client?
- **Legal exposure:** X's ToS imposes $15,000 in liquidated damages for accessing >1M posts/24h via automated means. At 20 tweets/15 min, we'd hit ~1,920 tweets/day — well under the threshold, but the legal ambiguity of automated access remains. *(See [research/automated-social-media-engagement-risks-2026.md](https://kendo-hq.pages.dev/raw/research/automated-social-media-engagement-risks-2026.md) for the full legal analysis.)*

---

# AI-Native Dev Tools Landscape 2026

*Source: `research/ai-native-dev-tools-landscape-2026.md` · [Raw](https://kendo-hq.pages.dev/raw/research/ai-native-dev-tools-landscape-2026.md)*

---
title: "AI-Native Developer Tools — Market Landscape 2026"
date: 2026-04-04
type: analysis
tags: [ai, competitors, market-trends, positioning]
sources:
- https://linear.app/now/self-driving-saas
- https://linear.app/changelog/2026-03-24-introducing-linear-agent
- https://linear.app/changelog/2026-02-05-linear-mcp-for-product-management
- https://linear.app/docs/mcp
- https://www.theregister.com/2026/03/26/linear_agent/
- https://www.shortcut.com/agents
- https://www.shortcut.com/korey
- https://www.korey.ai/
- https://clickup.com/brain/agents
- https://clickup.com/brain
- https://plane.so/ai
- https://plane.so/
- https://plane.so/changelog/2026-02-01-product-tour-try-section-and-plane-ai-updates
- https://developers.plane.so/dev-tools/mcp-server
- https://github.com/makeplane/plane-mcp-server
- https://www.notion.com/releases/2026-02-24
- https://www.notion.com/product/ai
- https://www.notion.com/help/mcp-connections-for-custom-agents
- https://www.atlassian.com/software/jira/ai
- https://www.atlassian.com/software/rovo
-
https://www.atlassian.com/platform/remote-mcp-server - https://www.businesswire.com/news/home/20260224033792/en/Atlassian-Introduces-Agents-in-Jira-to-Drive-Human-AI-Collaboration-at-Enterprise-Scale - https://asana.com/product/ai/ai-studio - https://monday.com/w/ai - https://www.gb-advisors.com/blog/monday-com-ai-2026-sidekick-vibe-agents - https://www.dartai.com - https://www.ycombinator.com/companies/dart - https://www.taskade.com/agents/project-management - https://docs.gitscrum.com/en/mcp/overview - https://github.com/gitscrum-core/mcp-server - https://github.blog/changelog/2026-04-01-research-plan-and-code-with-copilot-cloud-agent/ - https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/integrate-coding-agent-with-linear - https://docs.github.com/en/copilot/concepts/agents/coding-agent/about-coding-agent - https://openai.com/codex/ - https://developers.openai.com/codex - https://dev.to/pockit_tools/mcp-vs-a2a-the-complete-guide-to-ai-agent-protocols-in-2026-30li - https://thenewstack.io/model-context-protocol-roadmap-2026/ - https://zuplo.com/mcp-report - https://mcpmanager.ai/blog/mcp-adoption-statistics/ - https://www.cdata.com/blog/2026-year-enterprise-ready-mcp-adoption - https://www.saassimply.com/post/the-agentic-era-how-software-slaughter-and-self-driving-workflows-will-change-saas-in-2026 - https://zapier.com/blog/best-ai-project-management-tools/ - https://www.nxgntools.com/blog/autonomous-project-management-tools-2026 - https://medium.com/@simontowity/what-ai-commoditizes-and-whats-left-for-product-people-to-do-d09e66e312dc - https://mixflow.ai/blog/ai-models-commoditization-second-order-effects-2026/ - https://chatforest.com/guides/ai-coding-assistants-compared/ - https://cursor.com/ - https://markaicode.com/cursor-beta-features-2026/ - https://docs.windsurf.com/windsurf/cascade/mcp - https://bugasura.io/ai-issue-tracker --- # AI-Native Developer Tools — Market Landscape 2026 > Research compiled by the Librarian agent for Kendo 
HQ. This report surveys the AI-native developer tools landscape as of April 2026, covering positioning claims, shipped features, protocol adoption, and commoditization risk.

---

## 1. Who Is Claiming "AI-Native"?

The phrase "AI-native" has gone from niche differentiator to mainstream buzzword. Multiple players now use it — but the depth behind the claim varies enormously.

### Explicit "AI-native" positioning

| Company | Claim | Substance |
|---------|-------|-----------|
| **Plane** | Homepage title: "AI-native project management." Blog: "not retrofitted for AI, built around it." | MCP server (official), AI sidecar in every view, agent assignment to work items, web search in AI, chart generation. Open-source. Credible claim. |
| **Dart** | YC profile: "the only truly AI-native project management tool." | Chat-as-UI, agents as first-class collaborators, natural language task creation. YC-backed. Small but genuine AI-first architecture. |
| **Taskade** | Markets as "AI-native" work management. | 700+ AI task types, custom agents with persistent memory, 22+ built-in tools. Lightweight, more consumer than enterprise. |
| **Linear** | "Designed for the AI era" (homepage). "Self-driving SaaS" (blog). Does not use the exact phrase "AI-native." | Linear Agent (public beta), MCP server, coding agent dispatch, Slack/Teams integration. Strong execution, but Linear started as a traditional PM tool — AI is layered on top of a solid foundation, not the original architecture. |
| **Kendo** | "AI-native issue tracker" (marketing strategy). | MCP server (26 tools, 10 resources), AI audit trails, built from scratch with MCP as a core design decision. Credible, but the brand is unknown. |

### "AI-enhanced" (not AI-native but shipping significant AI)

| Company | AI posture |
|---------|------------|
| **ClickUp** | "AI Super Agents" — branded as human-level teammates. 500+ skills, persistent memory, ChatGPT-5.1. Heavy investment, but AI is layered onto an everything-app. |
| **Jira/Atlassian** | Rovo AI + agents in Jira (open beta Feb 2026). MCP server now GA. Massive enterprise reach. AI is a modernization effort, not the founding principle. |
| **Asana** | AI Studio with no-code agent builder, 12 pre-built AI Teammates. Focused on cross-functional workflows, not developer tooling. |
| **Monday.com** | Sidekick (out of beta Jan 2026), Agent Builder (beta), Vibe (no-code app generation). Three-tier AI pricing (Lite/Plus/Super). AI as an engagement strategy. |
| **Notion** | Custom Agents (Notion 3.3, Feb 2026) with MCP connections to Linear, Figma, Slack. Multi-model (Claude Opus 4.5, GPT-5.2, Gemini 3 Pro). Powerful, but Notion is a knowledge tool, not a PM tool. |
| **Shortcut** | Korey AI agent + "Shortcut for Agents" platform. Strong product but no "AI-native" claim — positioned as AI-augmented. |
| **GitHub** | Copilot cloud agent integrates with Linear, Jira, Azure Boards. Not a PM tool itself but becoming a work orchestrator. |
| **GitScrum** | MCP server with 29 tools, 160+ operations. Claims AI-first integration. Small player. |
| **Bugasura** | "AI issue tracker" — lighter positioning, focused on bug tracking. |

### Key finding

Plane is Kendo's closest competitor for the literal "AI-native" positioning. They use the same language, have open-source momentum (46.5k stars), and their MCP server + AI sidecar make the claim defensible. Dart is the other AI-native claimant but is smaller and less developer-focused. Linear is the most dangerous competitor overall but does not claim "AI-native" — they claim "designed for the AI era," which is subtly different.

**Flag for `marketing-strategy.md`:** The document describes Kendo as one of 4 PM tools with MCP support (alongside Linear, Shortcut, Plane). This is now outdated — Jira, Notion, GitScrum, and Monday.com (via MCP gallery) have all added MCP support. The "only 4" claim needs revision. See Section 3 for the updated count.

---

## 2. What AI Features Are Competitors Actually Shipping?

Not announced — live and usable as of April 2026.

### Feature matrix: shipped AI capabilities

| Feature | Linear | Shortcut | ClickUp | Plane | Jira | Asana | Monday | Notion | Kendo |
|---------|--------|----------|---------|-------|------|-------|--------|--------|-------|
| AI triage/auto-assign | Yes (Triage Intelligence) | Via Korey | Super Agents | AI sidecar | Rovo agents | AI Teammates | Sidekick | Custom Agents | No |
| AI issue generation from natural language | Yes (Agent) | Yes (Korey) | Yes (Brain) | Yes (Plane AI) | Yes (Rovo) | Yes (AI Studio) | Yes (Sidekick) | Yes (Agents) | Partial (from reports) |
| AI coding agent dispatch | Yes (75% enterprise adoption) | Via GitHub | No | Agent assignment | Rovo Dev | No | No | No | No |
| MCP server (official) | Yes (remote, OAuth 2.1) | Yes | No | Yes (Python/FastMCP) | Yes (Rovo MCP, GA) | No | Via gallery | Via agent connections | Yes (26 tools) |
| Agent assignment to issues | Yes (Copilot integration) | Yes (Shortcut for Agents) | Yes (Super Agents as users) | Yes (@mention agents) | Yes (Rovo + third-party) | Yes (AI Teammates) | Yes (Agent Builder) | N/A | Internal only |
| Slack/Teams AI bot | Yes (Linear Agent) | No | No | No | No | No | No | Yes (Custom Agents) | No |
| AI spec/plan generation | Yes (Agent) | Yes (Korey) | Yes (Brain) | Yes (Plane AI) | Yes (Rovo) | No | No | Yes (Agents) | No |
| Persistent memory | No | No | Yes (episodic + long-term) | No | No | No | No | Yes (via Notion pages) | No |
| Multi-model support | No | No | Yes (ChatGPT-5.1) | No | Rovo (multiple) | No | No | Yes (Claude/GPT/Gemini) | No |

### Depth assessment

**Linear** has the deepest integration between PM and coding agents. Their stat — "coding agents installed in 75% of enterprise workspaces, work volume up 5x in 3 months" — is the most concrete evidence of agentic PM adoption.
The GitHub Copilot-to-Linear pipeline (assign Copilot to a Linear issue and it creates a PR) is a genuine workflow innovation.

**ClickUp** has the broadest AI feature set (500+ skills, persistent memory, ambient monitoring), but it is bolted onto an everything-app. The "Super Agents as real users" approach is architecturally interesting — agents show up as teammates, not tools.

**Shortcut/Korey** is executing well on PM-specific AI. The claimed 48% reduction in "work about work" is notable. Korey generates specs, breaks down tasks, and tracks dependencies. Planned expansion to Jira, Asana, Monday, and Linear integration would make it a cross-platform agent.

**Plane** has a strong AI sidecar (contextual, anchored to the current view) plus web search in AI, chart generation, and de-duplication. Combined with open-source credibility, this is a formidable package.

**Jira/Rovo** is the enterprise play. Rovo Dev turns Jira work items into code. The MCP server is now GA with broad client support. The open beta of agents-in-Jira (Feb 2026) means enterprise teams can assign AI and third-party agents directly within their existing Jira workflows.

**Notion** is not a PM competitor, but its agent platform with MCP connections to Linear and Figma creates a meta-layer — Notion agents orchestrating work across multiple tools. This is architecturally significant.

---

## 3. AI Coding Tools and PM Integration

### MCP: the winning protocol

MCP has decisively won the agent-to-tool integration layer. The numbers:

- **97 million** monthly SDK downloads (Python + TypeScript, Feb 2026)
- **10,000+** MCP servers published
- **5,800+** in public registries
- Adopted by **every major AI provider**: Anthropic, OpenAI, Google, Microsoft, Amazon
- Donated to the **Linux Foundation's Agentic AI Foundation** (Dec 2025)

MCP is supported natively in Claude Desktop, Claude Code, VS Code (Copilot), Cursor, Windsurf, Zed, JetBrains IDEs, and more.
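Concretely, wiring a stdio MCP server into one of these clients is a single config stanza. A hedged sketch for a Claude Code-style `.mcp.json` — the package name `@kendo/mcp-server` and the `KENDO_API_KEY` variable are placeholders, not published names:

```json
{
  "mcpServers": {
    "kendo": {
      "command": "npx",
      "args": ["-y", "@kendo/mcp-server"],
      "env": { "KENDO_API_KEY": "<your key>" }
    }
  }
}
```

A similar `mcpServers` stanza is used by Claude Desktop and Cursor, which is part of why stdio servers became such a low-friction distribution format for PM tools.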
### PM tools with MCP servers (updated count)

| Tool | MCP Server | Transport | Status |
|------|-----------|-----------|--------|
| Linear | Official (remote) | Streamable HTTP, OAuth 2.1 | GA |
| Shortcut | Official | stdio | GA |
| Plane | Official | Python/FastMCP, multiple transports | GA |
| Jira/Atlassian | Rovo MCP Server | Remote, cloud-hosted | GA |
| Notion | MCP connections for Custom Agents | Via agent platform | GA (Business+) |
| GitScrum | Official | stdio, npm | GA |
| Kendo | Official | stdio | GA |
| GitHub Projects | Community servers | Various | Community |
| Trello | Community (via Atlassian) | Various | Community |

**Flag for `marketing-strategy.md`:** The claim "only 4 PM tools support MCP" (S2, O1) is now significantly outdated. At least 7 have official MCP servers, with community servers covering several more. The "first-mover advantage window of ~12 months" (O1) has largely closed. MCP support is approaching table stakes for developer-facing PM tools.

### A2A: the complementary protocol

A2A (Agent-to-Agent Protocol, Google-originated) handles agent-to-agent coordination — a different layer from MCP's agent-to-tool focus. Both are under the Linux Foundation's AAIF. The emerging consensus is a three-layer stack:

1. **WebMCP** — structured web access
2. **MCP** — agent-to-tool connections
3. **A2A** — agent-to-agent coordination

For PM tools, MCP is the relevant layer. A2A becomes relevant when multi-agent orchestration crosses tool boundaries (e.g., a research agent handing off to a coding agent handing off to a review agent). No PM tool is shipping native A2A support yet, but the protocol had 100+ enterprise supporters by Feb 2026.

### How coding tools connect to PM

| Coding Tool | PM Integration Method | Supported PM Tools |
|-------------|----------------------|-------------------|
| **Claude Code** | MCP servers | Any MCP-compatible (Linear, Plane, Kendo, Jira, etc.) |
| **Cursor** | MCP servers (4,133+ available), native support since v0.43 | Any MCP-compatible |
| **Windsurf** | MCP servers (21 third-party tools), one-click setup | GitHub, Jira, Asana, and MCP-compatible tools |
| **GitHub Copilot** | Cloud agent + MCP; direct integrations | Linear, Jira, Azure Boards + MCP-compatible |
| **OpenAI Codex** | Automations, task-level integration | Linear (native), GitHub + MCP-compatible |
| **VS Code** | MCP server configuration | Any MCP-compatible |

The pattern is clear: **MCP is the universal adapter**. Every major coding tool supports it. PM tools that lack an MCP server are invisible to AI coding workflows.

---

## 4. "Agentic" Project Management

### Linear's "self-driving SaaS" vision

Linear CEO Karri Saarinen published a manifesto describing three autonomy levels for PM software, modeled after self-driving car classifications:

1. **Assistive** — AI suggests actions (assign to team X, apply label Y). The user accepts or ignores.
2. **Semi-autonomous** — AI takes a first pass; the user corrects. Rules-based auto-application of suggestions.
3. **Full autonomy** — Classes of issues are dispatched to coding agents automatically. Humans evaluate results, not process.
**What is actually live (April 2026):**

- Linear Agent in public beta (all plans) — chat, issue creation from Slack, context synthesis, spec drafting
- Skills (reusable agent workflows) on all plans
- Automations (auto-triage triggers) on Business/Enterprise
- Code Intelligence (codebase Q&A for non-engineers) coming soon, Business/Enterprise
- GitHub Copilot integration — assign Copilot to a Linear issue, it creates a PR
- Coding agents in 75% of enterprise workspaces

**What is announced but not yet live:**

- Full autonomous dispatch of simple issues to coding agents without human initiation
- Auto-collation of requirements into project specifications
- Complete self-driving workflows

**Assessment:** Linear is the furthest along in agentic PM, but the full vision is still aspirational. The gap between "agent in beta" and "self-driving" is significant. The infrastructure (MCP, Agent, Copilot integration) is real; the autonomous loop is not yet closed.

### Who else is doing agentic PM?

| Company | Agentic approach | Maturity |
|---------|-----------------|----------|
| **ClickUp** | Super Agents as "AI coworkers" with persistent memory, ambient monitoring, proactive action | Live (production) — broad but shallow |
| **Shortcut** | Korey as PM agent + "Shortcut for Agents" allowing AI agents as story owners | Live (GA for Korey, expanding) |
| **Plane** | Agent assignment to work items, AI sidecar with full project context | Live — solid but less aggressive than Linear |
| **Jira** | Rovo agents assignable in Jira, third-party agent support via MCP | Open beta (Feb 2026) — enterprise-scale |
| **Asana** | AI Teammates (12 pre-built), AI Studio for custom workflows | Live — cross-functional, not dev-focused |
| **Monday.com** | Agent Builder (beta), Sidekick (GA), three-tier AI pricing | Live (Sidekick) / Beta (agents) |
| **Notion** | Custom Agents with MCP orchestrating across Linear, Figma, Slack | Live (Business+) — meta-orchestration layer |
| **Dart** | Chat-as-UI, agents as first-class collaborators | Live — niche, AI-first from founding |
| **Taskade** | AI agents with 700+ task types, persistent memory | Live — consumer/prosumer tier |

### The agentic loop pattern

The industry is converging on a common agentic workflow:

1. **Capture** — Issues created from Slack, email, support tickets, or code comments (Linear Agent, Korey, Rovo)
2. **Triage** — AI classifies, prioritizes, assigns to team/person (Linear Triage Intelligence, Plane AI, Rovo)
3. **Spec** — AI generates specifications, acceptance criteria, task breakdown (Korey, Linear Agent, ClickUp Brain)
4. **Dispatch** — Issue assigned to coding agent (Copilot, Codex) or human developer
5. **Implement** — Coding agent creates PR from issue context
6. **Review** — Human evaluates result, AI assists with code review
7. **Close** — Status updated, documentation generated

Linear and Shortcut are closest to completing this loop end-to-end. Kendo participates at steps 4-5 (via MCP) but lacks steps 1-3 (AI triage, agent capture, spec generation).

---

## 5. The AI Commoditization Risk

### Has it accelerated?

Yes. Kendo's marketing strategy identified AI commoditization as threat T5 (severity: Medium). Based on the current landscape, this assessment should be upgraded.

**Evidence of acceleration:**

1. **Every competitor now has AI.** As of April 2026, there is no PM tool in the developer segment without some form of AI. The question is no longer "does it have AI?" but "how deep is the AI?"
2. **MCP is approaching table stakes.** What was a rare differentiator (4 tools) is now common (7+ official servers, community servers for many more). The 12-month window identified in marketing-strategy.md has shrunk.
3. **AI models are commoditized.** ClickUp uses ChatGPT-5.1, Notion offers Claude Opus 4.5 / GPT-5.2 / Gemini 3 Pro. The underlying AI model is no longer a differentiator — orchestration, data, and workflow integration are.
4. **AI features are free-tier fodder.** Linear Agent is free during beta (all plans). Plane AI is built into every tier. Monday offers Sidekick Lite. The premium tier is shifting from "has AI" to "has autonomous AI."
5. **Coding agent integration is the new battleground.** Linear (75% enterprise), Shortcut (Shortcut for Agents), Plane (agent assignment), and Jira (agent assignment in beta) all support assigning AI coding agents to issues. This was exotic 12 months ago.

### Where differentiation still exists

Despite commoditization of basic AI features, there are durable differentiators:

| Differentiator | Why it persists | Who has it |
|---------------|----------------|-----------|
| **Proprietary workflow data** | AI quality depends on the data it operates on. Your sprint history, velocity patterns, and contributor output are unique. | All tools (but requires user commitment) |
| **MCP tool depth** | Number of tools/operations exposed matters. Shallow MCP = limited agent capability. | Kendo (26 tools), GitScrum (29 tools), Linear (expanding) |
| **End-to-end agentic loop** | Capturing, triaging, speccing, dispatching, implementing, reviewing — completing the full loop is rare. | Linear (closest), Shortcut/Korey (strong) |
| **Architecture-level AI** | AI baked into the data model (audit trails, hash chains, multi-tenancy) vs. bolted on. | Kendo (audit trails), Plane (AI-first redesign) |
| **Vertical specialization** | AI tuned for a specific workflow (dev teams vs. marketing vs. enterprise). | Linear (dev), Asana (cross-functional), Jira (enterprise) |
| **Time tracking + AI** | Bundling time tracking with AI-assisted PM remains rare at a single price point. | Kendo, Plane (Pro+), ClickUp (Unlimited+) |

### Revised threat assessment

| Threat | Original severity | Revised severity | Rationale |
|--------|-------------------|------------------|-----------|
| T5: AI commoditization | Medium | **High** | Basic AI (summarization, generation, triage) is now table stakes. MCP support is near-universal among developer PM tools. The moat must shift from "has AI" to "deepest AI integration with developer workflows." |

**Flag for `marketing-strategy.md`:** T5 severity should be upgraded from Medium to High. The 12-month MCP window (O1) has largely closed. Kendo's AI differentiation now depends on depth (26 MCP tools, audit trails, time-tracking integration) rather than presence.

---

## 6. Implications for Kendo

### What the data says

1. **"AI-native" is a viable but contested positioning.** Plane uses the same claim with more brand recognition. Kendo's claim is architecturally honest (MCP-first design, AI audit trails) but must be backed by visible proof points, not just assertions.
2. **MCP is no longer the moat — MCP depth is.** Having an MCP server is common. Having 26 tools with deep operations (time tracking, sprint planning, audit logs) is not. Kendo should market the depth, not the existence, of its MCP server.
3. **The missing capability: AI triage and capture.** Every major competitor has some form of AI issue creation, triage, and assignment. Kendo's AI story generation from reports is internal-only. Without visible AI-assisted triage, Kendo looks AI-incomplete to evaluators.
4. **Time tracking + AI remains underexploited.** Linear still has no native time tracking. Plane gates it to Pro+. Kendo's bundling of time tracking with MCP and AI at every tier is genuinely rare.
5. **The agentic loop matters.** Teams evaluating tools in 2026 ask: "Can I assign an AI agent to an issue and get a PR back?" Kendo's MCP server enables this via Claude Code/Cursor, but the workflow is not productized or marketed.
6. **European positioning strengthens.** As AI features proliferate, data residency and compliance become differentiators. Amsterdam hosting + database-per-tenant + hash-chained audit logs + GDPR-by-architecture is a combination no competitor matches.
### Contradictions with existing docs

| Document | Claim | Current reality |
|----------|-------|----------------|
| `marketing-strategy.md` S2, O1 | "One of only 4 PM tools with MCP support" | At least 7 have official MCP servers. The claim needs updating. |
| `marketing-strategy.md` O1 | "First-mover advantage window is ~12 months" | Window has largely closed. MCP is near-universal for dev PM tools. |
| `marketing-strategy.md` T5 | AI commoditization severity: Medium | Should be High. Basic AI is table stakes; every competitor has it. |
| `competitors.md` on Shortcut | "No AI differentiator, stagnating" | Shortcut has shipped Korey (PM agent), Shortcut for Agents, and plans cross-platform expansion. Not stagnating. |
| `competitors.md` on Jira | No mention of MCP support | Jira now has Rovo MCP Server (GA) and agents-in-Jira (open beta). |
| `competitors.md` on Plane | "No built-in time tracking" | Plane has time tracking on Pro+ tier. |

---

## 7. Summary

The AI-native developer tools landscape in April 2026 is defined by three shifts:

1. **AI is table stakes.** Every PM tool has AI features. The differentiation is in depth, not presence.
2. **MCP is the universal protocol.** 97M monthly SDK downloads, 10K+ servers, adopted by all major providers. PM tools without MCP servers are invisible to AI coding workflows.
3. **The agentic loop is the new frontier.** Linear, Shortcut, and ClickUp are closest to closing the full capture-triage-spec-dispatch-implement-review loop. This is where the next 12 months of competition will play out.

Kendo's positioning is architecturally sound (MCP-first, AI audit trails, time tracking bundled) but the messaging needs updating: the MCP rarity window has closed, and the differentiation must shift to integration depth, European compliance, and the specific combination of capabilities no one else offers.
---

## Related

- [../company/marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — SWOT and competitive positioning (flags noted above)
- [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Competitor profiles (flags noted above)
- [../company/product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Kendo feature set

---

# AI Social Media Reply Tools 2026

*Source: `research/ai-social-media-reply-tools-2026.md` · [Raw](https://kendo-hq.pages.dev/raw/research/ai-social-media-reply-tools-2026.md)*

---
title: "Social Media Scraping Agents: X/Twitter & BlueSky Landscape for Marketing Engagement (2026)"
date: 2026-04-04
type: investigation
tags: [social-media, automation, AI-agents, browser-automation, reply-generation, Twitter, BlueSky, content-pipeline, scraping, AT-Protocol, MCP]
sources:
  - https://github.com/nirholas/XActions
  - https://github.com/Xquik-dev/x-twitter-scraper
  - https://github.com/vladkens/twscrape
  - https://github.com/Altimis/Scweet
  - https://github.com/garyb9/twitter-llm-bot
  - https://github.com/ihuzaifashoukat/twitter-automation-ai
  - https://github.com/Prem95/socialautonomies
  - https://github.com/gkamradt/twitter-reply-bot
  - https://github.com/marcolivierbouch/XReplyGPT
  - https://github.com/proxyvector/twitter-ai-reply
  - https://github.com/langchain-ai/social-media-agent
  - https://github.com/russss/polybot
  - https://github.com/elfacu0/playwright-simple-tweet
  - https://github.com/Kianmhz/Twitter-Bot
  - https://github.com/elizaos-plugins/client-twitter
  - https://github.com/elizaos/agent-twitter-client
  - https://github.com/ryanmac/agent-twitter-client-mcp
  - https://github.com/bluesky-social/jetstream
  - https://github.com/bluesky-social/feed-generator
  - https://github.com/MarshalX/bluesky-feed-generator
  - https://github.com/mary-ext/atproto-scraping
  - https://github.com/browser-use/browser-use
  - https://github.com/Cortex-AI-Software/Twitter-X-API-Free-Automation-Bot-Auto-Like-Follow-Post-Tracker
  - https://dev.to/p3ngu1nzz/automating-bluesky-for-ai-agents-at-protocol-bot-44k1
  - https://dev.to/altimis/how-to-scrape-twitterx-without-an-api-key-in-python-2026-guide-dji
  - https://dev.to/0012303/the-complete-guide-to-bluesky-at-protocol-4-free-tools-for-developers-1edb
  - https://dev.to/fairpricework/how-to-scrape-bluesky-posts-and-profiles-at-scale-no-api-key-needed-3ma0
  - https://dev.to/abdibrokhim/build-a-twitter-replyguy-using-aiml-api-and-twitter-api-v2-3c3j
  - https://scrapfly.io/blog/posts/social-media-scraping
  - https://apify.com/scrapers/twitter
  - https://apify.com/knotless_cadence/bluesky-scraper
  - https://replyguy.com/
  - https://replyguy.com/how-it-works
  - https://www.qura.ai/
  - https://smart-ai-reply.com/
  - https://tweetstorm.ai/
  - https://tweetai.com/reply
  - https://www.replyguyframework.com/blog/algorithm-2026
  - https://posteverywhere.ai/blog/how-the-x-twitter-algorithm-works
  - https://opentweet.io/blog/how-twitter-x-algorithm-works-2026
  - https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/
  - https://postproxy.dev/blog/x-api-pricing-2026/
  - https://docs.bsky.app/blog/jetstream
  - https://docs.bsky.app/docs/advanced-guides/firehose
  - https://docs.bsky.app/docs/starter-templates/bots
  - https://docs.bsky.app/docs/starter-templates/custom-feeds
  - https://docs.bsky.app/docs/advanced-guides/rate-limits
  - https://jazco.dev/2024/09/24/jetstream/
  - https://davepeck.org/notes/bluesky/blueskys-jetstream-with-just-a-few-lines-of-python/
  - https://o-mega.ai/articles/top-10-browser-agents-for-social-media-automation-full-guide-2025
  - https://n8n.io/workflows/categories/social-media/
  - https://fast.io/resources/ai-agent-social-media-automation/
  - https://skywork.ai/skypage/en/ai-twitter-bots-automation-autonomy/2029473528737636352
  - https://noimosai.com/en/blog/top-5-ai-agents-for-x-twitter-in-2026-revolutionizing-your-social-strategy
  - https://arxiv.org/html/2502.12073v1
---

# Social Media Scraping Agents: X/Twitter & BlueSky Landscape for Marketing Engagement (2026)

## Summary

The landscape for social media scraping and AI-powered engagement in 2026 spans five layers:

1. **MCP-native data tools** that give AI agents direct access to X/Twitter and BlueSky data (XActions with 140+ MCP tools, x-twitter-scraper at $0.00015/call, Xpoz)
2. **Python scraping libraries** that bypass official APIs via GraphQL/cookies (twscrape, Scweet v4, ElizaOS agent-twitter-client)
3. **Browser extensions** for inline reply generation (Qura AI, TweetStorm, XReplyGPT)
4. **Commercial SaaS platforms** that automate the full monitor-reply pipeline (ReplyGuy, TweetHunter, TrendRadar)
5. **Workflow orchestration frameworks** for custom pipelines (LangGraph social-media-agent, n8n with 515+ templates)

BlueSky is dramatically more scraper-friendly than X due to the open AT Protocol, the free Jetstream WebSocket firehose (~850 MB/day for all posts), and no authentication needed for public data reads. X's official API starts at $200/month for Basic read access, but pay-per-use launched in February 2026 at ~$0.01/tweet. **Platform risk is the dominant constraint**: X explicitly bans automated keyword-based replies and will suspend accounts; BlueSky allows bots but mandates opt-in interaction only (users must tag the bot).

---

## 1. Technical Approaches to Scraping

### 1.1 X/Twitter Scraping Methods

#### Official API (Updated 2026-04-04)

X's API pricing underwent significant changes with a new pay-per-use model launching February 6, 2026:

| Tier | Cost | Read Volume | Write Volume | Notes |
|------|------|-------------|--------------|-------|
| **Free** | $0 | Minimal | 1,500 posts/mo | Posting only, minimal read access |
| **Basic** | $200/mo | 10K tweets/mo | 3,000 posts/mo | Raised from $100 in Oct 2024 |
| **Pro** | $5,000/mo | 1M tweets/mo | 300K posts/mo | Full search, streaming |
| **Enterprise** | $42K-50K+/mo | Custom | Custom | Full firehose, analytics |
| **Pay-per-use** | ~$0.01/tweet | 2M cap | Variable | New Feb 2026, credit-based |

(Source: [X API Pricing 2026](https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/), [Postproxy](https://postproxy.dev/blog/x-api-pricing-2026/))

The free tier is intentionally posting-only — X wants developers to pay for read access. For monitoring/scraping use cases, the minimum entry is $200/month or the new pay-per-use model.

#### Unofficial API Scraping (GraphQL/Cookie-based)

These tools reverse-engineer X's internal GraphQL endpoints used by the web client, authenticating via browser cookies rather than API keys:

| Tool | Language | Auth Method | Account Pooling | Key Feature |
|------|----------|-------------|-----------------|-------------|
| **twscrape** | Python | Multi-account + IMAP email verification | Yes (SQLite) | Auto account switching on rate limit |
| **Scweet v4** | Python | Multi-account + proxy | Yes (SQLite) | DB-first provisioning, heartbeats, cooldowns |
| **ElizaOS agent-twitter-client** | TypeScript | Cookie-based (auth_token, ct0, twid) | No | Works without any API key |
| **XActions** | Node.js | Puppeteer browser automation | No | 140+ MCP tools, cross-platform |

**twscrape** (github.com/vladkens/twscrape): The most mature Python scraper.
Supports async/await for parallel scraping, automatic account switching when one account hits rate limits, a login flow that receives email verification codes via IMAP, and cookie persistence in SQLite. A single account handles hundreds to a few thousand tweets/day; multi-account pooling scales proportionally. (Source: [twscrape GitHub](https://github.com/vladkens/twscrape))

**Scweet v4** (github.com/Altimis/Scweet): Released February 2026. Moved to an API-only core using X's GraphQL endpoints. Smart multi-account pooling with SQLite managing leases, heartbeats, daily counters, cooldowns, and automatic failover. Proxy support pairs each account with a different IP. (Source: [Scweet 2026 Guide](https://dev.to/altimis/how-to-scrape-twitterx-without-an-api-key-in-python-2026-guide-dji))

**ElizaOS agent-twitter-client** (github.com/elizaos/agent-twitter-client): Cookie-based Twitter client that avoids API costs entirely. Essential cookies: auth_token, ct0, twid. Known issue: "Session is invalid" errors after cookie expiry, plus suspicious-login alerts from Twitter. There is also an MCP wrapper (github.com/ryanmac/agent-twitter-client-mcp) that exposes this as an MCP server for AI agents. (Source: [ElizaOS GitHub](https://github.com/elizaos/agent-twitter-client))

#### Browser Automation

| Tool | Browser Engine | Approach | Stealth |
|------|---------------|----------|---------|
| **XActions** | Puppeteer | Full browser automation, no API | Built-in |
| **twitter-automation-ai** | Selenium + undetected-chromedriver | Multi-account, keyword-based | selenium-stealth, random user-agents, proxy rotation |
| **Playwright approaches** | Playwright | Minimal scripts for posting/interacting | Limited |
| **Browser Use** | Any (LLM-controlled) | Natural language task -> browser actions | Variable |

#### MCP-Native Tools (New Category, 2026)

A major 2026 trend: social media data exposed via the Model Context Protocol, letting AI agents query social data through natural language.

**XActions** (github.com/nirholas/XActions): The most comprehensive open-source toolkit. 140+ MCP tools across scraping, posting, engagement, analytics, and streaming. Supports X/Twitter, BlueSky, Mastodon, and Threads. Runs entirely locally — no data leaves the machine. Also includes a CLI, a Node.js library, a browser extension, and 50+ browser scripts. MIT license. v3.1.0 added a workflow engine with declarative JSON pipelines, real-time streaming via Socket.IO, sentiment analysis, and social graph mapping. (Source: [XActions GitHub](https://github.com/nirholas/XActions))

**x-twitter-scraper** (github.com/Xquik-dev/x-twitter-scraper): 120 REST API endpoints, 2 MCP tools, 23 extraction types. Reads at $0.00015/call — 33x cheaper than the official API. Works with 40+ AI agents including Claude Code, Cursor, Codex, and Copilot. Credit-based pricing: 1 credit = $0.00015; read ops cost 1-3 credits. A $20/month subscription is available. (Source: [x-twitter-scraper GitHub](https://github.com/Xquik-dev/x-twitter-scraper))

**Xpoz**: MCP-first platform enabling natural language queries through AI assistants like Claude and ChatGPT. (Source: [Xpoz](https://www.xpoz.ai/blog/comparisons/best-twitter-api-alternatives-2026/))

**Apify MCP**: Apify's Twitter scraper now has an MCP server endpoint, enabling AI agents to programmatically scrape tweets. $0.25 per 1,000 tweets. (Source: [Apify Twitter MCP](https://apify.com/xtdata/twitter-x-scraper/api/mcp))

#### Commercial Scraping APIs

| Provider | Pricing | Key Feature |
|----------|---------|-------------|
| **Apify** | $0.25-0.40/1K tweets | Pre-built actors, MCP server, no-code |
| **TwitterAPI.io** | $0.15/1K tweets | Pay-as-you-go, 1K+ req/sec |
| **Bright Data** | Usage-based | Proxy infrastructure, social media scrapers |
| **ScrapeCreators** | Usage-based | Real-time social media scraping APIs |
| **EnsembleData** | Usage-based | Multi-platform social media data APIs |

### 1.2 BlueSky Scraping Methods

BlueSky's AT Protocol is fundamentally different from X — it's an open protocol designed for interoperability. This makes it the most scraper-friendly major social platform.

#### AT Protocol Public API (Free, No Auth for Reads)

BlueSky's API is fully open for public data reads, with no authentication needed for profiles and posts. Search functionality (`app.bsky.feed.searchPosts`) now requires authentication but is still free to use.
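A public read is a single unauthenticated GET; here is a minimal sketch using only the standard library. The public AppView host `public.api.bsky.app` and the handle used are assumptions of this sketch, not taken from the docs cited here.

```python
import json
import urllib.parse
import urllib.request

# Public AppView host for no-auth reads (an assumption of this sketch).
APPVIEW = "https://public.api.bsky.app/xrpc"

def feed_url(actor: str, limit: int = 5) -> str:
    """Build the app.bsky.feed.getAuthorFeed URL for a handle or DID."""
    query = urllib.parse.urlencode({"actor": actor, "limit": limit})
    return f"{APPVIEW}/app.bsky.feed.getAuthorFeed?{query}"

def fetch_feed(actor: str, limit: int = 5) -> list[dict]:
    """Fetch recent posts; each feed item wraps a post and its record."""
    with urllib.request.urlopen(feed_url(actor, limit)) as resp:
        return json.load(resp)["feed"]
```

Calling `fetch_feed("some-handle.bsky.social")` returns feed items whose post text lives at `item["post"]["record"]["text"]`; no API key, token, or account is involved.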
Key endpoints:

- `app.bsky.feed.getAuthorFeed` — get posts from a specific user
- `app.bsky.feed.searchPosts` — keyword search (auth required)
- `app.bsky.actor.searchActors` — find users
- `com.atproto.sync.subscribeRepos` — full firehose subscription

Rate limits (generous):

| Metric | Limit |
|--------|-------|
| Points/hour | 5,000 |
| Points/day | 35,000 |
| CREATE cost | 3 points |
| UPDATE cost | 2 points |
| DELETE cost | 1 point |
| Max creates/hour | ~1,666 |
| Max creates/day | ~11,666 |
| API requests/5 min | 3,000 (per IP) |

(Source: [BlueSky Rate Limits](https://docs.bsky.app/docs/advanced-guides/rate-limits))

#### Jetstream (Real-Time WebSocket Firehose)

Jetstream is BlueSky's official simplified streaming solution — a WebSocket server that consumes the full AT Protocol firehose and redistributes it as simple JSON. This is the **key differentiator** for BlueSky monitoring.

**How it works:**

```
Full AT Proto Firehose (CBOR) → Jetstream Server → JSON WebSocket → Your Client
```

**Connection:** `wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post`

**Official instances:**

- `jetstream1.us-east.bsky.network`
- `jetstream2.us-east.bsky.network`
- `jetstream1.us-west.bsky.network`
- `jetstream2.us-west.bsky.network`

**Filtering:**

- By collection NSID: filter to only posts, likes, follows, etc. (max 100 collections)
- By repo DID: filter to specific users (max 10,000 DIDs)
- Supports NSID prefixes like `app.bsky.*`

**Bandwidth:** ~850 MB/day for all posts on the network. Compressed messages are ~56% smaller than raw JSON.

**Trade-off:** No cryptographic signatures or Merkle tree nodes — the data isn't self-authenticating (unlike the raw firehose).
**Client libraries:**

- Official Go client included in the Jetstream repo
- Python: simple WebSocket connection with the `websockets` library — [just a few lines](https://davepeck.org/notes/bluesky/blueskys-jetstream-with-just-a-few-lines-of-python/)
- TypeScript: fully typed client available
- Ruby: the `skyfall` gem supports Jetstream since v0.5

(Source: [Jetstream Blog Post](https://docs.bsky.app/blog/jetstream), [Jetstream GitHub](https://github.com/bluesky-social/jetstream), [Jaz's Blog](https://jazco.dev/2024/09/24/jetstream/))

#### Feed Generators (Custom Algorithms)

BlueSky's feed generator framework lets you build custom algorithmic feeds that filter the firehose by any criteria — keywords, sentiment, engagement signals, user lists.

**Architecture:**

1. Your server subscribes to the firehose/Jetstream
2. It indexes posts matching your criteria (keyword match, LLM classification, etc.)
3. It serves a feed endpoint that BlueSky clients can subscribe to
4. Users add your feed as a custom timeline

**Resources:**

- Official starter kit: [github.com/bluesky-social/feed-generator](https://github.com/bluesky-social/feed-generator)
- Python implementation: [github.com/MarshalX/bluesky-feed-generator](https://github.com/MarshalX/bluesky-feed-generator)
- Example: Indie Tech Feed filters for open-source, game dev, and hacking content

**Attie** (launched March 2026): BlueSky's own AI assistant (powered by Claude) that lets non-technical users create custom feeds using natural language. Already the second most blocked account (~125K blocks) — an indicator of user resistance to AI on the platform. (Source: [BlueSky Custom Feeds](https://docs.bsky.app/docs/starter-templates/custom-feeds), [Feed Generator GitHub](https://github.com/bluesky-social/feed-generator))

#### BlueSky Scraping Services

| Provider | Pricing | Notes |
|----------|---------|-------|
| **Apify BlueSky Scraper** | $1.50/1K posts or free for 100/day | Extract posts, profiles, engagement metrics |
| **AT-bot (MCP-Native)** | Free (CC0 license) | 31 MCP tools, AES-256-CBC auth, ~300ms post creation |
| **atproto-scraping** | Free | Git scraping of AT Protocol instances |
| **BlueSkySight** | Free (PyPI) | Python library, Jetstream integration |

### 1.3 Cross-Platform Tools

| Tool | Platforms | Type | Key Feature |
|------|-----------|------|-------------|
| **XActions** | X, BlueSky, Mastodon, Threads | Open-source toolkit + MCP | Unified interface, 140+ MCP tools |
| **Polybot** | X, Mastodon, BlueSky | Python framework | Cross-platform posting, auto message length |
| **Apify** | X, BlueSky, Instagram, TikTok, etc. | Commercial | Pre-built actors for each platform |
| **n8n** | Multi-platform via integrations | Workflow builder | 515+ social media templates |

---

## 2. Existing Tools and Agents

### 2.1 Commercial Reply-Automation Platforms

#### ReplyGuy (replyguy.com) — The Category Leader

The most polished commercial product for automated social media replies.

**How it works:**

1. You define keywords relevant to your product
2. ReplyGuy scours the web for matching conversations
3. AI selects high-quality, recent, relevant posts
4. It generates replies that "genuinely help the original poster while mentioning your product"
5. **Twitter:** fully automated posting (when enabled) or manual
6. **Reddit & LinkedIn:** semi-manual — the system identifies and generates; you copy, paste, and publish

**Platforms:** Twitter (auto-reply), Reddit (semi-manual), LinkedIn (semi-manual)
**Pricing:** Subscription-based (details behind paywall)
**Claims:** Saves 30-60 hours/month per project

(Source: [ReplyGuy](https://replyguy.com/), [How It Works](https://replyguy.com/how-it-works))

**Risk warning:** ReplyGuy's Twitter auto-reply feature directly violates X's ToS, which requires "prior written and explicit approval" for AI reply bots. Using it risks account suspension.

#### Other Commercial Tools

| Tool | Platforms | Model | Approach | Notable |
|------|-----------|-------|----------|---------|
| **TweetHunter** | X/Twitter | Proprietary | SaaS | $10M+ exit. Common complaint: robotic AI content |
| **TrendRadar** | X/Twitter | AI-powered | SaaS + browser | "Reply guy" growth strategy automation |
| **Hootsuite** | Multi-platform | Various | Enterprise SaaS | AI outperforms humans for bottom-of-funnel CTAs |
| **Ayrshare** | Multi-platform | API | Developer API | Programmatic posting + reply to comments |
| **Marblism "Sonny"** | Multi-platform | AI | Autonomous agent | 3-4 daily posts with adaptive tactics |
| **Manus AI** | Multi-platform | AI | Autonomous agent | Campaign promotion, content optimization, $39-200/mo |

### 2.2 Browser Extensions (Reply-in-Context)

These inject AI reply generation directly into the social platform's UI. The user sees a post, triggers the extension, reviews the reply, and posts manually.
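Under the hood, the generation step these extensions share is mostly prompt templating around the visible post. A minimal sketch; the template wording, parameter names, and defaults are illustrative, not any extension's actual prompt:

```python
def build_reply_prompt(post: str, tone: str = "helpful", max_chars: int = 280) -> str:
    """Assemble an LLM prompt for an in-context reply to one post.

    Real extensions add tone presets, few-shot examples, and product-mention
    rules; this shows only the basic shape.
    """
    return (
        f"You are replying on X in a {tone} tone.\n"
        f"Keep the reply under {max_chars} characters.\n"
        f"Original post:\n{post}\n"
        "Reply:"
    )
```

The returned string is what gets sent to the extension's chosen LLM; the user then edits the generated reply before posting, which is what keeps these tools on the safer side of platform rules.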
| Extension | Platforms | LLMs | Key Feature | Source |
|-----------|-----------|------|-------------|--------|
| **Qura AI** | X, LinkedIn, Reddit, FB | GPT-4o, Claude, Gemini | Fine-tuned on millions of tweets, 19+ tone presets | [qura.ai](https://www.qura.ai/) |
| **TweetStorm.ai** | X/Twitter | Proprietary | Keyword forcing, emoji/hashtag toggles, content history | [tweetstorm.ai](https://tweetstorm.ai/) |
| **XReplyGPT** | X/Twitter | OpenAI API | Open-source, never auto-sends | [GitHub](https://github.com/marcolivierbouch/XReplyGPT) |
| **twitter-ai-reply** | X/Twitter | OpenAI API | Vue.js, tone selection, edit-before-post | [GitHub](https://github.com/proxyvector/twitter-ai-reply) |
| **Smart AI Reply** | X, LinkedIn | Proprietary | One-click contextual reply generation | [smart-ai-reply.com](https://smart-ai-reply.com/) |
| **GM Bot** | X/Twitter | Built-in | Auto scrolls, replies, likes, follows based on settings | Chrome Web Store |

### 2.3 Open-Source Automation Frameworks

#### twitter-automation-ai (Most Comprehensive)

- **GitHub:** [github.com/ihuzaifashoukat/twitter-automation-ai](https://github.com/ihuzaifashoukat/twitter-automation-ai)
- **Stack:** Python, Selenium, undetected-chromedriver, selenium-stealth
- **LLMs:** OpenAI, Azure OpenAI, Gemini (via LangChain)
- **Key features:** Multi-account management, keyword-based reply automation with recency filters, LLM relevance scoring (0-1 scale), competitor interaction, sentiment analysis, proxy pool rotation, per-account metrics tracking
- **Configuration:** `config/settings.json` (global) + `config/accounts.json` (per-account overrides with keywords, LLM preferences, proxy settings)

#### ElizaOS + client-twitter

- **GitHub:** [github.com/elizaos-plugins/client-twitter](https://github.com/elizaos-plugins/client-twitter)
- **Framework:** ElizaOS — TypeScript framework for autonomous AI agents
- **Innovation:** Twitter client without an API key, using browser cookies
- **Features:** Post generation, interaction handling, search, Twitter Spaces, optional Discord approval workflow, character files for agent personality, long-term memory
- **Community:** Massive open-source community (ai16z origin)

#### socialautonomies

- **GitHub:** [github.com/Prem95/socialautonomies](https://github.com/Prem95/socialautonomies)
- **Stack:** Next.js 14, TypeScript, Prisma, Supabase auth, Stripe
- **Architecture:** Full SaaS platform with auto-reply, auto-engage, tweet scheduling, analytics dashboard
- **Twitter client:** ElizaOS agent-twitter-client (cookie-based, no API key)

#### AT-bot (BlueSky MCP-Native)

- **Source:** [Automating Bluesky for AI Agents](https://dev.to/p3ngu1nzz/automating-bluesky-for-ai-agents-at-protocol-bot-44k1)
- **Architecture:** CLI (Bash 4.0+) + MCP server (TypeScript/Node.js 18+)
- **31 MCP tools** across Authentication, Content, Feed, Profile, Search, Engagement
- **Performance:** Auth ~500ms, post creation ~300ms, <5MB memory, 100+ ops/minute
- **License:** CC0-1.0 (public domain)

### 2.4 Workflow Orchestration Pipelines

#### LangChain social-media-agent (Reference Implementation)

- **GitHub:** [github.com/langchain-ai/social-media-agent](https://github.com/langchain-ai/social-media-agent)
- **The gold standard** for monitor-filter-generate-review-post pipelines
- **Stack:** LangGraph, Claude (Anthropic API), FireCrawl, Supabase, TypeScript/React
- **Architecture:** Content sources -> FireCrawl scraping -> Claude relevance evaluation -> marketing report -> platform-specific post generation -> image suggestion -> human review (Agent Inbox UI) -> OAuth posting -> Slack notification
- **Human-in-the-loop:** LangGraph interrupts at decision points. Users approve/modify/reject via the Agent Inbox web UI.
- **Batch mode:** Slack channel ingestion with daily cron triggers

#### n8n Pipelines

- **515+ social media templates** at [n8n.io/workflows/categories/social-media/](https://n8n.io/workflows/categories/social-media/)
- Self-hosted, native LangChain support (n8n 2.0), human-in-the-loop patterns
- Notable templates: "AI-powered news monitoring & social post generator", "Social media sentiment analysis dashboard", "Multi-platform content creation with AI"
- Pipeline pattern: Trigger nodes (RSS, webhooks, cron) -> AI processing (summarization, adaptation) -> Quality control (Slack approval) -> Multi-platform publishing -> Feedback loops

### 2.5 AI Agent Frameworks

| Framework | Language | Social Media Relevance |
|-----------|----------|----------------------|
| **LangGraph** | Python/TS | Best for stateful monitor-filter-generate-review pipelines with human-in-the-loop interrupts |
| **CrewAI** | Python | Role-based agent teams: "social media manager" + "content writer" + "reviewer" |
| **ElizaOS** | TypeScript | Native Twitter/Discord/Telegram clients, personality system, long-term memory |
| **AutoGen** | Python | Multi-agent debate on reply quality before posting |
| **LangChain** | Python/TS | Foundation layer, 1000+ integrations |

---

## 3. X Algorithm and Engagement Mechanics (2026)

Understanding the algorithm is critical for any reply strategy. In January 2026, xAI released a Grok-powered transformer model replacing the legacy system.
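As a toy illustration of how relative action weights turn raw engagement counts into a ranking signal: the weight values below approximate the 2026 figures reported in this section, and the action names are illustrative, not the algorithm's internal labels.

```python
# Relative weights (like = 1x baseline), approximating the reported
# 2026 figures; treat these numbers as illustrative.
WEIGHTS = {
    "reply": 15,
    "reply_with_author_reply": 150,  # author engaging back compounds reach
    "retweet": 20,
    "profile_click": 12,
    "link_click": 11,
    "bookmark": 10,
    "like": 1,
}

def engagement_score(counts: dict[str, int]) -> int:
    """Sum of (count x weight) over the action types present."""
    return sum(WEIGHTS.get(action, 0) * n for action, n in counts.items())

# One reply chain the author answers outweighs a hundred likes:
# engagement_score({"reply_with_author_reply": 1})  -> 150
# engagement_score({"like": 100})                   -> 100
```

The point of the sketch is the asymmetry, not the exact numbers: a single high-weight interaction dominates large volumes of low-weight ones, which is why reply strategy matters at all.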
### Engagement Weight Hierarchy

| Action | Algorithmic Weight (relative to like) | Notes |
|--------|---------------------------------------|-------|
| **Reply** | **~15x** | Most valuable single action |
| **Reply + author reply back** | **~150x** | Conversation = massive distribution boost |
| **Retweet** | 20x | |
| **Profile click** | 12x | |
| **Link click** | 11x | |
| **Bookmark** | 10x | |
| **Like** | 1x (baseline) | Weakest signal |

**Thread compounding:** A thread with 5+ back-and-forth replies receives 3-4x the impressions of a tweet with 5 standalone likes. Author response to replies triggers 2.5x more out-of-network reach.

**Premium boost:** Premium subscribers receive ~10x more impressions (4x in-network, 2x out-of-network).

(Source: [Reply Guy Framework](https://www.replyguyframework.com/blog/algorithm-2026), [PostEverywhere](https://posteverywhere.ai/blog/how-the-x-twitter-algorithm-works), [OpenTweet](https://opentweet.io/blog/how-twitter-x-algorithm-works-2026))

### Time Decay

- **Critical window:** First 30 minutes determines distribution trajectory
- **Half-life:** 18-43 minutes
- **95% of distribution** occurs within 24 hours
- **Velocity test:** 50 engagements in 1 hour = massive distribution; 50 over 24 hours = buried
- **Consistency signal:** Missing 3+ consecutive activity days triggers algorithmic throttling

### What This Means for Reply Strategy

Replies are the single most heavily weighted engagement signal. One genuine reply chain where the author engages back is worth more than hundreds of likes. Consistent high-quality replies build your account's reputation score, meaning your original posts start with better distribution. The flywheel: replies -> reputation -> better distribution on original content.

---

## 4. Architecture Pattern: Monitor -> Filter -> Generate -> Review -> Post

```
1. MONITOR
   Keyword tracking | Mentions listener | Competitor scraper | Jetstream/Firehose
        ↓
2. FILTER
   Relevance scoring (LLM-based, 0-1 threshold)
   Sentiment analysis (positive/negative/question)
   Deduplication (Redis/DB state tracking)
   Recency filter (configurable time window)
   Engagement threshold (min likes/retweets)
   Author authority filter (follower count, blue check)
        ↓
3. GENERATE
   LLM reply generation with:
   - Brand voice / tone guidelines
   - Character limits (280 Twitter / 300 BlueSky)
   - Context window (original post + thread)
   - Few-shot examples of ideal replies
   - Product mention rules (when/how to reference)
   - Structured JSON output for metadata
        ↓
4. REVIEW (Human-in-the-Loop)
   Options:
   a) Slack/Discord notification with approve/reject
   b) Google Sheet staging (original + draft pairs)
   c) Web UI dashboard (LangGraph Agent Inbox)
   d) Email digest with one-click approval
   Conditional: auto-approve high-confidence, flag low
        ↓
5. POST
   Platform API posting (Tweepy/API, AT Protocol)
   Rate limiting and natural timing jitter
   Screenshot/proof capture
   Analytics logging (Airtable, JSON, DB)
   Feedback loop → refine future scoring
```

### Implementation Approaches by Complexity

| Complexity | Stack | Best For | Cost |
|------------|-------|----------|------|
| **Low** | Chrome extension (Qura/TweetStorm) | Solo creators, manual reply-by-reply | Free-$20/mo |
| **Medium** | n8n/Zapier + OpenAI + Slack approval | Small teams, scheduled content | $20-100/mo |
| **High** | LangGraph + custom agents + Supabase | Brands needing full pipeline control | Dev time + API costs |
| **Maximum** | twitter-automation-ai + multi-account | Growth hackers (high platform risk) | Dev time + account risk |

### Platform-Specific Pipeline Considerations

**For X/Twitter monitoring:**

- Best approach: official API (pay-per-use at ~$0.01/tweet) or x-twitter-scraper ($0.00015/call)
- Search endpoint for keyword monitoring
- Streaming not available below the Pro tier ($5K/mo)
- Cookie-based scrapers (twscrape, Scweet) for budget-constrained monitoring

**For BlueSky monitoring:**

- Best approach: Jetstream WebSocket (free, real-time, ~850 MB/day)
- Connect to `wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post`
- Filter client-side by keyword matching on post text
- Or build a feed generator that indexes matching posts server-side
- Public API search endpoint (free, requires auth)

---

## 5.
BlueSky vs X Comparison: Scraper-Friendliness | Dimension | X/Twitter | BlueSky | |-----------|-----------|---------| | **API cost for reads** | $200/mo minimum (Basic) or ~$0.01/tweet (pay-per-use) | **Free** (AT Protocol public API) | | **Real-time stream** | Pro tier ($5K/mo) or unofficial scrapers | **Free** (Jetstream WebSocket, 4 public instances) | | **Auth for public reads** | Required (API key or cookies) | **Not required** for profiles/posts (search needs auth) | | **Rate limits** | Aggressive (varies by tier) | **Generous** (5K points/hr, 35K/day) | | **Bot policy** | Must label, engagement automation banned | Must label, opt-in interaction only | | **Scraping stance** | Explicitly banned in ToS since Sept 2023 | **Open protocol, encouraged** | | **Data format** | Proprietary GraphQL (reverse-engineered) | **Open AT Protocol** (documented, stable) | | **Community tools** | Many, but all in gray area | Growing, all legitimate | | **User base** | ~600M+ accounts | ~30M+ accounts | | **Developer audience** | Mixed | **High concentration** of developers and tech community | | **Feed generators** | No equivalent | **Custom algorithmic feeds** anyone can build | | **MCP integration** | XActions (140+ tools), x-twitter-scraper | AT-bot (31 tools), growing | | **Legal risk (EU)** | High (scraping = GDPR violation per Dutch DPA) | Lower (open protocol, but GDPR still applies to personal data) | **Verdict:** BlueSky is dramatically more scraper-friendly. The AT Protocol's openness, free Jetstream firehose, and explicit bot support make it the clear choice for automated monitoring. X has a larger audience but higher cost, legal risk, and platform risk. The trade-off is reach (X) vs. accessibility (BlueSky). --- ## 6. 
Browser Automation Agents for Social Media The browser agent market is exploding ($4.5B in 2024, projected $76.8B by 2034): | Agent | Type | Social Media Capability | Pricing | |-------|------|------------------------|---------| | **Browser Use** | Open-source Python | Mass posting, follower engagement, account tasks | Free | | **Skyvern** | AI + computer vision | LinkedIn bulk actions, CAPTCHA handling | Freemium | | **Axiom.ai** | No-code Chrome extension | Bulk uploads, data scraping, GPT-drafted replies | Freemium | | **PhantomBuster** | Cloud automation | LinkedIn/Twitter/Instagram bots, auto-following | Credit-based | | **Browserbase + Stagehand** | Cloud + open SDK | Enterprise LinkedIn at scale, session persistence | Usage-based | | **Vercel Agent Browser** | Headless CLI | General browser automation, 12.1K GitHub stars | Free | **Browser Use** (github.com/browser-use/browser-use): 89.1% success rate on WebVoyager benchmark. Open-source Python framework that gives any LLM (GPT-4, Claude, local models) browser control. Self-hosted, customizable, no vendor lock-in. --- ## 7. 
Case Studies and Reported Results ### ReplyGuy Users - Saves 30-60 hours/month per project - Fully automated Twitter replies + semi-manual Reddit/LinkedIn ### Maybe AI Users - 1-2 hours/day reclaimed from manual reply work - 3x increase in comments posted - More natural-sounding than fully manual (counterintuitive) ### Hootsuite AI Experiment - AI outperforms humans for bottom-of-funnel CTAs - AI underperforms for brand humor, cultural references, current events - Best results: hybrid (AI drafts, human refines) ### "Reply Guy" Growth Strategy (Manual) - Documented pattern: find high-engagement tweets -> post valuable replies -> gain impressions -> convert to followers - Best templates: respectful contrarian, data nuggets, operator lens, mini-case studies - 10 high-value replies > 50 generic ones - Replies are worth 15-27x more than likes algorithmically ### TweetHunter/Taplio Exit ($10M+) - Built 2021, sold 2022 for 8 figures - Most common complaint: AI content lacks authenticity, requires extensive editing - Lesson: market is proven, but quality remains the bottleneck --- ## 8. Academic Research | Paper | Focus | URL | |-------|-------|-----| | "Can LLMs Simulate Social Media Engagement?" | Action-guided response generation | [arxiv.org/html/2502.12073v1](https://arxiv.org/html/2502.12073v1) | | "SoMe: Realistic Benchmark for LLM-based Social Media Agents" | Evaluating AI agent social media behavior | [arxiv.org/html/2512.14720v1](https://arxiv.org/html/2512.14720v1) | | "@grokSet: Multi-party Human-LLM Interactions" | Human-LLM interaction in real social media | [arxiv.org/html/2602.21236](https://arxiv.org/html/2602.21236) | --- ## Implications for Kendo 1. **BlueSky is the low-risk monitoring opportunity.** Jetstream provides free, real-time, legally defensible access to all public posts. 
A keyword monitor for developer tool discussions is technically trivial to build and violates no ToS; GDPR exposure stays low provided you minimize what you store and delete personal data once it's no longer needed. 2. **X monitoring is expensive or legally risky.** The legitimate path is $200/mo API or ~$0.01/tweet pay-per-use. Cookie-based scrapers work but violate ToS and create GDPR exposure for a Dutch company. 3. **Human-in-the-loop is non-negotiable.** Every successful implementation uses human review before posting. Fully automated replies violate X's ToS, risk BlueSky community backlash, and trigger EU AI Act transparency obligations from August 2026. 4. **The LangGraph social-media-agent is the reference architecture.** If Kendo ever builds a content pipeline, this is the pattern: LangGraph state machine with interrupt-based human review, multi-platform posting, and observability. 5. **For Kendo's current stage, manual engagement is the right strategy.** AI drafting + human review + manual posting is the sweet spot — zero legal risk, zero platform risk, and the algorithm rewards genuine conversation over volume. 6. **MCP-native tools are the 2026 trend.** XActions (140+ tools), x-twitter-scraper, AT-bot, and Apify's MCP endpoints show that social media data is becoming a first-class data source for AI agents. This aligns with Kendo's MCP-aware architecture. ## Open Questions - How reliable are the new MCP-native social media tools (XActions, x-twitter-scraper) in practice? Are they production-stable or demo-ware? - What is the actual suspension rate for accounts using cookie-based scrapers (twscrape, Scweet, ElizaOS) at moderate volumes? - Can a BlueSky Jetstream-based keyword monitor be productized as a Kendo feature (e.g., "social listening" for project-related discussions)? - How will the EU AI Act's Article 50 transparency requirements (August 2026) be enforced in practice for social media bots? 
- Is BlueSky's developer audience large enough to justify platform-specific monitoring for a dev tool like Kendo? - What approval UX works best for a solo founder? Slack notifications, web dashboard, or spreadsheet staging? --- # Automated Social Media Engagement Risks 2026 *Source: `research/automated-social-media-engagement-risks-2026.md` · [Raw](https://kendo-hq.pages.dev/raw/research/automated-social-media-engagement-risks-2026.md)* --- title: "Automated Social Media Engagement: Best Practices and Risks (X/Twitter & BlueSky)" date: 2026-04-04 type: investigation tags: [social-media, automation, twitter, bluesky, gdpr, eu-ai-act, legal, marketing, shadow-ban, ToS] sources: - https://help.x.com/en/rules-and-policies/x-automation - https://opentweet.io/blog/twitter-automation-rules-2026 - https://opentweet.io/blog/twitter-shadowban-check-fix-avoid-2026 - https://opentweet.io/blog/how-twitter-x-algorithm-works-2026 - https://bsky.social/about/support/tos - https://bsky.social/about/support/community-guidelines - https://docs.bsky.app/docs/starter-templates/bots - https://docs.bsky.app/docs/support/developer-guidelines - https://docs.bsky.app/docs/advanced-guides/rate-limits - https://pixelscan.net/blog/twitter-shadowban-2026-guide/ - https://multilogin.com/blog/twitter-shadow-bans/ - https://securiti.ai/scraping-almost-always-illegal-netherlands-dpa-declares/ - https://www.pinsentmasons.com/out-law/news/dutch-web-scraping-guidance-warn-businesses-gdpr-breach-risk - https://iapp.org/news/a/the-state-of-web-scraping-in-the-eu - https://artificialintelligenceact.eu/article/50/ - https://www.jonesday.com/en/insights/2026/01/european-commission-publishes-draft-code-of-practice-on-ai-labelling-and-transparency - https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content - https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks - 
https://www.prokopievlaw.com/post/eu-ai-act-article-50-imposes-transparency-obligations-for-ai-generated-content-european-union-augu - https://techcrunch.com/2023/09/08/x-updates-its-terms-to-ban-crawling-and-scraping/ - https://www.tendx.app/blog/x-twitter-limits-2026 - https://www.teract.ai/resources/twitter-strategy-indie-hackers-2026 - https://www.highperformr.ai/blog/twitter-for-indie-hackers - https://www.feedhive.com/blog/the-impact-of-ai-generated-content-on-social-media-engagement-balancing-automation-and-authenticity - https://www.socialmediatoday.com/news/x-formerly-twitter-announces-ai-bot-crackdown/812289/ - https://www.replyguyframework.com/blog/algorithm-2026 - https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/ - https://postproxy.dev/blog/x-api-pricing-2026/ - https://weventure.de/en/blog/ai-labeling - https://www.twobirds.com/en/insights/2026/taking-the-eu-ai-act-to-practice-understanding-the-draft-transparency-code-of-practice --- # Automated Social Media Engagement: Best Practices and Risks (X/Twitter & BlueSky) ## Summary Automating engagement on X/Twitter and BlueSky carries significant legal, platform, and reputational risks — particularly for a Dutch company. **X explicitly requires "prior written approval" for AI reply bots** and bans all engagement automation (likes, follows, retweets). BlueSky is more bot-friendly architecturally but mandates opt-in interaction only. In the EU/Netherlands, scraping social media data is "almost always illegal" per the Dutch DPA (May 2024 guidance, unchanged as of April 2026), and the **EU AI Act Article 50 transparency obligations take effect August 2, 2026**, requiring disclosure of AI-to-human interactions and machine-readable labeling of AI-generated content. The safest path for a solopreneur: use AI to *draft* content, manually review and post, engage authentically, and never automate interactions with other users. --- ## 1. 
Platform Terms of Service ### X (Twitter) **Allowed:** - Scheduling your own original content via authorized tools (Buffer, Hootsuite, Typefully) - AI-generated content creation (using ChatGPT, Claude, etc. to write tweets) - Bot accounts with clear bot label in bio - API-based posting within rate limits - RSS-to-Twitter auto-sharing of your own content **Explicitly banned:** - **Automated replies**: AI reply bots require "prior written and explicit approval from X" — deploying one without approval violates ToS - **Engagement automation**: Automated likes, retweets, bookmarks, follows, unfollows - **Bulk/automated DMs**: Unsolicited direct messages in bulk or automated manner - **Scraping/crawling**: Banned since September 29, 2023 — "crawling or scraping the Services" without "prior written consent" is prohibited - **Coordinated behavior**: Duplicate content across accounts, reply networks, retweet rings - **Engagement pods**: Services exchanging likes/retweets - **AI training on X data**: Using API or platform data to "fine-tune or train a foundation or frontier model" is banned **Core principle**: Automate content *creation* and scheduling — never automate *engagement*. The moment you automate interactions with other users, you're violating ToS. **Enforcement escalation:** 1. Temporary feature limitation 2. Account lock (identity verification required) 3. Temporary suspension (7-30 days) 4. Permanent suspension (Source: [X Automation Rules](https://help.x.com/en/rules-and-policies/x-automation), [OpenTweet](https://opentweet.io/blog/twitter-automation-rules-2026)) ### BlueSky BlueSky's approach is more open due to the AT Protocol's decentralized architecture, but has clear boundaries. 
**Allowed:** - Bot accounts (must self-label using `upsertProfile` with label value `'bot'`) - Scheduled automated posting - API-based content creation - Feed generators and custom algorithms - Firehose/Jetstream consumption for monitoring **Prohibited:** - **Spam**: "Do not send spam or repeatedly post content in ways that disrupt normal conversations" - **Engagement manipulation**: "Do not artificially manipulate features or social signals to gain unearned reach or mislead users, including engagement metrics, follower counts" - **Automated bulk interactions**: Developer guidelines explicitly ban "generating automated or bulk interactions, including any that would cause a notification to a user like a message, follow, like or reply" - **Automated follower generation**: "Any method to automate generating followers or interactions, including account generation tools" - **System abuse**: "Do not attempt to compromise, exploit, bypass, abuse, or disrupt Bluesky's systems, security features, APIs, rate limits, or infrastructure" **Critical rule for bots**: "If your bot interacts with other users, please only interact (like, repost, reply, etc.) if the user has tagged the bot account." **Unsolicited bot interactions = spam.** (Source: [BlueSky ToS](https://bsky.social/about/support/tos), [Community Guidelines](https://bsky.social/about/support/community-guidelines), [Developer Guidelines](https://docs.bsky.app/docs/support/developer-guidelines)) ### Key Difference X requires *written approval* for any AI reply bot. BlueSky allows bots more freely but mandates opt-in interaction (users must tag the bot first). Neither platform allows automated engagement farming. --- ## 2. Ban Risks and Shadow Banning ### X/Twitter Shadow Banning X uses algorithmic visibility reduction ("deboosting") rather than traditional bans. The platform does not notify you. 
**Types of shadow bans:** | Type | Effect | Detection | |------|--------|-----------| | Search Suggestion Ban | Account absent from search suggestions | Search @handle logged out | | Search Ban | Tweets excluded from search results | Search your tweets logged out | | Ghost Ban | Replies invisible in threads to non-followers | Check replies from another account | | Reply Deboosting | Replies pushed to bottom of conversations | Monitor visibility from other accounts | **Behavioral triggers:** | Behavior | Threshold | Risk Level | |----------|-----------|------------| | Posting velocity | >30 tweets/day | High | | Reply velocity | >30 replies/hour | High | | Like velocity | >100 likes/hour | High | | Retweet velocity | >50 retweets/hour | High | | Follow/unfollow | >100 follows/day | High | | Identical content | Repeated text/links | High | | Non-human rhythm | ML-detectable automated cadences | High | | Rapid engagement spikes | Sudden jump from baseline | Medium | | New account + high volume | Lower thresholds apply (~500 posts/day vs 2,400) | Medium | | Generic short replies | "Great post!" / "This!" patterns | Medium | **Safe operating ranges:** - 3-8 original tweets per day - 10-20 genuine replies per day (specific to the tweet, not generic) - 15-20 tweets per day maximum total - Space activity throughout the day (not bursts; 5+ tweets in quick succession triggers spam detection) - Mix content types (text, images, threads, quotes) - Vary posting times slightly - Write 1-2 sentences minimum per reply showing you read the content (Source: [Pixelscan Guide](https://pixelscan.net/blog/twitter-shadowban-2026-guide/), [Multilogin](https://multilogin.com/blog/twitter-shadow-bans/), [OpenTweet](https://opentweet.io/blog/twitter-shadowban-check-fix-avoid-2026)) **Shadow ban duration**: Most lift within 48-72 hours after stopping triggering behavior. Repeated offenses: 2-14 days. Chronic violators: permanent suspension. 
**Shadow ban detection tools (2026):** - shadowban.yuzurisa.com — tests for four ban types - x.voyagard.com/shadowban — free checker - Manual testing: search your tweets/replies from a logged-out browser - Note: X has cracked down on most checkers; reliability is hit-or-miss ### X/Twitter Platform Rate Limits (Updated 2026-04-04) | Metric | Free | Premium | |--------|------|---------| | Posts/day | 2,400 | Up to 10,000 | | Rolling 30-min post cap | ~50 | Higher | | DMs/day | 500-2,000 | Higher | | Follows/day | 400 | 1,000 | | Likes/day | 1,000 | Higher | | Informal follows/hour | 40-50 | — | **API rate limits (updated Feb 2026):** | Tier | Cost | Read Volume | Write Volume | |------|------|-------------|--------------| | Free | $0 | Minimal | 1,500 posts/mo | | Basic | $200/mo | 10K tweets/mo | 3,000 posts/mo | | Pro | $5,000/mo | 1M tweets/mo | 300K posts/mo | | Enterprise | $42K-50K+/mo | Custom | Custom | | Pay-per-use | ~$0.01/tweet | 2M cap | Variable | (Source: [X API Pricing 2026](https://www.xpoz.ai/blog/guides/understanding-twitter-api-pricing-tiers-and-alternatives/), [Postproxy](https://postproxy.dev/blog/x-api-pricing-2026/)) ### BlueSky Rate Limits BlueSky uses a points-based system (generous for typical use): | Metric | Limit | |--------|-------| | Points/hour | 5,000 | | Points/day | 35,000 | | CREATE cost | 3 points | | UPDATE cost | 2 points | | DELETE cost | 1 point | | Max creates/hour | ~1,666 | | Max creates/day | ~11,666 | | API requests/5 min | 3,000 (per IP) | BlueSky's 2025 Transparency Report: automated systems flagged 2.54M potential violations. The platform uses automated tools for detection, with human moderation for review. (Source: [BlueSky Rate Limits](https://docs.bsky.app/docs/advanced-guides/rate-limits)) --- ## 3. 
Legal Risks in the EU/Netherlands ### GDPR and Social Media Scraping **Dutch DPA position (Autoriteit Persoonsgegevens, May 2024 — still current as of April 2026):** The Dutch DPA issued explicit guidelines stating that **data scraping by private companies will "almost always" violate the GDPR** due to lacking a legal basis for processing personal data. Key principles: - **Publicly available does not equal freely usable**: Even public social media posts remain personal data under GDPR. Scraping them requires a legal basis. - **Consent**: "Practically impossible" — can't get informed consent from thousands of scraped users. - **Contractual necessity**: Inapplicable — no direct relationship with data subjects. - **Legitimate interest**: The Dutch DPA takes the **strictest position in the EU** — "purely commercial interests are insufficient." Only legally protected interests qualify. **Three narrow exceptions the Dutch DPA considers potentially lawful:** 1. Scraping public news sites to track coverage of your own company 2. Scraping your own webshop for customer review analysis 3. Scraping public security forums to assess security risks to your own company **Controversy**: The European Commission and Dutch Council of State (highest administrative court) have criticized this narrow reading, arguing purely commercial interests *can* constitute legitimate interest. However, **the Dutch DPA has not changed its position**. For a Dutch company, this DPA position represents the enforcement risk you face. **French DPA (CNIL)**: More permissive — recommends safeguards (precise data criteria, filtering, pseudonymization, opt-out mechanisms) but doesn't inherently exclude commercial purposes. 
**Practical GDPR requirements if processing social media data:** - Data Protection Impact Assessment (DPIA) — mandatory for large-scale processing - Transparency — inform data subjects (practically difficult with scraping) - Purpose limitation — data collected for one purpose can't be repurposed - Data minimization — collect only what's necessary - Storage limitation — delete when no longer needed - Right to object — must honor opt-out requests - Special category data (Article 9) — health, political opinions, etc. require explicit consent (Source: [Securiti](https://securiti.ai/scraping-almost-always-illegal-netherlands-dpa-declares/), [Pinsent Masons](https://www.pinsentmasons.com/out-law/news/dutch-web-scraping-guidance-warn-businesses-gdpr-breach-risk), [IAPP](https://iapp.org/news/a/the-state-of-web-scraping-in-the-eu)) ### Dutch Scraping Law Beyond GDPR - **Computer Crime Act (Wet computercriminaliteit)**: Unauthorized access to computer systems is criminal. Scraping behind login walls or circumventing technical barriers could trigger criminal liability. - **Database Directive (EU)**: Sui generis database right protects "substantial investment" in databases. Social platforms arguably qualify. Extraction of substantial parts is prohibited. - **Unfair Commercial Practices**: Automated engagement that misleads consumers about the nature of interactions could violate Dutch consumer protection law. ### EU AI Act Article 50 — Transparency Obligations (Effective August 2, 2026) (Updated 2026-04-04) This is the most immediately relevant legal development for automated social media engagement. 
**Article 50(1) — AI-to-human interaction disclosure:** > "Providers shall ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the concerned natural persons are informed that they are interacting with an AI system, unless this is obvious from the point of view of a reasonably well-informed, observant and circumspect natural person." **Article 50(2) — AI-generated content marking:** Providers of AI systems generating synthetic text "shall ensure the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated." **What this means for automated engagement:** - **Mandatory disclosure**: If you deploy an AI system that replies to people on social media, you must disclose it's AI at "the latest at the time of the first interaction" - **Machine-readable marking**: AI-generated text must be detectable as AI-generated (metadata marking) - **Format**: Must be "clear and distinguishable" and respect accessibility requirements - **Scope**: Applies to EU-based deployers *and* non-EU providers offering services to EU users - **Penalties**: Up to EUR 15 million or 3% of annual worldwide turnover **Code of Practice on AI Labeling (December 2025):** The European Commission published the first draft Code of Practice on Transparency of AI-Generated Content on December 17, 2025, providing practical guidance on Article 50 compliance. This covers AI labeling requirements on social media on top of platform-specific rules. **The "human in the loop" nuance**: If a human reviews and manually posts AI-generated content (rather than an automated system posting directly), the transparency obligation may not apply to the *poster* — it applies to systems that "directly interact." However, legal scholars debate whether AI-drafted content still falls under marking requirements. This is unsettled law. 
(Source: [EU AI Act Article 50](https://artificialintelligenceact.eu/article/50/), [Jones Day](https://www.jonesday.com/en/insights/2026/01/european-commission-publishes-draft-code-of-practice-on-ai-labelling-and-transparency), [LegalNodes](https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks), [Bird & Bird](https://www.twobirds.com/en/insights/2026/taking-the-eu-ai-act-to-practice-understanding-the-draft-transparency-code-of-practice)) ### Risk Summary for a Dutch/EU Company | Risk | Severity | Likelihood | Notes | |------|----------|------------|-------| | Dutch DPA enforcement for scraping social media data | High (fines up to EUR 20M / 4% turnover) | Medium | DPA actively issuing guidance, position unchanged | | EU AI Act violation for undisclosed AI interactions | High (up to EUR 15M / 3% turnover) | Medium | Enforcement begins Aug 2, 2026 | | Platform ban for ToS violation on X | Medium (account loss) | **High** | Automated enforcement, ML detection | | Platform ban on BlueSky | Low-Medium | Low | More permissive, opt-in model | | Criminal liability under Computer Crime Act | Low for public data | Low | High risk if bypassing login walls | | Reputational damage | Medium-High | Medium | Dev communities hostile to inauthentic engagement | --- ## 4. Best Practices: What Works ### Keyword/Topic Monitoring Strategies **For finding high-value posts to reply to:** 1. **Problem-signal keywords**: "looking for", "anyone recommend", "struggling with", "alternative to [competitor]", "switched from", "tired of" 2. **Niche hashtags**: #buildinpublic, #indiehacker, #devtools, platform-specific hashtags 3. **Competitor mentions**: Direct @mentions and name mentions of competitors 4. **Pain point language**: "Jira is...", "why does [tool] always...", "I hate when..." 5. **Question formats**: Posts ending in "?" 
about topics in your domain **Engagement signal filters (which posts to prioritize):** - Posts from accounts with 500-50K followers (engaged niche, not celebrity noise) - Posts less than 2 hours old (time decay means early replies get more visibility) - Posts with some engagement (5+ likes) but not viral (not drowned out) - Question posts or "looking for" posts (highest conversion intent) - Posts from people who match your ICP (check bio for role/company) ### Reply Quality and Authenticity **What makes a reply valuable (algorithm + human perception):** - **Specific to the post**: Reference something the poster said — show you read it - **Add new information**: Data point, personal experience, different angle - **1-2 sentences minimum**: One-word replies get deboosted - **No self-promotion in the reply**: Your profile does the selling - **Natural tone**: Match the poster's energy level (casual to casual, professional to professional) **Best reply templates for developer audiences:** 1. **Respectful contrarian**: "Interesting take. I've seen the opposite in my experience — [specific example]" 2. **Data nugget**: "Related data point: [stat or metric] from [source]" 3. **Operator lens**: "We ran into this at [context]. What worked was [approach]" 4. **Mini case study**: "Dealt with this exact problem. [2-sentence story]. Happy to share more" 5. **Genuine question**: "Curious about [specific aspect]. Have you tried [approach]?" 
### Rate Limiting Your Own Activity **Safe X/Twitter activity levels:** - 3-8 original tweets per day - 10-20 genuine replies per day - Space throughout the day (never burst 5+ tweets) - Vary timing by 15-30 minutes (not machine-like precision) - Mix content types: text, images, threads, quotes, polls - Weekend/evening posting to show human patterns **Safe BlueSky activity levels:** - Higher volume is tolerable (generous rate limits) - Still avoid machine-like patterns - Bot label required if automated - Only interact with users who tag you first (if bot) ### The 45-Minute Daily Engagement Routine (Manual) 1. **15 min**: Reply to 10 tweets from other indie hackers (#buildinpublic, #indiehacker) 2. **15 min**: Respond to 5 tweets from potential customers addressing their pain points 3. **15 min**: Post your daily update + respond to all replies within 1 hour This is the highest-ROI social media activity for a solo founder — zero risk, maximum algorithmic reward. --- ## 5. Bad Practices and Risks: What to Avoid ### Spammy Reply Patterns That Get Flagged - **Generic praise**: "Great post!", "This!", "So true!", "Love this!" — flagged as bot behavior - **Template with variables**: "Great post about {topic}, {name}!" — ML models detect patterns - **Product plugs in replies**: "You should try [product]" on strangers' tweets - **Volume without variation**: Same reply structure repeated across many posts - **Rapid-fire replies**: More than 30 replies/hour - **Replying to viral posts only**: Pattern of only engaging with high-follower accounts ### Shadow Banning Triggers (Specific to Automated Behavior) - **Non-human posting rhythm**: Exact intervals (every 15 minutes) vs. 
human randomness - **Coordinated accounts**: Same content or complementary engagement from multiple accounts - **Sudden volume spikes**: Going from 2 tweets/day to 30 overnight - **Link-heavy replies**: Replies containing URLs to the same domain repeatedly - **Hashtag stuffing**: Excessive hashtags in replies ### Common Failure Modes of Automated Reply Systems 1. **Context blindness**: Replying to a grief post with a product recommendation 2. **Tone deafness**: Corporate tone in casual threads, or casual in professional discussions 3. **Stale context**: Replying to old tweets (>24h) with "just saw this!" 4. **Duplicate detection failure**: Replying to the same person multiple times with similar content 5. **Thread ignorance**: Replying without reading the full thread context 6. **Sarcasm/irony misread**: Taking sarcastic posts literally and generating sincere replies 7. **Language mismatch**: Generating English replies to non-English posts 8. **Self-promotion blindness**: Not detecting when a "question" is actually the poster's own promotion ### The Reputational Risk (Especially for Dev Tools) For a product company (especially B2B/developer tools), being caught using engagement bots is a **serious reputational risk**. Developer communities are particularly hostile to inauthentic engagement. The "ick factor" of discovering a founder uses bots for social selling can permanently damage trust. This is arguably a bigger risk than any fine or ban. --- ## 6. 
Ethical Considerations ### Core Concerns - **Authenticity erosion**: AI replies that appear human-authored erode trust in online discourse - **Consent and autonomy**: People didn't consent to being targeted by automated systems - **Power asymmetry**: Automation gives disproportionate voice to tool operators - **Spam ecosystem**: Even well-intentioned bots normalize bot behavior and degrade platform quality - **Information quality**: Algorithms already prioritize engagement over accuracy; AI replies amplify this ### When Automation Crosses Into Spam Any automated reply that promotes a product/service to strangers is spam, regardless of how "helpful" the AI makes it sound. The test: "Would a reasonable person feel this is a genuine human contribution to the conversation, or would they feel targeted by a marketing system?" ### The Safe Zone: Human-in-the-Loop AI Content - Use AI (Claude, ChatGPT) to *draft* tweets, threads, and reply ideas - Review, edit, personalize, and manually post everything - "AI generates text. You review it. You click post." — zero platform risk, zero legal risk - This is universally accepted and doesn't violate any ToS --- ## 7. What Successful Indie Hackers Actually Do The indie hacker community has converged on strategies that grow audiences without automation risk. 
The key insight: **the algorithm rewards genuine engagement far more than volume.** ### The Playbook **Build in Public (#1 Strategy):** - Daily build updates: features, bugs, metrics, screenshots, demo GIFs - Weekly progress threads (Friday recap) — 3-5x more engagement than single tweets - Revenue transparency: monthly MRR updates, even at $0 - Visual content gets 3x more engagement than text-only **The 80/20 Content Rule:** - 80% valuable content (educational, entertaining, community) - 20% promotional (launches, features, asks) - Never self-promote in replies to others **The "Reply Guy" Strategy (Legitimately):** - Reply to popular tweets in your niche within the first hour - Add genuine value — insight, experience, different perspective - Never self-promote in the reply itself - People check your profile when you leave helpful replies -> organic follows - Replies are worth 15-27x more than likes algorithmically - Consistent replies build your account reputation score, improving distribution on original content **Thread Strategy:** - Multi-tweet threads generate 54% more engagement than single tweets - Best for: case studies, how-to guides, lessons learned, process breakdowns ### Growth Benchmarks (Realistic 6-Month Timeline) | Month | Followers | Phase | |-------|-----------|-------| | 1-2 | 0-200 | Foundation: finding voice, building consistency | | 2-3 | 200-500 | Recognition: community notices consistency | | 3-4 | 500-1,200 | Network effects begin | | 4-5 | 1,200-2,500 | Algorithm boost from consistent engagement signals | | 5-6 | 2,500-6,000 | Launch-ready audience | **Conversion**: Build-in-public audiences convert at 15-25% with founder discounts. Accounts posting 1-3 high-quality tweets daily with regular engagement see 10%+ monthly follower growth vs. 2-5% for sporadic posters. 
**Consistency beats volume.** ### Tools Safe to Use - **Content scheduling**: Buffer, Typefully, Hootsuite (explicitly allowed by all platforms) - **AI drafting**: Any AI tool for writing — just review before posting - **Analytics**: Native Twitter Analytics, Highperformr, Brand24 - **Thread creation**: Typefully, Chirr App - **Cross-posting**: Tools that post your content to both X and BlueSky --- ## Implications for Kendo 1. **Do not build or deploy automated engagement tools for X or BlueSky.** The legal risk (GDPR + AI Act) and platform risk (bans) are too high for the potential benefit, especially for a Dutch company under the strictest DPA in Europe. 2. **The EU AI Act transparency obligation is imminent.** Article 50 takes effect August 2, 2026. Any AI system that directly interacts with users on social media must disclose it's AI. The December 2025 Code of Practice draft provides implementation guidance. 3. **Build-in-public is the right strategy.** Jasper's personal journey content (metrics, decisions, challenges) is the highest-converting content type with zero legal/platform risk. 4. **Human-in-the-loop AI content creation is the sweet spot.** Use AI to draft tweets and threads, review and edit, post manually. Explicitly allowed by all platforms, avoids all legal risk. 5. **BlueSky monitoring is legally safer than X.** The AT Protocol is open and scraping-friendly. However, GDPR still applies to personal data even on open platforms — use monitoring for topic/trend awareness, not for building user databases. 6. **Any scraping of social media data for lead generation requires legal review.** Under Dutch DPA guidance, even scraping public posts likely lacks legal basis for commercial purposes. 7. **Replies are the highest-leverage engagement on X.** At 15x the algorithmic weight of likes, and 150x when the author replies back, 10-20 genuine daily replies are more valuable than hundreds of likes or dozens of posts. 
## Open Questions - Will the Dutch DPA's restrictive legitimate-interest interpretation survive an ECJ challenge? - How will EU AI Act Article 50 be enforced in practice for social media bots starting August 2026? - Will X ever create a practical approval path for AI reply bots, or is "prior written approval" effectively a ban? - How will BlueSky's moderation approach evolve under EU regulatory pressure? - What's the actual enforcement probability for small companies using AI engagement tools? - Does the human-in-the-loop exception to Article 50 hold up for AI-drafted social media content that's manually reviewed and posted? --- # Dev Tool Growth Playbooks *Source: `research/dev-tool-growth-playbooks.md` · [Raw](https://kendo-hq.pages.dev/raw/research/dev-tool-growth-playbooks.md)* --- title: "Dev Tool Growth Playbooks — How Developer-Focused Startups Went from Zero to Traction" date: 2026-04-04 type: investigation tags: [growth, go-to-market, marketing, community, developer-tools, launch-strategy] sources: - https://www.eleken.co/blog-posts/linear-app-case-study - https://medium.com/linear-app/building-at-the-early-stage-e79e696341db - https://research.contrary.com/report/linear - https://www.ideaplan.io/blog/how-linear-grew-to-1-billion-with-35k-marketing-spend - https://www.command.ai/blog/linear-growth-case-study/ - https://sequoiacap.com/article/linear-spotlight/ - https://newsletter.pragmaticengineer.com/p/linear - https://www.thoughtlytics.com/newsletter/linear-playbook-growth-engine - https://review.firstround.com/vercels-path-to-product-market-fit/ - https://www.reo.dev/blog/how-developer-experience-powered-vercels-200m-growth - https://www.decibel.vc/articles/from-open-source-to-enterprise-how-vercel-built-a-product-led-motion-on-top-of-nextjs - https://sacra.com/c/vercel/ - https://www.saastr.com/how-vercel-hit-9-3b-and-replit-hit-3b-after-a-decade-the-long-paths-to-ai-overnight-success/ - 
https://dev.to/michaelaiglobal/reverse-engineering-vercel-the-go-to-market-playbook-that-won-the-frontend-3n5o - https://www.stacksync.com/blog/one-word-changed-everything-the-origin-story-of-supabase - https://dev.to/fmerian/how-dev-first-startup-supabase-grew-from-0-to-50k-github-stars-5d4d - https://www.craftventures.com/articles/inside-supabases-product-community-led-growth - https://www.producthunt.com/stories/how-we-launch-at-supabase - https://sacra.com/research/supabase-at-70m-arr-growing-250-yoy/ - https://research.contrary.com/company/railway - https://blog.railway.com/p/zeno-rocha-resend - https://ownerpreneur.com/case-studies/resend-com-how-zeno-rocha-built-a-20000-user-email-platform-in-9-months/ - https://news.ycombinator.com/item?id=36309120 - https://evilmartians.com/events/podcast-zeno-rocha-resend - https://hackernoon.com/founder-interviews-anurag-goel-of-render-cfcad8e92dae - https://techcrunch.com/2019/04/25/render-gets-2-25m-seed-round-as-infrastructure-alternative-to-biggest-names-in-tech/ - https://devops.com/render-now-serving-more-than-one-billion-web-requests-monthly-expands-into-europe-with-easiest-cloud-for-hosting-any-app-or-website/ - https://changelog.com/founderstalk/85 - https://newsletter.posthog.com/p/how-we-got-our-first-1000-users - https://posthog.com/founders/story-about-pivots - https://www.gv.com/news/posthog-open-source-analytics - https://www.howtheygrow.co/p/how-posthog-grows-the-power-of-being - https://www.markepear.dev/blog/dev-tool-hacker-news-launch - https://medium.com/@baristaGeek/lessons-launching-a-developer-tool-on-hacker-news-vs-product-hunt-and-other-channels-27be8784338b - https://www.markepear.dev/blog/developer-marketing-guide - https://growthmode.co/article/how-to-get-the-first-1000-users-for-your-startup - https://business.daily.dev/resources/2025-developer-tool-trends-what-marketers-need-to-know/ --- # Dev Tool Growth Playbooks — How Developer-Focused Startups Went from Zero to Traction > How did Linear, 
Vercel, Supabase, Railway, Render, Resend, PlanetScale, and PostHog actually grow? What specific channels, strategies, and launch moves worked — and what lessons apply to a solo-founder dev tool like Kendo? --- ## Executive Summary Every developer tool that broke through followed a version of the same playbook: **build something developers genuinely love, then let them do the marketing for you.** But the details of how each company executed that principle — and the specific channels, timelines, and tactics involved — vary dramatically and contain lessons Kendo can act on immediately. **Key findings:** 1. **Product quality is the only marketing strategy that compounds.** Linear reached a $1.25B valuation on $35K lifetime ad spend. Supabase went from 8 databases to 800 in 72 hours after a tagline change, not an ad campaign. Railway hit 2M developers with zero marketing dollars. Every successful dev tool grew primarily through word of mouth triggered by product experience. 2. **Hacker News is the single highest-leverage launch channel for dev tools.** Supabase (1,120 upvotes, 100x user growth overnight), PostHog (300 deployments in days), Fly.io (53 founder comments in one thread) — HN consistently delivers the first inflection point. Product Hunt matters but converts differently (broader audience, less technical depth). 3. **The "open-source wedge" strategy is the most repeatable growth engine** — but it requires building a genuinely useful open-source project that stands alone, not just open-sourcing your product. Resend (React Email, 12K+ stars), Vercel (Next.js, 850K+ developers), Supabase (entire platform), PostHog (self-hosted analytics). 4. **Founder personal brand is a solo founder's unfair advantage.** Zeno Rocha built a 20K+ waitlist before Resend launched, purely from years of open-source work and Twitter engagement. Linear's founders generated a 10K waitlist through their Twitter following alone. 5. 
**Positioning as "the X alternative" or "X for developers" unlocks instant comprehension.** Supabase's pivot from "realtime Postgres" to "the open-source Firebase alternative" caused a 100x growth spike. Resend positioned as "the Stripe of email." Render positioned as "the modern Heroku." --- ## Company-by-Company Playbooks ### Linear — The $35K Marketing Masterclass **Timeline:** - **2018:** Jori Lallo pitches the idea to Tuomas Artman and Karri Saarinen - **2019:** Founded; bootstrapped initially; $4.2M seed from Sequoia (November) - **2019-2020:** Private invite-only beta for nearly a full year - **June 2020:** Public launch after 1,000+ customers already on the platform - **December 2020:** $13M Series A (Sequoia) - **June 2021:** Profitable — more cash in bank than total raised - **January 2022:** $35M Series B at $400M valuation - **September 2024:** $35M Series C at $1.25B valuation **What worked:** 1. **Invite-only beta as a demand engine.** Linear kept the product behind a "Request Access" button for nearly a year. This was not a rush to market — it was a strategic constraint that built anticipation and ensured quality. They only launched publicly after having 1,000+ paying customers. 2. **Founder networks as first distribution.** The founders' backgrounds (Airbnb, Uber, YC) gave them access to exactly the right first users — tech leads at small startups. Karri Saarinen had already built a Chrome extension at Airbnb to simplify Jira, which gained internal adoption. Their Twitter followings generated a 10,000-person waitlist before launch. 3. **Design and speed as viral mechanics.** Sub-50ms response times, smooth animations, dark UI, keyboard-first interactions — the product was so visually and functionally distinct that users posted screenshots organically. Design quality was self-marketing. 4. 
**The Linear Method as ideology.** Before achieving significant traction, they published a philosophy document ("The Linear Method") outlining principles for software team workflows. This attracted teams who shared their worldview — creating identity-based loyalty, not just functional preference. 5. **Delayed marketing hire.** No marketing-focused employee until 2022 (three years post-founding). The rationale: premature marketing masks product problems. The $35K lifetime ad spend went to small sponsorships and minor experiments. 6. **Community through Slack and Twitter.** Built a Slack community reaching 16,000+ members. Founders actively participated in support and feature discussions. Twitter/X presence focused on genuine product development opinions. 7. **Network effects built into the product.** One person starts using Linear, invites their team, which invites other teams. The collaborative nature made adoption self-propagating inside organizations. **Key metrics:** - $35K total lifetime marketing spend through Series C - 80%+ daily active usage (vs. 40-60% industry standard) - 419 HN upvotes on launch announcement - 2,100 Product Hunt upvotes - Profitable by June 2021 **Lesson for Kendo:** You do not need a marketing budget. You need a product that is so fast, so well-designed, and so opinionated that users become evangelists. Linear proved that word-of-mouth is not a cliche — it is a literal growth strategy that scales to a billion-dollar valuation if the product earns it. 
--- ### Vercel — The Open-Source Framework Play **Timeline:** - **2014:** Guillermo Rauch publishes "Seven Principles of Rich Web Applications" (the intellectual blueprint) - **November 2015:** Launches ZEIT (later Vercel) after leaving WordPress/Automattic - **October 2016:** Next.js released as open source — "from basically day one" it got traction - **2019:** $1M revenue - **2020:** Rebrand to Vercel; $5M revenue - **2021:** $21M revenue (320% YoY growth) - **2022:** $51M revenue - **2023:** $86M revenue; v0 launched (generative UI) - **2025:** $200M+ ARR; 100,000+ monthly signups **What worked:** 1. **Building the framework first.** Rauch created Next.js (initially called "N4") as an internal tool before launching the hosting platform. The framework solved a real pain point — deploying production React was a mess of Webpack/Babel configuration. By the time Vercel needed customers, Next.js had already attracted them. 2. **Open source as top-of-funnel at scale.** 850,000+ developers use Next.js. Vercel never had to buy ads to find React developers — developers found Vercel while searching for solutions. The framework powered ChatGPT, TikTok, and Notion's web experiences. 3. **Freemium with no feature restrictions on the framework.** Next.js remained completely free with no feature gating. The business model positioned Vercel's hosting as the optimal deployment choice, but never forced adoption. Developers could self-host if they wanted. 4. **Zero-friction onboarding.** Git integration, preview deployments for every PR, zero-config deployments — the product felt natural from the first interaction. Users could start small, deploy quickly, and scale without re-architecting. 5. **Intent-based sales (not SDR outreach).** Instead of cold outreach, Vercel employed "Product Advocates" who monitored behavioral signals (docs exploration, repo cloning, pricing page visits, team additions). Outreach triggered only on clear intent signals. 6. 
**Launch Weeks as recurring events.** Quarterly events with daily product releases generated buzz and community engagement repeatedly. **Revenue growth:** $1M (2019) → $5M (2020) → $21M (2021) → $51M (2022) → $86M (2023) → $200M+ (2025) **Lesson for Kendo:** Kendo cannot build a framework (that ship has sailed), but the principle applies: **create free, high-value content or tools that solve a pain point adjacent to your product, then let the audience convert naturally.** Kendo's MCP server, technical blog posts, and comparison pages can serve as the "Next.js equivalent" at a smaller scale — free resources that build trust and funnel developers toward the paid product. --- ### Supabase — The Positioning Pivot and Community Machine **Timeline:** - **Early 2020:** Founded by Paul Copplestone and Anthony Wilson; $100K seed raised - **April 2020:** 8 databases in production after four months of building under "realtime Postgres" tagline - **May 27, 2020:** Changed tagline to "the open-source Firebase alternative"; hit HN front page (1,120 upvotes); went from 8 to 800 databases in 72 hours - **Summer 2020:** Y Combinator S20 batch - **December 2020:** 3,100 databases; 5,500 GitHub stars; $6M seed (Coatue) - **September 2021:** 50,000 databases; 30,000+ GitHub stars; $30M Series A - **2025:** $70M ARR (250% YoY growth); 4M+ developers; $5B valuation **What worked:** 1. **The positioning pivot that changed everything.** Four months of building as "realtime Postgres" produced 8 databases. Three days after changing to "the open-source Firebase alternative," they had 800. The product was the same — only the words changed. Lesson: positioning is not marketing fluff; it is the difference between 8 users and 800. 2. **Hacker News as the primary launch channel.** The HN post on May 27, 2020 (1,120 upvotes) was the inflection point. Developers who were frustrated with Firebase found exactly what they were looking for in that framing. 3. 
**Personal outreach at inhuman scale.** Rory Wilding (Head of Growth) spent an entire weekend manually contacting 3,000+ new signups via their GitHub profiles. Not surveys — genuine conversations. This direct engagement informed the product roadmap and built trust in the early community. 4. **Launch Weeks as a growth machine.** Every 3-4 months, Supabase ships one major feature per day for a week. Principle: "fixed timeline, flexible scope." The compound effect of repeated launches keeps the community engaged and generates recurring press/social media coverage. Supabase even open-sourced the concept at launchweek.dev. 5. **Open source as trust engine.** Being fully open-source (not open-core) meant developers could inspect the code, self-host, and contribute. This built the kind of trust that closed-source competitors cannot replicate. 6. **Database stickiness as a retention moat.** Copplestone understood that databases are the stickiest layer of any app — developers do not migrate databases casually. Every new project started on Supabase was likely a lifetime customer for that project. **Key metric:** 2,500 new databases launched daily by 2025. **Lesson for Kendo:** The positioning pivot is the single most actionable lesson here. Kendo's current positioning ("AI-native issue tracker with MCP support") is accurate but may be too technical for first contact. Testing a simpler frame — "the Linear alternative with time tracking" or "the issue tracker that lives in your terminal" — could unlock the same kind of "aha, that is what I need" response that "open-source Firebase alternative" created for Supabase. --- ### Resend — The Founder Brand and Open-Source Wedge **Timeline:** - **Pre-2023:** Zeno Rocha builds personal brand through years of open-source work (Dracula Theme, etc.) 
and consistent Twitter engagement (started tweeting in English in 2014) - **December 2022:** React Email launches as an open-source project - **January 2023:** Resend announced; waitlist opens - **Winter 2023:** Y Combinator W23 batch; product behind waitlist - **June 2023:** Public launch; 6,469 on waitlist within 7 weeks; Launch HN post - **July 2023:** $3M seed round - **9 months post-launch:** 20,000 users; 7M+ transactional emails sent - **2025:** 80,000+ developers; team of 6 people; $18M Series A **What worked:** 1. **Founder brand built over years, not weeks.** Zeno's master plan had three steps: (1) build an open-source project (React Email), (2) establish themselves as email experts, (3) launch a SaaS around it. The 20,000-person waitlist came from years of building a developer audience — his personal brand was the primary distribution channel. 2. **The "jab jab right hook" open-source strategy.** React Email (12,000+ GitHub stars) launched a month before Resend was even announced. It was a genuinely useful standalone project — not a marketing gimmick. The documentation deliberately showed how to use React Email with every competitor. This "promote your competitors on your own docs" confidence built trust. 3. **"Stripe of email" positioning.** Anchoring to Stripe — the gold standard for developer APIs — instantly communicated what Resend was about. Paul Graham himself called it "the Stripe for email." 4. **Waitlist as validation and refinement.** Keeping the product behind a waitlist through YC allowed rapid iteration based on early user feedback before the public launch. 5. **First paying customer as the green light.** They sent a $10 payment link to a friend using the MVP. He paid. That signal — someone willing to pay before being asked — gave them the confidence to go all-in and apply to YC. 6. **Tiny team, massive output.** 80,000+ developers served by 6 people demonstrates the leverage of focusing on developer experience over feature breadth. 
**Lesson for Kendo:** Jasper does not have Zeno's pre-built audience, but the principle applies: **start building the personal brand now.** Tweeting about the building process, publishing technical deep-dives, and engaging in developer communities creates compound interest that pays off at launch. The open-source wedge (a useful standalone tool that leads naturally to the paid product) is also highly relevant — could Kendo open-source an MCP integration library or a time tracking CLI? --- ### Railway — The Template Marketplace and Community Flywheel **Timeline:** - **June 2020:** Founded by Jake Cooper (previously scaled infrastructure at Uber) - **May 2022:** $20M Series A (Redpoint Ventures); 50,000 developers; 900,000+ projects; 1,000+ paid users; 20-50% monthly revenue growth - **February 2023:** 300,000 users; 20,000+ new applications daily - **2025:** 2M+ developers; zero marketing spend - **January 2026:** $100M Series B (TQ Ventures) **What worked:** 1. **Zero marketing spend with 2M+ developers.** Railway grew entirely through product quality and community. No paid acquisition, no marketing team — just a product that developers told other developers about. 2. **Discord as the community hub.** Railway's active Discord community (45K+ members) became the center of user support, feature feedback, and community building. The company built tooling to index Discord threads and pull telemetry from Railway projects to help solve user problems. 3. **Template marketplace with revenue sharing.** Developers can create and publish deployment templates, earning 25% of CPU/RAM revenue from applications deployed using their templates. Some community members earn six figures from templates alone. This turned users into invested distributors. 4. **The Conductor Program.** A formalized community ambassador program bringing together the most active community members as a bridge between the Railway team and the broader community. 5. 
**Strategic angel investors.** Vercel CEO Guillermo Rauch and GitHub co-founder Tom Preston-Werner as angels provided credibility and network effects in the developer tool ecosystem. **Lesson for Kendo:** The template marketplace concept translates well. Kendo could create shareable board templates, workflow configurations, or MCP integration recipes that users can publish and share — turning power users into distribution channels. --- ### Render — The Heroku Replacement Narrative **Timeline:** - **2017:** Anurag Goel (Stripe's 5th engineer) starts building prototypes after leaving Stripe in 2016 - **April 2019:** Public launch with $2.25M seed (General Catalyst) - **October 2019:** Wins TechCrunch Startup Battlefield at Disrupt SF ($100K prize) - **April 2020:** 1B+ HTTP requests/month (2x growth from March) - **November 2021:** $20M Series A - **January 2025:** $80M Series C; 2M+ developers; 100K+ new signups/month - **February 2026:** $100M Series C extension at $1.5B valuation; 4.5M+ developers; 250K+ new signups/month **What worked:** 1. **The "modern Heroku" narrative.** Heroku had stagnated under Salesforce ownership. Render positioned itself as the natural next step for developers who had outgrown Heroku or were frustrated by its lack of evolution. This gave developers a familiar mental model with a clear upgrade path. 2. **TechCrunch Startup Battlefield win.** Winning the competition in 2019 provided visibility, credibility, and $100K — a significant boost for an early-stage company. Competitions and awards can provide outsized returns for unknown startups. 3. **Stripe pedigree as trust signal.** Being founded by Stripe's 5th engineer immediately communicated "this person understands developer experience at scale." Founder credibility matters enormously in developer tools. 4. **Pure word-of-mouth growth.** Render serves over a billion requests monthly, with growth continuing to swell purely through developer recommendations. 
The platform's simplicity compared to AWS/GCP made it easy to recommend. 5. **Riding the "vibe coding" wave.** OpenAI's Codex coding app allows users to deploy web apps they create on Render — catching the AI-generated-app trend and turning it into a growth channel. **Lesson for Kendo:** Positioning against an incumbent's weakness works when the frustration is real and widespread. Kendo's "Jira fatigue" narrative is well-chosen — but needs to be as crisp as "Render is the modern Heroku." The tagline should complete the sentence: "If you are tired of _____, Kendo is _____." --- ### PlanetScale — The Git Workflow for Databases **Timeline:** - **2018:** Founded, built on Vitess (the technology behind YouTube's MySQL infrastructure) - **2021:** Database-as-a-service launched; $50M funding (Kleiner Perkins, a16z) - **2022:** Rapid growth; profitable within a year of Sam Lambert becoming CEO; grew Vitess usage 61,000% in four years - **2023:** Ranked 188th on Deloitte Technology Fast 500 - **March 2024:** Removed free tier (controversial "rug pull"); layoffs **What worked:** 1. **"Git branching for databases" as the hook.** The Git-like workflow for schema changes (branching, deploy requests, non-blocking migrations) gave developers a familiar mental model for something previously terrifying — database schema changes in production. 2. **Open-source foundation (Vitess).** Being built on Vitess — the technology powering YouTube — provided immediate technical credibility and an existing community of contributors. 3. **Enterprise logos as social proof.** GitHub, Slack, SoundCloud, Square as early customers provided powerful validation for a database product where trust is paramount. 4. **Quick beta traction.** When Sam Lambert joined and rebuilt the product, the new beta attracted more than double the old product's total user base (750) on the first day alone. **What went wrong (cautionary tale):** 5. 
**Free tier removal backlash.** Removing the hobby tier in March 2024 (requiring $39/month minimum) caused significant community backlash. Many developers called it a "rug pull." This demonstrates the risk of building community on a free tier and then removing it — the trust damage can outweigh the revenue gain. **Lesson for Kendo:** The free tier is a trust contract with developers. If Kendo offers a free tier, it should be sustainable and permanent. PlanetScale's experience is a warning: developers have long memories for broken promises. --- ### PostHog — The YC Ecosystem Play **Timeline:** - **Early 2020:** Founded by James Hawkins and Tim Glaser after five pivots - **Week 1-4:** Built MVP; ~100 users from personal network - **Week 5:** Hacker News launch; 300 deployments within days - **Month 2-3:** Steady word-of-mouth; reached 1,000 users - **~6 months:** 1,000 users total - **2021:** 50+ YC startups onboarded; $15M Series B - **2025:** 1,500 companies signing up weekly; 100,000+ total users; $1.4B valuation **What worked:** 1. **YC as the ultimate network.** Every single week was spent with YC founders or partners. YC provided advice, fundraising connections, marketing, and sales leads. 65% of every Y Combinator batch now uses PostHog. For founders without YC access, the equivalent is deep embedding in a specific community. 2. **Open source with self-hosting priority.** MIT-licensed, self-hostable analytics aligned with developer preferences for data control and eliminated the "sending data to a third party" objection that killed their previous pivot attempt. 3. **Manual first, self-serve later.** Started with manual database edits to create user accounts, then built one-click Heroku deployment, then full self-serve. Each step toward lower friction increased conversion rates. 4. **$2,000 GitHub trending boost.** Spent roughly $2,000 promoting their GitHub repository on Twitter, which helped them trend on GitHub and amplified the Hacker News effect. 5. 
**Ultra-fast response times.** Targeted 30-second response times for user questions — treating speed of support as a competitive advantage that startups have over enterprises. 6. **Content marketing through founder transparency.** Blog posts about the building journey, pivots, and lessons learned generated additional Hacker News frontpage hits beyond the launch post. **Key insight:** "New users are more likely to pay than existing ones." Free-at-launch creates monetization challenges with early adopters who expected to never pay. **Lesson for Kendo:** The manual-to-automated progression is critical. Start with high-touch onboarding for the first 50-100 users (Slack DMs, personal setup help), then automate what you learn. Also, the PostHog content strategy — honest writing about the building journey — is the highest-ROI content a solo founder can create. --- ## Cross-Cutting Patterns ### The Five Channels That Actually Work for Dev Tools Based on analysis of all eight companies, these are the channels that consistently produced results, ranked by solo-founder feasibility: | Rank | Channel | Evidence | Solo-Founder Feasibility | |------|---------|----------|--------------------------| | 1 | **Hacker News** | Supabase (1,120 upvotes, 100x growth), PostHog (300 deployments), Fly.io, Linear (419 upvotes) | **High** — requires one great post and 18 hours of active engagement | | 2 | **Twitter/X founder brand** | Resend (20K waitlist from personal brand), Linear (10K waitlist from founder following) | **High** — requires consistent daily investment over months | | 3 | **Open-source project as wedge** | Vercel (Next.js, 850K devs), Resend (React Email, 12K stars), Supabase (50K+ stars), PostHog | **Medium** — requires building a genuinely useful standalone tool | | 4 | **Product Hunt** | Linear (2,100 upvotes), Supabase (multiple launches), Stripe (68 launches over 10 years) | **High** — requires preparation but execution is straightforward | | 5 | **Discord/Slack 
community** | Railway (45K Discord), Linear (16K Slack), Supabase (Discord + SupaSquad) | **Medium** — requires ongoing time investment to maintain | ### Channels That Did NOT Drive Early Growth - **Paid advertising:** Linear spent $35K lifetime. Railway spent $0. The pattern is clear — paid acquisition is not how dev tools get traction. - **SEO/blog content (alone):** Content marketing supported growth but was never the primary driver in the 0-to-1,000 phase. It becomes important in the 1,000-to-10,000 phase. - **Enterprise sales:** Every company studied went bottom-up first, enterprise later. No dev tool in this cohort started with outbound sales. - **Conferences and sponsorships:** Mentioned by none of the studied companies as a significant early growth driver. ### Timeline Patterns | Phase | Typical Duration | What Happens | |-------|-----------------|--------------| | **Pre-launch** | 3-12 months | Build product, build founder brand, collect waitlist, run invite-only beta | | **Launch spike** | 1-2 weeks | HN post, Product Hunt, first wave of signups | | **Post-launch dip** | 1-3 months | Growth settles to a level "noticeably higher than before" (PostHog) | | **Word-of-mouth compounding** | 6-18 months | Organic referrals build; content marketing starts contributing | | **Inflection point** | 12-24 months | 10K+ users; revenue growing 20-50% MoM; fundraise or profitable | ### Pricing and Free Tier Lessons | Company | Free Tier Strategy | Outcome | |---------|-------------------|---------| | Linear | Generous free tier, transparent pricing | 80%+ DAU, organic upgrade | | Supabase | Free tier for hobby projects | 4M+ developers, strong conversion | | PostHog | Free at launch, added paid later | Learned "new users pay more easily than converted free users" | | PlanetScale | Had free tier, removed it | Major backlash, trust damage | | Railway | Usage-based with free credits | Template marketplace drives natural upgrades | **Key principle:** A free tier is a trust 
contract. Removing it destroys trust. Make it sustainable from day one. --- ## The Solo-Founder Playbook Distilling the patterns above into a sequence a solo founder can execute: ### Phase 0: Foundation (Weeks 1-8, before any public launch) 1. **Nail the one-sentence positioning.** Test it with 10 developers. If they say "oh, interesting" — wrong. If they say "oh, I need that" — right. Supabase changed five words and got 100x growth. 2. **Start building in public.** Tweet 3-5x/week about the building process. Share screenshots, technical decisions, architecture choices. Use #buildinpublic. This is compound interest — it pays off at launch. 3. **Ship something free and useful.** Not the product itself — an adjacent tool, library, or resource. Resend shipped React Email before Resend. Vercel shipped Next.js before Vercel hosting. What can Kendo ship that developers need regardless of whether they use Kendo? 4. **Collect a waitlist.** Even 200 emails from strangers is enough. The goal is not volume — it is signal that the positioning resonates. ### Phase 1: Private Beta (Weeks 8-16) 5. **Onboard 10-50 users manually.** DM them on Twitter/Discord. Set up their accounts personally. Watch them use the product. This is where you learn what to fix before the public launch. 6. **Be ultra-responsive.** PostHog targeted 30-second response times. This is the one advantage a solo founder has over every funded competitor — speed and personal attention. 7. **Get one user to pay.** Resend sent a $10 payment link to a friend. The moment someone pays, you have validated the product, not just the interest. ### Phase 2: Public Launch (Week 16-17) 8. **Launch on Hacker News first.** Use "Show HN:" prefix. Post Tuesday-Thursday, 8-10 AM PT. Write a first comment explaining who you are and why you built this. Engage every comment for 18 hours. Do not send direct links to friends for upvotes (HN detects this). Instead, tell them to search for your post. 9. 
**Launch on Product Hunt within the same week.** Prepare assets in advance: demo video (90 seconds), screenshots, tagline. Launch early in the week to ride momentum into newsletters that aggregate from PH/HN. 10. **Post the technical story everywhere.** The same HN post, adapted for Reddit (r/programming, r/webdev), dev.to, Hashnode, and your own blog. Each platform has different norms — adapt the format, not the substance. ### Phase 3: Post-Launch Compounding (Months 2-6) 11. **Build the community hub.** Discord or Slack — pick one. Railway chose Discord (45K members). Linear chose Slack (16K members). Be there every day answering questions. 12. **Ship and announce regularly.** Supabase's Launch Week model (one feature/day for a week, every 3-4 months) keeps the community engaged and generates recurring press cycles. Even a solo founder can do a "mini launch week" — three announcements in one week. 13. **Write the honest founder blog.** PostHog's most successful content was transparent writing about pivots, mistakes, and lessons learned. Developers respect honesty over polish. This content naturally hits HN frontpage. 14. **Comparison pages and SEO.** Now (not at launch) is when content marketing starts paying off. "Kendo vs Linear," "Kendo vs Jira," "best issue tracker with time tracking" — these pages compound over months. --- ## Cross-Check Against Kendo's Marketing Strategy Comparing these findings against `company/marketing-strategy.md`: ### Validated Channel Choices | Kendo's Plan | Research Validation | Confidence | |-------------|---------------------|------------| | **Hacker News "Show HN"** | Highest-leverage single launch action for dev tools. Supabase, PostHog, Linear, Fly.io all cite HN as inflection point. | **Strongly validated** | | **Product Hunt launch** | Important secondary channel. Linear got 2,100 upvotes. Stripe launched 68 times over 10 years. | **Validated** | | **Twitter/X engagement** | Critical for founder brand. 
Resend's entire waitlist came from Twitter. Linear's founders' followings drove initial demand. | **Strongly validated** | | **Blog posts / technical deep-dives** | Works in Phase 3 (post-launch compounding), not Phase 1. PostHog's founder blog was high-ROI. Vercel's docs drove organic traffic. | **Validated but timing matters** — Kendo's plan puts this in Month 1-3, which is correct | | **Discord/Slack communities** | Railway (45K Discord), Linear (16K Slack) — community hub is essential for retention and feedback. | **Validated** | | **Comparison pages** | Standard practice for SEO in the space. Every mature dev tool has them. | **Validated** | | **Claude Code / MCP community** | No direct precedent, but adjacent: Render benefited from OpenAI Codex integration; Railway benefited from being Resend's infra. Being embedded in the AI coding tool ecosystem is a high-leverage niche play. | **Validated as strategic bet** | ### Challenged or Missing from Kendo's Plan | Finding | Impact on Kendo's Strategy | |---------|---------------------------| | **No open-source wedge strategy** | Every breakout dev tool studied had an open-source component driving awareness. Kendo's plan has no equivalent. Consider: open-source MCP integration library, time tracking CLI, or board template system. This is the biggest gap. | | **Founder personal brand not emphasized** | Kendo's plan mentions "social media 3-5x/week" but treats it as company posts. The research shows founder-personal-brand posts vastly outperform company accounts. Jasper should be the voice, not @kendo. | | **Positioning may be too technical** | "AI-native issue tracker with MCP support" describes the product accurately but does not trigger the "I need that" response. Supabase's "open-source Firebase alternative" and Resend's "Stripe of email" were instantly comprehensible. Test simpler frames. | | **No invite-only / waitlist phase planned** | Linear's year-long invite-only beta built demand. 
Resend's waitlist built 20K signups. Kendo's plan goes straight to public launch. Consider: a 4-8 week invite-only beta to build anticipation and refine the product with early users. | | **No Launch Week concept** | Supabase's recurring Launch Weeks are the most repeatable growth tactic in the research. Kendo should plan its first "Launch Week" for Month 2-3 — three days of announcements, not necessarily five. | | **Mastodon/Fediverse prioritized alongside Twitter** | No studied company cited Mastodon as a growth channel. The developer audience there is real but small. Deprioritize in favor of concentrating effort on Twitter/X where the developer tool audience is densest. | | **Reddit strategy too passive** | Kendo's plan says "contribute value first, mention Kendo naturally." Research shows Reddit works when you post genuinely useful technical content (not product mentions). The strategy should be "publish technical posts that happen to showcase Kendo's approach," not "be active in communities and sometimes mention Kendo." | | **No community ambassador / template sharing program** | Railway's template marketplace and Conductor Program turned users into distributors. Kendo could create a similar flywheel with shareable board templates and workflow configurations. | | **Month 3 targets may be aggressive** | 1,000 organic visits/month and 100 signups in 3 months is achievable but only if HN/PH launches go well. PostHog took ~6 months to reach 1,000 users. Linear had 1,000 after a year of invite-only beta. Calibrate expectations. 
| ### Positioning Recommendations Based on the research, test these alternative positioning frames alongside the current one: | Frame | Model | Why It Might Work | |-------|-------|-------------------| | "The issue tracker that lives in your terminal" | Linear's "issue tracking for software teams" simplicity | Immediately differentiates from every browser-based competitor | | "Linear + time tracking + EU hosting, no add-ons" | Comparison-anchor positioning (Supabase: "Firebase alternative") | Instant comprehension for Linear users frustrated by missing features | | "Your board, your terminal, one flow" | Current tagline option — already good | Works for the MCP-aware audience | | "Stop switching tabs. Manage issues from your IDE." | Pain-point-first positioning | Targets the context-switching frustration directly | --- ## Key Takeaways 1. **Product quality is the marketing strategy.** Every company studied grew primarily through word of mouth. The marketing budget for getting from 0 to 1,000 users is effectively $0-2,000 (PostHog's GitHub promotion). The real investment is in making the product remarkable enough to talk about. 2. **The biggest gap in Kendo's current strategy is the lack of an open-source wedge.** Consider open-sourcing a useful standalone tool (MCP library, time tracking CLI, board template format) that builds awareness independent of the product itself. 3. **Founder brand > company brand in the early stage.** Jasper's personal Twitter/X account should be the primary distribution channel, not a company account. 4. **Hacker News + Product Hunt is the right launch sequence.** But invest 4-8 weeks in a private beta first to build demand and refine the product with real users. 5. **Launch Weeks are the most underrated repeatable growth tactic.** Plan for quarterly mini-launch-weeks (3 days of announcements) starting in Month 2-3. 6. **The positioning needs to be instantly comprehensible.** Test "the X alternative with Y" frames. 
If someone needs to understand MCP to understand why Kendo matters, the positioning is too narrow for initial growth. --- ## Related - [../company/marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — Kendo's go-to-market plan (cross-checked above) - [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Competitive landscape for positioning context - [../company/brand-guide.md](https://kendo-hq.pages.dev/raw/company/brand-guide.md) — Voice and tone guidelines for executing these tactics --- # Dev Tool Pricing Landscape 2026 Q2 *Source: `research/dev-tool-pricing-landscape-2026-q2.md` · [Raw](https://kendo-hq.pages.dev/raw/research/dev-tool-pricing-landscape-2026-q2.md)* --- title: "Developer Tool Pricing Landscape — Q2 2026" date: 2026-04-04 type: comparison tags: [pricing, competitors, strategy] sources: - https://linear.app/pricing - https://linear.app/docs/billing-and-plans - https://costbench.com/software/developer-tools/linear/ - https://linear.app/ai - https://linear.app/changelog/2026-03-24-introducing-linear-agent - https://www.atlassian.com/software/jira/pricing - https://costbench.com/software/project-management/jira/ - https://www.eesel.ai/blog/jira-ai-pricing - https://asana.com/pricing - https://costbench.com/software/project-management/asana/ - https://help.asana.com/s/article/ai-studio-pricing - https://monday.com/pricing - https://support.monday.com/hc/en-us/articles/115005320209-Available-plan-types-on-Work-Management - https://www.shortcut.com/pricing - https://www.shortcut.com/korey - https://www.shortcut.com/blog/why-we-built-the-shortcut-mcp-server - https://clickup.com/pricing - https://clickup.com/brain/pricing - https://help.clickup.com/hc/en-us/articles/29754533547415-Time-Tracking-feature-availability-and-limits - https://plane.so/pricing - https://developers.plane.so/dev-tools/mcp-server - https://github.com/pricing - https://github.com/features/issues - 
https://costbench.com/software/developer-tools/github/
- https://www.notion.com/pricing
- https://www.notion.com/product/ai
- https://trello.com/pricing
- https://costbench.com/software/project-management/trello/
- https://www.gb-advisors.com/blog/monday-com-ai-2026-sidekick-vibe-agents
- https://www.eesel.ai/blog/atlassian-intelligence-ai-in-jira
- https://www.korey.ai/
---

# Developer Tool Pricing Landscape — Q2 2026

This report surveys the pricing, feature gating, and AI strategy of 10 project management tools competing in Kendo's market segment. It is intended to validate or challenge the $0 / $8 / $14 pricing proposed in `company/marketing-strategy.md` (Section 6).

---

## 1. Pricing Matrix (per user/month, annual billing)

| Tool | Free | Tier 2 | Tier 3 | Tier 4 | Enterprise |
|------|------|--------|--------|--------|------------|
| **Linear** | $0 (250 issues, 2 teams) | Basic $10 | Business $16 | — | Custom |
| **Jira** | $0 (10 users) | Standard $7.91 | Premium $14.54 | — | Custom |
| **Asana** | $0 (2 users) | Starter $10.99 | Advanced $24.99 | — | Custom |
| **Monday.com** | $0 (2 users) | Basic $9 (min 3 seats) | Standard $12 | Pro $19 | Custom |
| **Shortcut** | $0 (10 users) | Team $8.50 | Business $12 | — | Custom |
| **ClickUp** | $0 (unlimited users) | Unlimited $7 | Business $12 | — | Custom |
| **Plane** | $0 (12 users) | Pro $6 | Business $13 | — | Custom |
| **GitHub** | $0 (unlimited) | Team $4 | — | — | Enterprise $21 |
| **Notion** | $0 (1 user) | Plus $10 | Business $20 | — | Custom |
| **Trello** | $0 (unlimited) | Standard $5 | Premium $10 | Enterprise $17.50 | — |
| **Kendo (proposed)** | $0 (3 users, 1 project) | Team $8 | Business $14 | — | — |

### Key observations

- The **median entry-level paid tier** across these 10 tools is **$8.21/user/month** (the midpoint of Jira Standard at $7.91 and Shortcut Team at $8.50). Kendo's proposed $8 sits just below the median.
- The **median mid-tier** is roughly **$14/user/month**. Kendo's $14 lands exactly at the median.
- **Linear's entry price has risen from $8 to $10.** `marketing-strategy.md` already lists Linear at $10 and describes Kendo as "cheaper than Linear ($10)", so that claim remains consistent with current reality — and the increase has widened the gap: Kendo at $8 would be $2/user cheaper than Linear Basic.
- **Plane remains the price floor** at $6/seat for Pro. Kendo cannot compete on price alone against Plane.

---

## 2. Feature Gating Matrix — What's Included vs. Paywalled

| Feature | Linear | Jira | Asana | Monday | Shortcut | ClickUp | Plane | GitHub | Notion | Trello |
|---------|--------|------|-------|--------|----------|---------|-------|--------|--------|--------|
| **Time Tracking** | No | Add-on | Advanced+ ($25) | Pro+ ($19) | No | Business+ ($12) | Pro+ ($6) | No | No | No |
| **AI Features** | Free (basic), Business ($16) for full | Premium+ ($14.54) | Starter+ (basic), Advanced+ (full) | Standard+ ($12) via Sidekick | All tiers | Add-on $9/user | All tiers (credit-based) | Copilot (code only) | Business+ ($20) | No |
| **MCP Server** | Yes (all tiers) | No | No | No | Yes (all tiers) | No | Yes (all tiers) | No | No | No |
| **Audit Logging** | Enterprise only | Premium+ ($14.54) | Enterprise only | Enterprise only | Enterprise only | Enterprise only | Business+ ($13) | Enterprise ($21) | Enterprise only | Enterprise only |
| **SSO/SAML** | Enterprise only | Standard+ ($7.91) | Enterprise only | Enterprise only | Enterprise only | Enterprise only | Business+ ($13) | Enterprise ($21) | Business+ ($20) | Enterprise only |
| **Sprints/Cycles** | All tiers | All tiers | Starter+ ($10.99) | All tiers | All tiers | All tiers | All tiers | No | No | No |
| **Custom Roles** | Business+ ($16) | Premium+ ($14.54) | Advanced+ ($24.99) | Enterprise | Business+ ($12) | Business+ ($12) | Business+ ($13) | Enterprise ($21) | Business+ ($20) | Enterprise |

### Kendo advantage — features included in every tier

Kendo's proposed pricing bundles these features that competitors gate behind higher tiers:
| Feature | Kendo (all tiers) | Cheapest competitor tier offering it |
|---------|-------------------|--------------------------------------|
| Time tracking | Free+ | Plane Pro ($6), ClickUp Business ($12) |
| MCP server | Free+ | Linear Free, Shortcut Free, Plane Free |
| AI features | Free+ (limited), Team ($8) full | Plane Free (500 credits), Linear Free (basic) |
| Audit logging | Team ($8) | Plane Business ($13), Jira Premium ($14.54) |
| Hash-chained audit logs | Team ($8) | No competitor offers this |

---

## 3. AI Feature Pricing Trends

### The landscape is splitting into three models

**Model A — AI included at all tiers (credit-gated)**

- **Plane**: 500 AI credits/seat free, 1,000 at Pro ($6), 2,000 at Business ($13)
- **Linear**: Linear Agent (beta) on free tier; Triage Intelligence + automations at Business ($16)
- **Shortcut**: Korey AI agent included on all tiers (launched Q1 2026)

**Model B — AI as a paid add-on**

- **ClickUp**: Brain at $9/user/month add-on; AI Autopilot at $28/user/month. Charged per workspace member, not per user who uses AI. This is the most expensive AI pricing in the market.
- **Asana**: AI Studio Basic included on Starter+; AI Studio Plus and Pro are paid add-ons with credit-based pricing.

**Model C — AI gated behind mid/upper tiers**

- **Jira**: Atlassian Intelligence only on Premium ($14.54+) and Enterprise. Standard plan users see AI buttons but get upgrade prompts.
- **Notion**: AI bundled into Business ($20) and Enterprise only since May 2025. Free and Plus users can no longer purchase the AI add-on.
- **Monday.com**: Sidekick included in Enterprise; available for purchase on Standard ($12) and Pro ($19). Tiered into Lite, Plus, and Super Sidekick.

### Trend direction

The market is moving toward **Model A** (AI included, credit-gated). Shortcut, Plane, and Linear all include AI features on free or low tiers. The holdouts (ClickUp's $9 add-on, Notion's Business-only gate) are increasingly seen as anti-competitive.
ClickUp's per-workspace-member billing for AI is particularly criticized.

**Kendo's "AI in every tier" strategy is aligned with the winning trend.** Gating full AI behind Team ($8) while offering limited AI on Free is the right play — it matches Plane and Linear's approach while keeping the differentiator accessible.

---

## 4. Free Tier Comparison

| Tool | User Limit | Project/Issue Limit | Key Restrictions |
|------|-----------|---------------------|------------------|
| **Linear** | Unlimited | 250 active issues, 2 teams | 10MB file uploads |
| **Jira** | 10 users | Unlimited | 2GB storage, no audit logs |
| **Asana** | 2 users | Unlimited | No timeline, no custom fields, no AI |
| **Monday.com** | 2 users (min 3 seats on paid) | 3 boards | No integrations, no automations |
| **Shortcut** | 10 users | Unlimited | 1 team, 1 workflow, 5GB storage |
| **ClickUp** | Unlimited | 200 active tasks | No time tracking, no goals |
| **Plane** | 12 users | Unlimited | No epics, no custom types, 500 AI credits |
| **GitHub Projects** | Unlimited | Unlimited | Basic views only, no sprints, no time tracking |
| **Notion** | 1 user | Unlimited | No AI, 5MB file uploads |
| **Trello** | Unlimited | 10 boards | No timeline, no calendar, no dashboard |
| **Kendo (proposed)** | 3 users | 1 project | All core features, MCP, limited AI |

### Assessment

Kendo's proposed free tier (3 users, 1 project) is **restrictive on users and projects** compared to:

- Shortcut (10 users, unlimited)
- Plane (12 users, unlimited)
- Linear (unlimited users, 250 issues)
- ClickUp (unlimited users, 200 tasks)

However, Kendo's free tier is **feature-rich** — MCP, time tracking, and basic AI are included where competitors strip those out. The trade-off is narrower capacity for broader capability. This is a defensible position for developer-focused users who need the right features more than scale on a free plan.

**Potential concern:** A 3-user limit may prevent team evaluation.
Most dev teams evaluating a tool want to trial it with 5-10 people before committing. Shortcut (10 users) and Plane (12 users) understand this. Consider raising the free limit to 5 users to enable team pilots.

---

## 5. Discrepancies with Existing Kendo Docs

### `marketing-strategy.md` discrepancies

| Claim in marketing-strategy.md | Current reality | Status |
|-------------------------------|-----------------|--------|
| "Cheaper than Linear ($10) with time tracking included" | Linear Basic is now $10/user/month (annual only). Kendo at $8 is $2 cheaper. | **Still valid but margin is thin** — Linear changed to annual-only billing, making the comparison slightly different. |
| "Cheaper than Shortcut ($8.50) with AI story generation included" | Shortcut Team is still $8.50/user/month billed annually ($10 billed monthly). But Shortcut now includes the Korey AI agent on all tiers, including free. | **Weakened** — Shortcut now bundles a capable AI agent (Korey) at no extra cost. Kendo's AI advantage over Shortcut is less clear-cut. |
| "More feature-rich than Plane ($6) at the AI layer" | Plane now has AI credits on all tiers, an MCP server, and an agent framework with @mention support. Plane Pro is $6 with time tracking. | **Weakened** — Plane's AI and MCP capabilities have grown significantly. The gap is narrower than when this was written. |
| Asana listed as "$0-24.99" | Asana Starter is now $10.99, Advanced $24.99, Personal (free) limited to 2 users. | **Minor update needed** — pricing tiers renamed (Personal, Starter, Advanced). |
| Monday.com listed as "$0-24" | Monday.com is now $0-19 for standard tiers. Pro at $19. 18% price increase on monday service in Feb 2026. | **Update needed** — tier structure has shifted. |
| ClickUp AI add-on listed as "$9/user" | ClickUp Brain is still a $9/user/month add-on. AI Autopilot is now $28/user/month. | **Still accurate** — but the landscape has added the AI Autopilot tier.
### `competitors.md` discrepancies

| Claim | Current reality | Status |
|-------|-----------------|--------|
| Shortcut described as "no AI differentiator, stagnating" | Shortcut launched the Korey AI agent (Q1 2026) and has an active MCP server. Not stagnating. | **Outdated** — Shortcut has made significant AI moves. |
| Plane described as "no built-in time tracking" | Plane Pro ($6+) now includes time tracking. | **Outdated** — Plane added time tracking. |
| Linear described as "no native time tracking" | Still true as of April 2026. | **Still accurate.** |

---

## 6. Kendo Positioning Assessment

### Is $8 / $14 competitive?

**$8 Team tier: Yes, competitive.** It sits below Linear ($10), below Shortcut ($8.50), and roughly matches Jira Standard ($7.91). With time tracking, full AI, MCP, and audit logging included at $8, the value proposition is strong. The only tool offering more for less is Plane at $6, but Plane gates audit logs and advanced features behind its $13 Business tier.

**$14 Business tier: Yes, competitive.** It matches Jira Premium ($14.54 for cross-team planning + AI). It undercuts Linear Business ($16), which is the first tier with full AI triage. Plane Business at $13 is close, but Kendo offers hash-chained audit logs that no competitor has.

### The real question is not price — it is perceived value

At $8, Kendo must convince buyers they get more than Plane at $6 and comparable value to Linear at $10. The pitch:

- **vs. Plane ($6):** "Kendo includes audit logging and full AI at $8. Plane gates audit logs behind $13 and AI credits are limited."
- **vs. Linear ($10):** "Kendo is $2/user cheaper with time tracking and audit logging included. Linear charges $16 for full AI triage and gates audit logs behind Enterprise."
- **vs. Shortcut ($8.50):** "Kendo includes time tracking and audit logging. Shortcut has neither."
- **vs. Jira ($7.91):** "Same price, no admin overhead, no add-on ecosystem tax. Time tracking and AI included, not sold separately."
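The "hash-chained audit logs" differentiator cited above is easy to make concrete. A minimal sketch of the idea (illustrative only, not Kendo's actual implementation — the field names are invented): each audit entry stores a SHA-256 hash of the previous entry, so editing or deleting any historical entry breaks every later link and is detectable.

```python
import hashlib
import json

def _entry_hash(entry: dict) -> str:
    # Canonical JSON (sorted keys) so the hash is stable regardless of key order.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_event(log: list, action: str, actor: str) -> None:
    """Append an event whose prev_hash links it to the previous entry."""
    prev_hash = _entry_hash(log[-1]) if log else "0" * 64
    log.append({"action": action, "actor": actor, "prev_hash": prev_hash})

def verify_chain(log: list) -> bool:
    """Recompute every link; any edited or deleted entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        prev_hash = _entry_hash(entry)
    return True

log = []
append_event(log, "issue.create", "jasper")
append_event(log, "issue.close", "jasper")
assert verify_chain(log)

log[0]["actor"] = "mallory"   # tamper with history...
assert not verify_chain(log)  # ...and verification fails
```

The sales-relevant property: a plain audit table can be silently rewritten by anyone with database access, while a hash chain makes retroactive edits provable — which is why no competitor's "audit logging" checkbox is equivalent.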
### Risk factors

1. **Plane's trajectory**: At $6 with MCP, time tracking, and improving AI — Plane is the biggest pricing threat. If Plane adds audit logging to Pro, Kendo's $8 Team tier loses a key differentiator.
2. **Shortcut's Korey**: Shortcut's AI agent is now included on all tiers. "AI included" is no longer unique to Kendo + Plane + Linear.
3. **Linear's annual-only billing**: Linear removed monthly billing for paid plans. If Kendo offers monthly billing, that flexibility is a small but real advantage.
4. **ClickUp's AI pricing backlash**: ClickUp's $9/user AI add-on (charged per workspace member) is widely criticized. Kendo can explicitly position against this: "AI for everyone, not an add-on tax."

### Recommendation

**The $8 / $14 pricing holds up.** No changes needed to the price points themselves. The positioning copy in `marketing-strategy.md` needs updating to reflect:

- Shortcut's Korey AI launch
- Plane's time tracking and expanding AI
- Linear's move to annual-only billing
- The broader trend of AI becoming included rather than add-on

---

## 7. Summary Table — Kendo vs. Top 4 Direct Competitors

| | Kendo ($8) | Linear ($10) | Shortcut ($8.50) | Plane ($6) | Jira ($7.91) |
|-|------------|--------------|-------------------|------------|---------------|
| **Time tracking** | Included | No | No | Included | Add-on |
| **AI features** | Full | Basic (full at $16) | Korey included | Credit-based (1,000/seat) | Premium only ($14.54) |
| **MCP server** | 26 tools, 10 resources | Yes | Yes | Yes | No |
| **Audit logging** | Hash-chained | Enterprise only | Enterprise only | Business ($13) | Premium ($14.54) |
| **EU hosting** | Amsterdam | US | US | Self-host option | US (data residency on Premium) |
| **Free tier users** | 3 | Unlimited | 10 | 12 | 10 |
| **Monthly billing** | TBD | Annual only | Available | Available | Available |

---

## Related

- [marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — Section 6 (Pricing Strategy) references these competitors
- [competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Competitive positioning angles (some now outdated, see Section 5)
- [kendo-app-plans-landscape.md](https://kendo-hq.pages.dev/raw/research/kendo-app-plans-landscape.md) — Feature map and architectural decisions

---

*[Librarian] Research conducted 2026-04-04. Prices verified against official pricing pages and independent aggregators. All prices are per user/month with annual billing unless noted.*

---

# Developer PM Sentiment 2026

*Source: `research/developer-pm-sentiment-2026.md` · [Raw](https://kendo-hq.pages.dev/raw/research/developer-pm-sentiment-2026.md)*

---
title: "Developer PM Sentiment 2025-2026: What Developers Actually Say About Their Project Management Tools"
date: 2026-04-04
type: investigation
tags: [developer-sentiment, jira-fatigue, pain-points, user-research, positioning-validation]
thesis: "Kendo's positioning assumes developers are frustrated with Jira complexity, context switching, tool fragmentation, and missing time tracking.
Is this validated by real sentiment data, or are we building on assumptions?" sources: - https://survey.stackoverflow.co/2025/ - https://survey.stackoverflow.co/2025/developers/ - https://survey.stackoverflow.co/2025/work - https://www.atlassian.com/teams/software-development/state-of-developer-experience-2025 - https://www.atlassian.com/blog/developer/developer-experience-report-2025 - https://www.capterra.com/resources/2025-pm-software-trends/ - https://www.businesswire.com/news/home/20250904859415/en/AI-and-Security-Drive-Project-Management-Software-Investment-Capterra-Survey-Finds - https://apmic.org/blogs/effectiveness-of-agile-project-management-tools-original-industry-report-2026-27 - https://speakwiseapp.com/blog/context-switching-statistics - https://dev.to/teamcamp/why-developers-hate-jira-and-10-best-jira-alternatives-1hl1 - https://dev.to/imdone/why-developers-hate-jira-and-what-were-doing-about-it-1h06 - https://dev.to/alina_chyzh_fd772c6bfce37/why-developers-hate-jira-and-how-to-make-it-dev-friendly-again-67j - https://thenewstack.io/why-developers-hate-jira-and-what-atlassian-is-doing-about-it/ - https://news.ycombinator.com/item?id=31813957 - https://news.ycombinator.com/item?id=41659128 - https://news.ycombinator.com/item?id=25590846 - https://news.ycombinator.com/item?id=17596293 - https://ifuckinghatejira.com/49/ - https://community.atlassian.com/forums/Jira-questions/Anyone-else-hate-the-new-jira-experience/qaq-p/684161 - https://community.atlassian.com/forums/Jira-articles/quot-Why-Your-Jira-is-Slow-Understanding-Bloat-and-How-to-Fix-It/ba-p/3040249 - https://blog.scalefusion.com/best-jira-alternatives-and-competitors/ - https://devrev.ai/blog/jira-alternatives - https://www.shortcut.com/blog/8-best-jira-alternatives-for-engineering-teams-in-2026 - https://electroiq.com/stats/jira-statistics/ - https://6sense.com/tech/bug-and-issue-tracking/jira-software-market-share - https://www.zenhub.com/blog-posts/why-switch-from-github-projects - 
https://complex.so/insights/why-most-project-management-tools-fail-small-teams-(and-what-to-use-instead) - https://www.producthunt.com/products/linear/reviews - https://plane.so/blog/plane-versus-linear-which-should-you-choose-in-2026 - https://europeanpurpose.com/tool/plane - https://www.weirdware.com/2026/03/30/fix-context-switching-at-the-ide-access-layer-says-jetbrains/ - https://jellyfish.co/library/developer-productivity/pain-points/ - https://dev.to/pockit_tools/mcp-vs-a2a-the-complete-guide-to-ai-agent-protocols-in-2026-30li - https://dev.to/custodiaadmin/mcp-automation-is-booming-audit-trails-are-missing-54oa - https://www.techclass.com/resources/learning-and-development-articles/data-sovereignty-what-it-means-for-european-businesses-in-2025 - https://secureprivacy.ai/blog/data-privacy-trends-2026 - https://www.jira-alternatives.org/ - https://stackoverflow.blog/2025/12/29/developers-remain-willing-but-reluctant-to-use-ai-the-2025-developer-survey-results-are-here/ - https://linear.app/next - https://www.devclass.com/development/2026/03/27/linear-moves-sideways-to-agentic-ai-as-ceo-declares-issue-tracking-dead/5211661 - https://sdtimes.com/ai/report-ai-productivity-gains-cancelled-out-by-friction-points-in-other-areas-that-slow-developers-down/ --- # Developer PM Sentiment 2025-2026: What Developers Actually Say About Their Project Management Tools ## Thesis Kendo's positioning rests on four core assumptions: (1) developers are frustrated with Jira's complexity, (2) context switching between tools is a top productivity killer, (3) time tracking as an add-on is a pain point, and (4) developers want PM tools that meet them where they already work (the terminal/IDE). This investigation tests those assumptions against real developer sentiment data from surveys, community discussions, and market research across 2025-2026. **Verdict:** All four assumptions are validated by data. 
But the research also surfaces pain points Kendo's messaging underweights, and one assumption that is weaker than expected. --- ## 1. Is "Jira Fatigue" Real and Quantifiable? **Answer: Yes. It's real, widespread, and measurable — but Jira remains dominant.** ### The numbers - **Jira holds 89.45% market share** in bug-and-issue-tracking (6sense, 2026). Over 116,800 companies use it. It is not going anywhere. - **But desire is shifting.** The 2025 Stack Overflow Developer Survey (49,000+ respondents, 177 countries) shows Jira stepped down as the most desired collaboration/documentation tool. GitHub is now the top desired tool for code documentation and collaboration. - **46% of developers report using Jira** (Stack Overflow 2025), but usage does not mean satisfaction. The "Jira alternative" keyword cluster is one of the most competitive in the PM tool SEO space, with at least 15 "best Jira alternatives" articles published in 2025-2026 alone — an indicator of search demand driven by dissatisfaction. - **A dedicated anti-Jira community exists.** The site ifuckinghatejira.com catalogs developer grievances. HN threads like "I fucking hate Jira" (2022, hundreds of comments), "Why Jira Sucks" (2021), "JIRA is an antipattern" (2018), and "The Slow, Painful Death of Agile and Jira" (2024) consistently surface with high engagement. This is not a passing fad — it's a decade-long pattern. ### What developers specifically say The complaints cluster into five categories: **1. Performance and slowness.** > "Every transition takes 3-5 seconds, and those transitions happen dozens of times per day." Jira's Java-based architecture is noticeably slower than modern tools like Linear. When a developer updates a task dozens of times daily, cumulative latency destroys flow state. **2. Configuration bloat.** > "You need to fill out 87 fields just to report a simple bug." 
Over time, Jira instances accumulate hundreds of inactive projects, thousands of unused custom fields, orphaned workflows, stale dashboards, and automation rules nobody remembers creating. Atlassian's own community documentation addresses "Why Your Jira is Slow" with advice on cleaning up configuration bloat. **3. Process overhead.** > "You spend more time updating tickets than writing code." Developers consistently report that Jira becomes a tool for management surveillance rather than developer productivity. Over-engineered workflows — mandatory fields, approval chains, status transitions with required comments — slow teams down before development work even starts. **4. UI complexity.** > "The Jira interface is overwhelmingly cluttered." Atlassian's 2025 UI overhaul drew mixed reactions. While 97% of early access testers opted to stay with the new navigation, broader community feedback on Atlassian Community forums reflects frustration with constant UI changes ("change fatigue") and remaining complexity. **5. Pricing structure.** > "Pricing quickly becomes unsustainable as teams grow." Key features are locked behind Premium ($14.54/user/month) or Enterprise tiers. AI features (Rovo) require Premium. Audit logging requires Premium. This creates a perception that Jira nickel-and-dimes teams. ### The nuance Not everyone agrees. Some HN commenters argue the anti-Jira sentiment is an "echo chamber about perceived project management overhead by individual contributors rather than issues with the product itself." The tool works well for large enterprises with dedicated Jira admins. The fatigue is concentrated among: - Small-to-mid teams without Jira admin resources - Individual contributors who see the tool as management overhead - Developers who compare Jira's speed to modern alternatives like Linear **Kendo relevance:** Jira fatigue is real and validated. But Kendo should be careful not to position purely as "anti-Jira" — the market is saturated with that message.
The more compelling angle is what Kendo adds (terminal-native, time tracking included, EU-hosted) rather than what Jira lacks. --- ## 2. Context Switching: The Biggest Productivity Tax **Answer: This is the #1 developer pain point by multiple measures.** ### The numbers - **12-15 major context switches per day** for the average developer (industry data, 2025-2026). - **23 minutes recovery time per switch.** That's 4.5+ hours of lost deep focus daily. - **$78,000/year per developer** in estimated lost productivity from context switching (industry analysis, 2026). - **1,200 app toggles per day** for the average digital worker — roughly one every 24 seconds during an 8-hour workday (Speakwise, 2026). - **50% of developers lose 10+ hours per week** to non-coding tasks; 90% lose 6+ hours (Atlassian State of Developer Experience 2025, n=3,500). ### The paradox Atlassian found Atlassian's own 2025 DevEx report uncovered an irony: developers save ~10 hours/week using AI tools, but lose ~10 hours/week to organizational inefficiencies. The productivity gains from AI are being canceled out by tool friction, context switching, and information-finding overhead. The top time-wasters cited: finding information (services, docs, APIs), adapting new technology, and switching context between tools. ### What developers want JetBrains publicly stated in March 2026 that the solution is to "fix context switching at the IDE access layer" — bringing external tools into the IDE rather than forcing developers out of it. This aligns directly with the MCP approach: instead of opening a browser to update a Jira ticket, the developer manages issues from their terminal. The MCP ecosystem validates this demand. By early 2026, at least 10 major PM tools have official MCP servers. Over 1,000 MCP servers exist across services. MCP was donated to the Linux Foundation in December 2025. The market consensus is clear: developers want their tools to come to them, not the other way around. 
**Kendo relevance:** This is Kendo's strongest validated pain point. The terminal-first positioning is not a nice-to-have — it addresses the single biggest productivity complaint developers have. The messaging should lead with the productivity math: "12 context switches/day x 23 min recovery = 4.5 hours lost. What if your board lived in your terminal?" --- ## 3. Time Tracking: The Quiet Frustration **Answer: Validated, but it's a mid-tier pain point — not the top one.** ### The landscape Time tracking is surprisingly absent from the most popular developer PM tools: - **Linear:** No native time tracking. Requires Toggl, Harvest, or similar. - **Asana:** No native time tracking. Requires integrations. - **GitHub Projects:** No time tracking at all. - **Shortcut:** No native time tracking. - **Plane:** Time tracking on Pro+ ($6/user/month) only. - **ClickUp:** Time tracking on Unlimited+ ($7/user/month) only. - **Jira:** Via add-ons only (Tempo, etc.). ### Developer sentiment The frustration surfaces most clearly among: - **Freelancers and agencies** who need time tracking for client billing. Using Jira + Toggl means two subscriptions and manual sync between systems. - **Small teams** where the tech lead needs time data for sprint retrospectives and capacity planning but doesn't want to manage a separate tool. - **Toggl users** who report frustrations with cross-tool sync, manual tag/description copying, disconnects between machines, and the cognitive overhead of remembering to start/stop timers. Linear's response is telling: when asked about time tracking, Linear's team has consistently declined to add it, instead pivoting toward AI-driven automation ("Issue Tracking is Dead," March 2026). This creates a persistent gap that competitors can exploit. ### The nuance Time tracking is not universally desired by all developers. Many actively resist it — they see it as management surveillance. 
The demand comes from specific personas: - Freelancers/contractors who bill by the hour - Team leads who need capacity data - CTOs who need time data for client reporting **Kendo relevance:** "Time tracking included" is validated as a real differentiator, especially for the Floris (freelancer) and Tessa (tech lead) personas. But it's not the lead pain point for most developers — context switching and tool complexity rank higher. Position time tracking as a "no extra subscription" bonus, not the primary pitch. --- ## 4. What Developers Praise: Lessons from Linear and Plane Understanding what developers love is as important as what they hate. ### What makes Linear beloved Linear consistently receives the highest developer praise among PM tools. The reasons are instructive: - **Speed.** "The absence of loading states, combined with keyboard navigation that flows naturally between actions, creates something that project management software almost never achieves: a flow state." Developers compare it to an IDE, not a web app. - **Keyboard-driven workflow.** S for status, A for assign, P for priority, L for label, Cmd+K for command palette. Every action is reachable without a mouse. - **Opinionated simplicity.** Linear limits status types to six. You can't create 47 custom statuses. This is both its greatest strength (reduces cognitive load) and its limitation (teams with unique processes feel constrained). - **Design quality.** The UI is described as "closer to an IDE than a spreadsheet." Developer aesthetics matter — ugly tools get resented. ### What makes Plane attractive Plane (46,500+ GitHub stars) appeals to a different developer archetype: - **Open source and self-hostable.** For data-sovereignty-conscious teams, being able to inspect and host the code is non-negotiable. - **Zero cost for basic use.** Internal sprint planning, bug tracking, and roadmap management at zero cost (apart from hosting). 
- **Clean design.** Keyboard shortcuts, inbox triage, rich issue detail pages. - **Limitations acknowledged.** Advanced reporting, resource management, and portfolio views are less developed than commercial tools. Time tracking is only on Pro+. ### The pattern Developers consistently praise PM tools that exhibit: 1. **Speed** — sub-second response times on every interaction 2. **Keyboard-first navigation** — command palettes, single-key shortcuts 3. **Opinionated defaults** — fewer choices, better out-of-box experience 4. **Developer aesthetics** — dark themes, monospace fonts, dense information display 5. **Minimal ceremony** — create an issue in seconds, not minutes **Kendo relevance:** Kendo's dark-themed, dense UI with JetBrains Mono and keyboard shortcuts aligns well with what developers praise. The MCP integration takes this further by eliminating the browser entirely. But speed is non-negotiable — if Kendo is not sub-second on every interaction, none of the other advantages matter. --- ## 5. The Small Team Problem **Answer: Small teams are underserved and actively frustrated.** ### The core finding Most PM tools were built for enterprises and then offer stripped-down free tiers for small teams. The result: > "Many teams have signed up for tools, gotten overwhelmed, and quietly gone back to spreadsheets or group chats." The 2025-2026 data shows that small teams (1-15 developers) face specific frustrations: - **Feature overwhelm.** They use only 20% of the tool's features but pay for 100%. - **Setup overhead.** Tools require extensive onboarding before producing value. Jira reportedly takes 2-3 weeks to master. - **Price-to-value mismatch.** Paying for Jira + Toggl + Slack integrations is overkill for a 3-person team. - **Missing middle ground.** GitHub Issues is too basic (no proper board, no time tracking, no sprint planning at scale). Jira is too complex. The tools in between (Linear, Shortcut) are good but lack time tracking or EU hosting. 
### What small teams actually want Based on community discussions and market research: 1. A board and a backlog that work out of the box — no setup wizard 2. Quick issue creation without mandatory fields 3. Time tracking if they bill clients — built in, not bolted on 4. GitHub integration that syncs automatically 5. A price that doesn't scale to Jira-level before they've hired their tenth developer **Kendo relevance:** This directly validates the Floris persona. The positioning of "When GitHub Issues isn't enough, but Jira is too much" resonates with documented small-team frustration. The "all-in-one without bloat" angle is strong here. --- ## 6. AI as a Purchase Driver — But Trust Is Low ### The numbers - **55% of PM software buyers** say AI functionality is their primary motivator for purchasing new software (Capterra 2025, n=2,545 project management professionals globally). - **84% of developers use AI** tools in some form (Stack Overflow 2025). - **But 46% report distrust** in AI outputs (Stack Overflow 2025). - **41% of organizations** say AI adoption is their top software challenge (Capterra 2025). - **39%** report a lack of AI skills on staff. ### The paradox AI is simultaneously the #1 purchase trigger and the #1 adoption challenge. Developers want AI features but don't trust them yet. The winning position is not "we have AI" (everyone does) but "our AI is transparent and trustworthy": - Audit trails for AI actions (what did the AI do, when, and why) - Human-in-the-loop workflows (AI drafts, human approves) - AI that reduces work rather than creating new work to manage ### MCP as the practical AI interface The terminal/MCP approach to AI in PM tools is interesting because it puts the developer in control. The AI assistant reads issues and updates boards via MCP, but the developer types the command. This is a more trusted interaction model than "AI agent autonomously triages your backlog." 
**Kendo relevance:** Kendo's hash-chained audit logging for AI actions is a genuine differentiator in this trust gap. The positioning should emphasize AI transparency: "AI manages your board — and every action is audited." The MCP approach (developer-initiated AI actions) maps to higher trust than autonomous AI agents. --- ## 7. EU Data Sovereignty: A Growing Wedge ### The numbers - **GDPR fines totaled €5.65 billion** since 2018, with 2025 alone accounting for €2.3 billion — a 38% year-over-year increase. - The **EU Data Act** (effective September 2025) extends sovereignty to non-personal and industrial data. - The **US CLOUD Act** allows US authorities to demand data from US-based providers regardless of where data is stored — directly conflicting with GDPR. - 71% of PM software buyers rank **security as their #1 selection factor** — above price, usability, and features (Capterra 2025). ### Developer sentiment The question is no longer just "Where is my data stored?" but "Who owns the platform, and which courts can issue binding orders to it?" For EU teams evaluating PM tools, this creates a real decision factor: - Jira, Linear, Asana, GitHub, ClickUp, Monday.com — all US-owned, US-headquartered - Plane — US-based but self-hostable (sovereignty via infrastructure control) - Kendo — Amsterdam-hosted, Dutch-owned, database-per-tenant The dedicated site jira-alternatives.org specifically lists "Top 8 European Jira Alternatives" — indicating that EU origin/hosting is becoming a distinct market segment. **Kendo relevance:** The Christiaan (CTO) persona is validated. EU data sovereignty is a growing purchase factor, especially with 2025's enforcement escalation. "Amsterdam-hosted, Dutch-owned, database-per-tenant" is a message that sells itself to this segment without needing to be the primary pitch. --- ## 8. 
What the Market Wants in 2026-2027 The APMIC 2026-27 report on agile PM tool effectiveness and other market analyses converge on what the market is moving toward: 1. **Operational impact over feature count.** "The real question is whether a tool improves delivery quality under pressure — can it reduce decision lag, expose blocked work early, and give leaders visibility without burying teams in reporting overhead?" 2. **Financial features rising.** Budget tracking, variance analysis, and forecast dashboards inside the PM tool rather than in separate finance systems. 3. **Integration depth over breadth.** Teams prefer deeper integration with fewer tools over shallow connections to everything. 4. **Adoption over power.** "No matter how powerful a PM tool is, if dev teams find it annoying, they won't use it — adoption matters more than features." 5. **AI that improves judgment.** Not AI that creates noise, but AI that helps prioritize, triage, and surface what matters. --- ## Findings Summary: Kendo's Assumptions vs. Reality | Kendo Assumption | Validated? 
| Strength | Notes | |---|---|---|---| | Developers are frustrated with Jira complexity | **Yes** | Strong | Decade-long pattern, quantifiable in surveys, search demand, community discussion | | Context switching is a top pain point | **Yes** | Very strong | #1 productivity killer by multiple measures; $78K/year estimated cost | | Time tracking as add-on is a pain point | **Yes** | Moderate | Real for freelancers/agencies/team leads; not universal across all developer types | | Developers want terminal-native PM | **Yes** | Strong | JetBrains, MCP ecosystem, and 10+ PM tools adding MCP servers validate the direction | | EU hosting matters | **Yes** | Growing | €2.3B in GDPR fines in 2025 alone; dedicated "European alternatives" market segment exists | | AI is a differentiator | **Partially** | Eroding | AI is now table stakes (55% purchase trigger), but audit trails and trust gap create differentiation opportunity | ### Surprises and gaps 1. **Speed is non-negotiable and underweighted in Kendo's messaging.** Linear's praise centers on sub-second performance. Every "why developers love X" discussion starts with speed. Kendo's marketing doesn't lead with speed — it should at least address it. 2. **"Opinionated simplicity" is what developers actually want** more than feature richness. The Kendo pitch of "everything included" could backfire if it reads as "another ClickUp." The framing should be "everything you need, nothing you don't" rather than "everything included." 3. **The small-team segment is more frustrated than expected.** These teams don't just want a cheaper Jira — they want a fundamentally simpler tool. Many have tried Linear, Shortcut, and GitHub Projects and found each lacking in different ways. This is Kendo's sweet spot. 4. **Developer aesthetics matter more than expected.** Dark themes, monospace fonts, dense layouts, keyboard shortcuts — these are not cosmetic preferences but signals that a tool "gets" developers. 
Kendo's design language (JetBrains Mono, kendo-red, dark theme) aligns well. 5. **The audit trail gap in MCP automation is a real finding.** MCP adoption is booming, but no one is solving audit trails for what AI agents do via MCP. Kendo's hash-chained audit logging positions it uniquely here. --- ## Recommendations for Kendo ### Messaging adjustments 1. **Lead with context switching, not Jira fatigue.** Everyone says "Jira alternative." Few say "your board lives in your terminal — zero context switches." The data shows context switching is the bigger pain point; use it. 2. **Add speed to the pitch.** Even a brief mention — "sub-second everything" — signals developer seriousness. Linear's reputation was built on speed before features. 3. **Reframe "everything included" as "nothing extra."** Instead of: "Board + sprints + time tracking + GitHub + AI." Try: "No Toggl subscription. No Tempo add-on. No enterprise tier for audit logs. Just one tool, one price." 4. **Amplify the MCP audit trail angle.** "MCP automation is booming. Audit trails are missing." This is a market gap Kendo uniquely fills. It resonates with both the CTO persona (compliance) and the broader AI trust gap (46% developer distrust). ### Persona validation - **Floris (Freelance Dev):** Strongly validated. The "paying for Jira + Toggl is overkill" pain point appears repeatedly in community discussions. - **Tessa (Tech Lead):** Strongly validated. Context switching, team visibility, and "we use 10% of Jira" are well-documented frustrations. - **Christiaan (CTO):** Validated with growing strength. EU data sovereignty is an increasingly distinct market segment with dedicated comparison sites. ### Content opportunities The planned blog post "Jira fatigue: what developers actually want" (listed in marketing-strategy.md) should be written using this research data. 
It can cite the Stack Overflow shift, the Atlassian DevEx paradox (save 10 hours with AI, lose 10 hours to friction), and the $78K/year context-switching cost. --- ## Related - [../shared/user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — Persona assumptions validated by this research - [../company/marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — Positioning angles that this data supports/adjusts - [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Competitive landscape context - [ai-native-dev-tools-landscape-2026.md](https://kendo-hq.pages.dev/raw/research/ai-native-dev-tools-landscape-2026.md) — AI commoditization and MCP ecosystem data - [dev-tool-pricing-landscape-2026-q2.md](https://kendo-hq.pages.dev/raw/research/dev-tool-pricing-landscape-2026-q2.md) — Pricing and feature gating analysis --- # EU Data Sovereignty Demand 2026 *Source: `research/eu-data-sovereignty-demand-2026.md` · [Raw](https://kendo-hq.pages.dev/raw/research/eu-data-sovereignty-demand-2026.md)* --- title: "EU Data Sovereignty as a Purchase Driver — Demand Analysis 2026" date: 2026-04-04 type: investigation tags: [gdpr, eu, data-sovereignty, positioning, compliance, market-trends] thesis: "EU data sovereignty is a real and accelerating purchase driver — not just a checkbox — but its strength varies dramatically by buyer segment. For regulated enterprises and public sector, it's a binary qualifier. For SMB dev teams, it's a tiebreaker that becomes decisive when geopolitical risk is salient." 
sources: - https://www.techclass.com/resources/learning-and-development-articles/data-sovereignty-what-it-means-for-european-businesses-in-2025 - https://www.kiteworks.com/gdpr-compliance/european-saas-data-sovereignty-rfps/ - https://www.kiteworks.com/gdpr-compliance/eu-data-sovereignty-gdpr-compliance/ - https://www.orrick.com/en/Insights/2026/01/Data-Localization-and-the-Sovereign-Cloud-EU-Cloud-Regulations-Explained - https://www.softwareone.com/en/blog/articles/2026/01/12/your-2026-digital-sovereignty-guide - https://massivegrid.com/blog/european-companies-leaving-us-cloud/ - https://europealternatives.com/en/ - https://blog.elest.io/eu-data-residency-laws-are-breaking-your-saas-stack-heres-how-to-fix-it/ - https://superlinear.eu/insights/articles/cloud-after-schrems-can-eu-companies-still-use-us-cloud-services - https://blog.clickmeeting.com/european-alternatives - https://wire.com/en/blog/european-alternatives-enterprise-guide - https://www.fortunebusinessinsights.com/sovereign-cloud-market-112386 - https://news.broadcom.com/sovereign-cloud/three-predictions-for-sovereign-cloud-in-2026 - https://www.theregister.com/2026/02/09/europe_sovereign_cloud_spend/ - https://finance.yahoo.com/news/europe-going-sovereign-cloud-investment-104000333.html - https://www.datacenterdynamics.com/en/news/europe-spending-on-sovereign-cloud-infrastructure-to-triple-from-2025-2027-gartner/ - https://www.cnbc.com/2026/02/13/four-charts-europes-reliance-us-digital-infrastructure.html - https://www.cnbc.com/2026/02/18/europe-digital-sovereignty-geopolitical-tensions.html - https://ecfr.eu/publication/get-over-your-x-a-european-plan-to-escape-american-technology/ - https://unit8.com/resources/eu-cloud-sovereignty-emerging-geopolitical-risks/ - https://www.computerworld.com/article/4109029/global-uncertainty-is-reshaping-cloud-strategies-in-europe.html - https://spectrum.ieee.org/europe-cloud-sovereignty - https://www.theregister.com/2025/12/22/europe_gets_serious_about_cutting/ - 
https://digital-strategy.ec.europa.eu/en/policies/nis2-directive - https://opsiocloud.com/knowledge-base/does-nis2-apply-to-saas/ - https://securance.com/news/nis2-directive-explained-scope-compliance-and-requirements-for-2026/ - https://nordlayer.com/blog/nis2-implementation-guide-for-saas/ - https://itmedialaw.com/en/nis2-compliance-2025-relevance-for-saas-and-media-start-ups/ - https://www.enisa.europa.eu/publications/eucs-cloud-service-scheme - https://www.cep.eu/fileadmin/user_upload/cep.eu/cepItalia/cepInput_EU_Cloud_Certification_at_an_Impasse.pdf - https://www.digitalsme.eu/changes-to-the-eu-cloud-services-cybersecurity-certification-scheme-put-eu-citizens-data-at-risk-a-call-for-digital-sovereignty/ - https://www.hoganlovells.com/en/publications/eucs-controversial-data-sovereignty-issues-continue-to-drive-debate-around-the-eu-certification-scheme-for-cloud-services - https://cloudsecurityalliance.org/gdpr/eu-cloud-code-of-conduct - https://eucoc.cloud/en/about/about-eu-cloud-coc - https://www.jentis.com/blog/noyb-will-challenge-the-new-data-privacy-framework - https://iapp.org/news/a/schrems-addresses-emerging-questions-around-eu-us-data-privacy-framework - https://www.lexology.com/pro/content/third-eu-us-data-transfer-challenge-may-still-be-the-horizon - https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20251201-european-court-of-justice-to-review-challenge-to-eu-us-data-privacy-framework - https://shardsecure.com/blog/schrems-iii-prepared - https://www.workforcebulletin.com/adequacy-of-the-eu-u-s-data-privacy-framework-survives-challenge - https://privacymatters.dlapiper.com/2025/09/eu-u-s-data-privacy-framework-survives-first-challenge/ - https://digitalcompliance.snellman.com/us-data-transfers-in-the-trump-era/ - https://iapp.org/news/a/how-could-trump-administration-actions-affect-the-eu-u-s-data-privacy-framework- - 
https://www.digitalsamba.com/blog/trump-presidency-threatens-the-stability-of-the-eu-us-data-transfer-agreement - https://cms-lawnow.com/en/ealerts/2025/01/is-the-eu-u.s.-data-privacy-framework-in-danger - https://ogletree.com/insights-resources/blog-posts/data-in-the-balance-political-influence-on-eu-u-s-data-transfers/ - https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2025/03/transatlantic-data-transfers-eu-raises-concerns-over-us-oversight.html - https://www.enforcementtracker.com/ - https://www.kiteworks.com/gdpr-compliance/gdpr-fines-data-privacy-enforcement-2026/ - https://complydog.com/blog/biggest-gdpr-fines-of-2025 - https://www.gtlaw.com/en/insights/2025/9/cloud-switching-under-the-eu-data-act - https://www.addleshawgoddard.com/en/insights/insights-briefings/2025/data-protection/eu-data-act-gamechanger-saas-contracts/ - https://www.gerrishlegal.com/blog/the-data-act-explained-how-switching-rights-impact-saas-providers - https://www.ve3.global/the-multi-tenancy-why-a-database-per-tenant-model-is-the-new-standard-for-saas/ - https://redis.io/blog/data-isolation-multi-tenant-saas/ - https://complydog.com/blog/multi-tenant-saas-privacy-data-isolation-compliance-architecture - https://europeanpurpose.com/tool/plane - https://www.digitalsamba.com/blog/does-gdpr-require-eu-data-hosting - https://linear.app/dpa - https://www.feroot.com/blog/gdpr-saas-compliance-2025/ - https://www.getmonetizely.com/articles/how-does-gdpr-compliance-impact-saas-pricing-in-europe - https://www.riskpal.com/data-resilience-2026-geopolitical-risk/ --- # EU Data Sovereignty as a Purchase Driver — Demand Analysis 2026 ## Thesis **EU data sovereignty is a real and accelerating purchase driver, but its force varies by buyer segment.** For regulated enterprises and the public sector, it is a binary qualifier — vendors without EU data residency are eliminated before price or features are evaluated. 
For SMB dev teams (Kendo's primary audience), it functions as a weighted tiebreaker that becomes more decisive each quarter as regulations tighten and geopolitical risk makes headlines. The 2025-2026 regulatory wave (NIS2, EU Data Act, DORA, Schrems III threat) is converting "nice-to-have" into "must-have" faster than most SaaS vendors expected. --- ## 1. The Demand Signal: Is Anyone Actually Buying on This? ### Hard numbers | Data point | Source | Year | |-----------|--------|------| | 54% of European IT decision-makers prioritize data sovereignty in purchasing decisions | Techclass / industry survey | 2025 | | 73% of European business decision-makers consider data privacy a significant factor when selecting SaaS vendors | Forrester | 2025 | | 61% of Western European CIOs intend to shift more workloads to local or regional providers due to geopolitical concerns | Gartner survey | 2026 | | 53% of Western European CIOs plan to restrict use of global hyperscalers | Gartner survey | 2026 | | 44% of organizations are actively considering sovereign cloud solutions | Industry survey | 2025 | | 73% of credit institutions updated vendor risk assessments in 2024-2025 to address CLOUD Act exposure | European Banking Authority | 2025 | ### Sales impact for sovereign-positioned vendors Vendors that demonstrate architectural data sovereignty in enterprise deals report: - **15-30% higher contract values** compared to non-sovereign alternatives - **Sales cycles compress from 9-12 months to 4-6 months** when sovereignty is demonstrated upfront - **40-60% of new enterprise wins** come from competitive displacement driven by sovereignty requirements - **3-5x expansion in qualified pipeline** from regulated sectors within 12-18 months These numbers come from enterprise SaaS (Kiteworks data), not developer tools specifically. But the direction is clear: sovereignty sells. 
### The "European alternatives" ecosystem The movement has its own infrastructure now: - **europealternatives.com** catalogs 354 European companies and 137 open-source projects as alternatives to US services, covering cloud infrastructure, business software, developer tools, communication, security, and AI - Multiple curated directories (bridgeapp.ai, wire.com guides) serve the same function - Denmark's public sector announced plans to phase out Microsoft Teams for European-governed alternatives - Germany's Schleswig-Holstein is migrating 25,000 government employees from Microsoft 365 to LibreOffice + Nextcloud - France mandated Nextcloud for its national education digital workspace, replacing Google Workspace - The Netherlands found compliance conditions for Google Workspace and Microsoft 365 "so stringent that migration to European alternatives became simpler" **Verdict: Real demand, not theoretical.** Both survey data and procurement behavior confirm that EU data sovereignty is a genuine purchase driver. --- ## 2. The Regulatory Wave Making It Worse (Better for Kendo) Five overlapping regulatory forces are tightening simultaneously. Each one individually nudges procurement toward EU-hosted solutions. Together, they create compounding pressure. ### 2.1 GDPR (Enforcement Escalation) GDPR itself doesn't mandate EU hosting. 
But enforcement is making non-EU hosting increasingly expensive: - Cumulative GDPR fines exceed **EUR 7.1 billion**, with 2,800+ fines issued through mid-2025 - Over **EUR 1.6 billion** in fines issued in 2024 alone — more than 60% of the total has landed since January 2023 - Italy's data protection authority is ordering organizations to stop using US-based SaaS tools that transfer data outside the EU - German federal agencies are issuing guidelines that effectively prohibit new deployments of US cloud tools in the public sector - Finance, healthcare, telecom, and public sector are now firmly in enforcement scope — not just Big Tech ### 2.2 Schrems III Threat (Structural Instability) The EU-US Data Privacy Framework (DPF) — the legal basis for transatlantic data transfers — is structurally unstable: - The DPF survived its first court challenge in September 2025 (General Court upheld adequacy) - But **Philippe Latombe's appeal** is before the European Court of Justice, which was more skeptical in Schrems I and II - The Trump administration dismissed all three Democratic members of the **Privacy and Civil Liberties Oversight Board (PCLOB)** in January 2025 — the very body the EU Commission cited as a safeguard in the DPF adequacy decision - Structural changes to the FTC further weaken the framework's foundations - Max Schrems announced a challenge within two weeks of the DPF's finalization - Many EU data protection lawyers now assess the U.S. as likely inadequate without supplementary measures **What this means for Kendo:** If the DPF is invalidated (Schrems III), every company using US-hosted SaaS for EU personal data faces an immediate legal scramble. EU-hosted providers become the safe harbor. This is not a question of "if" but "when and how disruptive."
### 2.3 NIS2 Directive (October 2024 / Enforcement 2026) NIS2 applies to essential and important entities across **18 critical sectors**: - Imposes cybersecurity risk management requirements on entities and their supply chains - SaaS providers are affected even without EU physical presence, as long as they serve EU customers - Penalties: up to **EUR 10 million or 2% of global turnover** for essential entities - Compliance deadline: October 2026 for full enforcement - Key implication: companies in scope must evaluate their SaaS vendors' cybersecurity posture, including where data is hosted and processed ### 2.4 EU Data Act (September 2025) The EU Data Act became fully applicable on September 12, 2025, with provisions that specifically benefit EU-hosted providers: - **Switching rights**: Customers can exit contracts with just two months' notice, regardless of original contract length - **Switching charges prohibited** from January 12, 2027 - **Data portability** in machine-readable formats via open APIs - **Extraterritorial scope**: Applies to non-EU SaaS providers serving EU customers - **Impact**: Makes it structurally easier to leave US-hosted tools and harder for them to lock in customers ### 2.5 EUCS (EU Cybersecurity Certification Scheme) — In Progress EUCS is the most uncertain but potentially most impactful regulation: - Under development for 4+ years, still not adopted - Early drafts included sovereignty requirements (EU headquarters, no non-EU jurisdictional exposure) — these were controversial and removed from recent drafts - The debate continues: EU SME associations and some member states are pushing to reintroduce sovereignty requirements - If sovereignty requirements are included in the final version, non-EU cloud providers would be structurally excluded from the highest assurance level - The Cybersecurity Act revision (2026) may address this gap --- ## 3. The Geopolitical Accelerant The regulatory arguments above are structural. 
The geopolitical situation in 2025-2026 adds emotional and strategic urgency:

### US-EU tensions

- **US tariff threats** extend to countries with "Digital Taxes, Digital Services Legislation, and Digital Markets regulations" — a direct threat to EU regulatory sovereignty
- **CLOUD Act exposure**: US cloud providers can be compelled to hand over data stored anywhere, including EU servers, under US law
- **Starlink precedent**: American officials threatened to shut off Starlink in Ukraine; the ICC chief prosecutor lost access to US digital services after US sanctions. These incidents make "what if they cut us off?" a board-level question
- **PCLOB dismantling** signals reduced US commitment to privacy safeguards

### Market response

- European sovereign cloud IaaS spending: **$6.9 billion (2025) → $23.1 billion (2027)** — a tripling, with Europe growing at 83% vs US at 29%
- **Schwarz Gruppe (STACKIT)** investing EUR 11 billion in a European cloud provider
- OVHcloud, Hetzner, Scaleway accelerating as EU-sovereign alternatives to hyperscalers
- French and German governments held a **Summit on European Digital Sovereignty** in November 2025
- Gartner analyst Rene Buest: "Uncertainty is not good for an organization, because they don't know how to plan"

### The dependency problem

- US cloud providers hold **85% of the European cloud market** (Amazon, Microsoft, Google alone: 70%+)
- European cloud providers have been losing share for nine consecutive years, holding under 15% in 2025
- This concentration is now seen as a strategic vulnerability, not just a market dynamic

---

## 4. Segmented Analysis: Who Actually Cares?

Data sovereignty demand is not uniform. It matters differently to different buyer segments.

### Tier 1: Regulated Enterprise + Public Sector (Binary Qualifier)

**Sectors**: Finance (DORA), healthcare, telecoms, public administration, energy, transport

**Behavior**: EU data residency is a procurement prerequisite.
Vendors without it are eliminated before feature evaluation.

**Evidence**: 73% of credit institutions updated vendor risk assessments for CLOUD Act exposure. German/Dutch/Italian DPAs actively blocking US SaaS deployments.

**Kendo relevance**: Low for now — Kendo targets small dev teams, not regulated enterprise. But this tier validates that the requirement is real, and teams inside these organizations need tools too.

### Tier 2: EU-Based Companies With Compliance Awareness (Strong Preference)

**Sectors**: B2B SaaS, agencies, consultancies, tech companies (10-50 people)

**Behavior**: Data sovereignty is weighted heavily in tool evaluation, especially when a CTO or DPO is involved. Not always a dealbreaker, but tilts decisions.

**Evidence**: 54% of IT decision-makers prioritize sovereignty; the CTO "Christiaan" persona is a real archetype.

**Kendo relevance**: High — this is where Kendo's EU positioning converts. When two tools are comparable on features and price, "Amsterdam-hosted, database-per-tenant" closes the deal.

### Tier 3: Small Dev Teams + Freelancers (Tiebreaker)

**Sectors**: Indie devs, small agencies, startups (1-10 people)

**Behavior**: Aware of GDPR, not deeply engaged with compliance. Hosting location is a nice-to-have, not a decision driver. Price, UX, and workflow integration matter more.

**Evidence**: No survey data showing indie devs choosing tools primarily for EU hosting. Stack Overflow developer surveys don't surface hosting location as a major concern.

**Kendo relevance**: Limited as a primary driver, but valuable as positioning differentiation. "Your data stays in Europe" is a one-liner that registers without requiring explanation.

### Tier 4: US/Global Companies (Irrelevant to Negative)

**Behavior**: May see EU-only hosting as a limitation rather than a feature. Prefer providers with multi-region options.

**Kendo relevance**: Not the target market. Don't dilute EU messaging to appeal here.

---

## 5. Competitive Positioning: How Do Competitors Handle This?

| Tool | Hosting | Data Residency | Multi-Tenancy | Compliance Features |
|------|---------|----------------|---------------|---------------------|
| **Linear** | US-hosted (default) | No EU residency option; relies on DPA + SCCs | Shared database (presumed) | No audit logging below enterprise |
| **Plane** | US-hosted (cloud); self-hostable on EU infra | EU residency via self-hosting only | Shared database (cloud) | SOC 2, ISO 27001, GDPR certified |
| **Shortcut** | US-hosted | No EU residency | Shared database (presumed) | No audit logging below enterprise |
| **Jira** | Multi-region (EU available) | EU data residency on Premium+ | Shared database | Audit logging on Premium ($14.54) |
| **GitHub Projects** | US-hosted (GitHub) | EU residency on Enterprise | Shared database | Audit logging on Enterprise |
| **Kendo** | **Amsterdam (Fly.io)** | **EU by default** | **Database-per-tenant** | **Hash-chained audit logs, all tiers** |

**Key insight**: Among developer-focused project management tools at Kendo's price point, none offer EU hosting by default. Linear and Shortcut are US-only. Plane offers it only via self-hosting (additional ops burden). Jira offers EU residency but gates it behind Premium and requires enterprise-grade configuration. Kendo is the only tool in this segment where EU data residency is the default, not an upgrade.

---

## 6. Counter-Evidence: Where the Signal Is Weaker

Intellectual honesty requires noting where the data sovereignty narrative falls short:

### GDPR doesn't legally require EU hosting

GDPR requires "adequate protection" for transfers, not EU-only storage. The DPF (while it holds) makes US transfers legally permissible. Some buyers know this and don't weight hosting location heavily.

### Most developers don't care (yet)

Developer surveys (Stack Overflow, Developer Nation) don't surface hosting location as a top concern.
Developers choose tools for UX, speed, integrations, and price. The compliance buyer is usually the CTO, DPO, or procurement lead — not the individual developer using the tool.

### Small teams rarely have a DPO

Kendo's primary persona (Floris, 1-3 person team) likely doesn't have a dedicated compliance role. GDPR awareness is there, but it doesn't drive tool selection for most indie devs.

### EU hosting can mean latency tradeoffs

Teams outside Europe may experience slower performance with EU-only hosting. This is a real limitation if Kendo ever pursues US or APAC markets.

### The DPF is currently valid

As of April 2026, the EU-US Data Privacy Framework is legally valid. It survived its first challenge. Organizations that rely on it can legally use US-hosted tools. The Schrems III risk is real but hasn't materialized yet.

---

## 7. Verdict: Is Kendo's "European by Default" Positioning Validated?

### Yes — but with nuance.

**What's validated:**

- EU data sovereignty is a genuine, accelerating purchase driver with concrete market data behind it
- The regulatory trajectory (NIS2, EU Data Act, DORA, potential Schrems III) only strengthens the case over time
- Geopolitical tensions are converting theoretical regulatory concern into visceral organizational urgency
- Kendo has a genuine structural advantage: EU hosting + database-per-tenant + audit logs at every tier is a combination no competitor in the developer PM segment matches
- The "Christiaan" CTO persona is real — 54% of IT decision-makers prioritize sovereignty, and 73% weight privacy in vendor selection

**What needs calibration:**

- For Kendo's primary audience (small dev teams, "Floris" persona), EU hosting is a tiebreaker, not a primary driver. Leading with MCP and workflow integration is correct — EU hosting is a supporting differentiator, not the headline
- The messaging should acknowledge that GDPR doesn't require EU hosting, but position EU hosting as the simplest path to compliance: "You could do SCCs and TIAs and DPIAs... or you could just use a tool that keeps your data in Europe"
- As Kendo moves upmarket toward the "Tessa" (tech lead, 5-15 person team) and "Christiaan" (CTO, 10-20 person team) personas, EU hosting becomes increasingly important as a differentiator
- The database-per-tenant architecture is potentially more defensible than hosting location alone — it satisfies both data isolation requirements (NIS2, enterprise security reviews) and data residency (GDPR)

### Positioning recommendation

Keep "European by default" as strategic priority #3 (per `company/mission.md`). It's correctly weighted: behind core product quality (#1) and MCP ecosystem (#2), but ahead of pricing (#4).

The recommended messaging hierarchy:

1. **Lead with workflow**: "The issue tracker that lives in your terminal" (MCP + developer experience)
2. **Support with value**: "Time tracking included, no add-ons" (pricing advantage)
3. **Close with trust**: "Amsterdam-hosted, database-per-tenant, audit logs at every tier" (EU compliance)

For the "Christiaan" persona specifically, invert the hierarchy — lead with trust, support with features. Create dedicated compliance-focused content (blog post: "Why we host in Amsterdam," comparison page highlighting the EU hosting gap in Linear/Shortcut/Plane).

---

## 8. Regulatory Developments Kendo Should Prepare For

| Regulation | Timeline | Kendo Impact | Action Needed |
|-----------|----------|--------------|---------------|
| **NIS2 enforcement** | October 2026 | Kendo's customers in scope will need to evaluate vendor compliance | Prepare a NIS2 compliance statement; document security measures |
| **EU Data Act switching rights** | Fully enforceable; switching charges banned Jan 2027 | Kendo benefits (easier for prospects to leave competitors); must also comply itself | Ensure Kendo's ToS includes compliant termination/switching provisions |
| **Schrems III** (potential DPF invalidation) | 2026-2027 (ECJ appeal pending) | Would immediately advantage EU-hosted tools over US competitors | Monitor; have a "Schrems III landed" content plan ready |
| **EUCS adoption** | 2026-2027 (uncertain) | If sovereignty requirements are included, creates structural advantage for EU-hosted SaaS | Monitor; consider pursuing EU Cloud Code of Conduct certification |
| **EU AI Act** (compliance deadlines rolling) | 2025-2027 | Kendo uses AI features; transparency and risk classification requirements apply | Ensure AI features (story generation, MCP) meet transparency requirements |

---

## 9. Key Findings Summary

1. **EU data sovereignty is a real purchase driver**, supported by survey data (54% of IT decision-makers prioritize it), procurement behavior (73% of banks updated vendor assessments), and market investment (European sovereign cloud spending tripling from $6.9B to $23.1B by 2027).
2. **The demand is segment-dependent.** For regulated enterprise and public sector, it's binary (you're in or out). For mid-market EU companies (Kendo's sweet spot for growth), it's a strong preference. For small dev teams (Kendo's launch audience), it's a tiebreaker.
3. **The regulatory trajectory only gets steeper.** NIS2 (October 2026), EU Data Act (live), DORA (financial sector), and the Schrems III threat create compounding compliance pressure that favors EU-hosted vendors.
4. **Geopolitics is the accelerant.** The PCLOB dismantling, tariff threats, CLOUD Act concerns, and the Starlink/ICC precedents have moved "digital sovereignty" from policy wonk territory to board-level strategy.
5. **Kendo has a genuine structural advantage** in its segment. No other developer-focused PM tool at this price point offers EU hosting by default + database-per-tenant + audit logs at every tier. Linear, Shortcut, and Plane (cloud) are all US-hosted.
6. **The positioning is correctly weighted** at strategic priority #3. It should support, not lead, for the primary audience — but it should lead for the compliance-conscious secondary audience ("Christiaan" persona).
7. **The database-per-tenant architecture is underemphasized** in current messaging. It addresses both data isolation (NIS2, security reviews) and data residency — and is technically harder to replicate than simply deploying to an EU region.
---

## Related

- [../company/mission.md](https://kendo-hq.pages.dev/raw/company/mission.md) — Strategic priority #3: European trust
- [../company/marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — EU hosting in SWOT (S9), opportunities (O3), messaging matrix
- [../shared/user-personas.md](https://kendo-hq.pages.dev/raw/shared/user-personas.md) — CTO "Christiaan" persona: the EU-focused buyer
- [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Competitor hosting and compliance comparison
- *../company/legal/legal.md* — Back to code B.V., Dutch entity, Amsterdam hosting

---

# Kendo App Plans Landscape

*Source: `research/kendo-app-plans-landscape.md` · [Raw](https://kendo-hq.pages.dev/raw/research/kendo-app-plans-landscape.md)*

---
title: "Kendo App Plans Landscape — Feature Map, Decisions, and Process Evolution"
date: 2026-04-04
type: analysis
tags: [engineering, plans, architecture, process]
sources:
  - ~/Code/kendo/docs/plans/
---

# Kendo App Plans Landscape

## Summary

Analysis of 97 plan directories in `~/Code/kendo/docs/plans/`, spanning the full history of Kendo's engineering effort. The plans cover 13 distinct feature domains, contain 224+ architectural decisions (almost all accepted), and show a clear evolution from minimal task lists to structured plan-and-decide workflows. The heaviest investment areas are **permissions/RBAC** (3 major plans, 31 decisions), **multi-tenancy/architecture** (5 plans, 18 decisions), and **MCP tooling** (8 plans). 54 plans are fully completed; 12 are materially stalled; the rest are in progress. The planning format improved dramatically around the KD-0157 era, when DECISIONS.md was introduced as a separate artifact.

---

## 1. Feature Landscape — Where Has Engineering Effort Concentrated?
### Domain Categorization (97 plans)

| Domain | Plan Count | Key Plans | Decisions |
|--------|-----------|-----------|-----------|
| **MCP Tooling** | 8 | KD-0101, KD-0106, KD-0166 (x2), KD-0167 (x2), KD-0168, KD-0224, KD-0320, KD-XXXX | 17 |
| **Permissions / RBAC** | 3 | KD-0115-0117, KD-0315, KD-0340 | 31 |
| **Multi-Tenancy / Architecture** | 5 | IT-130, KD-0190, KD-0193-0195, KD-0205-0208 | 18 |
| **UI / UX Polish** | 13 | IT-0072, IT-0095, IT-0188, KD-0163, KD-0166-fix, KD-0170, KD-0249, KD-0254, KD-0278, KD-0283, KD-0284, KD-0301, KD-0304 | 18 |
| **Time Tracking** | 4 | KD-0097, KD-0153, KD-0161, KD-0311 | 0 |
| **AI Features** | 7 | KD-0112, KD-0125, KD-0280, KD-0281, KD-0343, KD-0355, ai-prompt-section, ai-workflow-optimization, feature-planner | 15 |
| **Reports / Feedback** | 4 | KD-0157, KD-0186, KD-0265, KD-0349 | 22 |
| **Notifications** | 4 | KD-0174, KD-0175, KD-0288, KD-0289, KD-0293, KD-0294, inbox-notifications | 11 |
| **Attachments / Files** | 3 | KD-0183, KD-0236, KD-0240 | 16 |
| **CLI / Distribution** | 3 | KD-0237, KD-0261, KD-XXXX-mcp-cli | 21 |
| **Authentication / Security** | 4 | IT-115, KD-0167-change-pw, KD-0260, KD-0314, KD-0341 | 26 |
| **Infrastructure / Shared Code** | 7 | KD-0142, KD-0179, KD-0204, KD-0228, KD-0266, KD-0286, KD-0302 | 12 |
| **Integrations** | 4 | KD-0113, KD-0270, KD-0277, claude-code-sdk | 6 |
| **Other** | 6 | IT-0073, IT-0079, KD-0102-105, KD-0244, KD-0292, KD-0305, KD-0312, KD-0316, KD-0318, KD-0324, KD-0328, KD-0335, KD-0347, tenancy-refactor | — |

### Where the Most Effort Landed

**By decision count** (proxy for architectural complexity):

1. **Permissions/RBAC** — 31 decisions across KD-0315 (23 decisions alone, the single most decision-heavy plan) and KD-0340 (8 decisions). This is the most architecturally complex area of the product, touching backend policies, frontend composables, route middleware, role management UI, and security testing.
2. **Authentication/Security** — 26 decisions.
KD-0341 (OAuth login, 17 decisions) and KD-0260 (email verification, 8 decisions) are full-stack features with significant security considerations.
3. **Reports/Feedback** — 22 decisions. The reports system (KD-0186 backend, KD-0265 frontend, KD-0349 attachments) was delivered as a deliberate 3-plan split, each with its own decision log.
4. **CLI/Distribution** — 21 decisions. The Go CLI (KD-0237) and its distribution system (KD-0261) required decisions spanning OAuth device flow, GoReleaser, R2 storage, and self-update mechanics.

**By plan count** (proxy for breadth):

1. **UI/UX Polish** — 13 plans. Most are small, focused fixes (clickable rows, avatar colors, sidebar membership display). Low decision count per plan.
2. **MCP Tooling** — 8 plans. Kendo's MCP server evolved through incremental tool additions (blocking relations, sprint management, comments, time logs, epics). Each plan is small and focused.
3. **AI Features** — 7 plans. Ranging from early experiments (AI time estimation, multi-project bot) to the latest multi-agent pipeline (KD-0355).

---

## 2. Decision Archaeology — Architectural Decisions Still Relevant for Future Work

### Total Decisions Found

**224+ architectural decisions** across 37 DECISIONS.md files. Status breakdown:

- **Accepted:** ~210 (94%)
- **Under discussion:** 7 (all in KD-0294, notification retention research)
- **Superseded:** 3 (KD-0236 D2, KD-0315 D8, KD-0315 D10)
- **Research/Informational:** 1 (KD-0289 D2, watchers system scope analysis)

### Decisions That Shape Future Work

#### Multi-Tenancy and Architecture

| Decision | Plan | Impact on Future Work |
|----------|------|----------------------|
| **Separate CentralUser model** | KD-0190 D1 | Central and tenant auth are completely independent. Any central app auth feature must use `CentralUser`, not `User`. |
| **Explicit @shared/@tenant/@central aliases** | KD-0190 D2 | Import boundaries are enforced by pattern. New shared code must live in `@shared/`. ESLint rules enforce this. |
| **Single Vite instance, multi-entry** | KD-0190 D10 | Both apps build with one `npm run build`. No `APP_NAME` env var. Blade templates reference `$manifest['src/apps/tenant/main.ts']`. |
| **Test directory mirrors source structure** | KD-0190 D8 | Tests live in `tests/js/apps/tenant/domains/` mirroring `src/apps/tenant/domains/`. Central app tests go in `tests/js/apps/central/`. |
| **Factory pattern for shared services** | KD-0194 D1 | Pagination, markdown, HTTP, storage all use `createXxxService(deps)`. New shared services should follow this pattern. Toast/modal/error are the exceptions (singletons per KD-0204 D1). |

#### Permissions System

| Decision | Plan | Impact on Future Work |
|----------|------|----------------------|
| **15 permission resources with CRUD+Own** | KD-0315 | Every new feature that needs authorization must decide: new resource (ID 16+), or fits existing? KD-0343 already added resources 16-17 for BYOK keys. |
| **Project owner bypasses all project-scoped permissions** | KD-0315 D17 | `PermissionResourceEnum::isProjectScoped()` is the authoritative list. Owner bypass is automatic for all 12 project-scoped resources. |
| **Always-readable resources** | KD-0315 D16 | 12 of 15 resources force `can_read: true`. Only SystemPrompts, ProjectTokens, Roles, and AppSettings have toggleable read. New resources should evaluate whether read is meaningful to restrict. |
| **canInProject() wrapper for frontend** | KD-0315 D21 | Project-scoped permission checks must use `canInProject()`, not `can()`. The wrapper handles owner bypass. |

#### Data Patterns

| Decision | Plan | Impact on Future Work |
|----------|------|----------------------|
| **Reports as lightweight intake, not full issues** | KD-0186 D2 | Reports have title + description only. Promotion to issues is the gate. New report-like features should follow this pattern. |
| **Hard delete for notifications** | KD-0294 D2 | Notifications use hard delete, not soft delete. Only `User` model uses `SoftDeletes` in the entire codebase. |
| **Token-based verification (not signed URLs)** | KD-0260 D1 | Email verification uses token + expiry on the User model, matching the invite flow. Future verification flows should use the same pattern. |
| **Pennant feature flags in central DB** | KD-0286 ADR-001 | Features table is in central DB, scoped to Tenant model. `Feature::for($tenant)->active(...)` is the check pattern. |
| **Client-side filtering is the default** | KD-0276 D1 | Board/Backlog/Overview load all issues client-side and filter in memory. New filters should follow this pattern unless there's a specific server-side need. |

#### Integration Patterns

| Decision | Plan | Impact on Future Work |
|----------|------|----------------------|
| **GitHub App token, not user OAuth** | KD-0277 D3 | Agent tools always use GitHub App installation tokens. User OAuth tokens are only for the repo-linking flow. |
| **OAuth Device Flow for CLI** | KD-0237 D1 | Terminal-based auth uses RFC 8628 device flow. IDE extensions use Authorization Code + PKCE. |
| **Socialite for login, GithubService for repos** | KD-0341 D1 | Two separate GitHub OAuth apps with different scopes. Login uses Socialite, repo access uses custom GithubService. |

### Decisions Under Active Discussion (Not Yet Accepted)

KD-0294 has 7 decisions all marked "Under discussion" — a notification retention and deletion research plan. Key open questions:

- Notification retention period (proposed 6 months)
- Hard delete vs soft delete (proposed hard delete)
- User-initiated deletion + system cleanup (proposed both)
- Scheduled cleanup command pattern

These are ready for implementation if the CEO approves.

---

## 3. Stalled vs Shipped — Completion Analysis

### Methodology

A plan is **completed** if all tasks in TASKS.md are checked (`[x]`). **Stalled** means significant unchecked tasks remain with no recent activity. **In progress** means partially done with active work likely.
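The completion rule above is mechanical enough to script. A minimal sketch of the classification (a hypothetical helper written for illustration, not part of the Kendo repo) that counts GitHub-style task checkboxes in a TASKS.md body:

```typescript
// Classify a TASKS.md body by its checkbox counts.
// Hypothetical helper for the completion analysis above — not Kendo code.
function taskCompletion(tasksMd: string): { done: number; total: number; completed: boolean } {
  // `- [x]` lines are checked tasks (case-insensitive), `- [ ]` lines are open.
  const done = (tasksMd.match(/^\s*- \[x\]/gim) ?? []).length;
  const open = (tasksMd.match(/^\s*- \[ \]/gm) ?? []).length;
  const total = done + open;
  // A plan with zero tasks is not counted as completed.
  return { done, total, completed: total > 0 && done === total };
}
```

Run over all 73 plan directories that contain a TASKS.md, this yields the completed/stalled/in-progress split used below (stalled vs in-progress additionally requires a recency judgment the script cannot make).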
### Completed Plans (54 of 73 with TASKS.md)

All tasks checked off in these plans:

| Plan | Domain | Tasks |
|------|--------|-------|
| IT-0072 (blocking issue nav) | UI | 4/4 |
| IT-0073 (assignee on create) | UI | 11/11 |
| IT-0095 (unassign button) | UI | 2/2 |
| IT-115 (2FA TOTP) | Security | 5/5 |
| IT-130 (multi-tenancy) | Architecture | 11/11 |
| KD-0101 (MCP blocking) | MCP | 3/3 |
| KD-0106 (MCP sprints) | MCP | 3/3 |
| KD-0113 (Mattermost adapter) | Integrations | 7/7 |
| KD-0115 (project action gating) | Permissions | 2/2 |
| KD-0116 (board action gating) | Permissions | 7/7 |
| KD-0117 (user management gating) | Permissions | 2/2 |
| KD-0161 (timelog refactor) | Time Tracking | 17/17 |
| KD-0163 (team edit search) | UI | 3/3 |
| KD-0166 (lane settings bugs) | UI | 4/4 |
| KD-0166 (MCP comment tools) | MCP | 2/2 |
| KD-0167 (MCP issue fields) | MCP | 1/1 |
| KD-0168 (MCP time/epic tools) | MCP | 2/2 |
| KD-0170 (member search) | UI | 4/4 |
| KD-0174 (expired invite notice) | Notifications | 3/3 |
| KD-0183 (multi-file upload) | Attachments | 5/5 |
| KD-0205-0208 (tenant CRUD) | Multi-Tenancy | 5/5 |
| KD-0244 (project leader) | Other | 3/3 |
| KD-0266 (project tokens) | Infrastructure | 5/5 |
| KD-0278 (sidebar membership) | UI | 4/4 |
| KD-0284 (task issue type) | UI | 3/3 |
| KD-0286 (feature flags) | Infrastructure | 13/13 |
| KD-0288 (mention notifications) | Notifications | 3/3 |
| KD-0293 (mark unread) | Notifications | 4/4 |
| KD-0305 (copy link issue key) | UI | 2/2 |
| KD-0324 (time entry filter) | Time Tracking | 2/2 |
| KD-0341 (OAuth login) | Authentication | 9/9 |
| claude-code-sdk-integration | Integrations | 8/8 |
| inbox-notifications | Notifications | 11/11 |
| it-0079 (filter persistence) | Other | 9/9 |

Plus ~20 more with all tasks checked.
### Stalled / Abandoned Plans (12)

| Plan | Status | Tasks | Pattern |
|------|--------|-------|---------|
| **KD-0142** (frontend test improvement) | Stalled | 35/118 (30%) | Massive scope (118 tasks across all test files). Clearly too ambitious for a single plan. |
| **KD-0283** | Stalled | 0/3 | Zero tasks completed. Browser tab title feature — may have been deprioritized. |
| **tenancy-refactor-request-scoped** | Stalled | 0/9 | Zero tasks completed. Request-scoped tenancy refactor — likely superseded by KD-0190. |
| **KD-0125** (multi-project bot) | Stalled | 5/8 | 3 remaining tasks. Multi-project bot may have been overtaken by MCP evolution. |
| **KD-0157** (feedback image upload) | Partial | 3/6 | Half done. May need to be revisited after KD-0349 (report attachments). |

### In Progress (actively worked, some tasks remaining)

| Plan | Tasks | Notes |
|------|-------|-------|
| KD-0315 (CRUD permissions) | 18/19 | Nearly complete, 1 task remaining |
| KD-0320 (epic MCP tools) | 11/12 | Nearly complete |
| KD-0190 (multi-app architecture) | 14/15 | Nearly complete |
| KD-0343 (BYOK AI keys) | 8/9 | Nearly complete |
| KD-0292 (orphaned assignments) | 13/14 | Nearly complete |

### Patterns in Completion

**What gets finished:**

- Plans with **3-15 tasks** complete reliably. The sweet spot is 5-10 tasks.
- Plans with a **clear single-PR scope** (one feature, one domain) ship.
- Plans with **DECISIONS.md** tend to complete — the upfront decision-making prevents mid-stream confusion.
- **MCP tool plans** have a 100% completion rate (8/8). They're small, well-scoped, and follow established patterns.

**What stalls:**

- Plans with **100+ tasks** (KD-0142 at 118 tasks is the extreme example). These should be split into multiple plans.
- Plans that **overlap with later work** (tenancy-refactor superseded by KD-0190, KD-0125 superseded by MCP evolution).
- Plans **without PLAN.md or DECISIONS.md** — pure TASKS.md-only plans from the early era are more likely to stall because there's no documented rationale for the approach.

---

## 4. Process Evolution — How Planning Format Changed Over Time

### Era 1: Task Lists Only (IT-series, early KD-series)

**Examples:** IT-0072, IT-0073, IT-0095, KD-0101, KD-0106

**Format:**

- TASKS.md only, no PLAN.md or DECISIONS.md
- Tasks are detailed with **Context** blocks containing Why, Architecture, Key refs, Watch out
- Tasks include file paths, line numbers, and code patterns to follow
- RED/GREEN markers for test-first workflow
- Very prescriptive: tells the implementer exactly what to do

**Strengths:** Extremely detailed, no ambiguity for the engineer agent.

**Weaknesses:** No high-level plan, no recorded decisions, no separation between "what" and "how."

### Era 2: Plan + Tasks (mid KD-series)

**Examples:** IT-130, KD-0112, KD-0113, KD-0142, KD-0146, KD-0153

**Format:**

- PLAN.md appears alongside TASKS.md
- Plans have context, scope, and phased approach
- Tasks retain the detailed Context/Architecture/Key refs structure
- Some plans have extra artifacts (email-preview.blade.php for KD-0146, mockup.html for feature-planner)

**Strengths:** High-level rationale is now documented. Phases create natural work boundaries.

**Weaknesses:** Decisions are implicit in the plan text, not structured for future reference.

### Era 3: Plan + Decisions + Tasks (KD-0157 onward)

**Examples:** KD-0157, KD-0163, KD-0183, KD-0186, KD-0190

**Format:**

- DECISIONS.md as a separate artifact with structured format: Status, Context, Options considered, Decision, Consequences
- Plans reference decisions (e.g., "see D3")
- Multiple options are explicitly listed and evaluated
- Status tracking (Accepted, Superseded, Under discussion)

**Strengths:** Decisions are discoverable, referenceable, and auditable. Future agents can search DECISIONS.md to understand why something was built a certain way.
**Weaknesses:** Early decision docs don't always have "Options considered" — some jump straight to the decision.

### Era 4: Structured Plans with Key Design Decisions Table (KD-0315 onward)

**Examples:** KD-0315, KD-0341, KD-0343, KD-0349, KD-0355

**Format:**

- PLAN.md opens with metadata (Date, Status, Issue link, Decisions link)
- **Key Design Decisions table** — 5-10 row summary of major choices with rationale
- Scope section with explicit In scope / Out of scope lists
- DECISIONS.md has the full structured format (Context, Options, Decision, Consequences)
- Plans link to convention references (e.g., "Convention Reference" column in KD-0343 key decisions table)
- REVIEW.md and SIMPLIFY-REVIEW.md appear as post-implementation quality checks (KD-0315, KD-0260)

**Strengths:** The Key Design Decisions table lets the CEO review decisions at a glance without reading 23 full decision records. Convention references ensure the plan follows existing patterns. Post-implementation reviews catch issues before merge.

**Weaknesses:** Plans can be quite long (KD-0315 has 6 plan variants). The convention reference pattern is new and not yet consistently applied.

### Era 5: Multi-File Audit Plans (KD-0340)

**Examples:** KD-0340-permission-testing

**Format:**

- 16 files in a single plan directory, structured as a multi-phase audit
- Numbered phases (1-INVENTARISATIE, 2-VERGELIJKING, 3-GAPLIJST, etc. — Dutch for inventory, comparison, gap list)
- Separate findings files per priority level
- TLDR summary for quick consumption
- DECISIONS.md records the fix strategy

**Strengths:** Thorough, systematic testing approach. Findings are traceable to test scenarios.

**Weaknesses:** Very heavy process — only appropriate for security-critical audits.
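The structured decision format introduced in Era 3 and tightened in Era 4 has a recognizable shape. An illustrative sketch of a single DECISIONS.md entry follows; the field names come from the format description above, and the content paraphrases the KD-0294 hard-delete question rather than quoting a real plan file:

```markdown
## D2: Hard delete vs soft delete for notifications

**Status:** Under discussion

**Context:** Notifications accumulate quickly and have little value after dismissal.

**Options considered:**
1. Soft delete (`SoftDeletes` trait) — consistent with `User`, but the table grows unbounded
2. Hard delete — simpler queries; requires a scheduled cleanup command

**Decision:** Hard delete (proposed).

**Consequences:** `User` remains the only model using `SoftDeletes`; a cleanup command removes old rows.
```

The value of this shape is that every entry is greppable and auditable: status, options, and consequences are separated, so a future agent can see not just what was chosen but what was rejected.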
### Format Evolution Summary

| Element | Era 1 | Era 2 | Era 3 | Era 4 | Era 5 |
|---------|-------|-------|-------|-------|-------|
| TASKS.md | Detailed | Detailed | Detailed | Detailed | N/A |
| PLAN.md | No | Yes | Yes | Yes + metadata | Multi-file |
| DECISIONS.md | No | No | Yes (basic) | Yes (full format) | Yes |
| Key Decisions Table | No | No | No | Yes | No |
| In/Out of Scope | No | Implicit | Implicit | Explicit | Explicit |
| Convention References | No | No | No | Yes | No |
| Post-Review | No | No | No | Yes | N/A |

---

## Implications for Kendo

### What's Working Well

1. **DECISIONS.md is the highest-value artifact.** The 224+ decisions form a searchable knowledge base that prevents re-debating settled questions. Future agents can grep for "Avatar" or "MCP" and find the architectural rationale.
2. **Small, focused plans ship.** Every MCP tool plan completed. Every small UI fix completed. The engineering process excels at well-scoped work.
3. **The permission system is remarkably well-documented.** KD-0315 (23 decisions) + KD-0340 (8 decisions + full audit) + KD-0185 (3 decisions) = the most thoroughly documented subsystem in the product.
4. **Convention references in plans** (introduced in Era 4) prevent the planner from inventing patterns that already exist in the codebase. This is a significant improvement.

### What Could Improve

1. **Large plans need splitting discipline.** KD-0142 (118 tasks) stalled at 30%. Plans over 15 tasks should be split into multiple issues, each with its own plan.
2. **Stale plans create confusion.** `tenancy-refactor-request-scoped` (0/9 tasks) was likely superseded but never archived. A periodic cleanup or "archived" status would help.
3. **Dutch/English inconsistency.** Early plans use Dutch for titles and task descriptions (IT-0073, KD-0097, KD-0153). Later plans use English. A single language (English, matching the codebase) would improve searchability.
4. **Plans without PLAN.md or DECISIONS.md** are harder to maintain. The 30+ plans that have only TASKS.md lack the context an engineer agent needs to make good decisions during implementation.

---

## Open Questions

1. **Should stale plans be archived?** Plans like `tenancy-refactor-request-scoped` and `KD-0125-multi-project-bot` appear superseded. A formal archival process could reduce confusion.
2. **Is the KD-0294 notification retention research approved?** Seven decisions are "Under discussion" — these seem ready for implementation but lack final approval.
3. **What happened to KD-0142 (frontend test improvement)?** At 35/118 tasks, this is the largest stalled plan. Was the approach abandoned, or is it being tackled incrementally through other plans?
4. **Should DECISIONS.md be required for all plans?** The correlation between having a DECISIONS.md and completing the plan is strong. Making it mandatory would be a low-cost process improvement.
5. **Is the multi-agent pipeline (KD-0355) the future of AI features?** It proposes replacing the dual quick/agent generation system with a 4-agent pipeline. If approved, it changes how every AI feature works going forward.
---

## Related

- [kendo-architectural-patterns.md](https://kendo-hq.pages.dev/raw/research/kendo-architectural-patterns.md) — Synthesized patterns from the same DECISIONS.md files
- [../company/product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Feature set these plans built
- [../company/marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — Go-to-market context for prioritization

---

# The Kendo Way — Architectural Patterns

*Source: `research/kendo-architectural-patterns.md` · [Raw](https://kendo-hq.pages.dev/raw/research/kendo-architectural-patterns.md)*

---
title: "The Kendo Way — Architectural Patterns and Conventions"
date: 2026-04-04
type: investigation
tags: [architecture, patterns, conventions, engineering]
sources:
  - ~/Code/kendo/docs/plans/ (38 DECISIONS.md files)
  - ~/Code/kendo/CLAUDE.md
  - ~/Code/kendo/backend/CLAUDE.md
  - ~/Code/kendo/frontend/CLAUDE.md
  - ~/Code/kendo/frontend/tests/CLAUDE.md
  - ~/Code/kendo/ARCHITECTURE.md
---

# The Kendo Way — Architectural Patterns and Conventions

## Summary

This report synthesizes the architectural patterns that define Kendo's codebase, distilled from 38 DECISIONS.md files (containing 224+ individual decisions), 4 CLAUDE.md files, and the master ARCHITECTURE.md.

Kendo is a dark-themed project management application built on a Vue 3.5 + TypeScript 5.9 frontend and a Laravel 12 + PHP 8.4 backend, with database-per-tenant multi-tenancy and 100% test coverage requirements. The codebase follows a small set of rigid, heavily enforced patterns — explicit over implicit, immutability by default, single-responsibility Actions, and factory-injected services. These patterns are not suggestions; they are enforced by 15+ architecture test suites that reject violations at CI time.
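Such an architecture test often reduces to a small pure rule check run over every source file. As a hedged illustration only (the real test code is not shown in this report), the frontend import boundaries described later could be expressed like this; the rule table and function names are illustrative, not Kendo's actual test code:

```typescript
// Hedged sketch of an import-boundary rule in the spirit of
// frontend/tests/arch/domain-structure.spec.ts. The rule data and the
// boundaryViolations function are illustrative, not Kendo's real code.
type BoundaryRule = { from: string; forbidden: string[] };

const rules: BoundaryRule[] = [
  { from: '@shared/', forbidden: ['@tenant/', '@central/'] }, // shared never imports apps
  { from: '@tenant/', forbidden: ['@central/'] },             // apps never import each other
  { from: '@central/', forbidden: ['@tenant/'] },
];

// Returns the import specifiers in a file that cross a forbidden boundary.
export function boundaryViolations(filePath: string, imports: string[]): string[] {
  const rule = rules.find((r) => filePath.startsWith(r.from));
  if (!rule) return [];
  return imports.filter((spec) => rule.forbidden.some((p) => spec.startsWith(p)));
}
```

An actual arch test would walk the source tree, parse each file's imports, and assert that this kind of check returns an empty list for every file.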
The 38 DECISIONS.md files reveal that nearly every feature decision is made by evaluating whether a proposed approach is **consistent with existing patterns** — consistency is the primary decision heuristic, ahead of elegance or performance.

---

## 1. Backend Patterns

### 1.1 The Action Pattern (ADR-0011)

**The single most important backend pattern.** All business logic lives in Action classes. Controllers are thin wrappers: validate, call action, return response.

**Rules (enforced by `tests/Arch/ActionsTest.php`):**

- `final readonly class` with suffix `Action`
- Single public method: `execute()` (private helpers allowed)
- Constructor dependency injection only
- No facades, no HTTP layer dependencies (no controllers, middleware, or form requests)
- No static-through-instance calls (`$this->model::where()` is banned; use `$this->model->newQuery()->where()`)
- Transaction closures must use `void` return type
- Every mutation wrapped in a database transaction via `ConnectionInterface`

**Return patterns:**

- Model: return created/updated entity
- Void: delete/update operations with no return value
- Exception: guard clauses throw custom exceptions (e.g., `LaneHasIssuesException`)
- Nullable: `?Model` for optional results (controller decides HTTP response)

**Evidence from decisions:** The Action pattern is so ingrained that when new features need cross-cutting logic, the answer is always "extract a new Action" rather than adding to existing ones. KD-0292 (orphaned assignments) extracted `FindOrphanedAssignmentsAction` as a shared collaborator. KD-0349 extracted `CopyAttachmentsAction` for attachment transfer during report promotion. KD-0355 extracted `AiGenerationPipeline` as an injectable collaborator (not a base class) to respect the `final readonly` constraint.

### 1.2 The FormRequest-to-DTO Flow (ADR-0012)

No raw arrays cross the boundary between HTTP and domain layers.

**Flow:** `FormRequest validates -> toDto() -> typed DTO -> Action`

**Enforcement:** `tests/Arch/FormRequestsTest.php` and `tests/Arch/DataTransferObjectsTest.php`

**Critical rule — cross-entity scoping:** Tables owned by a project must always be scoped in `exists` validation rules. Never use string-format `'exists:lanes,id'`. Always scope: `Rule::exists('lanes', 'id')->where('project_id', $projectId)`. This pattern was explicitly reinforced in KD-0320 D4 (same-project validation for epic_id) and KD-0340 D7 (global search enumeration scoping).

### 1.3 The ResourceData Pattern (ADR-0009)

API responses use immutable, readonly DTOs with constructor-promoted properties.

**Rules (enforced by `tests/Arch/ResourcesTest.php`):**

- No manual `toArray()` — serialization is reflection-based
- `from(Model): static` factory handles relation loading and computed values
- `EAGER_LOAD` constant is the single source of truth for relation loading
- Resources are dumb serializers — no request access, no auth logic, no service calls
- Different contexts use separate Resource classes, not conditional logic
- Collections via `ResourceData::collection()` — no `ResourceCollection` classes
- **No nested collections** (KD-0349 D7 explicitly rejects nesting `AttachmentResourceData[]` inside `ReportResourceData`)

**Controller patterns:**

- Single: `ResourceData::from($model)->toResponse()` returns `JsonResponse`
- Create: `ResourceData::from($model)->toResponseWithStatus(201)` returns `JsonResponse`
- Collection: `ResourceData::collection($models)` returns `array`

### 1.4 Two-Tier Authorization (ADR-0006)

The most architecturally complex area of the codebase, with 31+ decisions across KD-0315 and KD-0340 alone.

**Tier 1 (Route):** `->can()` middleware on routes. Policy classes in `app/Policies/`. Answers: "Can this user access this resource?" Determined by User + Model from route bindings.

**Tier 2 (Action):** `Gate::authorize()` inside Actions. Permission classes in `app/Policies/Interactions/`. Answers: "Can this user perform this specific operation in this context?" Requires runtime data beyond route bindings.

**Rule of thumb:** User + Model only -> Tier 1. Needs runtime request data -> Tier 2.

**The permission model:** 16 resources x 4 actions (Create boolean, Read boolean, Update scope [None/Own/All], Delete scope [None/Own/All]). Custom roles with any permission combination.

**Bypass rules (KD-0315 D17):**

1. Admin bypass: `$user->isAdmin()` in policy `before()` hooks
2. Project owner bypass: owners get full access to all 12 project-scoped resources (KD-0315 D17). Implemented as early return in `CheckPermission::check()`. Does NOT apply to tenant-scoped resources.

**Key decisions that shaped the permission system:**

- KD-0315 D1: Auto-incrementing integer IDs for roles (not ULIDs), because every other model uses integers
- KD-0315 D4: Replace all `isAdmin` checks with `can()` permission checks to enable custom roles
- KD-0315 D10: Remove synthetic AppSettings resource, add Roles as real resource
- KD-0315 D13: Auto-grant read when update/delete is set (enforced both frontend and backend)
- KD-0315 D16: Force `can_read: true` on 12 of 16 resources; only 4 sensitive resources have toggleable read
- KD-0340 D1: IDOR fixes via custom validation rules in FormRequest, not in Actions or Controllers
- KD-0340 D6: NotificationPolicy is the only policy where admin bypass was removed (notifications are personal data)

### 1.5 Cascade Deletion (ADR-0002)

**No `ON DELETE CASCADE` or `ON DELETE SET NULL` in migrations.** Foreign keys serve as guards — they throw on constraint violations, not silently clean up.

All deletion logic must be explicit in the Action. Model declares what it owns, Action executes the cascade, tests verify completeness.
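The explicit-cascade principle fits in a few lines. The following is an illustrative TypeScript sketch, not Kendo's code: the real implementation is a Laravel Action receiving a `ConnectionInterface`, and the `Db` shape, function name, and table names here are hypothetical:

```typescript
// Illustrative TypeScript sketch of the explicit-cascade rule; the Db
// shape, table names, and function are hypothetical stand-ins for a
// Laravel Action that receives a ConnectionInterface.
type Db = {
  transaction(fn: () => void): void;
  deleteWhere(table: string, column: string, value: number): void;
  deleteById(table: string, id: number): void;
};

// The Action names every owned child table and deletes children before
// the parent, inside one transaction. Because the schema has no
// ON DELETE CASCADE, a forgotten child table makes the foreign-key
// guard throw here instead of silently cleaning up or orphaning rows.
export function deleteParentWithChildren(db: Db, parentId: number): void {
  db.transaction((): void => {
    db.deleteWhere('children', 'parent_id', parentId); // owned children first
    db.deleteById('parents', parentId); // then the parent itself
  });
}
```

The design trade-off: cascades are spelled out and testable per Action, at the cost of the database no longer cleaning up automatically.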
Evidence: KD-0270 D2 chose "always cascade with confirmation" for GitHub repo unlinking, but the cascade happens explicitly in `DeleteProjectGithubRepoAction`, not via database FK cascades.

### 1.6 Config Attribute Injection (ADR-0016)

All config access in container-resolved classes must use the `#[Config('key')]` attribute. The `config()` helper is prohibited except in ServiceProvider `register()`/`boot()`. Enforced by `tests/Arch/ConfigTest.php`.

### 1.7 Audit Logging (ADR-0001)

Append-only per-entity audit tables with SHA-256 hash chaining for tamper detection. Triggered explicitly in Actions (not observers). Point-in-time user snapshots stored on each record. Architecture tests enforce that Actions touching auditable models depend on the corresponding audit logger.

Three AI logging channels (ADR-0003): AiOutboundLogger (LLM calls), AiMcpLogger (MCP tool invocations), AiRunLogger (AI run completions). All with hash chaining.

### 1.8 Enum Conventions

**Backend:** Integer-backed enums (`BackedEnum: int`). KD-0341 D17 explicitly switched `OAuthProviderEnum` from string-backed to int-backed with `slug()` methods, because every other backend enum uses integers.

**Frontend:** `enumCollection` pattern with numeric values and slug fields. KD-0315 D6-D7 created `PermissionActionEnum` using the same `enumCollection` pattern as `TypeEnum`, `PriorityEnum`, etc.

### 1.9 Migration Conventions

- No `ON DELETE CASCADE` or `ON DELETE SET NULL` in post-cutoff migrations (ADR-0002)
- Undeployed migrations can be modified in place (KD-0315 D3: "never modify deployed migrations" applies only to production/staging)
- Auto-incrementing integer IDs everywhere (KD-0315 D1 switched ULIDs to integers for consistency)

---

## 2. Frontend Patterns

### 2.1 Domain-Driven Structure (ADR-0014)

Vertical slices organized by business domain, not by technical layer. Two apps (tenant + central) share the same domain pattern.

**Each domain contains:** `store.ts`, `constants.ts`, `types.d.ts`, `route.ts`, `components/`, `relations/`, `__mocks__/`

**Import boundaries (enforced by `frontend/tests/arch/domain-structure.spec.ts`):**

- `@shared/` must not import from `@tenant/` or `@central/`
- Tenant and central never import from each other
- Cross-domain imports go through `@shared/`, never directly between domains
- Cross-sibling imports between relations under the same parent domain are acceptable (KD-0280 D3: reports importing from issues)

### 2.2 The Adapter-Store Pattern (ADR-0009)

Custom reactive state management (not Pinia). Each domain has a `store.ts` using `adapterStoreModuleFactory()`. The adapter handles REST communication and camelCase/snake_case conversion.

**Store API:** `getAll`, `getById`, `getOrFailById`, `generateNew`, `retrieveAll`
**Item API:** `mutable`, `reset()`, `update()`, `delete()`, `create()`

**Items are frozen by default.** Editing requires calling `.mutable` to get a writable copy, then `.update()` to sync back. This is the immutability-by-default principle in action.

**Custom store methods follow the same extension pattern:** KD-0236 D4 added `clearAll()` to `AttachmentStoreModule` following the existing `upload` extension pattern.

### 2.3 The Service Factory Pattern

All shared services are created via factory functions. Each app instantiates its own service instances with app-specific configuration.

**Factories:** `createAdapterStoreModule()`, `createHttpService()`, `createStorageService()`, `createRouterService()`, `createPageLength()`, `createMarkdownRenderer()`

**Per-app instances:** `src/apps/tenant/services/` and `src/apps/central/services/` call shared factories with their own config.

**When to use factories vs singletons:** Services that need per-app configuration (different API base URLs, storage prefixes) use factories. Services with identical behavior in both apps (toast, modal, error) are singletons in `@shared/services/` (KD-0204 D1).
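As a rough sketch of that split (the names, config shapes, and base URLs below are illustrative, not Kendo's actual service signatures):

```typescript
// Hedged sketch of the factory-versus-singleton split; names and config
// shapes are illustrative, not Kendo's actual services.
type HttpServiceConfig = { baseUrl: string };
type HttpService = { url(path: string): string };

// Per-app configuration goes through a factory: each app calls the shared
// factory with its own settings and owns the resulting instance.
export function createHttpService(config: HttpServiceConfig): HttpService {
  return { url: (path: string): string => `${config.baseUrl}${path}` };
}

// Hypothetical per-app instances, mirroring src/apps/*/services/:
export const tenantHttp = createHttpService({ baseUrl: '/tenant-api' });
export const centralHttp = createHttpService({ baseUrl: '/central-api' });

// Behavior that is identical in both apps stays a plain shared singleton:
export const toastService = {
  messages: [] as string[],
  show(message: string): void {
    this.messages.push(message);
  },
};
```

The factory gives each app an isolated, explicitly configured instance; the singleton is shared precisely because it has no per-app configuration to vary.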
**The factory decision is deeply consistent.** KD-0194 explicitly chose factories over provide/inject because "zero provide/inject usage exists in the codebase" and the factory pattern is used by "6+ services." KD-0195 D3 chose a factory for the markdown renderer. KD-0301 D1 chose props over provide/inject for MenuTabs. The codebase has zero provide/inject usage — it is an anti-pattern.

### 2.4 Component Code Style

**Rules:**

- `type` over `interface` (always, enforced by convention)
- Avoid `any` — use `unknown`
- Explicit return types for complex functions
- Props destructured via `const {user} = defineProps<{...}>()`
- `v-model` with named models for two-way bindings (KD-0194 D3: PaginationBar uses `v-model:page-length`)

### 2.5 The "Dumb Component, Smart Parent" Pattern

Components should be presentational; parents orchestrate. This pattern is reinforced repeatedly:

- KD-0280 D5: AI generation moved to page level, IssueForm becomes a pure form
- KD-0280 D6: Shared IssueForm replaces inline promote form in ReportDetail
- KD-0280 D7: Dismiss button belongs on the report card, not inside the shared form
- KD-0301 D1: MenuTabs becomes a pure presentational component via props
- KD-0349 D1: AttachmentGrid expects properly typed store data, not inline hacks

### 2.6 Multi-App Architecture (KD-0190)

Two Vue apps sharing a common layer, served by a single Vite instance.
**Key decisions:**

- Explicit path aliases: `@shared/`, `@tenant/`, `@central/` — no generic `@/` (KD-0190 D2)
- Separate CentralUser model in central database (KD-0190 D1)
- No WebSocket/Echo for central app (KD-0190 D5)
- Same kendo design system shared via `@shared/` (KD-0190 D6)
- Single Vite instance for dev and build (KD-0190 D10)
- Test directory mirrors source structure: `tests/js/apps/tenant/...` (KD-0190 D8)

### 2.7 Route Conventions

- Tenant app uses Dutch route paths via `createRouteSettingsFactory` translation factory
- Central app uses English paths with `createStandardRouteConfig` directly (KD-0302 D1)
- Route meta: `permission: {resource, action}` for auth gating (KD-0315 D5), replacing the older `requiresAdmin` pattern
- `canInProject(resource, action, projectOwnerId, entityOwnerId?)` for project-scoped checks (KD-0315 D21)
- Domains get DomainLayout wrapping; standalone pages (pricing, login, invited) skip it (KD-0302 D2)
- SidebarLink uses `currentRoute.matched.some()` for active detection (KD-0302 D3)

### 2.8 CSS and Layout Patterns

- UnoCSS for utility classes
- CSS custom properties (not TS constants) for layout dimensions (KD-0304 D1)
- `:style` bindings to bypass UnoCSS specificity issues (KD-0304 D3)
- Mobile padding applied at App.vue level, not per-layout (KD-0304 D2)
- Formatting handled by oxfmt via PostToolUse hooks — no manual formatting

---

## 3. Cross-Cutting Patterns

### 3.1 The Consistency Heuristic

Across all 38 DECISIONS.md files, the single most common reason for choosing one option over another is **consistency with existing codebase patterns**.

Examples:

- KD-0194 D1: Factory pattern chosen because "consistent with every other service in the codebase"
- KD-0195 D3: Factory for markdown because "consistent with every other service"
- KD-0204 D1: Singletons for toast/modal because they need no per-app config (unlike factories)
- KD-0301 D1: Props over provide/inject because "no new patterns introduced"
- KD-0315 D1: Integer IDs for roles because "every other model uses auto-incrementing integers"
- KD-0341 D17: Int-backed enum because "all other backend enums use int-backed enums"
- KD-0349 D1: Store pattern for report attachments because "consistent with issue/epic/comment pattern"
- KD-0349 D7: No nested ResourceData collections because "no precedent"

**The pattern is: when in doubt, do what the codebase already does.** Novel approaches require explicit justification.

### 3.2 The YAGNI Principle

Features are scoped aggressively. If there is no current need, it does not get built:

- KD-0195 D4: MentionSuggestions moved to tenant (not made generic) because "central app doesn't use RichTextArea. YAGNI."
- KD-0265 D2: Multi-report-to-one-issue linking deferred because "ship faster, real usage informs the right UX later"
- KD-0289 D1: Full subscribers/watchers system rejected (20-25 files) in favor of creator+assignee pattern (4-5 files)
- KD-0294 D1: Configurable retention period rejected in favor of hardcoded 180 days

### 3.3 Backend Stays Lean, Frontend Does More

A clear preference for pushing logic to the frontend when backend changes are unnecessary:

- KD-0183 D1: Multi-file upload via parallel frontend calls, zero backend changes
- KD-0183 D4: Attachment limit enforced frontend-only (internal tool, bypassing via API is negligible risk)
- KD-0276 D1: Creator filter is client-side only (issues already loaded in memory)
- KD-0280 D1: AI story generation for reports reuses existing issue endpoints, frontend maps the payload

### 3.4 The API Contract Pattern

**HTTP status codes are semantic:**

- 422: Validation errors (request body invalid)
- 409: Conflict with current resource state (KD-0270 D1: can't delete because of dependencies)
- 403: Authorization failure
- 201: Created (via `toResponseWithStatus(201)`)

**Feature flags:** Laravel Pennant for tenant-level feature gating. Web routes use `EnsureFeatureActive` middleware. MCP tools check `Feature::for($tenant)->active()` directly (KD-XXXX D2). Feature flag state delivered to frontend via `ProfileResourceData.features` (KD-0286 D1), not via a separate endpoint.

**Token/scope model:** Passport OAuth tokens with `read` + `write` scopes. CLI and MCP share the same token infrastructure but use different OAuth grant types (Device Flow for CLI, Authorization Code + PKCE for MCP — KD-0237 D8).

### 3.5 Multi-Tenancy Affects Everything

Database-per-tenant. DIY implementation, no external package.

**Routing:** Subdomain-based. `IdentifyCentral` runs first (marks `central.*` requests). `IdentifyTenant` resolves subdomain to tenant DB.
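The two-step resolution can be sketched as follows. TypeScript is used here for brevity; the real `IdentifyCentral` and `IdentifyTenant` are Laravel middleware, and the in-memory tenant map is a hypothetical stand-in for the central database lookup:

```typescript
// Hedged sketch of subdomain-based tenant identification. Hostnames,
// tenant names, and the tenantDatabases map are all hypothetical.
type RequestContext =
  | { kind: 'central' }
  | { kind: 'tenant'; database: string };

const tenantDatabases: Record<string, string> = {
  acme: 'tenant_acme',
  globex: 'tenant_globex',
};

export function identify(host: string): RequestContext {
  const [subdomain] = host.split('.');
  // IdentifyCentral runs first: central.* requests never touch a tenant DB.
  if (subdomain === 'central') return { kind: 'central' };
  // IdentifyTenant: resolve the subdomain to its dedicated database.
  const database: string | undefined = tenantDatabases[subdomain];
  if (database === undefined) throw new Error(`Unknown tenant: ${subdomain}`);
  return { kind: 'tenant', database };
}
```

Everything downstream of this step runs against the resolved tenant database, which is why the impact list below touches so many features.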
**Impact on every feature:**

- OAuth callbacks need exchange tokens because session cookies are subdomain-scoped (KD-0341 D3)
- Feature flags are per-tenant, resolved via Pennant (KD-0286)
- CLI stores tenant URL in config (KD-0237 D4)
- Scheduled commands must iterate all tenants via `TenantSwitcher` (KD-0294 D4)
- Central and tenant models are completely separate (separate databases, separate auth guards)
- Browser tab title uses tenant name from profile data (KD-0283)

### 3.6 The "Enrich Profile, Don't Add Endpoints" Pattern

When the frontend needs a small piece of data available at boot time, it gets added to the existing `GET /auth/user` profile response rather than creating a new endpoint:

- KD-0286 D1: Feature flags added to `ProfileResourceData` instead of separate `GET /api/features`
- KD-0283 D1: Tenant name added to `ProfileResourceData` instead of separate `GET /api/tenant`

### 3.7 PR and Branching Strategy

- Base branch for PRs: `development` (not `main`)
- Large features split into multiple plans/issues (KD-0186 D6: split into 3 incremental issues)
- Related small changes bundled into single PRs (KD-0320 D2: three related issues in one PR)
- Chained PRs for multi-phase work (KD-0190 D3: 3 chained PRs for multi-app architecture)
- Git worktrees for isolated feature work

---

## 4. Testing Patterns

### 4.1 Test Philosophy

- **100% coverage** on backend unit tests (Actions) and frontend component/store tests
- **TDD flow:** write tests before implementation (per `CLAUDE.md` Plan Mode rules)
- **Architecture tests** enforce structural patterns at CI time (15+ arch test files)
- **Mutation testing** via Infection on Actions (`composer test:mutation`)

### 4.2 Backend Testing

- Pest PHP for all backend tests
- Mockery for mocking dependencies
- Actions tested with mocked dependencies (not integration tests)
- Architecture tests in `tests/Arch/` enforce every structural convention:
  - `ActionsTest.php`: final readonly, single execute(), no facades
  - `ControllersTest.php`: no authorize(), must depend on Actions, no direct Eloquent writes
  - `FormRequestsTest.php`: no authorize() method
  - `ResourcesTest.php`: ResourceData patterns
  - `RoutesAuthorizationTest.php`: all auth routes have `->can()` middleware
  - `InteractionPermissionsTest.php`: 8 tests for Tier 2 permissions
  - `MigrationsTest.php`: rejects cascadeOnDelete()/nullOnDelete()
  - `AuditTest.php`: Actions touching auditable models depend on loggers
  - `ConfigTest.php`: no `config()` helper in container-resolved classes

### 4.3 Frontend Testing

**3-tier mock system:**

1. Centralized mock lists in `tests/mocks/lists/` (imported globally in `setup.ts`)
2. Source-level `__mocks__/` directories co-located with source files
3. Inline mocks in test files for one-off overrides

**Key rules:**

- Always use `:pipeline` test scripts (the base scripts use vitest watch which never exits)
- Tests mirror source directory structure: `tests/js/apps/tenant/domains/...`
- AAA format (Arrange-Act-Assert)
- Load `/vue-vitest-testing` skill before writing tests

**Page Integration Tests (ADR-0017):**

- Real child components mounted (no mocking child components)
- Only HTTP transport layer mocked (in-memory mock-server)
- Stores, translation, router, and auth all run real
- Separate vitest config and coverage thresholds

### 4.4 Permission Testing (KD-0340)

Permissions are tested at multiple layers:

- Backend: architecture tests enforce route-level `->can()` on all authenticated routes
- Backend: policy unit tests with permission matrix
- Frontend: component tests verify `can()`/`canInProject()` guards on buttons and routes
- Security audit identified 10 gaps (KD-0340) — IDOR, privilege escalation, unscoped validation — all fixed in one PR

---

## 5. Anti-Patterns (Explicitly Rejected)

The DECISIONS.md files are as valuable for what they reject as for what they accept.

### 5.1 Never: Provide/Inject

Zero usage in the codebase. Explicitly rejected in:

- KD-0194 D1: "Zero provide/inject usage exists in codebase; introduces a new pattern for one use case"
- KD-0194 D4: "No existing provide/inject patterns; overkill for a single boolean"
- KD-0301 D1: "Introduce new pattern not used anywhere in codebase; implicit dependency"

### 5.2 Never: ON DELETE CASCADE

All cascading is explicit in Actions (ADR-0002). Enforced by `tests/Arch/MigrationsTest.php`.

### 5.3 Never: Facades in Actions

Actions use constructor DI only. No `Auth::`, `DB::`, `Gate::` facades. Gate is injected as `Illuminate\Contracts\Auth\Access\Gate` (the contract, not the facade).

### 5.4 Never: Nested ResourceData Collections

KD-0349 D7 explicitly rejected nesting `AttachmentResourceData[]` inside `ReportResourceData`.
The pattern is: lean parent resource + dedicated endpoint for child collections.

### 5.5 Never: Raw Arrays Crossing Layer Boundaries

FormRequest -> DTO -> Action. No exceptions (ADR-0012).

### 5.6 Never: Pinia (or any third-party state management)

Custom adapter-store pattern. Pinia was never adopted and is not used anywhere.

### 5.7 Never: interface (TypeScript)

Always `type`. Convention is universal across the frontend.

### 5.8 Never: ULIDs or UUIDs for Primary Keys

All models use auto-incrementing integers. KD-0315 D1 explicitly switched from ULIDs back to integers for consistency.

### 5.9 Never: Soft Deletes (with one exception)

Only the `User` model uses `SoftDeletes`. All other deletions are hard deletes. KD-0294 D2 explicitly rejected soft deletes for notifications with extensive rationale.

### 5.10 Superseded Decisions

Some decisions were made and later reversed:

- KD-0236 D2: Per-attachment ownership check superseded by D5 (simplified authorization deferred to separate issue)
- KD-0315 D8: Users.read gating for navbar superseded by D11 (each sidebar link uses its own resource)
- KD-0277 D3: Always use GitHub App token (superseded initial fallback approach)

---

## 6. Pattern Decision Tree

When building a new feature in Kendo, the decisions follow a predictable path:

### Backend

1. **Where does business logic go?** -> Action (always)
2. **How does data enter the Action?** -> FormRequest -> DTO
3. **How does data leave the Action?** -> ResourceData (model-backed) or void
4. **How is auth handled?** -> Tier 1 (`->can()` middleware) for resource access, Tier 2 (`Gate::authorize()`) for contextual permissions
5. **How are deletions handled?** -> Explicit cascade in the Action, no FK cascading
6. **How is config accessed?** -> `#[Config('key')]` attribute
7. **Is there an audit trail?** -> Yes, if the entity is auditable. Logger injected into Action.

### Frontend

1. **Where does the feature live?** -> Domain directory under the correct app
2.
**How is state managed?** -> Adapter-store pattern with `adapterStoreModuleFactory()`
3. **How are shared services accessed?** -> Factory instances from `@tenant/services/` or `@central/services/`
4. **How are permissions checked?** -> `can()` for tenant resources, `canInProject()` for project resources
5. **How is the component structured?** -> Smart parent with dumb presentational children
6. **Should I use provide/inject?** -> No. Props or factory.
7. **Should I use Pinia?** -> No. Adapter-store.

### Cross-Cutting

1. **Should I add a new endpoint for this data?** -> Prefer enriching existing responses (e.g., ProfileResourceData)
2. **Should I make this configurable?** -> Not until there is a concrete need (YAGNI)
3. **Should I build for both apps?** -> Only if both apps actually use it. Otherwise, scope to one app.
4. **How should I split this work?** -> One plan per logically independent piece, chained PRs if needed

---

## 7. Quantitative Pattern Distribution

Across all 38 DECISIONS.md files (224+ decisions):

| Decision Type | Count | Examples |
|--------------|-------|---------|
| **Consistency with existing pattern** | ~65 | Factory over inject, integer IDs, int-backed enums |
| **Scope reduction / YAGNI** | ~30 | Defer watchers, skip configurable retention, frontend-only limits |
| **Security / authorization** | ~25 | IDOR fixes, password-required flows, scoped validation |
| **Component architecture** | ~20 | Dumb component/smart parent, shared vs domain-specific |
| **API design** | ~15 | Enrich profile, single endpoint, semantic status codes |
| **Data model** | ~15 | Separate models vs shared, column placement, cascade strategy |
| **PR / delivery strategy** | ~10 | Chained PRs, bundled issues, split plans |
| **Rejected / superseded** | ~10 | Provide/inject, ULIDs, soft deletes, string enums |
| **Performance / infrastructure** | ~5 | Synchronous processing, no partitioning, client-side filtering |

---

## 8. Open Questions

1. **Architecture test coverage for frontend domain boundaries** — The ratcheted violation list in ESLint suggests there are known boundary violations being grandfathered. How many remain, and is there a plan to eliminate them?
2. **Page Integration Tests rollout** — ADR-0017 mentions "phased rollout by shared component usage count." What is the current coverage of integration tests vs the target?
3. **Audit logging expansion** — Only 5 entities have audit loggers. The system is designed for more. Which entities are next in line?
4. **Feature flag lifecycle** — Pennant flags are added for new features but there is no documented process for removing them once a feature is stable and rolled out to all tenants.
5. **The adapter-store's scalability ceiling** — The custom reactive state management works well today. At what point (if ever) would the codebase benefit from a more established solution, and what would trigger that evaluation?
6. **Multi-tenant scheduled commands** — KD-0294 D4 noted that `ReapStaleAiRunsCommand` does not iterate tenants (a gap). Has this been fixed, and are there other commands with the same issue?
7. **Ownership transfer implications** — KD-0315 D19 added a dedicated `transferOwnership` ability. How does ownership transfer interact with the orphaned assignments system (KD-0292)?
---

## Related

- [kendo-app-plans-landscape.md](https://kendo-hq.pages.dev/raw/research/kendo-app-plans-landscape.md) — Feature map and decision archaeology from the same plans
- [../company/product-overview.md](https://kendo-hq.pages.dev/raw/company/product-overview.md) — Product features built on these patterns

---

# MCP Ecosystem Landscape 2026 Q2

*Source: `research/mcp-ecosystem-landscape-2026-q2.md` · [Raw](https://kendo-hq.pages.dev/raw/research/mcp-ecosystem-landscape-2026-q2.md)*

---
title: "MCP Ecosystem Landscape — Q2 2026"
date: 2026-04-04
type: analysis
tags: [mcp, competitors, ecosystem, market-trends]
sources:
  - https://www.atlassian.com/platform/remote-mcp-server
  - https://www.atlassian.com/blog/announcements/remote-mcp-server
  - https://support.atlassian.com/atlassian-rovo-mcp-server/docs/supported-tools/
  - https://github.com/atlassian/atlassian-mcp-server
  - https://developers.asana.com/docs/mcp-server
  - https://developers.asana.com/docs/using-asanas-mcp-server
  - https://monday.com/w/mcp
  - https://support.monday.com/hc/en-us/articles/32364501317906-monday-MCP-available-tools
  - https://github.com/mondaycom/mcp
  - https://developer.clickup.com/docs/connect-an-ai-assistant-to-clickups-mcp-server
  - https://developer.clickup.com/docs/mcp-tools
  - https://help.clickup.com/hc/en-us/articles/33335772678423-What-is-ClickUp-MCP
  - https://developers.notion.com/docs/mcp
  - https://www.notion.com/blog/notions-hosted-mcp-server-an-inside-look
  - https://developers.notion.com/guides/mcp/mcp-supported-tools
  - https://github.com/github/github-mcp-server
  - https://github.blog/changelog/2026-01-28-github-mcp-server-new-projects-tools-oauth-scope-filtering-and-new-features/
  - https://linear.app/docs/mcp
  - https://linear.app/changelog/2026-02-05-linear-mcp-for-product-management
  - https://github.com/useshortcut/mcp-server-shortcut
  - https://help.shortcut.com/hc/en-us/articles/36443434285844-MCP-Server
  - https://github.com/makeplane/plane-mcp-server
  - https://developers.plane.so/dev-tools/mcp-server
  - https://www.wrike.com/blog/wrike-mcp-server/
  - https://developers.wrike.com/wrike-mcp/
  - https://www.wrike.com/newsroom/wrike-launches-mcp-server-empowering-third-party-ai-agents-with-real-time-work-management-intelligence/
  - https://www.pulsemcp.com/servers/teamwork-go
  - https://www.pulsemcp.com/servers/todoist
  - https://dev.to/zarq-ai/state-of-ai-assets-q1-2026-143k-agents-17k-mcp-servers-all-trust-scored-2dc2
  - https://mcpmanager.ai/blog/mcp-adoption-statistics/
  - https://www.digitalapplied.com/blog/mcp-97-million-downloads-model-context-protocol-mainstream
  - https://zuplo.com/mcp-report
  - https://blog.modelcontextprotocol.io/posts/2025-11-25-first-mcp-anniversary/
  - https://www.pulsemcp.com/servers
  - https://github.com/TensorBlock/awesome-mcp-servers/blob/main/docs/project--task-management.md
  - https://fastmcp.me/blog/most-popular-mcp-tools-2026
  - https://www.merge.dev/blog/project-management-mcp-servers
  - https://www.cdata.com/blog/2026-year-enterprise-ready-mcp-adoption
---

# MCP Ecosystem Landscape — Q2 2026

## Executive Summary

**The window has closed.** The marketing strategy written in March 2026 claimed only 4 PM tools had MCP support (Linear, Shortcut, Plane, Kendo) with a ~12 month first-mover window. As of April 2026, **at least 10 major project management tools have official or first-party MCP servers**, including every tool in Kendo's competitive set. Jira, Asana, Monday.com, ClickUp, Notion, and GitHub all shipped MCP servers. Wrike and Teamwork have also entered. MCP has gone from niche differentiator to table stakes in under a year.

Kendo's MCP server (26 tools, 10 resources) is no longer rare — it's now a question of **depth** and **developer-workflow fit** rather than mere existence.

---

## 1. Who Has MCP Servers Now?
### Tier 1: Official First-Party MCP Servers (Vendor-Built and Maintained)

| Tool | MCP Server | Tools | Transport | Status | Notes |
|------|-----------|-------|-----------|--------|-------|
| **Jira** (Atlassian) | Atlassian Rovo MCP Server | ~46+ | Remote (Streamable HTTP) | GA | Covers Jira, Confluence, Compass, JSM, Teamwork Graph. OAuth 2.1. Hosted on Cloudflare. Anthropic was first official partner. |
| **Linear** | linear.app/docs/mcp | ~21 (expanded Feb 2026) | Remote + stdio | GA | Added initiatives, milestones, project updates in Feb 2026. Product management focus. |
| **Asana** | developers.asana.com/docs/mcp | ~30+ | Remote (Streamable HTTP v2) | GA | V2 launched Feb 2026. V1 (SSE) deprecated, shutting down May 2026. |
| **Monday.com** | monday.com/w/mcp | ~18 standard + dynamic API tools | Remote | GA | Available on all plans at no cost. Dynamic API Tools unlock full GraphQL surface. |
| **ClickUp** | developer.clickup.com/docs/mcp-tools | 20+ (official), 170+ (community) | Remote | Public Beta | Tasks, docs, time tracking, comments. No deletion tools (safety). All plans. |
| **Notion** | developers.notion.com/docs/mcp | ~18 | Remote (Streamable HTTP) | GA | Read/write. Search, content management, database queries. V2 with data sources. |
| **GitHub** | github.com/github/github-mcp-server | 51 | Remote + stdio | GA | 28k+ stars. Issues, PRs, Actions, Projects (consolidated toolset Jan 2026). |
| **Shortcut** | help.shortcut.com MCP Server | ~24 | Remote (OAuth) + stdio | GA | Stories, epics, iterations, objectives, docs. Read-only mode available. |
| **Plane** | developers.plane.so/dev-tools/mcp-server | 55+ | stdio, SSE, streamable HTTP | GA | Largest tool count in PM category. Issues, cycles, modules, work logs. Agent framework with @mention. |
| **Wrike** | developers.wrike.com/wrike-mcp | Unknown (full API coverage) | Remote | GA | Tasks, sprints, blockers, time entries. OAuth 2.0. Announced via press release. |
| **Teamwork** | PulseMCP listing | Unknown | Unknown | Available | Tickets, customers, companies. Less documented than others. |
| **Todoist** | PulseMCP listing | Unknown | stdio | Available | Task management via REST API. |
| **Kendo** | kendo.dev/docs | 26 tools, 10 resources | stdio | GA | Issues, sprints, time tracking, epics, GitHub sync. |

### Tier 2: Community / Third-Party MCP Servers

| Tool | Source | Notes |
|------|--------|-------|
| **Trello** | Multiple community repos (delorenj/mcp-server-trello, etc.) | No official Atlassian Trello MCP, but community implementations exist. Board, list, card, label management. |
| **Basecamp** | BusyBee3333/basecamp-mcp-2026-complete | Community-built, claims 100+ tools. Not official. |
| **Zoho Projects** | PulseMCP listing | Community implementation. |

### Summary

Of the 10 tools in Kendo's original competitive matrix, **every single one now has MCP support** — either official (Jira, Asana, Monday.com, ClickUp, Notion, GitHub, Linear, Shortcut, Plane) or community-built (Trello).

---

## 2. How Do They Compare to Kendo's 26 Tools + 10 Resources?

### Tool Count Comparison

| Tool | Approx. Tool Count | Scope | Time Tracking via MCP | Resources/Prompts |
|------|-------------------|-------|-----------------------|-------------------|
| **Plane** | 55+ | Issues, cycles, modules, work logs, agent framework | Yes (work logs) | Unknown |
| **GitHub** | 51 | Repos, issues, PRs, Actions, Projects, security | No | No |
| **Atlassian (Jira)** | 46+ | Jira, Confluence, Compass, JSM, Teamwork Graph | Via JSM/add-ons | Unknown |
| **Asana** | 30+ | Tasks, projects, portfolios, goals, custom fields | No | Unknown |
| **Kendo** | 26 tools + 10 resources | Issues, sprints, time tracking, epics, GitHub sync | **Yes (native)** | **Yes (10 resources)** |
| **Shortcut** | ~24 | Stories, epics, iterations, objectives, docs | No | Unknown |
| **Linear** | ~21 | Issues, projects, initiatives, milestones, comments | No | Unknown |
| **ClickUp** | 20+ (official) | Tasks, docs, time tracking, comments | Yes (timers + logs) | Unknown |
| **Monday.com** | ~18 + dynamic API | Boards, items, columns, groups, subitems | Via column values | Unknown |
| **Notion** | ~18 | Pages, databases, search, content management | No | Unknown |
| **Wrike** | Unknown | Tasks, sprints, blockers, time entries | Yes | Unknown |

### Analysis

- **Plane has overtaken everyone on raw tool count** (55+), significantly exceeding Kendo's 26.
- **Jira's scope is massive** — 46+ tools spanning 5 Atlassian products. Enterprise AI integration.
- **GitHub's 51 tools** cover far more than project management (repos, PRs, Actions, security).
- **Kendo's 26 tools + 10 resources sit mid-pack** on quantity, but the combination is still unique: native time tracking via MCP plus resource endpoints for structured data access.
- **ClickUp now has time tracking via MCP** — timers, historical log entries, and fetching entries. This was previously listed as absent.
- **Wrike also has time tracking via MCP** — a new entrant with built-in time entries.

### Where Kendo Still Has an Edge

1. **MCP resources** — Most competitors expose only tools. Kendo's 10 resources provide structured read access (project overview, sprint status, etc.) that agents can consume for context. This is architecturally more sophisticated for agent workflows.
2. **Time tracking + sprints + AI in a single tool** — While ClickUp and Wrike now have time tracking via MCP, Kendo bundles it at every tier without add-on pricing.
3. **Developer-workflow focus** — Kendo's MCP server is designed for the Claude Code / Cursor terminal workflow, not for enterprise chatbot integrations. This is a positioning choice, not a feature gap.
4. **EU hosting + audit logging** — None of these MCP servers change the data residency or compliance story.

### Where Kendo Has Lost Ground

1. **Tool count** — 26 tools is no longer impressive. Plane (55+), GitHub (51), and Jira (46+) all exceed it significantly.
2. **"One of only 4"** — This claim is now false. At least 10 major tools have official MCP servers.
3. **First-mover window** — The 12-month window did not materialize. It was more like 3-6 months before the wave hit.

---

## 3. Is the Window Closing?

**The window is closed.** Here's the timeline:

| Period | Event |
|--------|-------|
| Nov 2024 | MCP launched by Anthropic |
| Early 2025 | Linear, Shortcut, Plane ship MCP servers. Kendo ships MCP. |
| Jun 2025 | Atlassian Rovo MCP server goes GA (25 tools for Jira + Confluence) |
| Oct 2025 | GitHub MCP server adds Projects support |
| Late 2025 | OpenAI, Google, Microsoft all adopt MCP. SDK downloads hit 68M/month. |
| Dec 2025 | MCP donated to Agentic AI Foundation (Linux Foundation). Becomes vendor-neutral standard. |
| Jan 2026 | GitHub MCP server expands Projects tools + OAuth scope filtering |
| Feb 2026 | Asana ships MCP V2. Linear expands to product management (initiatives, milestones). Monday.com MCP goes GA. |
| Early 2026 | ClickUp MCP enters public beta. Wrike announces MCP server. Notion MCP GA. |
| Mar 2026 | SDK downloads reach 97M/month. 17K+ MCP servers indexed across registries. |
| Apr 2026 | Atlassian expands to 46+ tools across 5 products. |

**The acceleration was faster than predicted.** The marketing strategy estimated a 12-month window. In reality:

- The major enterprise players (Jira, Asana, Monday.com, ClickUp) all shipped within 6-9 months of the early movers.
- The MCP donation to the Linux Foundation in Dec 2025 removed the "Anthropic-only" perception and accelerated enterprise adoption.
- OpenAI, Google, and Microsoft all backing MCP made it an industry standard, not a niche protocol.

---

## 4. MCP Ecosystem Growth

### Scale

| Metric | Value | Date |
|--------|-------|------|
| Monthly SDK downloads | 97M | Mar 2026 |
| MCP servers in public registries | 17,000+ | Q1 2026 |
| PulseMCP directory | 11,170+ servers | Apr 2026 |
| GitHub MCP server stars | 28,300+ | Apr 2026 |
| Major vendor backing | Anthropic, OpenAI, Google, Microsoft, AWS | Nov 2025 |
| Governance | Agentic AI Foundation (Linux Foundation) | Dec 2025 |

### Growth Trajectory

- **Nov 2024:** ~100K monthly SDK downloads at launch
- **Apr 2025:** 22M downloads/month (after OpenAI adoption)
- **Jul 2025:** 45M downloads/month (Microsoft integration)
- **Nov 2025:** 68M downloads/month (AWS Bedrock support)
- **Mar 2026:** 97M downloads/month

For comparison: the React npm package took ~3 years to reach 100M monthly downloads. MCP achieved comparable scale in 16 months.

### Protocol Evolution

- **Streamable HTTP** is now the preferred transport (replacing SSE). Asana, Atlassian, and others have migrated.
- **OAuth 2.1** is becoming the standard auth method for remote MCP servers.
- **Remote MCP servers** (hosted, no local install) are up 4x since May 2025.
- The 2026 MCP Roadmap focuses on enterprise readiness: token management, access control, response filtering.

---

## 5. Discrepancies with Existing Kendo Documents

### `shared/competitors.md`

| Claim | Reality | Severity |
|-------|---------|----------|
| Jira listed as "MCP Support: No" | Atlassian has an official Rovo MCP Server with 46+ tools, GA since mid-2025 | **Critical** |
| Asana listed as "MCP Support: No" | Asana has an official MCP V2 server (30+ tools), GA | **Critical** |
| Monday.com listed as "MCP Support: No" | Monday.com has an official MCP server, available on all plans | **Critical** |
| ClickUp listed as "MCP Support: No" | ClickUp has an official MCP server in public beta | **High** |
| Notion listed as "MCP Support: No" | Notion has an official MCP server, GA | **High** |
| GitHub Projects listed as "MCP Support: No" | GitHub MCP server has 51 tools including Projects support | **High** |
| Trello listed as "MCP Support: No" | Community-built MCP servers exist (not official) | **Low** |

### `company/marketing-strategy.md`

| Claim | Reality | Severity |
|-------|---------|----------|
| "Only 4 PM tools support MCP" (S2, O1) | At least 10 major PM tools have official MCP servers | **Critical — remove this claim immediately** |
| "First-mover advantage window is ~12 months" (O1) | Window was ~3-6 months. It's closed. | **Critical — requires strategy revision** |
| "MCP support is rare" (Key Takeaways) | MCP support is now ubiquitous among PM tools | **Critical** |
| Linear "MCP but limited" (Feature Messaging Matrix) | Linear has 21+ tools with initiatives and milestones | **High — understates competitor** |
| Competitive table shows only Linear, Shortcut, Plane with MCP | All 10 competitors now have MCP | **Critical — table is outdated** |

---

## 6. Strategic Implications

### What This Means for Kendo

1. **"We have MCP" is no longer a differentiator.** Everyone has MCP. The messaging must shift from "we have it, they don't" to "ours is built for developer terminal workflows."
2. **Depth over existence.** The new competitive question is: how deep is your MCP integration? Kendo's 10 resources (structured context endpoints) are still unusual — most competitors only expose tools. This is a technical nuance worth emphasizing.
3. **The combination is the moat, not MCP alone.** MCP + native time tracking + EU hosting + audit logging + developer focus — the bundle still differentiates. No single competitor matches all five.
4. **Tool count gap.** Plane (55+), GitHub (51), and Jira (46+) all significantly exceed Kendo's 26 tools. Consider whether expanding Kendo's MCP tool surface is a priority.
5. **Messaging overhaul needed.** Every instance of "only 4 tools" and "12-month window" must be removed from marketing materials, landing pages, and strategy documents before public launch.

### Recommended Messaging Pivot

**Before (March 2026):** "One of only 4 PM tools with MCP support."

**After:** "The deepest MCP integration for developer workflows — 26 tools, 10 resources, and native time tracking. No add-ons, no enterprise tier required."

The emphasis shifts from scarcity to quality: structured resources for agent context, terminal-native design, and a bundle that competitors gate behind higher tiers or add-ons.

---

## 7. Open Questions for Future Research

1. **What do competitors' MCP resources look like?** Kendo has 10 resources — do any competitors expose MCP resources (not just tools)? This could be a genuine remaining differentiator.
2. **MCP usage analytics.** How many developers are actually using MCP servers for PM tools vs. just having them available? Adoption != usage.
3. **Pricing impact.** Several competitors (Monday.com, ClickUp) offer MCP on all plans. Does this change the willingness to pay for Kendo's MCP?
4. **Agent framework patterns.** Plane has an agent framework with @mention and an Agent Run lifecycle. Is this the next competitive frontier after basic MCP support?
5. **MCP auth standardization.** OAuth 2.1 and streamable HTTP are becoming standard. Does Kendo's stdio-only transport become a limitation for remote/hosted use cases?

---

*[Librarian] Research conducted 2026-04-04. All tool counts are approximate and based on public documentation and registry listings as of the research date. Exact counts change frequently as vendors update their MCP servers.*

---

## Related

- [ai-native-dev-tools-landscape-2026.md](https://kendo-hq.pages.dev/raw/research/ai-native-dev-tools-landscape-2026.md) — Broader AI-native positioning context
- *mcp-apps-kendo-pilot.md* — Follow-up on the next MCP ecosystem shift: **MCP Apps** (SEP-1865, GA 2026-01-26) — interactive UI widgets rendered inside Claude Desktop. Asana, Monday.com, Canva, Figma, Box, and Slack were launch partners. Kendo pilot scoped at 3-5 days.
- [../shared/competitors.md](https://kendo-hq.pages.dev/raw/shared/competitors.md) — Competitive landscape updated from these findings
- [../company/marketing-strategy.md](https://kendo-hq.pages.dev/raw/company/marketing-strategy.md) — Go-to-market strategy updated with MCP reality check

---

# Unified AI SDKs (TypeScript) 2026

*Source: `research/unified-ai-sdks-typescript-2026.md` · [Raw](https://kendo-hq.pages.dev/raw/research/unified-ai-sdks-typescript-2026.md)*

---
title: "Unified Multi-Provider AI SDKs for TypeScript — Landscape and Recommendation"
date: 2026-04-04
type: comparison
tags: [ai-sdk, typescript, anthropic, openai, content-factory, multi-provider, agents]
sources:
  - https://ai-sdk.dev/docs/introduction
  - https://vercel.com/blog/ai-sdk-6
  - https://ai-sdk.dev/providers/ai-sdk-providers/anthropic
  - https://ai-sdk.dev/cookbook/api-servers/express
  - https://openai.github.io/openai-agents-js/
  - https://github.com/openai/openai-agents-js
  - https://openai.github.io/openai-agents-js/extensions/ai-sdk/
  - https://www.npmjs.com/package/@openai/agents
  - https://tanstack.com/ai/latest/docs
  - https://github.com/TanStack/ai
  - https://mastra.ai/docs
  - https://github.com/mastra-ai/mastra
  - https://github.com/zya/litellmjs
  - https://strapi.io/blog/openai-sdk-vs-vercel-ai-sdk-comparison
  - https://github.com/anthropics/anthropic-sdk-typescript
  - https://github.com/anthropics/claude-agent-sdk-typescript
---

# Unified Multi-Provider AI SDKs for TypeScript — Landscape and Recommendation

## Summary

The content factory currently uses `@anthropic-ai/claude-agent-sdk` (Anthropic's Claude Agent SDK) for all LLM interactions. To support OpenAI models alongside Anthropic without maintaining separate provider packages, the **Vercel AI SDK** (`ai` package, v6) is the clear winner: 10M+ weekly downloads, first-class support for both Anthropic and OpenAI, streaming, structured output, tool calling, agent patterns, extended thinking support, and it works in plain Node.js/Express (no Edge runtime required). The migration path from the Claude Agent SDK's `query()` pattern to the AI SDK's `generateText()`/`streamText()` is straightforward.
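What "unified" buys us reduces to one design move: agent code depends on a single call signature, and the provider is injected at the call site. A toy, self-contained sketch of that pattern — the interface and factory names here are illustrative, **not** the real AI SDK API:

```typescript
// Toy sketch of the provider-injection pattern a unified SDK gives you.
// `LanguageModel`, `fakeAnthropic`, and `fakeOpenAI` are illustrative
// stand-ins, not real AI SDK types or providers.
interface LanguageModel {
  modelId: string;
  generate(prompt: string): string;
}

// Stand-ins for what @ai-sdk/anthropic and @ai-sdk/openai provide.
function fakeAnthropic(modelId: string): LanguageModel {
  return { modelId, generate: (p) => `[${modelId}] ${p}` };
}
function fakeOpenAI(modelId: string): LanguageModel {
  return { modelId, generate: (p) => `[${modelId}] ${p}` };
}

// Agent logic never names a vendor, so swapping providers is one line.
function runWriter(model: LanguageModel): string {
  return model.generate('Draft the release notes');
}

const viaClaude = runWriter(fakeAnthropic('claude-sonnet-4-6'));
const viaGpt = runWriter(fakeOpenAI('gpt-4o')); // the one-line swap
```

The matrix below scores each SDK on how well it delivers this pattern in practice — real providers, streaming, structured output — rather than as a toy.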
## Comparison Matrix

| Criteria | Vercel AI SDK (ai) | OpenAI Agents SDK (@openai/agents) | TanStack AI (@tanstack/ai) | Mastra (@mastra/core) | LiteLLM JS (litellmjs) | Claude Agent SDK (current) |
|----------|-------------------|-----------------------------------|--------------------------|-----------------------|------------------------|---------------------------|
| **npm weekly downloads** | 10.2M | 463K | 44K | 576K | Dead (no npm) | 3.6M |
| **Latest version** | 6.0.146 | 0.8.2 | 0.10.0 (alpha) | -- | 0.12.0 (Jan 2024) | 0.2.92 |
| **Multi-provider** | 15+ providers | OpenAI native + adapter for others | OpenAI, Anthropic, Google, Ollama | 94 providers (via AI SDK) | 8 providers, partial | Anthropic only |
| **Streaming** | Yes (streamText, pipeToResponse) | Yes (StreamedRunResult) | Yes | Yes | Yes (basic) | Yes (async iterator) |
| **Structured output** | Yes (Zod/JSON Schema) | Yes (via tools) | Yes (Zod) | Yes (Zod) | No | Yes (outputFormat) |
| **Tool calling** | Yes (tool(), dynamicTool()) | Yes (6 tool types + MCP) | Yes (isomorphic tools) | Yes | No | Yes (tools array) |
| **Agent patterns** | Yes (ToolLoopAgent, v6) | Yes (Agent, Handoff, Guardrails) | Yes (tool loop) | Yes (Agent, Workflow, RAG) | No | Minimal (query loop) |
| **Extended thinking** | Yes (providerOptions.anthropic.thinking) | No (OpenAI only) | Unknown | Yes (via AI SDK) | No | Yes (native) |
| **Prompt caching** | Yes (cacheControl) | No | Unknown | Yes (via AI SDK) | No | Yes (native) |
| **Express/Node.js** | Yes (pipeTextStreamToResponse) | Yes | Yes | Yes | Yes | Yes |
| **TypeScript-first** | Yes | Yes | Yes | Yes | Yes | Yes |
| **Actively maintained** | Yes (Vercel-backed) | Yes (OpenAI-backed) | Alpha stage | Yes (YC W25, $13M) | No (last commit Jan 2024) | Yes (Anthropic-backed) |
| **Install** | `npm i ai @ai-sdk/anthropic @ai-sdk/openai` | `npm i @openai/agents` | `npm i @tanstack/ai` | `npm i mastra @mastra/core` | -- | `npm i @anthropic-ai/claude-agent-sdk` |

## Detailed Analysis

### 1. Vercel AI SDK (`ai` package) — RECOMMENDED

**What it is:** The dominant TypeScript AI SDK, maintained by Vercel. v6 (current) introduces a formal `Agent` abstraction (`ToolLoopAgent`) and unifies structured output with tool calling. Provider-agnostic: install `@ai-sdk/anthropic` for Claude, `@ai-sdk/openai` for GPT/o-series, and swap with one line.

**Key strengths for Kendo:**

- **Drop-in provider switching.** Change `anthropic('claude-sonnet-4-6')` to `openai('gpt-4o')` — same `generateText()`/`streamText()` call, same tool definitions, same output handling.
- **Extended thinking support.** `providerOptions: { anthropic: { thinking: { type: 'adaptive' } } }` returns `reasoningText` alongside `text`. This is critical if we ever want to expose Claude's reasoning chain.
- **Structured output.** `generateText()` with `output: Output.object({ schema: z.object({...}) })` replaces our current `outputFormat` JSON schema approach. Zod integration means compile-time type safety.
- **Streaming to Express.** `result.pipeTextStreamToResponse(res)` works in plain Node.js — no Edge runtime needed, despite what some comparison articles claim. A full example exists at `ai-sdk.dev/cookbook/api-servers/express`.
- **Prompt caching.** Anthropic prompt caching is exposed via `providerOptions.anthropic.cacheControl`.
- **Massive ecosystem.** 10.2M weekly downloads. Every tutorial, every example, every AI library assumes AI SDK compatibility. The `@ai-sdk/anthropic` provider alone has 4.1M weekly downloads.
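The "fallback parsing" that typed structured output eliminates is worth seeing concretely. A minimal, self-contained sketch of the hand-written guard the content factory currently needs — `ContentSpec` and its fields are hypothetical, not the factory's real schema:

```typescript
// Schematic of manual "JSON schema + fallback parsing": parse the model's
// raw text, validate the shape by hand, and bail to a fallback on mismatch.
// `ContentSpec` is an illustrative shape, not Kendo's actual schema.
interface ContentSpec {
  title: string;
  wordCount: number;
}

function parseSpec(raw: string): ContentSpec | null {
  try {
    const data = JSON.parse(raw);
    if (typeof data.title === 'string' && typeof data.wordCount === 'number') {
      return { title: data.title, wordCount: data.wordCount };
    }
    return null; // schema mismatch — caller takes the fallback path
  } catch {
    return null; // model emitted non-JSON — same fallback path
  }
}
```

With AI SDK structured output, this guard disappears: the Zod schema validates at runtime and `result.object` is already typed at compile time.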
**Provider-specific features:**

- Anthropic: extended thinking (adaptive + budget), prompt caching, computer use tool, PDF support, `disableParallelToolUse`, speed/effort settings, MCP server connections
- OpenAI: function calling, structured outputs, embeddings, file search, web search

**Migration path from Claude Agent SDK:**

```typescript
// BEFORE (claude-agent-sdk)
const conversation = query({
  prompt: userPrompt,
  options: {
    systemPrompt: SYSTEM,
    model: 'claude-haiku-4-5-20251001',
    tools: [],
    outputFormat: { type: 'json_schema', schema: OUTPUT_SCHEMA },
  },
});
for await (const message of conversation) { ... }

// AFTER (ai sdk)
const result = await generateText({
  model: anthropic('claude-haiku-4-5-20251001'),
  system: SYSTEM,
  prompt: userPrompt,
  output: Output.object({ schema: zodSchema }),
});
const spec = result.object; // typed!
```

**Gotchas:**

- v6 deprecates `generateObject()` in favor of `generateText()` with an `output` property — a migration codemod is available (`npx @ai-sdk/codemod v6`)
- Provider-specific features require `providerOptions` — not every feature is portable across providers
- The `@ai-sdk/gateway` dependency in v6 suggests Vercel is steering toward their paid gateway service, though direct provider connections still work fine
- Bundle size is small (19.5 kB gzipped for the OpenAI provider), but you install one package per provider

### 2. OpenAI Agents SDK (`@openai/agents`)

**What it is:** OpenAI's agent framework for TypeScript, evolved from the Swarm experiment. Provides `Agent`, `Handoff`, and `Guardrails` primitives plus 6 tool types including MCP support. Primary use case: multi-agent orchestration with OpenAI models.
**Key strengths:**

- **Rich agent primitives.** Agent handoffs, guardrails, sessions, tracing — a more opinionated agent framework than the AI SDK
- **MCP-native.** Built-in `MCPServerStdio` and `MCPServerSSE` support
- **Realtime agents.** Voice agent support with interruption detection
- **Multi-provider via adapter.** The `@openai/agents-extensions` package bridges to Vercel AI SDK providers:

```typescript
import { aisdk } from '@openai/agents-extensions/ai-sdk';
import { anthropic } from '@ai-sdk/anthropic';

const model = aisdk(anthropic('claude-sonnet-4-6'));
const agent = new Agent({ model, instructions: '...', tools: [...] });
```

**Why NOT for Kendo's content factory:**

- **OpenAI-first design.** Anthropic support is a second-class citizen via an adapter that's still in beta. Feature parity is not guaranteed — OpenAI-specific features (hosted tools, file search, web search) won't work with Anthropic models.
- **Heavier abstraction.** The Agent/Handoff/Guardrail pattern adds complexity we don't need. Our content factory has a simple four-agent pipeline — we don't need handoffs or guardrails.
- **Lower adoption.** 463K weekly downloads vs the AI SDK's 10.2M. The extensions package is only at 46K downloads.
- **0.x version.** Still pre-1.0; the API may change significantly.

**When it would make sense:** If Kendo ever builds a complex multi-agent system with dynamic handoffs, guardrails, and MCP tools — and OpenAI is the primary model — the Agents SDK would be the right choice. For our use case (content generation with structured output), it's overkill.

### 3. TanStack AI (`@tanstack/ai`)

**What it is:** Provider-agnostic AI SDK from the TanStack team (React Query, TanStack Router). Focuses on "no vendor lock-in" with clean TypeScript APIs and isomorphic tool definitions.
**Key strengths:**

- Truly vendor-neutral (no Vercel platform tie-in)
- Isomorphic tools (define once, run on server or client)
- Supports OpenAI, Anthropic, Ollama, Google Gemini, OpenRouter

**Why NOT for Kendo:**

- **Alpha stage.** v0.10.0, API is unstable. "Alpha 2" blog post from TanStack.
- **Tiny adoption.** 44K weekly downloads — 230x smaller than the AI SDK.
- **Missing documentation.** Extended thinking, prompt caching, and provider-specific features are undocumented.
- **No ecosystem.** No Express integration examples, no agent framework, limited community content.

**When it would make sense:** If the Vercel AI SDK ever becomes too tightly coupled to Vercel's commercial platform and we need a pure open-source alternative. Worth revisiting in 6-12 months when it reaches 1.0.

### 4. Mastra (`@mastra/core`)

**What it is:** Full-featured TypeScript agent framework from the Gatsby team (YC W25, $13M funding). Built on top of the Vercel AI SDK for LLM calls; adds agents, workflows, RAG, memory, evals, and MCP support.

**Key strengths:**

- Full agent framework with memory, workflows, RAG, evals
- 94 providers via AI SDK integration
- Supervisor pattern for multi-agent orchestration
- Observational memory (background agents compress conversation history)

**Why NOT for Kendo's content factory:**

- **Framework, not library.** Mastra wants to own the entire agent lifecycle. Our content factory is a focused tool, not a platform.
- **Excessive abstraction.** We need `generateText()` with structured output, not RAG pipelines and memory systems.
- **Dependency chain.** Mastra -> AI SDK -> provider SDK: two layers of abstraction above the model. If the AI SDK already gives us what we need, adding Mastra is pure overhead.
- **Young project.** Active but still rapidly evolving APIs.

**When it would make sense:** If Kendo builds a standalone AI agent product with persistent memory, multi-step workflows, and RAG, Mastra would be worth evaluating. For the content factory, it's like using Rails when you need a single Express endpoint.

### 5. LiteLLM JS (`litellmjs`)

**What it is:** Community JavaScript port of the Python LiteLLM library. The official LiteLLM is Python-only; this is an unofficial adaptation.

**Why NOT for Kendo:**

- **Dead project.** Last commit January 2024. No longer on npm. 147 GitHub stars.
- **No tool calling.** Only basic completions and embeddings.
- **Partial provider support.** 8 providers, many incomplete.
- **No structured output, no agents, no streaming beyond basic chunks.**

The only viable LiteLLM path for TypeScript is running the Python proxy server and calling it via an OpenAI-compatible API — which adds operational complexity for no benefit when the AI SDK exists.

### 6. Anthropic SDKs (current setup)

**`@anthropic-ai/claude-agent-sdk`** (what we use): Anthropic's agent SDK that wraps the Claude API with a `query()` function, structured output, tool use, and session management. 3.6M weekly downloads.

**`@anthropic-ai/sdk`** (lower-level): Direct Anthropic API client. 12.2M weekly downloads. Does not support other providers.

**Neither SDK supports non-Anthropic models.** The only multi-provider path via Anthropic's SDKs is a gateway service (Braintrust, Requesty) that translates between API formats — adding latency, cost, and a third-party dependency.

## Recommendation for Kendo

**Migrate the content factory from `@anthropic-ai/claude-agent-sdk` to the Vercel AI SDK (`ai` + `@ai-sdk/anthropic`).** Then, when OpenAI model support is needed, add `@ai-sdk/openai` and swap the model string. No other code changes required.
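One way to make that "swap the model string" concrete is to keep the per-agent model assignment as plain data, so switching a role to OpenAI is a one-line config change. A hedged, self-contained sketch — the role-to-model mapping mirrors the agent files and model IDs mentioned in this doc, but the routing helper itself is illustrative, not an existing Kendo module:

```typescript
// Sketch: per-agent model routing as data. With the AI SDK, `ModelChoice`
// would resolve to anthropic(modelId) or openai(modelId); here we return
// the plain choice so the routing logic stays testable on its own.
type Provider = 'anthropic' | 'openai';

interface ModelChoice {
  provider: Provider;
  modelId: string;
}

const AGENT_MODELS: Record<string, ModelChoice> = {
  strategist: { provider: 'anthropic', modelId: 'claude-haiku-4-5-20251001' },
  writer:     { provider: 'anthropic', modelId: 'claude-sonnet-4-6' },
  evaluator:  { provider: 'anthropic', modelId: 'claude-sonnet-4-6' },
  scorer:     { provider: 'anthropic', modelId: 'claude-haiku-4-5-20251001' },
};

function resolveModel(role: string): ModelChoice {
  const choice = AGENT_MODELS[role];
  if (!choice) throw new Error(`Unknown agent role: ${role}`);
  return choice;
}
```

Trying GPT-4o for the writer then becomes `writer: { provider: 'openai', modelId: 'gpt-4o' }` — and nothing else in the agent code changes.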
### Migration scope estimate

The content factory has 4 agent files that use `query()`:

- `server/agents/strategist.ts` — structured output (Haiku)
- `server/agents/writer.ts` — free-form text (Sonnet)
- `server/agents/evaluator.ts` — structured output (Sonnet)
- `server/agents/scorer.ts` — structured output (Haiku)

Each file follows the same pattern: call `query()`, iterate the async result, extract the output. The AI SDK equivalent is `generateText()` with an optional `output` for structured data. Estimated migration: **half a day of work** per agent, including testing. The prompt files (`server/prompts/`) need zero changes — they're just strings.

### What we gain

1. **Provider flexibility.** Add `@ai-sdk/openai` and test GPT-4o or o3 for any agent role. Compare quality/cost/speed without rewriting agent logic.
2. **Better structured output.** Zod schemas with compile-time type inference replace our manual JSON schema + fallback parsing.
3. **Ecosystem alignment.** The AI SDK is the de facto standard. Future tools, tutorials, and libraries assume it.
4. **Extended thinking.** Anthropic extended thinking is exposed via `providerOptions` — no special SDK needed.
5. **Prompt caching.** Reduce costs on repeated system prompts via `cacheControl`.

### What we lose

1. **The Claude Agent SDK's `permissionMode`/`persistSession` options.** These are specific to the Claude Code environment. Our content factory doesn't use them meaningfully (all agents run with `tools: []` and `persistSession: false`).
2. **Direct Anthropic SDK features** we might not know about. The AI SDK provider is a thin wrapper, so provider-specific capabilities are generally accessible via `providerOptions`.

### Package changes

```diff
// package.json dependencies
- "@anthropic-ai/claude-agent-sdk": "^0.2.85",
+ "ai": "^6.0.0",
+ "@ai-sdk/anthropic": "^3.0.0",
+ // Add when needed:
+ // "@ai-sdk/openai": "^3.0.0",
```

## Open Questions

1. **Per-agent model selection.** Should we make the model configurable per agent at runtime (e.g., via the dashboard), or keep it as a constant per agent file? The AI SDK makes this trivial — the model is just a function parameter.
2. **Cost tracking.** The AI SDK returns `usage` data (prompt/completion tokens). Should we track costs per run in `meta.json`? The current setup doesn't do this.
3. **Streaming to WebSocket.** The content factory streams progress via WebSocket. The AI SDK's `streamText()` returns an async iterable that could pipe partial content to the frontend in real time — today we only send progress events, not partial text.
4. **OpenAI model candidates.** Which OpenAI models would we actually test? GPT-4o for the writer (creative)? o3 for the evaluator (reasoning)? This needs experimentation once the SDK is swapped.
5. **Fallback chains.** The AI SDK supports model fallbacks (try Anthropic, fall back to OpenAI on failure). Worth implementing for reliability, or unnecessary complexity for a solo-founder tool?

---
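If we do pursue cost tracking (Open Question 2), the shape of the helper is simple: multiply the token counts from the SDK's usage report by a per-model price table and write the result into `meta.json`. A hedged sketch — the `Usage` field names and all prices below are illustrative placeholders, not real rates; look up current per-million-token pricing before relying on this:

```typescript
// Sketch for Open Question 2: derive a per-run USD cost from token usage.
// `Usage` is our own shape (map the SDK's usage object into it); the price
// table is a PLACEHOLDER, not actual vendor pricing.
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

interface Pricing {
  inputPerMTok: number;  // USD per 1M input tokens (illustrative)
  outputPerMTok: number; // USD per 1M output tokens (illustrative)
}

const PRICES: Record<string, Pricing> = {
  'claude-haiku-4-5-20251001': { inputPerMTok: 1, outputPerMTok: 5 },
  'claude-sonnet-4-6':         { inputPerMTok: 3, outputPerMTok: 15 },
};

function runCostUsd(modelId: string, usage: Usage): number {
  const p = PRICES[modelId];
  if (!p) return 0; // unknown model: still record tokens, just skip the cost
  return (
    (usage.inputTokens / 1_000_000) * p.inputPerMTok +
    (usage.outputTokens / 1_000_000) * p.outputPerMTok
  );
}
```

Storing both the raw token counts and the computed cost per run keeps the numbers auditable even when the price table goes stale.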