The latest features, improvements, and fixes shipped in HeyQ.
By the way, the changelog below is largely auto-generated by Q. You can have the same for your own changelog by using our changelog page template.
Learn more in our docs
Until now, your success criteria lived in three different places — Growth Lab experiments, a spreadsheet somewhere, and old chat threads. Q couldn’t see any of it. That changes today.
The new Goals & Metrics page is a singleton — one per project, always at the same place. It’s where you define what success looks like and watch it unfold. Every item is a metric. Give it a target, and it becomes a goal. Remove the target, and it goes back to being a tracked number. No structural difference — just different rendering.
Each metric shows a current / target value with its unit and a status badge. That works for MRR, signups, activation rate, ARPU — anything countable. When you set a target with a start date and deadline, status is calculated automatically — on_track, at_risk, behind, or completed — based on how your actual progress compares to where you should be at this point in time. No deadline? A static fallback applies (≥60% of target → on track, ≥30% → at risk, below that → behind). You can override the auto-calculated status for any goal, and the override is visually indicated so nothing looks wrong without explanation.
Pure metrics with no target skip status entirely and show a trend indicator (↑ ↓ →) instead.
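The auto-calculation above can be sketched roughly as follows. This is a minimal illustration, not HeyQ's actual implementation: only the status names and the no-deadline fallback thresholds come from the release note, while the function name and the 50%-of-expected cutoff for `at_risk` in the deadline branch are assumptions.

```typescript
type MetricStatus = "on_track" | "at_risk" | "behind" | "completed";

// Hypothetical sketch of the status auto-calculation described above.
function computeStatus(
  current: number,
  target: number,
  startDate?: Date,
  deadline?: Date,
  now: Date = new Date(),
): MetricStatus {
  const progress = current / target;
  if (progress >= 1) return "completed";

  // With a start date and deadline, compare actual progress to where
  // you "should be" at this point in time.
  if (startDate && deadline) {
    const elapsed = now.getTime() - startDate.getTime();
    const total = deadline.getTime() - startDate.getTime();
    const expected = Math.min(Math.max(elapsed / total, 0), 1);
    if (progress >= expected) return "on_track";
    // Assumed cutoff: within half of expected progress counts as at_risk.
    if (progress >= expected * 0.5) return "at_risk";
    return "behind";
  }

  // Static fallback when no deadline is set (from the release note).
  if (progress >= 0.6) return "on_track";
  if (progress >= 0.3) return "at_risk";
  return "behind";
}
```

An overridden status would simply bypass this function, which is why the override gets a visual indicator.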
Metrics are organized by area — Growth, Revenue, Product, Financial, Engagement, and Custom (with a user-defined label). Empty areas stay hidden until you need them. When a tracking period ends, archive its metrics: the data is preserved and Q can reference it for trend reasoning (“last quarter’s MRR was…”).
This is the part that changes how Q works for you. Q reads the Metrics page in every relevant conversation — without you having to ask. When you’re talking through budget, Q surfaces burn vs. revenue context. When you’re planning a Growth Lab experiment, Q connects it to your actual metric baselines. When you discuss runway, Q factors in your current MRR and spending data (if your Stack & Costs page exists).
Your goals are no longer disconnected from your decisions.
MCP tokens give external tools (Cursor, Claude Desktop) read access to /metrics/index.json — and nothing else. Write operations return 403, always. Goal targets, deadlines, and status overrides are human decisions. The metrics page is a source of truth, not a surface for AI to rewrite.
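The read-only contract can be pictured as a tiny request handler. This is an illustrative sketch under assumptions: the handler name and route logic are invented; only the `/metrics/index.json` path, the read-only scope, and the always-403 rule for writes come from the text.

```typescript
type McpRequest = { method: string; path: string };
type McpResponse = { status: number; body?: string };

// Sketch of the read-only contract: MCP tokens may read
// /metrics/index.json and nothing else; writes always get 403.
function handleMcpRequest(req: McpRequest, metricsJson: string): McpResponse {
  if (req.path !== "/metrics/index.json") {
    return { status: 404 }; // nothing else is exposed
  }
  const isRead = req.method === "GET" || req.method === "HEAD";
  if (!isRead) {
    // Goal targets, deadlines, and overrides are human decisions.
    return { status: 403 };
  }
  return { status: 200, body: metricsJson };
}
```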
Shipped as part of DEV-0047 · Webhook-based auto-updates and the Q sidebar write tool are scoped to a follow-up release once the page is validated in production.
HeyQ already gives you structured product truth and a mission board for development work. Now it covers the other half of building a product: figuring out how people find it.
Growth work has always been the orphan of product management. Experiments live in spreadsheets. Learnings get forgotten. Every new AI session starts cold on your marketing context — you re-explain your positioning, your ICP, your channel bets, just like you used to re-explain your product.
Growth Lab closes that gap.
A new structured page type — built on the same file-first, AI-native architecture as Mission Boards — for tracking your distribution experiments.
Channels are your sources of growth. Each one has a lifecycle status you move it through as you learn:
Untested → Testing → Promising → Winner / Killed
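One way to encode that lifecycle is as a small transition map. The statuses come from the diagram above; the exact allowed transitions (for instance, whether a channel can be killed straight from Testing) are an assumption for illustration.

```typescript
type ChannelStatus = "untested" | "testing" | "promising" | "winner" | "killed";

// Assumed transition map for the lifecycle Untested → Testing →
// Promising → Winner / Killed; terminal states allow no further moves.
const transitions: Record<ChannelStatus, ChannelStatus[]> = {
  untested: ["testing"],
  testing: ["promising", "killed"],
  promising: ["winner", "killed"],
  winner: [],
  killed: [],
};

function canMove(from: ChannelStatus, to: ChannelStatus): boolean {
  return transitions[from].includes(to);
}
```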
Experiments live under channels. Each experiment is a structured test with:
Every experiment has its own markdown body file — the same click-to-open UX as a mission — where you write the design, record results, and capture learnings that persist across sessions.
When adding channels, you can choose from preset categories to get started quickly:
Four new page templates are now available under a Growth category in the Add Page flow:
| Template | What it captures |
|---|---|
| Positioning | Competitive alternatives, unique attributes, differentiated value, target segment, market category |
| Audience | ICP profile, pain points, watering holes, language/jargon, buying triggers |
| Messaging | One-liner, elevator pitch, headlines by segment, objection handling |
| Channel Strategy | Which channels to test, rationale, current bets, budget |
These are plain markdown pages — immediately readable by any MCP-connected tool.
Q in the AI sidebar can now work directly with your Growth Lab. Ask it to:
Q executes it the same way it manages missions.
getProjectContext now includes a structured Growth Lab summary — channels by status, recent experiments with verdicts. Every AI tool you connect to HeyQ now knows your distribution strategy alongside your product truth.
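For a rough picture of what that summary might look like, here is a hypothetical shape. The field names and verdict values are illustrative assumptions, not HeyQ's documented schema; only "channels by status" and "recent experiments with verdicts" come from the text.

```typescript
// Hypothetical shape of the Growth Lab summary in getProjectContext.
interface GrowthLabSummary {
  channelsByStatus: Record<string, string[]>; // e.g. { promising: ["SEO"] }
  recentExperiments: {
    channel: string;
    name: string;
    verdict: "win" | "loss" | "inconclusive" | null; // null = still running
  }[];
}

// Example value in that shape (invented data, for illustration only).
const sample: GrowthLabSummary = {
  channelsByStatus: { testing: ["Cold outreach"], promising: ["SEO"] },
  recentExperiments: [
    { channel: "SEO", name: "Comparison pages", verdict: "win" },
    { channel: "Cold outreach", name: "Founder DMs", verdict: null },
  ],
};
```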
After you’ve shipped 3 or more features with no Growth Lab, Q will surface a suggestion: you’ve built the thing — now figure out how people find it. Growth Lab is one click away.
This is the first entry in the HeyQ changelog. Rather than a release note for a single feature, it’s a snapshot — a record of where the product stands at the moment we started keeping one.
HeyQ is the product management tool for the vibe coding era.
Cursor, Claude, Lovable, v0 — AI development tools are everywhere. But the PM layer hasn’t caught up. Builders are shipping faster than ever and losing coherence faster than ever. Every new AI session starts cold. Decisions made last week are forgotten by this week’s prompts. The feature you just shipped contradicts a choice you made two weeks ago.
Vibe coding gave you speed. It took away your product’s memory.
HeyQ fixes that. You define your product truth once — vision, scope, decisions, boundaries — and every AI tool stays aligned. Stop re-explaining. Start shipping what you meant to build.
By the time this changelog launched, HeyQ had shipped a complete, production-grade v1. Here’s what’s live:
The fundamental loop is working: define → build → stay aligned.
Every Q suggestion is a proposal. Pages show diffs. Users accept or reject before anything is saved. There’s no silent drift — the audit trail records whether each change was made by a human, Q, an MCP tool, or the system. Version history lets you restore any previous state.
This is intentional. HeyQ is drift-resistant by design.
The most powerful thing HeyQ does is serve your product truth to any AI tool in your workflow.
MCP tokens let you connect Cursor, Claude Desktop, Claude CLI, VS Code, or any MCP-compatible tool with fine-grained per-page read/write control. Copy a config snippet. Add a .cursor/rules file. Every AI session starts informed.
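As an illustration, a URL-based server entry in Cursor's `mcp.json` generally looks like the sketch below. The HeyQ endpoint and token are placeholders, and the exact fields Cursor expects may differ by version; copy the real snippet from your project's MCP settings rather than this one.

```json
{
  "mcpServers": {
    "heyq": {
      "url": "https://your-project.example/mcp",
      "headers": { "Authorization": "Bearer <your-heyq-mcp-token>" }
    }
  }
}
```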
Export is also available: ZIP archive, Cursor Rules (.mdc), or Agent Skills (SKILL.md) for Claude Code.
When a pull request is opened on a connected repo, Q automatically comments with the relevant product truth — scoped decisions, brief context, mission links. It uses AI to triage the PR type and adjusts depth accordingly. A feature PR gets the full context. A typo fix gets silence.
Q can also reverse-engineer product truth from an existing codebase — useful for builders who coded first and never documented.
The backlog is full. The priorities that matter most right now:
This isn’t just a release log. It’s a commitment to transparency — to building in public and communicating clearly about what changed, why, and what it means for the people building with HeyQ.
Entries will be drafted with help from Q (the AI), grounded in actual done missions, and published when they represent a coherent set of shipped changes.
Every mission needs a Q. Welcome to the log.