Hacker News
Show HN: Continue – Source-controlled AI checks, enforceable in CI
Continue (https://docs.continue.dev) runs AI checks on every PR. Each check is a source-controlled markdown file in `.continue/checks/` that shows up as a GitHub status check. Checks run as full agents: rather than just reading the diff, they can read and write files, run bash commands, and use a browser. If a check finds something, it fails and offers a one-click diff to accept; otherwise it passes silently.
Here’s one of ours:
.continue/checks/metrics-integrity.md
---
name: Metrics Integrity
description: Detects changes that could inflate, deflate, or corrupt metrics (session counts, event accuracy, etc.)
---
Review this PR for changes that could unintentionally distort metrics.
These bugs are insidious because they corrupt dashboards without triggering errors or test failures.
Check for:
- "Find or create" patterns where the "find" is too narrow, causing entity duplication (e.g. querying only active sessions, missing completed ones, so every new commit creates a duplicate)
- Event tracking calls inside loops or retry paths that fire multiple times per logical action
- Refactors that accidentally remove or move tracking calls to a path that executes with different frequency
Key files: anything containing `posthog.capture` or `trackEvent`
This check passed without noise for weeks, but then caught a PR that would have silently deflated our session counts. We added it in the first place because we'd been burned in the past by bad data, only noticing when a dashboard looked off.
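To make the first two bullets concrete, here's a minimal sketch of both bug patterns in hypothetical TypeScript. The `Session` store and `posthog` client below are toy stand-ins invented for illustration, not code from the actual codebase:

```typescript
// Toy stand-ins for a session store and the PostHog client, for illustration only.
type Session = { id: number; repo: string; status: "active" | "completed" };

const sessions: Session[] = [];
let nextId = 1;

const posthog = {
  capture(event: string, props: Record<string, unknown>) {
    console.log("capture", event, props);
  },
};

// BUG 1: "find or create" where the find is narrower than the create.
// Completed sessions are never matched, so every new commit against a repo whose
// session already completed creates a duplicate and inflates session counts.
function findOrCreateSession(repo: string): Session {
  const existing = sessions.find(
    (s) => s.repo === repo && s.status === "active" // too narrow: misses "completed"
  );
  if (existing) return existing;
  const created: Session = { id: nextId++, repo, status: "active" };
  sessions.push(created);
  return created;
}

// FIX: find on the same key the create uses, regardless of status.
function findOrCreateSessionFixed(repo: string): Session {
  const existing = sessions.find((s) => s.repo === repo);
  if (existing) return existing;
  const created: Session = { id: nextId++, repo, status: "active" };
  sessions.push(created);
  return created;
}

// BUG 2: a tracking call inside a retry loop fires once per attempt rather than
// once per logical action, quietly inflating event counts without any error.
async function pushWithRetry(push: () => Promise<void>, attempts = 3): Promise<void> {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    posthog.capture("commit_pushed", { attempt }); // should fire once, after success
    try {
      await push();
      return;
    } catch {
      // swallow and retry
    }
  }
}
```

Neither bug throws or fails a test; a check like the one above is meant to flag both spots and offer a diff.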
To get started, paste this into Claude Code or your coding agent of choice:
Help me write checks for this codebase: https://continue.dev/walkthrough
It will:
- Explore the codebase and use the `gh` CLI to read past review comments
- Write checks to `.continue/checks/`
- Optionally, show you how to run them locally or in CI
Would love your feedback!
esafak
Do you support exporting metrics to something standard like CSV? https://docs.continue.dev/mission-control/metrics
A brief demo would be nice too.
sestinj
One of the fundamental differences between checks and code review bots is that you trade breadth for consistency. There are two things Continue should never, ever do:
1. find a surprise bug or offer an unsolicited opinion
2. fail to catch a commit that doesn't meet your specific standards
sestinj
- We do! Right now you can export some metrics as images, or share a public link to the broader dashboard. Curious whether people want other formats: https://imgur.com/a/7sgd81r
- Working on a Loom video soon!
megamorf
sestinj
Some of these are:
- Having a local experience to run them with Claude Code, etc.
- Making it easy to accept/reject suggested changes
- A single folder dedicated to just checks so you don't have to think about triggers
- Built-in feedback loops so you can tune your checks over time
- Metrics so you can easily track which checks have a high suggestion/merge rate
Are you using a lot of `gh-aw`?
bachittle
sestinj
tl;dr
- a _lot_ of people still use the VS Code extension and so we're still putting energy toward keeping it polished (this becomes easier with checks : ))
- our checks product is powered by an open-source CLI (we think this is important), which we recommend for JetBrains users
- the general goal is the same: we start by building tools for ourselves, share them with people in a way that avoids creating walled gardens, and aim to amplify developers (https://amplified.dev)