How CSWatch Works
The end-to-end view: how a suspect player goes from a routine lookup to a public conviction, and why each step in the pipeline exists.
CSWatch processes about 30,000 player lookups and 1,200 new reports per week. Each touchpoint runs through a fixed pipeline that mixes algorithmic signals with human judgment. Below is the actual flow, stage by stage, with the design rationale for each step.
1. Lookup or auto-discovery
A player profile enters our system in one of two ways: either someone explicitly looks them up by Steam ID or vanity URL, or our crawler discovers them through public match data, leaderboards, or friend-of-friend traversal. Every profile we see gets a pass through the algorithmic signal layer.
Why this exists: a non-trivial fraction of cheaters never gets reported because their victims don't realise they were cheated. Auto-discovery lets us flag suspects before anyone files a manual report.
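A minimal sketch of the lookup-path input handling. The 7656-prefix, 17-digit check reflects Steam's real SteamID64 format, but the function name and the fall-back-to-vanity behaviour are illustrative assumptions, not CSWatch's actual code:

```python
import re

def classify_lookup(query: str) -> str:
    """Classify a lookup query as a 64-bit Steam ID or a vanity name.

    A SteamID64 is 17 digits starting with 7656; anything else is
    treated as a vanity URL fragment. Full profile URLs are reduced
    to their last path segment before checking.
    """
    q = query.strip().rstrip("/").split("/")[-1]
    if re.fullmatch(r"7656\d{13}", q):
        return "steamid64"
    return "vanity"
```

Whichever form the query takes, the resolved profile then flows into the same algorithmic signal layer as crawler-discovered accounts.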
2. Algorithmic signal scan
Each profile is scored across four dimensions: ban record (any VAC, game, or trade ban on the account or among close friends), account legitimacy (account age, library diversity, whether badges were earned organically, hours-vs-rank ratio), community feedback (existing reports and verdicts), and behavioural anomalies (statistical outliers in their public match stats). The composite produces the 0-100 trust score visible on the profile page.
Why this exists: the algorithmic layer is fast, scalable, and gives reviewers context before they invest demo-watching time. It also surfaces auto-flags for accounts that nobody has reported but probably should be reviewed.
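In shape, the composite is a weighted blend of the four sub-scores. The weights below are hypothetical placeholders (the real weighting isn't public), but the structure shows how per-dimension signals collapse into a single 0-100 number:

```python
from dataclasses import dataclass

@dataclass
class SignalScores:
    """Per-dimension sub-scores, each normalised to 0-100."""
    ban_record: float          # VAC/game/trade bans on account or close friends
    account_legitimacy: float  # age, library diversity, badges, hours-vs-rank
    community_feedback: float  # existing reports and verdicts
    behavioural: float         # statistical outliers in public match stats

# Hypothetical weights -- chosen for illustration only.
WEIGHTS = {
    "ban_record": 0.35,
    "account_legitimacy": 0.25,
    "community_feedback": 0.25,
    "behavioural": 0.15,
}

def trust_score(s: SignalScores) -> int:
    """Weighted average of the four dimensions, clamped to 0-100."""
    raw = (
        WEIGHTS["ban_record"] * s.ban_record
        + WEIGHTS["account_legitimacy"] * s.account_legitimacy
        + WEIGHTS["community_feedback"] * s.community_feedback
        + WEIGHTS["behavioural"] * s.behavioural
    )
    return round(min(100.0, max(0.0, raw)))
```

Because every dimension is pre-normalised, a perfect profile scores 100 and a maximally suspicious one scores 0 regardless of how the weights are tuned.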
3. Demo evidence and rendering
For a report to enter the Overwatch queue, it needs a CS2 demo file. Our backend renders the demo into a viewable video clip focused on the suspect's POV, optionally jumping to specific timestamps the reporter flagged. Rendering takes 2-15 minutes per demo depending on length and queue depth; Pro users get priority.
Why this exists: without a demo, a report is just an opinion. Demos are the only evidence type that lets a reviewer verify the suspect's actual in-game behavior. We won't convict anyone on stats alone.
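One way to implement the "Pro users get priority" scheduling is a two-tier priority queue; this is a sketch of that idea under assumed class and method names, not the production scheduler:

```python
import heapq
import itertools

class RenderQueue:
    """Demo render queue where Pro users' jobs jump ahead of free users'.

    Priority 0 = Pro, 1 = free; ties break by submission order, so the
    queue stays first-in-first-out within each tier.
    """
    def __init__(self) -> None:
        self._heap: list[tuple[int, int, str]] = []
        self._counter = itertools.count()  # monotonic tiebreaker

    def submit(self, demo_id: str, is_pro: bool) -> None:
        priority = 0 if is_pro else 1
        heapq.heappush(self._heap, (priority, next(self._counter), demo_id))

    def next_job(self) -> str:
        _, _, demo_id = heapq.heappop(self._heap)
        return demo_id
```

Under this scheme a Pro demo submitted later still renders before any waiting free-tier demo, which matches the observed 2-15 minute spread: queue depth dominates render time for free users.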
4. Community Overwatch review
The case enters the public review queue. Trusted Overwatch reviewers — community members who have applied, been vetted, and maintain a 70%+ accuracy rate — watch the demo and cast a verdict: guilty, not guilty, or insufficient evidence. Each reviewer also sets an optional confidence modifier and writes a brief justification.
Why this exists: human judgment is irreplaceable for the cases that matter — borderline cheats, novel cheating techniques, and decisions where context matters. The review pool is governed: low-accuracy reviewers get their votes down-weighted, and reviewers below 60% accuracy for a sustained period are rotated out.
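The down-weighting can be sketched as a simple accuracy-to-weight mapping. The linear ramp below is an assumption for illustration; only the 60% rotation floor comes from the text:

```python
def vote_weight(accuracy: float, active: bool = True) -> float:
    """Map a reviewer's accuracy rate (0.0-1.0) to a vote weight.

    Hypothetical scheme: reviewers below the 60% floor (or rotated
    out) contribute nothing; above it, weight ramps linearly from
    0.0 at 60% accuracy to 1.0 at 100%.
    """
    if not active or accuracy < 0.60:
        return 0.0  # rotated out / below the accuracy floor
    return (accuracy - 0.60) / 0.40
```

The exact curve matters less than the property it enforces: a verdict from a historically unreliable reviewer moves a case less than one from a proven reviewer.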
5. Conviction or dismissal
A case converts to a public conviction when it accumulates 3+ guilty votes (weighted by reviewer reliability) with 66%+ consensus. Below that threshold, the case closes without affecting the suspect's public record. Insufficient-evidence votes count toward neither side; they indicate the demo doesn't support a confident verdict.
Why these thresholds exist: three votes prevents single-reviewer mistakes, weighted votes account for reviewer reliability, and the 66% consensus floor ensures that close-call reviews don't produce convictions. False-positive minimisation matters because public convictions stick.
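The conviction rule condenses to a few lines. The function name and vote representation are assumptions; the thresholds (3+ weighted guilty votes, 66% consensus, insufficient-evidence votes excluded from consensus) come straight from the stage above:

```python
def case_outcome(votes: list[tuple[str, float]]) -> str:
    """Decide a case from (verdict, weight) pairs.

    verdict is 'guilty', 'not_guilty', or 'insufficient'.
    Conviction requires 3+ weighted guilty votes AND 66%+ consensus
    among decisive votes; 'insufficient' counts toward neither side.
    """
    guilty = sum(w for v, w in votes if v == "guilty")
    not_guilty = sum(w for v, w in votes if v == "not_guilty")
    decisive = guilty + not_guilty
    if guilty >= 3 and decisive > 0 and guilty / decisive >= 0.66:
        return "convicted"
    return "dismissed"
```

Note how the two gates interact: three unanimous guilty votes convict, but three guilty against two not-guilty is only 60% consensus and falls short, so split reviews dismiss by design.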
6. Public record and reputation tracking
Convicted players appear on the leaderboard and are flagged on their profile pages. The conviction itself includes the reviewer count, consensus percentage, and verdict justifications — anyone looking up the player can see the evidence trail. Reputation scores update continuously as new data flows in (new bans, new reports, new match-history snapshots).
Why this exists: the whole point. Without a public, accountable record, the entire pipeline has no consequence. Convictions exist to inform queue decisions and create deterrence at the margin.
What this design optimises for
- Low false-positive rate. Multiple gates (auto-flag → demo evidence → 3-vote consensus → 66% agreement) make wrongful convictions structurally hard.
- Public accountability. Every step is visible. Anyone can verify how a verdict was reached.
- Speed on novel cheats. Community review can spot new cheating techniques the algorithmic layer doesn't yet have signals for.
- Sustainable scaling. The auto-flag layer absorbs volume; humans only spend time on cases with submitted evidence.
Want to dig deeper?
Read the technical breakdowns on the blog, see the FAQ for specific questions, or explore live convictions on the leaderboard.