
Fact Checker Agent

Thinklio Built-in Agent Specification Version 0.1 | March 2026


1. Purpose and Problem Statement

The Fact Checker Agent is a quality gate. It sits between the Writer Agent and the Report Writer, and its job is to verify that what has been written is accurate, well-sourced, and consistent with the intended voice and format.

It does not rewrite. It annotates, flags, and scores. Any changes resulting from its output are either made by the Writer Agent (if the coordinator loops back) or by the user manually.

Three distinct verification tasks are in scope:

  • Factual accuracy — claims in the draft are checked against the source list. Claims that cannot be traced to a source are flagged. Claims that contradict their cited source are flagged.
  • Citation integrity — where the draft includes citations, these are verified to exist, to be correctly formatted, and to actually support the adjacent claim.
  • Voice and style compliance — the draft is checked against the org voice profile (and user voice profile if applicable) for prohibited language, reading level, and structural requirements from the format template.

These are separable concerns and can be run independently, but in the standard pipeline all three run together.


2. Position in the Pipeline

Writer Agent  →  [Fact Checker]  →  Report Writer
               (loop back to Writer if corrections needed)

If the Fact Checker returns critical flags, the coordinator can loop back to the Writer Agent with a correction brief derived from the Fact Checker's output. This loop has a configurable max iteration count to prevent infinite cycling.
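The correction loop described above can be sketched as follows. This is a minimal illustration, not the coordinator's actual implementation: the `Report`, `check`, and `rewrite` names are stand-ins for the VerificationReport, the Fact Checker run, and the Writer Agent pass.

```python
from dataclasses import dataclass


@dataclass
class Report:
    """Stand-in for a VerificationReport; only the critical count matters here."""
    critical_flags: int


def run_pipeline(draft, check, rewrite, max_loops=3):
    """Hypothetical coordinator loop: re-run the Writer Agent until the
    Fact Checker returns no critical flags or the loop budget is spent.
    Returns (final_draft, report, needs_manual_review)."""
    report = check(draft)
    loops = 0
    while report.critical_flags > 0 and loops < max_loops:
        draft = rewrite(draft, report)  # Writer applies the correction brief
        report = check(draft)
        loops += 1
    # If critical flags remain after the budget, force a manual review flag
    return draft, report, report.critical_flags > 0
```

The key property is the bounded loop: even a Writer that never converges terminates after `max_loops` iterations with the manual-review flag raised.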


3. Invocation Modes

Programmatic (agent-to-agent)
A coordinator passes the draft and the source map. The Fact Checker returns an annotated draft and a verification report. No UI is required.

Standalone (user-initiated)
A user submits a draft and a source list for checking. Results are shown in the review UI. Useful for checking content written outside Thinklio before publishing.


4. Input Requirements

The Fact Checker requires:

  • A Draft object from the Writer Agent (or equivalent structured content)
  • A SourceMap — a mapping of which source list(s) apply to which sections of the draft. In a multi-section pipeline (e.g. newsletter), each section may have been written from a different source list, and the checker needs to know this to avoid falsely flagging cross-section claims.
  • The FormatTemplate that was used to produce the draft (for style and structure checking)
  • The OrgVoiceProfile (and UserVoiceProfile if applicable)

5. Configuration

5.1 Admin Configuration

  Setting                  Description
  Max correction loops     Maximum number of times the coordinator may loop back to the Writer Agent before forcing a manual review flag
  Fact check strictness    strict (all untraced claims are flagged) or lenient (only direct contradictions are flagged)
  Style check enabled      Whether voice/style compliance checking is active for this workspace
  External verification    Whether the agent may query external APIs to verify claims beyond the source list (e.g. live fact-check services)

5.2 Run-time Parameters

  Parameter         Type                                Description
  draft             Draft                               The draft to check
  source_map        SourceMap                           Section-to-source-list mapping
  format_template   reference                           Template used to produce the draft
  voice_profiles    OrgVoiceProfile, UserVoiceProfile?  Voice profiles to check against
  checks            enum[]                              Which checks to run: factual, citation, voice, structure; defaults to all
  strictness        enum                                strict or lenient; overrides the admin default if permitted
  correction_brief  boolean                             If true, the agent produces a structured correction brief suitable for passing back to the Writer Agent
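One way to resolve these run-time parameters against the admin defaults is sketched below. The helper name and the `strictness_override_permitted` knob are assumptions for illustration; the spec says only that a per-run strictness overrides the admin default "if permitted".

```python
DEFAULT_CHECKS = ("factual", "citation", "voice", "structure")


def build_run_params(draft, source_map, *, checks=None, strictness=None,
                     correction_brief=False, admin_strictness="strict",
                     strictness_override_permitted=True):
    """Hypothetical helper: normalise run-time parameters, falling back to
    admin defaults when a per-run value is absent or not permitted."""
    effective = admin_strictness
    if strictness in ("strict", "lenient") and strictness_override_permitted:
        effective = strictness
    return {
        "draft": draft,
        "source_map": source_map,
        "checks": list(checks) if checks else list(DEFAULT_CHECKS),
        "strictness": effective,
        "correction_brief": bool(correction_brief),
    }
```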

6. Source Map

The source map is a first-class data structure, not a flat list. It must capture which sources are authoritative for which parts of the draft.

SourceMap
├── map_id              UUID
├── draft_id            UUID
└── sections[]
    ├── section_id      UUID
    ├── section_name    string
    └── source_list_ids UUID[]

When the pipeline is assembled by a coordinator, the source map is constructed automatically from the Research Agent and Writer Agent run records. When the Fact Checker is invoked standalone, the user constructs the source map in the UI.
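The SourceMap structure above, and the lookup the checker performs when deciding which source lists apply to a section, might look like this (string IDs stand in for UUIDs, and the class names mirror the tree rather than any confirmed Thinklio API):

```python
from dataclasses import dataclass


@dataclass
class SectionMapping:
    section_id: str
    section_name: str
    source_list_ids: list


@dataclass
class SourceMap:
    map_id: str
    draft_id: str
    sections: list  # list[SectionMapping]


def source_lists_for(source_map: SourceMap, section_id: str) -> list:
    """Return the source-list IDs that are authoritative for a section.
    An empty result means no sources apply, so every claim in that
    section would come back flagged as unsourced."""
    for section in source_map.sections:
        if section.section_id == section_id:
            return section.source_list_ids
    return []
```

Keeping the mapping per-section is what prevents the false cross-section flags mentioned in section 4: a claim is only checked against the lists mapped to its own section.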


7. Verification Logic

7.1 Factual Accuracy

For each verifiable claim in the draft:

  1. Extract the claim as a discrete statement
  2. Attempt to match it to a statement in the applicable source list(s)
  3. Classify the match:
       • Supported — the claim is directly supported by a source statement
       • Inferred — the claim is consistent with the sources but not directly stated (flagged with lower severity)
       • Unsourced — no matching source found (flagged)
       • Contradicted — the claim conflicts with a source statement (flagged, high severity)
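The four-way classification can be illustrated with a toy matcher. This is deliberately naive: exact text equality stands in for "directly supported", token overlap stands in for "consistent but not stated", and a precomputed contradiction set stands in for conflict detection. A real checker would use semantic matching, not string comparison.

```python
def classify_claim(claim, supporting, contradicting):
    """Toy classifier for the four match classes. `supporting` and
    `contradicting` are sets of normalised source statements."""
    c = claim.lower().strip()
    if c in contradicting:
        return "contradicted"
    if c in supporting:
        return "supported"
    # Crude stand-in for 'consistent with sources but not directly stated':
    # at least half the claim's tokens appear in some supporting statement.
    tokens = set(c.split())
    for statement in supporting:
        overlap = tokens & set(statement.split())
        if len(overlap) / max(len(tokens), 1) >= 0.5:
            return "inferred"
    return "unsourced"
```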

7.2 Citation Integrity

For each citation in the draft:

  1. Verify the cited source exists in the source list
  2. Verify the citation is correctly formatted per the template's citation style
  3. Verify the cited source actually supports the adjacent claim (applies factual accuracy check to the specific source-claim pair)
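Steps 1 and 2 of the citation check reduce to extraction plus lookup. The sketch below assumes an invented inline citation format, `[S:<id>]`; the real format comes from the template's citation style, which this spec does not fix.

```python
import re

# Assumed citation syntax for illustration only: [S:<source-id>]
CITATION_PATTERN = re.compile(r"\[S:([A-Za-z0-9-]+)\]")


def verify_citations(draft_text, known_source_ids):
    """Extract citations and partition out those that do not exist in the
    source list. Returns (all_cited_ids, unknown_ids); unknown entries
    would each become a citation_error flag."""
    cited = CITATION_PATTERN.findall(draft_text)
    unknown = [c for c in cited if c not in known_source_ids]
    return cited, unknown
```

Step 3 then reuses the factual-accuracy check on each (claim, source) pair, so it needs no separate machinery.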

7.3 Voice and Style Compliance

  • Check for prohibited words/phrases from the org voice profile
  • Estimate reading level and compare to target
  • Check structural requirements from the format template (e.g. required sections present, word limits within tolerance)
  • If a user voice profile is active, check that user-specific style markers are present where expected
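Reading-level estimation (the second bullet above) is typically done with a formula such as Flesch-Kincaid; the open questions in section 12 note that the canonical method is undecided, so the sketch below is one possible choice, with a crude vowel-group syllable heuristic that makes the result an estimate only.

```python
import re


def fk_grade(text):
    """Approximate Flesch-Kincaid grade level. Syllable counting uses a
    vowel-group heuristic, which over- or under-counts some words, so
    treat the output as a rough estimate for comparison to a target."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0

    def syllables(word):
        return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (total_syllables / len(words))
            - 15.59)
```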

8. Output Structure

The Fact Checker returns a VerificationReport:

VerificationReport
├── report_id               UUID
├── draft_id                UUID
├── generated_at            timestamp
├── overall_status          enum (passed | passed_with_warnings | requires_correction | failed)
├── scores
│   ├── factual_accuracy    float (0–1)
│   ├── citation_integrity  float (0–1)
│   └── voice_compliance    float (0–1)
├── flags[]
│   ├── flag_id             UUID
│   ├── type                enum (unsourced | contradicted | inferred | citation_error | voice_violation | structure_violation)
│   ├── severity            enum (critical | warning | info)
│   ├── section_id          UUID
│   ├── claim_text          string
│   ├── source_ref          UUID | null
│   ├── explanation         string
│   └── suggestion          string | null
└── correction_brief        CorrectionBrief | null
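The spec enumerates the overall_status values but not the rule that derives them from the flags, so the mapping below is an assumption: criticals force requires_correction, warnings alone downgrade to passed_with_warnings, and a critical count past a (hypothetical) threshold marks the draft failed.

```python
def overall_status(flags, fail_threshold=10):
    """Possible derivation of overall_status from flag severities.
    `fail_threshold` is an invented cutoff; the spec does not define
    when a draft fails outright rather than requiring correction."""
    if not flags:
        return "passed"
    critical = sum(1 for f in flags if f["severity"] == "critical")
    if critical >= fail_threshold:
        return "failed"
    if critical > 0:
        return "requires_correction"
    if any(f["severity"] == "warning" for f in flags):
        return "passed_with_warnings"
    return "passed"
```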

8.1 Correction Brief

When correction_brief: true, the agent produces a structured brief suitable for passing back to the Writer Agent:

CorrectionBrief
├── brief_id            UUID
├── draft_id            UUID
└── corrections[]
    ├── section_id      UUID
    ├── instruction     string
    └── flag_ids        UUID[]
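Assembling the brief is essentially grouping flags by section, as sketched below. The instruction wording here is invented placeholder text; a real agent would generate each instruction from the flag explanations and suggestions.

```python
from collections import defaultdict
from uuid import uuid4


def build_correction_brief(draft_id, flags):
    """Sketch of correction-brief assembly: group flags by section and
    emit one correction entry per section, preserving flag IDs so the
    Writer Agent can trace each instruction back to its flags."""
    by_section = defaultdict(list)
    for flag in flags:
        by_section[flag["section_id"]].append(flag)
    corrections = [
        {
            "section_id": section_id,
            "instruction": "Address %d flag(s): %s" % (
                len(section_flags),
                "; ".join(f["explanation"] for f in section_flags),
            ),
            "flag_ids": [f["flag_id"] for f in section_flags],
        }
        for section_id, section_flags in by_section.items()
    ]
    return {"brief_id": str(uuid4()), "draft_id": draft_id,
            "corrections": corrections}
```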

9. User Interface

9.1 Submission Screen (standalone mode)

  • Draft input: paste text or select a saved draft
  • Source list: select from saved source lists or paste
  • Section mapping: if multiple sections and multiple source lists, a simple mapping UI (drag source lists to sections)
  • Checks to run: checkboxes (Factual, Citations, Voice, Structure)
  • Strictness: toggle

9.2 Results View

  • Overall status banner (passed / warnings / corrections needed / failed) with score summary
  • Annotated draft view: flags shown inline with colour coding by severity
  • Critical flags: red underline with tooltip
  • Warnings: amber underline
  • Info: blue underline
  • Flag list panel: filterable by type and severity, each flag linked to its inline position
  • Per-flag actions: dismiss, accept suggestion, mark as reviewed
  • Correction brief: expandable panel showing structured corrections, with a "Send to Writer Agent" button
  • Export: download annotated markdown, download report JSON

9.3 Loop View (coordinator-managed)

When the Fact Checker is running as part of a coordinator pipeline, a simplified status view shows:

  • Current iteration count vs. maximum
  • Flags resolved vs. remaining
  • Option to force manual review if automated corrections are not converging


10. Data Model Integration

  Data Object   Interaction
  Note          Verification report saved as a Note attached to the draft's parent record
  Task          Correction loop tracked as Task subtasks
  Item          Checking a reply to an Item before sending

11. Use Cases

UC-1: Pipeline quality gate

A coordinator passes a newsletter draft and its source map to the Fact Checker. The agent returns two critical flags (contradicted claims) and three warnings (inferred claims with no direct source match). The coordinator generates a correction brief and loops back to the Writer Agent for a second pass. The second draft passes with warnings only. The coordinator proceeds to the Report Writer.

UC-2: Standalone content check

A user has written an article outside Thinklio and wants to verify it before publishing. They paste the text, attach a source list, and run all four checks. The Fact Checker identifies two unsourced claims and a reading level above the org target. The user edits manually and re-runs.

UC-3: Voice compliance audit

A workspace admin runs the Fact Checker on a batch of previously published articles (via a coordinator) to audit voice compliance after updating the org voice profile. The checker flags articles using prohibited language introduced in the new profile.


12. Open Questions

  • For the correction loop, should the correction brief be passed verbatim to the Writer Agent, or should the coordinator summarise it into a new section brief? Verbatim is more precise; summarised may produce more natural rewrites.
  • Reading level estimation is imprecise for short texts and varies by algorithm. Which method (Flesch-Kincaid, SMOG, Gunning Fog) should be canonical, and should this be configurable per workspace?
  • Should inferred claims be flagged by default, or only in strict mode? Inferred claims are often legitimate writing practice and may generate noise.
  • External verification APIs (live fact-check services) introduce latency and cost. Should this be opt-in per run rather than an admin toggle?
