Newsletter Creator: Custom Agent Case Study

Thinklio Agent Studio — Custom Agent Example | Version 0.1 | March 2026


1. Purpose of This Document

This case study serves two purposes. First, it demonstrates how a custom agent is composed in the Thinklio Agent Studio using built-in pipeline agents as components. Second, it stress-tests the pipeline architecture against a real-world scenario, surfacing assumptions and gaps that inform the built-in agent specifications.

The custom agent built here is a Newsletter Creator. It produces a complete, publication-ready newsletter in markdown — with image placeholder suggestions — from a theme and a format template. It uses the Research, Writer, Fact Checker, Report Writer, and Data agents, coordinated by a custom coordinator agent.


2. What the Agent Studio Provides

The Agent Studio is the workspace-level tool for composing custom agents. It does not require code. A custom agent consists of:

  • A coordinator agent — the orchestration logic, defined as a workflow of steps, conditions, and agent invocations
  • Tool agents — built-in agents (or other custom agents) invoked as steps within the coordinator's workflow
  • Configuration bindings — how the coordinator's inputs map to tool agent parameters
  • A UI definition — the interface presented to the user when they run this custom agent

Custom agents appear in the workspace agent library alongside built-in agents. They can be invoked standalone or called by other coordinator agents, enabling composition at multiple levels.


3. The Newsletter Creator

3.1 Overview

The Newsletter Creator takes:

  • An issue theme (a topic or focus for this edition)
  • A format template (the newsletter's known structure and section specifications)
  • An optional set of override parameters (e.g. specific keywords, date ranges for news)

It produces:

  • A complete newsletter draft in markdown, with image placeholder suggestions
  • A source map for traceability
  • A PDF artefact stored in the media system (optional, triggered by the coordinator)

3.2 Newsletter Format

For this case study, the newsletter has four sections. The format template defines each section's purpose, word count, tone, and source type.

Newsletter Format: "The Weekly Brief"

Sections:
  1. Feature  
     Purpose: In-depth piece on the issue theme  
     Words: 500–700  
     Tone: Authoritative, org voice  
     Source type: Academic or general  
     Detail level: facts_list  
     References: 10–15

  2. News Digest  
     Purpose: 3–5 short summaries of recent relevant news  
     Words: 50–80 per item  
     Tone: Neutral, factual  
     Source type: News  
     Detail level: citation_summary  
     References: 5–8

  3. Industry Perspective  
     Purpose: One concise analysis piece contextualising the theme  
     Words: 200–300  
     Tone: Considered, slightly opinionated within org voice  
     Source type: General  
     Detail level: citation_extract  
     References: 4–6

  4. Interesting Links  
     Purpose: 3–4 brief, lightly written items — surprising, quirky, or delightful  
     Words: 40–60 per item  
     Tone: Light, conversational — the one place user voice can lead  
     Source type: General  
     Detail level: citation_summary  
     References: 4–6
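The format template above can be sketched as a data structure. The SectionSpec shape, field names, and section identifiers below are assumptions for illustration; the document does not define the template schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SectionSpec:
    """One section of a newsletter format template (hypothetical shape)."""
    name: str
    purpose: str
    word_range: tuple       # (min, max) words per piece or per item
    tone: str
    source_type: str        # "general", "news", or "academic"
    detail_level: str       # "facts_list", "citation_summary", "citation_extract"
    reference_range: tuple  # (min, max) references

WEEKLY_BRIEF = [
    SectionSpec("feature", "In-depth piece on the issue theme",
                (500, 700), "authoritative", "general", "facts_list", (10, 15)),
    SectionSpec("news_digest", "Short summaries of recent relevant news",
                (50, 80), "neutral", "news", "citation_summary", (5, 8)),
    SectionSpec("industry", "Analysis piece contextualising the theme",
                (200, 300), "considered", "general", "citation_extract", (4, 6)),
    SectionSpec("links", "Brief, lightly written items",
                (40, 60), "light", "general", "citation_summary", (4, 6)),
]
```

A frozen dataclass keeps the template read-only at run time, which matches the UI locking the format to "The Weekly Brief".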

4. Coordinator Workflow

The coordinator runs the following workflow. Steps that are independent run in parallel where possible.

Step 1: Parse inputs
  → Validate theme, template, and override params
  → Extract keywords from theme (if not provided)

Step 2: Research (parallel)
  → 2a: Research Agent (feature)
        source_type: general
        prompt: [theme]
        keywords: [extracted + any overrides]
        num_references: 15
        detail_level: facts_list
  → 2b: Research Agent (news)
        source_type: news
        prompt: [theme]
        num_references: 8
        detail_level: citation_summary
        date_from: [today - 14 days]
  → 2c: Research Agent (industry)
        source_type: general
        prompt: [theme + "industry analysis trends"]
        num_references: 6
        detail_level: citation_extract
  → 2d: Research Agent (quirky)
        source_type: general
        prompt: [theme + "unusual surprising unexpected"]
        num_references: 6
        detail_level: citation_summary
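The four research invocations in Step 2 can be expressed as concurrent tasks. The run_research coroutine below is a stand-in for the real Research Agent call, whose API the document does not show; the parameter names mirror the step definitions above.

```python
import asyncio
from datetime import date, timedelta

async def run_research(name, **params):
    """Stand-in for a Research Agent invocation (the platform API is an
    assumption; only the parameters come from the workflow above)."""
    await asyncio.sleep(0)  # placeholder for the actual agent call
    return name, params

async def step_2(theme, keywords):
    today = date.today()
    jobs = [
        run_research("feature", source_type="general", prompt=theme,
                     keywords=keywords, num_references=15,
                     detail_level="facts_list"),
        run_research("news", source_type="news", prompt=theme,
                     num_references=8, detail_level="citation_summary",
                     date_from=today - timedelta(days=14)),
        run_research("industry", source_type="general",
                     prompt=f"{theme} industry analysis trends",
                     num_references=6, detail_level="citation_extract"),
        run_research("quirky", source_type="general",
                     prompt=f"{theme} unusual surprising unexpected",
                     num_references=6, detail_level="citation_summary"),
    ]
    # gather runs all four research calls concurrently
    return dict(await asyncio.gather(*jobs))

results = asyncio.run(step_2("AI in aged care", ["aged care", "AI"]))
```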

Step 3: Data preparation
  → Data Agent: filter each source list to relevance > 0.65
  → Data Agent: deduplicate across all four lists (a source appearing
    in multiple lists is kept in the highest-priority list only,
    priority order: feature > industry > news > quirky)
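Step 3's filter-then-dedup logic can be sketched as below. The source record shape (an "id" used for duplicate matching plus a "relevance" score) is an assumption; the threshold and priority order come from the step definition.

```python
PRIORITY = ["feature", "industry", "news", "quirky"]  # highest first

def filter_and_dedup(lists, threshold=0.65):
    """Step 3 sketch. `lists` maps section name to a list of source
    records like {"id": ..., "relevance": ...}. Drops sources at or
    below the relevance threshold, then keeps each duplicate only in
    the highest-priority list it appears in."""
    seen = set()
    out = {}
    for section in PRIORITY:
        kept = []
        for src in lists.get(section, []):
            if src["relevance"] <= threshold or src["id"] in seen:
                continue  # filtered out, or already claimed by a higher-priority list
            seen.add(src["id"])
            kept.append(src)
        out[section] = kept
    return out
```

Walking the lists in priority order means a source shared between the feature and news lists survives only in the feature list, as the step specifies.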

Step 4: Write (parallel, each section uses its own source list)
  → 4a: Writer Agent (feature section)
        source_list: [2a filtered]
        format_template: newsletter_feature_section
        voice: org_with_user_modulation (if user voice profile set)
        word_limit: 600
        image_placeholders: true
  → 4b: Writer Agent (news digest)
        source_list: [2b filtered]
        format_template: newsletter_news_digest
        voice: org
        word_limit: 300 (5 items × 60 words)
        image_placeholders: false
  → 4c: Writer Agent (industry perspective)
        source_list: [2c filtered]
        format_template: newsletter_industry_section
        voice: org
        word_limit: 250
        image_placeholders: true
  → 4d: Writer Agent (interesting links)
        source_list: [2d filtered]
        format_template: newsletter_links_section
        voice: user (if profile set) or org
        word_limit: 200 (4 items × 50 words)
        image_placeholders: false

Step 5: Assemble source map
  → Coordinator constructs SourceMap from steps 2 and 4:
    section feature     → source_list 2a
    section news        → source_list 2b
    section industry    → source_list 2c
    section links       → source_list 2d
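The Step 5 mapping is small enough to sketch directly. The run identifiers ("2a" through "2d") and the keying of the filtered lists are assumptions about the coordinator's internal bookkeeping.

```python
SECTION_SOURCES = {          # drafted section -> research run that feeds it
    "feature":  "2a",
    "news":     "2b",
    "industry": "2c",
    "links":    "2d",
}

def assemble_source_map(filtered_lists):
    """Step 5 sketch: build the SourceMap the Fact Checker consumes,
    mapping each section to exactly the source list it was written
    from. `filtered_lists` is keyed by research run id ("2a".."2d")."""
    return {section: filtered_lists[run]
            for section, run in SECTION_SOURCES.items()}
```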

Step 6: Fact check
  → Fact Checker Agent
        draft: [assembled draft from 4a–4d]
        source_map: [step 5]
        checks: [factual, citation, voice]
        correction_brief: true
        strictness: lenient (news and links sections are summary only)

Step 7: Correction loop (if needed)
  → If Fact Checker status = requires_correction:
        Writer Agent (targeted sections only, using correction brief)
        Fact Checker (re-run on corrected sections)
        Max iterations: 2
  → If still failing after 2 loops: flag for manual review, halt
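The Step 7 loop can be sketched with the two agent calls injected as callables. The return shapes of `fact_check` and `rewrite` are assumptions; the two-iteration cap and the manual-review fallback come from the step definition.

```python
MAX_ITERATIONS = 2

def correction_loop(draft, source_map, fact_check, rewrite):
    """Step 7 sketch. `fact_check(draft, source_map)` returns
    (status, correction_brief); `rewrite(draft, brief)` applies the
    brief to the flagged sections. Both are stand-ins for the real
    Fact Checker and Writer Agent calls."""
    status, brief = fact_check(draft, source_map)
    for _ in range(MAX_ITERATIONS):
        if status != "requires_correction":
            return draft, "passed"
        draft = rewrite(draft, brief)               # targeted correction
        status, brief = fact_check(draft, source_map)  # re-check
    if status == "requires_correction":
        return draft, "manual_review"               # halt after 2 loops
    return draft, "passed"
```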

Step 8: Assemble final document
  → Coordinator assembles sections in template order with:
    - Issue header (title, date, theme)
    - Section dividers per template
    - Image placeholder tags preserved
    - Footer (unsubscribe placeholder, org details)
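Step 8's assembly can be sketched as simple string stitching. The header and footer text below is illustrative; the key property, as noted in the step, is that placeholder tags inside section bodies pass through untouched.

```python
def assemble_newsletter(title, issue_date, theme, sections, order):
    """Step 8 sketch: stitch the drafted sections into one markdown
    document in template order, with header, dividers, and footer."""
    parts = [f"# {title}", f"*{issue_date} | {theme}*", ""]
    for name in order:
        parts += ["---", "", sections[name], ""]  # divider, then section body
    parts += ["---", "", "<!-- unsubscribe placeholder -->"]
    return "\n".join(parts)

issue = assemble_newsletter(
    "The Weekly Brief", "2026-03-06", "AI in aged care",
    {"feature": "Feature body...", "links": "Links body..."},
    ["feature", "links"])
```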

Step 9: Report Writer (optional, triggered by user or config)
  → Report Writer Agent
        draft: [step 8 output]
        output_formats: [markdown, pdf]
        layout_template: newsletter_layout
        summary_length: short
        tags: ["newsletter", theme, issue_date]
        publish: false (held for review)

5. Custom Agent UI

The Newsletter Creator presents a focused UI that hides the complexity of the pipeline.

5.1 Configuration Screen

  • Issue theme — text field (required). The topic or focus for this edition. E.g. "AI in aged care" or "Regenerative agriculture in Southern Australia".
  • Issue date — date picker (defaults to today)
  • Format template — read-only in this custom agent; locked to "The Weekly Brief"
  • Keyword overrides — optional tag input to supplement auto-extracted keywords
  • News date range — optional override for the news section lookback window (default: 14 days)
  • User voice — toggle. If on, shows a dropdown to select a user voice profile (used for feature and links sections)
  • Generate PDF — toggle. If on, triggers the Report Writer at the end of the pipeline.
  • Save to — optional record picker (e.g. link to an Issue Task)

5.2 Progress View

A visual pipeline status board is shown while the coordinator runs, rather than a simple spinner. Each step is shown as a node with a status indicator:

[Research ×4] → [Data prep] → [Write ×4] → [Fact Check] → [Assemble] → [Report]
   ✓ ✓ ✓ ✓         ✓           ✓ ⟳ ✓ ✓       ⟳              —            —

This gives the user visibility into which parallel steps are running, complete, or waiting. Individual step logs are expandable.

Estimated time is shown based on step count and typical API latency.

5.3 Results View

On completion, the user sees:

  • Full rendered newsletter preview in markdown, with section boundaries labelled
  • Image placeholder summary — list of all placeholder tags with suggestions; "Add image" action per placeholder
  • Fact check summary — overall status and any remaining warnings surfaced inline in the preview
  • Source map — expandable panel showing which sources contributed to which section
  • Actions:
      • Approve and save as Note
      • Approve and publish artefact (if Report Writer ran)
      • Edit in draft view (sends to a generic draft editor)
      • Re-run with same configuration
      • Export as markdown download
      • Download PDF (if generated)

6. Pipeline Stress Test: Assumptions and Findings

Working through the newsletter case study reveals several assumptions that must hold for the pipeline to work, and some that need refinement.

6.1 Parallel Research Runs

Assumption: The Research Agent can be invoked multiple times concurrently by a coordinator.

Finding: Nothing in the Research Agent spec prevents this, but admin configuration must include a concurrency limit for Research Agent instances per coordinator run. Without a limit, a single complex newsletter coordinator could consume all available API quota. Add max_concurrent_research_runs to admin config.
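The proposed limit maps naturally onto a semaphore. The config name comes from the finding above; the limit value and the surrounding code are illustrative.

```python
import asyncio

MAX_CONCURRENT_RESEARCH_RUNS = 2    # hypothetical admin config value

async def bounded_research(sem, name):
    async with sem:                 # at most N research runs in flight
        await asyncio.sleep(0)      # placeholder for the agent call
        return name

async def run_all(names):
    sem = asyncio.Semaphore(MAX_CONCURRENT_RESEARCH_RUNS)
    return await asyncio.gather(*(bounded_research(sem, n) for n in names))

done = asyncio.run(run_all(["feature", "news", "industry", "quirky"]))
```

The coordinator still expresses all four runs as parallel; the semaphore simply caps how many are actually in flight against the API at once.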

6.2 Cross-List Deduplication

Assumption: The Data Agent can identify duplicate sources across lists from different source types (general and news), even when DOIs are absent.

Finding: Title similarity matching is needed. Pure URL/DOI deduplication is insufficient for general and news sources, which may reference the same underlying story via different URLs. The Data Agent spec needs to specify a title similarity threshold (e.g. normalised Levenshtein distance or embedding-based similarity) for this case.
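A minimal sketch of title matching, assuming a ratio-based similarity: the spec suggests normalised Levenshtein distance or embeddings, and difflib's SequenceMatcher ratio is a stdlib stand-in for either. The 0.85 threshold is an assumption.

```python
from difflib import SequenceMatcher

def normalise(title):
    """Lowercase and collapse whitespace before comparing."""
    return " ".join(title.lower().split())

def same_story(a, b, threshold=0.85):
    """True if two titles likely reference the same underlying story,
    even when their URLs differ. Stand-in similarity measure; the real
    Data Agent might use normalised Levenshtein or embeddings."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold
```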

6.3 Section-Level Writer Invocations

Assumption: Running the Writer Agent four times and assembling the outputs produces a coherent newsletter.

Finding: Coherence across sections is not guaranteed. Each Writer Agent instance has no awareness of what the others wrote. This means the same fact or phrase could appear in multiple sections, or the sections may feel tonally disjointed at their boundaries.

Mitigation options:
  a. Add a "coherence pass" — a fifth Writer Agent invocation that reviews and lightly edits the assembled document for consistency
  b. Pass the feature section output as context to subsequent Writer Agent runs (adds latency)
  c. Accept the limitation and surface it as a known quality characteristic of the assembled approach, relying on the Fact Checker's style check and manual review

Option (a) implies the Writer Agent needs a review_mode parameter where it edits rather than creates. This should be added to the Writer Agent spec.

6.4 Fact Checker Source Map Complexity

Assumption: The Fact Checker can handle a source map where different sections are checked against different source lists.

Finding: The current Fact Checker spec supports this via the SourceMap structure. The newsletter case confirms this design is correct and necessary. A flat source list would cause the Fact Checker to incorrectly flag news digest items as unsourced when checked against the feature's academic source list.

6.5 Strictness Per Section

Assumption: A single strictness setting applies to the whole document.

Finding: The newsletter has sections with fundamentally different sourcing expectations. The feature section warrants strict fact checking; the interesting links section contains lightly sourced, deliberately conversational content that would fail a strict check. The Fact Checker needs a section_strictness_map parameter — a way to set strictness per section rather than globally. This is a gap in the current spec.
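The proposed section_strictness_map might look like the sketch below. The parameter name comes from the finding; the strictness levels and the fallback default are assumptions, since the current spec defines only a single document-level setting.

```python
DEFAULT_STRICTNESS = "standard"

SECTION_STRICTNESS = {          # proposed section_strictness_map shape
    "feature":  "strict",       # academically sourced, checked hard
    "news":     "lenient",      # summary-only items
    "industry": "standard",
    "links":    "lenient",      # deliberately conversational
}

def strictness_for(section, overrides=SECTION_STRICTNESS):
    """Resolve per-section strictness, falling back to the document
    default for any section the map does not name."""
    return overrides.get(section, DEFAULT_STRICTNESS)
```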

6.6 Voice Mixing Within a Single Document

Assumption: The Writer Agent handles voice per section, and the assembled document holds together.

Finding: This works because each Writer Agent invocation is configured independently. However, the org voice profile must be applied consistently across all sections (it cannot differ per section), and user voice is optionally applied only to specific sections. This is consistent with the current Writer Agent spec.

6.7 Image Placeholder Aggregation

Assumption: Image placeholders from multiple Writer Agent runs are preserved and surfaced in the final document.

Finding: Each Writer Agent run inserts placeholder tags independently. The coordinator's assembly step must preserve these tags. The Report Writer then handles all of them in one pass. This works, but the number of placeholders in a multi-section document could be large. The Results view should show a count and allow bulk management (accept all, dismiss all, add images to all of type X).
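Aggregating placeholders for the Results view could look like the sketch below. The document does not define a placeholder tag format, so the HTML-comment convention here is entirely hypothetical.

```python
import re

# Hypothetical tag format; the document does not define one.
PLACEHOLDER = re.compile(r"<!--\s*image:\s*(.+?)\s*-->")

def collect_placeholders(sections):
    """Gather every image placeholder across the assembled sections so
    the Results view can show a count and offer bulk actions."""
    found = []
    for name, body in sections.items():
        for suggestion in PLACEHOLDER.findall(body):
            found.append((name, suggestion))
    return found
```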

6.8 Report Writer and Scheduled Newsletters

Finding (new): A newsletter is typically a recurring publication. The newsletter coordinator could be scheduled, similar to the Research Agent's update frequency feature. However, scheduling a coordinator agent is more complex than scheduling a single agent — it involves scheduling a whole workflow, with potential failure at any step.

This implies the need for a scheduled coordinator pattern in the Agent Studio — a way to define a recurring run of a custom agent, with retry logic, failure notifications, and run history. This is out of scope for the initial pipeline spec but should be captured as a platform capability requirement.


7. Gaps Identified and Spec Updates Required

  Spec            Gap                                           Action
  Research Agent  No concurrency limit per coordinator run      Add max_concurrent_research_runs to admin config
  Data Agent      Title similarity deduplication not specified  Add similarity matching option to dedup operation
  Writer Agent    No review/coherence pass mode                 Add mode: review parameter for editing existing drafts
  Fact Checker    Single strictness setting for whole document  Add section_strictness_map parameter
  Fact Checker    No per-section strictness in UI               Update UI to support section-level strictness config
  Agent Studio    No scheduled coordinator pattern              Capture as platform capability requirement
  Report Writer   No bulk image placeholder management          Add bulk actions to Results view

8. What This Shows About the Architecture

The newsletter case study confirms several things the pipeline architecture does well:

  • Separation of concerns holds up. Research, writing, checking, and rendering are genuinely distinct steps with clean interfaces between them. The temptation to merge them (e.g. have the Writer also do fact-checking inline) would have broken the section-level control demonstrated here.
  • The source map is the right abstraction. A flat source list would have been inadequate for a multi-section document. The section-to-source-list mapping enables per-section checking without building special logic into the Fact Checker.
  • Parallel execution matters. The four research runs and four writer runs are independent and can run concurrently. In a sequential architecture, this would be prohibitively slow. The coordinator pattern needs to make parallelism easy to express.
  • The Data Agent as a pre-flight step is genuinely useful. Filtering and deduplicating source lists before writing keeps writer prompts focused and reduces token cost in a measurable way.

And two things that need attention:

  • Cross-section coherence is a real gap. Assembling independently written sections is good enough for a news digest format, but may not be for more integrated content types. The review_mode Writer Agent invocation should be treated as a priority addition.
  • Scheduling at the coordinator level is the logical next capability after the initial pipeline is stable. A newsletter that runs weekly is the most natural recurring use case for Thinklio, and the architecture should make that straightforward to configure.

This case study is a living document. As the built-in agent specs are revised in response to the gaps identified here, the relevant sections of this document should be updated to reflect the final agreed design.

Previous: Data Agent | See also: Agent Studio overview (to be written)