Outbound Intelligence

Scout.

Manual opportunity sourcing is slow, inconsistent, and impossible to run at volume. Scout is a five-phase AI pipeline that monitors multiple sources simultaneously, scores each opportunity against a defined target profile, generates personalised outreach, and tracks every response. It runs twice every day without anyone touching it.

Status Running in production
Type Autonomous pipeline
Runs per day 2 (automated)
Human input required One-tap approvals only
565+
Leads per week
<£0.10
Cost per scored listing
5 phases
End-to-end pipeline
0h
Manual input per run

The Problem

Outbound sourcing
at volume is broken.

Manual opportunity sourcing across multiple channels takes hours. Most of that time is spent on leads that are obviously wrong. The small number of genuinely relevant targets get the same unfocused attention as everything else. Follow-up is inconsistent. Nothing is tracked properly.

The real problem is not finding opportunities. It is the signal-to-noise ratio. Hundreds of signals exist across any given market. A handful are worth pursuing. Identifying them manually at any real volume is not a sustainable operation.

Scout was built to solve this with an AI scoring layer that reads the same signals a human analyst would, against a defined Ideal Client Profile, without the time cost.

Before Scout

2 to 3 hours a day sourcing manually across channels
Generic outreach with no personalisation to target or context
No tracking of who was contacted or when
Follow-up inconsistent or forgotten entirely

After Scout

565+ opportunities sourced and scored per week, with each batch ready before 7 AM
Outreach drafted specifically for each target and context
Every contact logged with full context in a GDPR-compliant schema
Human review via Telegram takes under 5 minutes per batch

How It Works

Five phases.
One pipeline.

1
Data Ingestion

The pipeline scrapes multiple job boards simultaneously on a scheduled trigger. Raw listings are normalised into a consistent schema, deduplicated against previous runs, and written to the database. No listings are missed, no duplicates pass through.
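The normalise-then-deduplicate step can be sketched roughly as below. Field names and the hashing strategy are illustrative, not Scout's actual schema:

```python
import hashlib

def normalise(raw: dict) -> dict:
    """Map a raw scraped listing onto a consistent schema (field names are illustrative)."""
    return {
        "title": raw.get("title", "").strip(),
        "company": raw.get("company", "").strip().lower(),
        "url": raw.get("url", "").strip(),
    }

def listing_key(listing: dict) -> str:
    """Stable hash used to deduplicate listings against previous runs."""
    basis = f"{listing['title']}|{listing['company']}|{listing['url']}"
    return hashlib.sha256(basis.encode("utf-8")).hexdigest()

def dedupe(raw_listings: list[dict], seen_keys: set[str]) -> list[dict]:
    """Keep only listings whose key has not been written in an earlier run."""
    fresh = []
    for raw in raw_listings:
        listing = normalise(raw)
        key = listing_key(listing)
        if key not in seen_keys:
            seen_keys.add(key)
            fresh.append(listing)
    return fresh
```

In practice the set of seen keys would be backed by the database rather than held in memory, so deduplication survives across scheduled runs.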

2
AI Scoring Against ICP

Each new opportunity is scored against the defined Ideal Client Profile using Claude. The model reads the full context of the opportunity and the target profile, then outputs a fit score with reasoning. Claude Haiku was chosen here specifically for cost and latency. The scoring layer processes hundreds of opportunities for pennies.
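The shape of that scoring step might look like this. The prompt wording, score range, and JSON fields are assumptions for illustration; in production the prompt string would be sent to the Claude Haiku model via the Anthropic messages API and the reply parsed:

```python
import json

SCORE_THRESHOLD = 7  # illustrative cut-off, not Scout's actual value

def build_scoring_prompt(listing: dict, icp: str) -> str:
    """Assemble the prompt the model scores against (wording is illustrative)."""
    return (
        "Score this opportunity against the Ideal Client Profile.\n"
        f"ICP: {icp}\n"
        f"Listing: {json.dumps(listing)}\n"
        'Reply with JSON only: {"score": <integer 0-10>, "reasoning": "<one sentence>"}'
    )

def parse_score(model_output: str) -> dict:
    """Validate the model's structured JSON output into a score record."""
    data = json.loads(model_output)
    score = int(data["score"])
    if not 0 <= score <= 10:
        raise ValueError(f"score out of range: {score}")
    return {"score": score, "reasoning": str(data["reasoning"])}
```

Requesting JSON and validating it on the way in is what makes hundreds of scores per run cheap to store, filter, and audit later.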

3
Profile-Matched Personalisation

For opportunities above the score threshold, the system generates personalised outreach tailored to the specific target and context. Not a template. The model reads the full opportunity brief and writes to it directly.
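A minimal sketch of how such a personalisation prompt could be assembled, feeding the model the specific brief and the scoring reasoning rather than a template slot. All field names and wording here are hypothetical:

```python
def build_outreach_prompt(listing: dict, score_record: dict) -> str:
    """Prompt asking the model to write outreach grounded in this specific brief."""
    return (
        "Write a short, specific outreach message for this opportunity.\n"
        f"Opportunity brief: {listing['title']} at {listing['company']}\n"
        f"Why it scored {score_record['score']}/10: {score_record['reasoning']}\n"
        "Reference concrete details from the brief. Do not write a generic template."
    )
```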

4
Outreach Automation

High-scoring listings trigger an outreach sequence. The Telegram interface presents scored job cards with pre-drafted outreach for one-tap approval. Daily volume caps protect sender reputation. GDPR contact cleanup is built into the schema. Approved messages send automatically.
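The cap logic amounts to draining the approved queue only while today's count stays under the limit. A minimal sketch, with an assumed cap value:

```python
from collections import deque

DAILY_CAP = 50  # illustrative; the real limit is enforced at the schema level

def drain_approved(queue: deque, sent_today: int, cap: int = DAILY_CAP):
    """Send approved messages until today's cap is hit; hold the rest for tomorrow."""
    sent, held = [], []
    while queue:
        msg = queue.popleft()
        if sent_today + len(sent) < cap:
            sent.append(msg)
        else:
            held.append(msg)
    return sent, held
```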

5
Analytics and Follow-Up

Every contact is tracked. Open rates, response rates, and follow-up timing are logged. The analytics layer surfaces what is working so the scoring thresholds and outreach copy can be refined over time.

Telegram Interface

The only human touchpoint in the entire pipeline is a Telegram approval flow. Each morning, scored job cards arrive with pre-drafted outreach. One tap approves and sends. One tap archives. The operator never sees raw data.

This was a deliberate UX decision. The goal was to make the daily review take under 5 minutes, not to replicate a full CRM dashboard.
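A job card with one-tap actions maps directly onto the Telegram Bot API's `sendMessage` call with an inline keyboard. The card text and `callback_data` format below are assumptions, not Scout's actual payload:

```python
import json

def job_card_payload(chat_id: int, listing: dict, score: int, draft: str) -> dict:
    """Telegram sendMessage payload: a scored job card with Approve/Archive buttons."""
    text = (
        f"{listing['title']} — {listing['company']}\n"
        f"Fit score: {score}/10\n\n"
        f"Draft outreach:\n{draft}"
    )
    keyboard = {
        "inline_keyboard": [[
            {"text": "Approve & send", "callback_data": f"approve:{listing['id']}"},
            {"text": "Archive", "callback_data": f"archive:{listing['id']}"},
        ]]
    }
    return {"chat_id": chat_id, "text": text, "reply_markup": json.dumps(keyboard)}
```

Each tap arrives back as a callback query carrying the `callback_data`, so a single handler can route approvals and archives without any further typing from the operator.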

Key Design Decisions

Model Choice

Claude Haiku over GPT-4

Lower cost and faster latency for a task that runs at volume. The scoring does not require frontier reasoning; it requires consistency and speed.

Architecture

Volume caps built in

Daily email limits are set at the schema level, not as a guardrail added later. Protecting sender reputation was a day-one requirement.
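One way to enforce a cap at the schema level rather than in application code is a database trigger that refuses the insert itself. A minimal SQLite sketch (a demo cap of 2 is used here; the production value is a deployment choice, and the table layout is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE outreach (
    id        INTEGER PRIMARY KEY,
    contact   TEXT NOT NULL,
    sent_date TEXT NOT NULL
);
-- The cap lives in the schema: no code path can exceed it.
CREATE TRIGGER enforce_daily_cap
BEFORE INSERT ON outreach
WHEN (SELECT COUNT(*) FROM outreach WHERE sent_date = NEW.sent_date) >= 2
BEGIN
    SELECT RAISE(ABORT, 'daily send cap reached');
END;
""")
```

Because the database raises the error, a bug in the outreach code cannot quietly blow past the limit and damage sender reputation.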

Compliance

GDPR from day one

Contact data cleanup is not a feature added later. It is baked into the data schema from the first version.

Memory

Living spec file

A continuously updated implementation record prevented feature regression across all five phases of development.


Technology

Built with the
right tools.

Every tool in Scout was chosen for a specific reason. Claude Haiku handles scoring at volume without burning budget. FastAPI serves the internal endpoints that connect the pipeline stages. Telegram delivers the approval interface without requiring a separate dashboard. SQL keeps the data model simple and queryable.

The architecture deliberately avoids complexity. Each pipeline stage is a discrete step. If one fails, the others continue. Failures are logged and surfaced, not silently dropped.
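That failure-isolation pattern can be sketched as a runner that catches, logs, and records each stage's outcome instead of letting one exception kill the run. This is a simplification: in the real pipeline stages feed their output forward, but the isolation idea is the same:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_pipeline(stages: dict) -> dict:
    """Run each stage independently: failures are logged and surfaced, never silently dropped."""
    results = {}
    for name, stage in stages.items():
        try:
            results[name] = {"ok": True, "value": stage()}
        except Exception as exc:
            log.error("stage %s failed: %s", name, exc)
            results[name] = {"ok": False, "error": str(exc)}
    return results
```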

Full Stack

Claude API (Haiku) Python FastAPI SQL Telegram Bot API REST APIs Git Scheduled jobs (cron) GDPR-compliant schema JSON structured outputs

Why no vector database or embeddings?

Scout uses structured AI scoring rather than semantic search. For this use case, a prompted score with reasoning is more interpretable and auditable than a similarity vector. You can read why a job was scored 8 out of 10. You cannot read a cosine distance.


Proof of Work

Running output.
Real numbers.

📷
Pipeline dashboard screenshots

Live Telegram interface, job card scoring output, and analytics dashboard. Screenshots and screen recording being added shortly.

🎥
Pipeline walkthrough video

A full end-to-end walkthrough of one complete Scout run, from cron trigger to approved outreach. Video being added shortly.


What We Learned

The things that
only show up in production.

Model latency matters more than capability at volume. GPT-4 produced marginally better scoring reasoning. Claude Haiku produced good-enough reasoning at 4 times the speed and a fraction of the cost. For a task running hundreds of times per day, that trade-off is clear.

The approval UX is as important as the pipeline itself. An early version sent raw scored data to review. Nobody read it. The Telegram job card format with one-tap actions cut review time from 20 minutes to under 5. The interface shapes the behaviour.

GDPR compliance is not a feature you add at the end. Building contact cleanup into the schema from the start cost nothing. Retrofitting it into an existing data model would have taken days and introduced risk.

A living spec prevents drift across long builds. A five-phase project built over months will regress without a continuously updated record of what was decided and why. The implementation memory file was more valuable than any formal documentation would have been.



Need a pipeline
built for your process?

Scout was built for recruitment. The same architecture applies to any repetitive sourcing, scoring, and outreach workflow. Tell us the problem.