CampaignForge AI - The Journey
Chapter 7: Stop the Pipeline Before It Spends Money on a Broken Website
Date: 2026-05-08 | Vertical: B2B SaaS | Budget: $500/month
Where We Left Off
Chapter 6 documented the first complete end-to-end run: intake brief through Gate 6, all six gates approved, Agent 11 generating a LinkedIn draft from real performance data. The pipeline held together.
But watching it run exposed a gap I had been aware of and had not done anything about.
The pipeline launches immediately after intake. You give it a brief, and it starts building strategy and creative. At no point does anything stop to ask: is the website it's about to send traffic to actually ready for paid traffic?
That is not a small question. Paid traffic is wasted money if the landing page is slow, broken on mobile, missing a clear CTA, or built in a way that makes trust impossible. I have seen — and helped waste — budget on exactly this problem. The agent pipeline I built was capable of repeating it.
So before any more pipeline runs, I built Agent 00.
The Problem With Blind Trust
The original pipeline design started with intake_brief and immediately moved to product_person. The assumption was: the operator knows their website is ready. Submit the brief, trust the brief.
That assumption is wrong a lot of the time. Not because operators are careless. Because website quality problems are not always visible to the person running the site. You know your content. You do not necessarily know your Lighthouse scores, whether the mobile viewport tag is missing, or how fast the page actually loads for someone not on your local network.
A bad landing page is also not always a visible problem at launch. The campaign goes live, spends money, gets impressions and clicks, and the performance numbers come back weak. Someone blames the creative. Someone adjusts the targeting. No one thinks to audit the page.
The right time to catch this is before the campaign starts — not after $500 has been spent diagnosing what was fixable in a morning.
What Agent 00 Does
Agent 00 (Website Auditor) runs immediately after intake_brief, before the product person, architect, or any strategy work begins. It has two jobs.
Job 1: HTTP probe.
The node makes a real HTTP GET request to the landing page URL from the campaign brief. It measures page load time in milliseconds, checks whether HTTPS is enforced, and checks whether the mobile viewport meta tag is present. These are not proxy signals. They are direct measurements.
Load time classification:
- Under 1,500ms: Fast
- 1,500ms–3,500ms: Average
- Over 3,500ms: Slow
Slow pages kill conversion rates before the ad copy has a chance to work. A 3-second delay increases bounce probability by over 50% on mobile. This is not a soft concern.
If no landing_page_url is in the brief, the audit returns RED immediately. There is nothing to probe.
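A minimal sketch of what this probe could look like, using only the Python standard library. The function and field names here (`probe_landing_page`, `ProbeResult`, the `speed_class` labels) are illustrative, not the actual CampaignForge implementation:

```python
import time
from dataclasses import dataclass
from urllib.request import urlopen


@dataclass
class ProbeResult:
    load_ms: float
    https_enforced: bool
    has_viewport_tag: bool
    speed_class: str  # "fast" | "average" | "slow"


def classify_speed(load_ms: float) -> str:
    """Bucket load time using the thresholds from the audit spec."""
    if load_ms < 1500:
        return "fast"
    if load_ms <= 3500:
        return "average"
    return "slow"


def probe_landing_page(url: str, timeout: float = 10.0) -> ProbeResult:
    """Direct measurement: fetch the page, time it, inspect the HTML."""
    start = time.monotonic()
    with urlopen(url, timeout=timeout) as resp:
        html = resp.read().decode("utf-8", errors="replace")
        final_url = resp.geturl()  # follows redirects, so HTTP->HTTPS upgrades show here
    load_ms = (time.monotonic() - start) * 1000
    return ProbeResult(
        load_ms=load_ms,
        https_enforced=final_url.startswith("https://"),
        has_viewport_tag='name="viewport"' in html,
        speed_class=classify_speed(load_ms),
    )
```

Checking the final URL after redirects, rather than the submitted URL, is what makes the HTTPS check a measurement instead of a guess.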
Job 2: LLM-based CRO analysis.
After the probe, the LLM receives a structured prompt with the technical measurements plus the full brief context. It evaluates five dimensions:
- Message match potential — Does the ad's promise likely match what someone landing on this page will see? Mismatched message is the single biggest source of wasted paid clicks.
- Conversion clarity — Is the CTA clear, prominent, and unambiguous? Can someone arriving from an ad tell immediately what you want them to do?
- Trust signals — Reviews, testimonials, guarantees, visible contact information. Anything that lowers the psychological cost of saying yes.
- Mobile friendliness — Combined with the viewport probe, does the page work for mobile users?
- Page load — Combined with the HTTP measurement, does the speed support conversion?
The LLM returns a WebsiteAuditOutput with:
- An overall readiness score from 1 to 10
- A GREEN / YELLOW / RED status flag
- A summary paragraph
- A list of critical issues
- Actionable recommendations
The status mapping is deliberate:
| Status | Score range | should_proceed |
|---|---|---|
| GREEN | 8–10 | true |
| YELLOW | 5–7 | true |
| RED | 1–4 | false (schema-enforced) |
RED status sets should_proceed = false at the schema level. The Pydantic validator enforces it. An agent cannot return a RED audit and should_proceed = true. That combination is structurally impossible.
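In Pydantic v2 terms, that schema-level enforcement might look something like this. The field names are assumptions based on the output fields listed above, not the project's actual schema:

```python
from typing import List, Literal

from pydantic import BaseModel, Field, model_validator


class WebsiteAuditOutput(BaseModel):
    readiness_score: int = Field(ge=1, le=10)
    status: Literal["GREEN", "YELLOW", "RED"]
    summary: str
    critical_issues: List[str] = []
    recommendations: List[str] = []
    should_proceed: bool

    @model_validator(mode="after")
    def red_blocks_proceed(self):
        # RED plus should_proceed=True never constructs: the validator
        # rejects the combination before it can enter pipeline state.
        if self.status == "RED" and self.should_proceed:
            raise ValueError("RED audits cannot set should_proceed=True")
        return self
```

Because the constraint lives in the model rather than in prompt instructions, a malformed LLM response fails validation instead of silently flowing downstream.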
The Gate
GATE-0 sits between the website audit and the product person. The operator sees:
- The score and flag
- The landing page URL that was probed
- The LLM summary
- Any critical issues
- The load speed, HTTPS status, and mobile viewport result
Approving lets the pipeline continue regardless of status. This is intentional.
The audit is an advisory block, not a hard lockout. A RED flag with should_proceed = false is the agent's recommendation, not an unoverridable veto. There are legitimate reasons to proceed despite a flagged site: the page is under active remediation, the operator is running a quick test on a controlled audience, the critical issue is known and accepted. The system records the decision in the audit log with full context. It does not make the decision for the operator.
Rejecting halts the pipeline. The operator can fix the website and start again.
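The advisory-not-veto behavior can be sketched as a small decision function. `resolve_gate_0`, `GateDecision`, and the node name returned on approval are hypothetical, written to match the behavior described above:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class GateDecision:
    gate: str
    approved: bool
    audit_status: str
    operator_note: Optional[str]
    decided_at: str


def resolve_gate_0(audit_status: str, approved: bool,
                   operator_note: Optional[str],
                   audit_log: List[GateDecision]) -> str:
    """Advisory gate: approval continues the pipeline even on RED,
    but the override is recorded with full context in the audit log."""
    audit_log.append(GateDecision(
        gate="GATE-0",
        approved=approved,
        audit_status=audit_status,
        operator_note=operator_note,
        decided_at=datetime.now(timezone.utc).isoformat(),
    ))
    return "product_person" if approved else "halted"
```

The point of the log entry is accountability, not enforcement: a RED override is allowed, but it is never invisible.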
Why a New Agent Instead of a New Check Inside Intake
This was a real design question.
The intake node already validates the brief for field completeness and budget. Adding a URL readiness check there would have been simpler to build. It also would have been wrong.
The website audit needs to be interruptible. The operator needs to see the result, ask questions, and make a gate decision. Intake does not have a gate. Adding one to intake would mean the intake brief gate carries two concerns — brief validity and site readiness — in a single approval step. Those are different decisions with different remediation paths. They should be separate gates.
Agent 00 is also a separate concern from brief intake structurally. Intake validates what the operator said. The website audit validates the reality the campaign will land in. Keeping them in different nodes keeps the state machine clean.
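The resulting front section of the state machine, with the audit and its gate as distinct nodes, could be wired roughly like this. The edge table is a simplification; only the node names mentioned in the text are taken from the project:

```python
# Linear front of the pipeline: the audit and its gate sit between
# intake and the first strategy node.
PIPELINE_EDGES = {
    "intake_brief": "website_audit",   # Agent 00 runs immediately after intake
    "website_audit": "gate_0",         # result surfaces to the operator
    "gate_0": "product_person",        # approval hands off to strategy
}


def front_of_pipeline(start: str = "intake_brief") -> list:
    """Walk the edges to show execution order for the new front section."""
    order, node = [start], start
    while node in PIPELINE_EDGES:
        node = PIPELINE_EDGES[node]
        order.append(node)
    return order
```

Keeping `website_audit` as its own node is what lets `gate_0` interrupt between it and `product_person` without touching intake at all.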
The New BriefPayload Field
landing_page_url: Optional[str] was added to BriefPayload.
Making it optional was deliberate. Not every brief will have a URL at the moment of intake — someone might be preparing a campaign before the landing page exists. Making it required would break that workflow.
The consequence of an absent URL is a RED audit with "no URL provided" as the critical issue. That is the correct behavior. You cannot audit what you cannot reach.
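Putting the optional field and the no-URL short-circuit together, a minimal sketch (assuming Pydantic; `audit_or_red` is a hypothetical helper, and all brief fields except the new one are omitted):

```python
from typing import Optional

from pydantic import BaseModel


class BriefPayload(BaseModel):
    # Other brief fields omitted; only the new optional field is shown.
    landing_page_url: Optional[str] = None


def audit_or_red(brief: BriefPayload) -> dict:
    """No URL means nothing to probe: return RED without running the audit."""
    if not brief.landing_page_url:
        return {
            "status": "RED",
            "should_proceed": False,
            "critical_issues": ["no URL provided"],
        }
    # Otherwise hand the URL to the HTTP probe and LLM analysis.
    return {"status": "PENDING_PROBE", "url": brief.landing_page_url}
```

The brief still validates without a URL, so the "campaign before the landing page exists" workflow survives; the RED audit simply stops the spend until a URL shows up.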
The Honest State After Chapter 7
What changed:
- 12 agents (was 11) — Agent 00 is now the first node in the pipeline
- 8 human approval gates (was 7) — GATE-0 runs after every website audit
- New `BriefPayload` field: `landing_page_url`
- New schema: `WebsiteAuditOutput`
- New state key: `agent_00_output`
- New agent descriptor: `agents/00-website-auditor.md`
- GATE-0 card added to the Streamlit UI
- 263 tests passing
What did not change:
- The pipeline after GATE-0 is identical to before
- No existing gates were renumbered or restructured
- No live Meta Ads calls
- No live social publishing
What Comes Next
Two things.
The first is connecting a real SimQuant.net landing page URL to the brief and running the audit against the live site. The probe will give real load time numbers from actual infrastructure. The LLM analysis will evaluate the actual page, not a placeholder. That is the first real use of Agent 00.
The second is what came next in the build session itself: Agent 06 got a significant upgrade. The pipeline can now diagnose why performance is poor, not just report the numbers. That is Chapter 8.