Building in Public

Chapter 4

The Honest State of the Dogfood Machine

CampaignForge AI · May 2026 · SimQuant LLC

Status: Raw draft for Content Publisher Agent (11) to format
Date: 2026-05-07
Author: Tim Simeonov (founder) + Codex (project reviewer)


Why We Stopped and Reviewed the Whole Project

Chapter 3 proved that the local LangGraph rebuild could run. The system could take a campaign brief, produce a PRD, pause at approval gates, verify the codebase, generate strategy and creative, write a local launch artifact, write a local performance artifact, and pause monitoring safely.

That was real progress.

But it also created a dangerous moment: the system was starting to look like it worked, even though the original product promise was bigger than a local graph that passes tests.

The original project was not "build a neat local agent demo." The project was:

  1. Create a SaaS ad campaign platform that can eventually serve real customers.
  2. Let the agentic framework help build itself and document that process.
  3. Use CampaignForge to advertise CampaignForge.
  4. Publish the build, marketing, sales, and performance journey in public.
  5. Use real campaign outcomes to improve future strategy, creative, and content.

In other words, CampaignForge AI is supposed to be both the product and the proof. It should eat its own dog food. It should advertise itself, measure itself, write about itself, and get better from the outcome data.

So we stepped back and asked a harder question:

Is that what we have so far?

The honest answer is: partially.

We have the local skeleton and the beginning of the public journey. We do not yet have the full SaaS dogfood machine.


What We Have Now

The current implementation has a real local agent pipeline.

It runs through a LangGraph StateGraph with SQLite checkpointing. Agent 10 is not a separate agent process; the graph topology itself is the orchestrator. Every node receives CampaignState and returns a partial update. Human gates use LangGraph interrupt() and resume through the CLI.
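
To illustrate the node contract described above, here is a minimal, framework-free sketch (not the actual CampaignForge code, and not LangGraph itself): each node receives the full state and returns only the keys it changed, and a gate pauses the run instead of mutating state. Node and key names are illustrative.

```python
from typing import Callable

CampaignState = dict  # stand-in for the real typed state

class HumanGatePause(Exception):
    """Raised by a gate node to pause the run until a human resumes."""

def intake_brief(state: CampaignState) -> CampaignState:
    # A node returns a PARTIAL update: only the keys it produced.
    return {"brief": state["raw_input"].strip()}

def gate_1(state: CampaignState) -> CampaignState:
    # A gate node pauses unless a human has already approved.
    if not state.get("gate_1_approved"):
        raise HumanGatePause("gate_1")
    return {}

def run(nodes: list[Callable], state: CampaignState) -> CampaignState:
    # The orchestrator merges each partial update into the shared state,
    # mirroring how a StateGraph applies node outputs.
    for node in nodes:
        state = {**state, **node(state)}
    return state

state = run([intake_brief], {"raw_input": "  launch brief  "})
print(state["brief"])  # -> launch brief
```

In the real system, LangGraph's `interrupt()` plays the role of the pause exception and the SQLite checkpointer makes the paused state resumable across processes.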

The current local graph includes:

  1. intake_brief
  2. product_person
  3. gate_1
  4. architect
  5. gate_2
  6. developer
  7. deployer
  8. gate_3
  9. cost_analyst
  10. gate_4
  11. strategist
  12. creative
  13. gate_5
  14. executor
  15. performance_analyst
  16. monitoring_pause

If performance later triggers content, the graph can route to:

  1. content_draft
  2. gate_6
  3. content_publish

The current local verification suite passes:

138 passed

That matters. Agent 03 no longer claims a build is complete unless the focused local LangGraph suite passes. The latest committed pipeline run was captured before the launch-review gate was added, and its build manifest showed:

tests_total: 135
tests_passed: 135

The latest meaningful run was:

fcc7cdd4-94f1-4e2d-b102-a05a695ec122

It successfully approved through Gate 4 and produced these committed artifacts:

dist/fcc7cdd4-94f1-4e2d-b102-a05a695ec122/build-manifest.json
dist/fcc7cdd4-94f1-4e2d-b102-a05a695ec122/launch/execution.json
dist/fcc7cdd4-94f1-4e2d-b102-a05a695ec122/monitoring/performance.json

The run proved that the current local chain can execute end to end: take a brief through the approval gates and write the build manifest, launch artifact, and performance artifact to disk.

This is a good local foundation.

It is not yet a production ad campaign platform.


What the Latest Run Actually Means

The latest run did not launch Facebook ads.

Agent 09 currently writes a local launch artifact. It does not call Meta Ads. It does not create a real campaign. It does not spend money. The campaign IDs are local IDs like:

LOCAL-fcc7cdd4-V01
LOCAL-fcc7cdd4-V02

That is intentional for the local phase, but the artifact language can still be misleading. The Executor output says status: LAUNCHED, and the audit action is CAMPAIGNS_LAUNCHED, but what actually happened is:

A launch plan was written to disk.

The performance artifact is also correctly conservative. It says:

"data_source": "local_artifact",
"is_real_performance_data": false,
"recommendation": {
"action": "CONTINUE"
}

That means content remains blocked. There are no real impressions, clicks, conversions, spend, CAC, or ROAS because no campaign has run in a real ad platform.
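
The blocking logic can be sketched as a small guard (a hypothetical helper, not the actual CampaignForge code; the rule that `CONTINUE` keeps content blocked is taken from the artifact above, the rest is an assumption):

```python
# Content stays blocked unless the data is real AND the recommendation
# actually asks for content (here: anything other than CONTINUE).
def content_unblocked(perf: dict) -> bool:
    real = perf.get("is_real_performance_data") is True
    action = perf.get("recommendation", {}).get("action")
    return real and action != "CONTINUE"

perf = {
    "data_source": "local_artifact",
    "is_real_performance_data": False,
    "recommendation": {"action": "CONTINUE"},
}
print(content_unblocked(perf))  # -> False
```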

This is the right pause point.

The next mistake would be to manufacture "real" metrics just to keep the graph moving. That would violate the core trust premise of the product.


The Main Gaps

1. We do not have real ad execution yet

The PRD says Agent 09 should call ad platform APIs and create live campaigns. The current Executor explicitly does not call Meta Ads or any external platform API.

That means the current pipeline can create a local campaign plan, but it cannot yet sell "autonomous campaign launch" to a real customer.

This is acceptable for Chapter 4. It is not acceptable for the product promise.

2. We need a launch review gate before any real Meta integration

The original design expected human approval before consequential actions. The previous local graph approved spend at Gate 4 and then immediately ran strategy, creative, executor, and performance analysis.

That is fine while Executor only writes local files. It is not safe once Executor can touch Meta Ads.

Before any real platform integration, the graph needs a campaign launch review gate after creative and before execution. That gate is now present locally as Gate 5.

The current local shape is:

Gate 4: Spend Limit Sign-off
Strategist
Creative
Gate 5: Campaign Launch Review
Executor
Performance Analyst
Monitoring Pause
Content Draft
Gate 6: Content Publish Review
Content Publish

This keeps two approvals separate: Gate 4 approves a spend limit, while Gate 5 approves launching specific campaign assets.

Those are not the same decision.

3. The Content Publisher is too narrow

The project vision includes a publisher agent that documents build progress, marketing progress, sales progress, campaign results, and the full journey.

Current Agent 11 only drafts a LinkedIn-style journey post after performance metrics trigger content. It does not yet publish build progress updates, sales updates, or performance case studies.

Also, the agents/11-content-publisher.md file is currently empty. That is a real gap. Agent 11 has implementation code, but not a strong agent contract.

4. Manual metrics are not a substitute for a real campaign

The manual metrics workflow is useful once a real campaign has run somewhere outside the system. For example, the operator could manually create a Meta campaign from launch/execution.json, let it run, export real platform metrics, and then enter those metrics into the template.

But if no real campaign ran, there are no real metrics.

The generated template originally included plausible placeholder values and marked:

"is_real_performance_data": true

That was risky. The template now defaults to false, so the operator must replace the values and explicitly flip the flag only after verifying that they came from a real platform export.

5. The RAG flywheel is not implemented

The PRD describes a proprietary performance data flywheel: real campaign outcomes from the ad platform feed back into future strategy, creative, and content.

The current system stores performance-shaped artifacts, but Strategist and Creative do not retrieve historical campaign records yet. There is no embedding index, no retrieval step, and no feedback loop into future planning.

The data model is pointed in the right direction. The learning system is still future work.

6. The "self-build" loop is documented, but not autonomous

The journey is real. PRD, ADRs, chapters, commits, traces, artifacts, and pipeline outputs are being preserved. Agent 03 verifies the codebase. The system is documenting its own construction.

But the runtime is not yet autonomously modifying itself. The actual implementation work is still directed by the human operator with Codex doing the engineering work.

That is not a failure. It is just the honest current state.


What This Project Is Right Now

CampaignForge AI is currently:

A local-first, human-gated, multi-agent campaign planning and documentation
system with a working artifact trail.

It is not yet:

A production SaaS platform that launches, monitors, optimizes, and publicly
documents real customer ad campaigns.

The distinction matters because the trust model depends on saying exactly what the system has done.

The current system has earned the right to proceed to the next phase. It has not yet earned the right to claim real campaign execution.


The Plan From Here

Step 1: Fix the safety language and gates

Before building Meta integration, rename and separate the local concepts: the Executor's local output should stop saying status: LAUNCHED, and the audit trail should record an action like LOCAL_LAUNCH_PLAN_WRITTEN instead of CAMPAIGNS_LAUNCHED.

This protects the product from accidentally turning a local proof into a real spend action.
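
A sketch of how the audit trail could enforce that separation (the constants and function are assumptions for illustration, not the current codebase; the LOCAL- ID prefix is from the earlier run):

```python
# The audit action should be derived from the campaign IDs, so a
# local-only run can never record a real-spend action.
LOCAL_AUDIT_ACTION = "LOCAL_LAUNCH_PLAN_WRITTEN"
REAL_AUDIT_ACTION = "CAMPAIGNS_LAUNCHED"

def audit_action_for(campaign_ids: list[str]) -> str:
    if all(c.startswith("LOCAL-") for c in campaign_ids):
        return LOCAL_AUDIT_ACTION
    return REAL_AUDIT_ACTION

print(audit_action_for(["LOCAL-fcc7cdd4-V01", "LOCAL-fcc7cdd4-V02"]))
# -> LOCAL_LAUNCH_PLAN_WRITTEN
```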

Step 2: Harden manual metrics

Manual metrics should be redesigned so fake data cannot accidentally pass as real data.

The template should default to something like:

"is_real_performance_data": false,
"source_export_required": true,
"operator_attestation": "REPLACE_WITH_REAL_PLATFORM_EXPORT"

The ingest command should reject the file unless the operator explicitly marks the data as real and provides a source note.

Manual ingest should be for real externally gathered metrics only.
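
The reject rules can be sketched as a validator (field names come from the template above; the specific checks are illustrative, not the shipped ingest command):

```python
# Reject a manual-metrics file unless the operator has explicitly
# flipped the flag and replaced the attestation placeholder.
def validate_manual_metrics(doc: dict) -> list[str]:
    errors = []
    if doc.get("is_real_performance_data") is not True:
        errors.append("is_real_performance_data must be explicitly true")
    attestation = doc.get("operator_attestation", "")
    if not attestation or attestation == "REPLACE_WITH_REAL_PLATFORM_EXPORT":
        errors.append("operator_attestation must describe the real export source")
    return errors

template = {
    "is_real_performance_data": False,
    "source_export_required": True,
    "operator_attestation": "REPLACE_WITH_REAL_PLATFORM_EXPORT",
}
print(validate_manual_metrics(template))  # the untouched template fails both checks
```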

Step 3: Build the artifact review command

Before any launch approval, the operator needs a concise review surface.

The CLI now has:

.venv/bin/python campaignforge.py --review-artifacts <pipeline_id>

The first version summarizes the build manifest, the launch plan, and the performance artifact for the given pipeline run.

This closes the most important missing operator experience before GATE-5: review no longer requires opening several JSON files manually.
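
An illustrative sketch of what such a review command could do (the artifact paths match the committed files from the earlier run; the summary format itself is an assumption, not the shipped CLI):

```python
import json
from pathlib import Path

# Read the three per-run artifacts and print a one-line status for each,
# so the operator does not have to open the JSON files by hand.
def review_artifacts(pipeline_id: str, root: Path = Path("dist")) -> str:
    base = root / pipeline_id
    lines = [f"Pipeline {pipeline_id}"]
    for rel in ("build-manifest.json",
                "launch/execution.json",
                "monitoring/performance.json"):
        path = base / rel
        if not path.exists():
            lines.append(f"  MISSING  {rel}")
            continue
        doc = json.loads(path.read_text())
        lines.append(f"  OK       {rel} ({len(doc)} top-level keys)")
    return "\n".join(lines)
```

A fuller version would also surface spend limits, campaign IDs, and the `is_real_performance_data` flag before the gate.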

Step 4: Define Agent 11 properly

Agent 11 needs a real contract, not just implementation code.

It should support at least three content modes:

  1. Build progress update
  2. Campaign journey update
  3. Performance case study

Publishing should remain gated. Drafting can happen locally. Live publishing should require explicit approval and credentials.

The first version can write drafts to dist/<pipeline_id>/content/ only. Live LinkedIn publishing can stay behind the existing token check.
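
The draft-locally / publish-behind-a-gate split can be sketched like this (the environment variable name and function shapes are assumptions, not the current implementation):

```python
import os
from pathlib import Path

# Drafting is always safe: it only writes a local file under dist/.
def draft_post(pipeline_id: str, text: str, root: Path = Path("dist")) -> Path:
    out = root / pipeline_id / "content"
    out.mkdir(parents=True, exist_ok=True)
    path = out / "draft.md"
    path.write_text(text)
    return path

# Publishing requires BOTH explicit approval and configured credentials.
def publish_post(path: Path, approved: bool) -> str:
    if not approved:
        return "BLOCKED: explicit human approval required"
    if not os.environ.get("LINKEDIN_ACCESS_TOKEN"):
        return "BLOCKED: no publishing credentials configured"
    return "would call the live publishing API here"
```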

Step 5: Run the first real manual launch

Before automating Meta Ads, do one carefully controlled manual launch:

  1. Review artifacts with --review-artifacts
  2. Generate the Meta sandbox/manual packet with --meta-manual-launch
  3. Manually create paused campaign objects in Meta Ads Manager or a sandbox/test ad account
  4. Fill the receipt template with real Meta IDs
  5. Use a small budget only after a separate launch decision
  6. Let it run long enough to generate signal
  7. Export real metrics
  8. Ingest those metrics
  9. Let Agent 11 draft the public update
  10. Review and approve the post

This tests the business loop without giving the agent direct spend authority.
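
Step 4 of the loop can be backed by a simple receipt check (field names are assumed; the check leans on the fact that Meta campaign IDs are numeric strings, unlike the LOCAL- placeholders from the local run):

```python
# Reject a receipt that still contains local placeholder IDs instead of
# real numeric Meta campaign IDs.
def receipt_looks_real(receipt: dict) -> bool:
    ids = receipt.get("meta_campaign_ids", [])
    return bool(ids) and all(
        isinstance(i, str) and i.isdigit() for i in ids
    )

print(receipt_looks_real({"meta_campaign_ids": ["LOCAL-fcc7cdd4-V01"]}))  # -> False
```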

Step 6: Only then build Meta Ads API execution

Once the manual dogfood loop works, build the real Meta integration.

The integration should start in a constrained mode: paused campaign creation only, a sandbox or test ad account, and small, capped budgets.

The product can then move from local planning to real campaign execution.

Step 7: Start the performance flywheel

After real campaigns exist, the data flywheel becomes meaningful.

The first version does not need complex fine-tuning. It needs reliable retrieval: store real campaign records, index them, and let Strategist and Creative pull relevant history before planning.

This is where the SaaS moat begins.
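
A minimal retrieval sketch, stdlib only and deliberately simpler than an embedding index (record fields and the token-overlap scoring are assumptions for illustration):

```python
# Score past campaign records against a new brief by token overlap, so
# planning agents can see the most relevant history first. A production
# version would use embeddings; the interface is the same.
def retrieve_similar(brief: str, records: list[dict], k: int = 3) -> list[dict]:
    query = set(brief.lower().split())
    def score(rec: dict) -> int:
        return len(query & set(rec["summary"].lower().split()))
    return sorted(records, key=score, reverse=True)[:k]

records = [
    {"id": "A", "summary": "b2b saas launch on meta ads with low CAC"},
    {"id": "B", "summary": "consumer retail holiday campaign"},
]
best = retrieve_similar("meta ads launch for a saas product", records, k=1)
print(best[0]["id"])  # -> A
```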


What Success Looks Like for the Next Chapter

Chapter 5 should not be "we pretended to have metrics and published a post."

Chapter 5 should be:

We separated local planning from real launch, added the missing launch review
gate, hardened manual metrics, and made the artifact review experience usable.

The next strong proof is not automation. It is clarity.

The system must make it impossible for the operator, the reader, or a future customer to confuse a local launch plan written to disk with a real campaign launched on an ad platform, or local artifact data with real performance data.

Once those boundaries are solid, CampaignForge can safely move toward the real dogfood loop.


Current State Summary

What exists: a local, human-gated, multi-agent pipeline with SQLite checkpointing, a passing verification suite, and a committed artifact trail.

What does not exist yet: real ad execution, real performance data, a full Content Publisher contract, the retrieval flywheel, and an autonomous self-build loop.

The project is not off track. It is earlier than it looks.

The right next move is to make the boundaries explicit, then run one real, small, manually launched dogfood campaign before giving the agents direct access to ad platform APIs.

This post was drafted by AI and reviewed by the operator. Content is published as part of the CampaignForge AI build-in-public journey.