Documentation
Get started with ACO
ACO is an autonomous conversion optimizer. It generates hypotheses about your landing page, runs A/B tests via a lightweight JS snippet, measures real traffic, and recommends what to keep — without touching your codebase or requiring access to your infrastructure.
Quick Start
From sign-up to a live A/B test in under ten minutes. You need a publicly accessible page and at least one AI provider key (Anthropic is recommended).
Create a campaign
Click New campaign in the dashboard. Set your target URL, write a short aco.md brief (see the Campaigns section for a template), and configure your conversion URL — the path visitors land on after they convert, e.g. /thank-you.
Install the snippet on your page
Go to your campaign Settings and copy the script tag from the Snippet installation panel. Paste it into the <head> of every page you want ACO to optimize.
<script src="https://cdn.aco.run/v1/camp_abc123.js" async defer></script>
Verify the install: visit your page and open DevTools → Application → Cookies. You should see an __aco_vid cookie. That means the snippet loaded and assigned this browser a visitor ID. No cookie = snippet not loading — check the script tag is in <head>, not deferred by a tag manager.
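The cookie check can also be scripted. Below is a minimal sketch; the helper name hasAcoCookie is ours, and only the __aco_vid cookie name comes from the verification step above. In a browser console you would call it as hasAcoCookie(document.cookie).

```javascript
// Minimal install check (helper name is ours, not an ACO API).
// Pass document.cookie when running in the browser.
function hasAcoCookie(cookieString) {
  return cookieString
    .split('; ')
    .some((c) => c.startsWith('__aco_vid='));
}
```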
Run your first cycle
Click Run cycle on your campaign page. ACO will:
- Fetch and screenshot your live page
- Generate one hypothesis using Cialdini persuasion principles
- Take a before/after screenshot and run the visual regression gate
- If the variant passes, activate the A/B test via your snippet
This takes 1–3 minutes depending on page complexity. The experiment appears in the Experiments tab as soon as it's activated. Your real visitors immediately start being split between control and variant.
Wait for traffic, then decide
ACO updates experiment stats every 15 minutes as your snippet reports visitor and conversion events. Once you have enough data, the verdict changes from insufficient data to a result you can act on.
Open the experiment, review the screenshots, the AI's reasoning, and the live stats. Approve to keep the variant, or Reject to revert all visitors to the control immediately. See Reading Your Results for guidance on when to approve vs. wait.
How ACO Works
Each run cycle follows the same four-step loop. You trigger it manually or it runs on a schedule — either way, the loop is fully automated.
Observe
ACO fetches your target URL and takes a full-page Playwright screenshot. If you have a PostHog or Mixpanel integration, real user behavior data — pageviews, top events, click patterns — is fetched and included as context. This grounds the hypothesis in what your actual visitors are doing, not just what the page looks like.
Hypothesize
Using Cialdini's six persuasion principles (reciprocity, commitment, social proof, authority, liking, scarcity) and your aco.md brief, ACO reasons about which specific element is most likely limiting conversions. It proposes one targeted, minimal text change per cycle — never multi-variate, never layout changes.
Visual Regression Gate
Before any real visitor sees the variant, ACO takes a Playwright screenshot of the proposed change against your live production URL and compares it pixel-by-pixel against the control. If the pages look substantially different (>15% pixel diff by default), the variant is rejected automatically and recorded as regression_failed. Only changes that pass this gate are activated.
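The gate's core logic can be sketched in a few lines. This is an illustration of the pixel-diff idea only, not ACO's actual implementation; real screenshot comparison operates on decoded image buffers of equal dimensions.

```javascript
// Illustrative sketch only, not ACO's implementation.
// `control` and `variant` are flat arrays of pixel values (e.g. RGBA bytes).
function pixelDiffRatio(control, variant) {
  if (control.length !== variant.length) return 1; // treat size mismatch as maximal difference
  let differing = 0;
  for (let i = 0; i < control.length; i++) {
    if (control[i] !== variant[i]) differing++;
  }
  return differing / control.length;
}

// A variant passes the gate when the diff stays at or below the threshold (15% by default).
function passesGate(control, variant, threshold = 0.15) {
  return pixelDiffRatio(control, variant) <= threshold;
}
```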
Deploy & Measure
The passing variant is activated in under 60 seconds via the ACO snippet. Your real visitors start being split between control and variant immediately. The snippet reports impression and conversion events back to ACO, which aggregates stats every 15 minutes and updates the verdict. When the result is statistically clear, ACO notifies you (or auto-concludes, if autonomous mode is on).
By default, ACO requires your approval before recording a final decision on any experiment. The variant is live and collecting real traffic — you just decide whether to keep it or revert. Autonomous mode (auto-keep on significant wins, auto-discard on significant losses) is available on Growth and Scale plans.
Campaigns
A campaign targets one URL and optimizes for one conversion goal. Run separate campaigns for different pages or different goals on the same page.
Writing a good aco.md
The aco.md is a plain-English brief that shapes every hypothesis ACO generates. A vague brief produces generic suggestions. A specific brief produces targeted, on-brand changes. Include your goal, who your visitors are, your brand voice, and any hard constraints.
## Goal
Increase free trial sign-ups on the homepage hero.
## Audience
Early-stage SaaS founders — technical, skeptical of marketing claims.
They've seen a lot of hype. Direct evidence beats superlatives.
## Brand voice
Direct. No exclamation marks. No "revolutionary" or "game-changing".
Short sentences. Use numbers where possible.
## Constraints
- Headline must stay under 10 words
- Do not change the navigation, pricing section, or footer
- Maintain the existing CTA button color (#2b5945)
## Current context
~2.1% conversion rate (170 sign-ups / 8,000 monthly visitors)
Main friction: visitors don't understand what the product does in 5 seconds
Target URL
The page ACO will observe and optimize. Must be publicly accessible — ACO uses Playwright to screenshot it. Use your production URL, not a staging URL, to get accurate visual regression results.
aco.md
A plain-English brief describing your goal, audience, brand voice, and constraints. The richer and more specific this is, the more targeted and on-brand the hypotheses. See the template above.
Traffic allocation
What percentage of visitors see the variant (1–50%). Start at 5% and increase only when you have enough monthly traffic. At 5%, 10,000 monthly visitors = 500 visitors/month in the variant arm — enough for meaningful results within a few weeks on most conversion rates.
Conversion URLs
URL paths that count as a conversion for this campaign, e.g. /thank-you or /welcome. Add every path that signals a completed goal — ACO records a conversion when any bucketed visitor navigates to one of these paths. Use path prefixes only (not full URLs).
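Prefix matching means the conversion check reduces to a startsWith test. A sketch of the assumed behavior (the function name isConversion is ours):

```javascript
// Assumed behavior: a visitor converts when the current pathname
// starts with any configured conversion path prefix.
function isConversion(pathname, conversionPaths) {
  return conversionPaths.some((prefix) => pathname.startsWith(prefix));
}
```

Note that a prefix like /thank-you also matches /thank-you/step-2, which is usually what you want for multi-step confirmation pages.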
Require approval
When on (default), experiments collect traffic and show stats, but ACO waits for you to approve or reject before concluding. When off (Growth+), ACO auto-concludes on significant win/loss verdicts. Borderline results always require human review regardless of this setting.
Daily budget
Maximum AI spend per day across all cycles for this campaign. One cycle typically costs $0.02–$0.08 using Claude Sonnet, depending on page complexity. The campaign pauses automatically if this limit is hit.
The ACO Snippet
A small (~2 KB gzipped) JavaScript file served from ACO's global edge network. Add it once — ACO handles everything else: bucketing, variant delivery, event collection. No server-side changes, no Cloudflare account, no Git access.
Installation
Copy the script tag from your campaign settings and paste it into the <head> of every page you want ACO to optimize. It loads with async defer and will never block your page from rendering.
<!-- Paste once per page, inside <head> -->
<script src="https://cdn.aco.run/v1/camp_abc123.js" async defer></script>
Platform-specific notes
Next.js / React
Paste the script tag in app/layout.tsx inside the <head> tag. Because the snippet uses async defer, it will not block server-side rendering or hydration.
Shopify
Add the script tag to your theme's theme.liquid file, inside the <head> section. Do not use a script app embed — it may load too late.
Webflow
Go to Project Settings → Custom Code → Head Code. Paste the script tag there and publish your site.
WordPress
Use a plugin like Insert Headers and Footers (WPCode) to add the script to the site-wide <head>. Avoid using wp_footer — conversion events on the thank-you page need the snippet loaded before navigation.
Tag managers (GTM)
Possible but not recommended. Tag managers fire after page load, which can cause a flash of original content (FOOC) before the variant is applied. Direct <head> installation is always preferred.
The token in the snippet URL is a public identifier, not a secret. It is safe to commit to your repository or include in page source. It cannot be used to write data to your campaign or access sensitive information.
GitHub Audit Trail
Every accepted experiment creates a pull request in an ACO-owned private GitHub repository. You can see exactly what copy changed, review the full page diff, and optionally merge the change into your own codebase — without giving ACO access to your repo.
What you get per experiment
1. When a run cycle completes and a variant passes the visual regression gate, ACO fetches your live page HTML and applies the proposed text change to produce a control version and a variant version.
2. Both are committed to a new branch in an ACO-owned private GitHub repo created for your campaign. The branch name includes the cycle number, experiment ID, and date — e.g. run-3-exp-a1b2c3d4-20260315 — so you can trace it back to the exact run.
3. A pull request is opened from that branch. The PR title mirrors the hypothesis category and text. The PR body includes the Cialdini principle, the full AI reasoning, the target URL, and the experiment ID.
4. The PR link appears in the experiment detail view under GitHub PR. Click it to see the line-by-line diff of exactly what was changed, with full page context around the edit.
The repo lives in the ACO org — you do not need a GitHub account to use ACO, but the PR link in each experiment's detail view is always accessible if you want the detailed diff.
Approval Workflow
When Require approval is enabled (the default), experiments collect real traffic and show stats in the dashboard — but ACO waits for a human sign-off before recording a final keep or discard decision.
How to review
1. Open your campaign in the dashboard and go to the Experiments tab. Experiments waiting for a decision show a pending_approval badge.
2. Click an experiment to open the detail view. Read the AI's reasoning, look at the before/after screenshots and the pixel diff, and check the live visitor counts and current verdict.
3. Check the GitHub PR link (if shown) to see the full page diff — useful if you want to verify the exact text that was changed in context.
4. Click Approve to mark the experiment as kept. The variant stays live. Click Reject to immediately deactivate the variant and revert all visitors to the control — the experiment is marked discarded.
Autonomous mode
On Growth and Scale plans you can turn off Require approval. In autonomous mode, ACO concludes experiments automatically when the statistical verdict is clear: significant win → kept, significant loss → discarded, variant deactivated. Borderline and flat results are never auto-shipped — they wait for your review regardless of the setting.
Rejected experiments are permanently recorded in your campaign history. ACO avoids proposing the same change pattern again within the same campaign after a rejection.
Reading Your Results
ACO evaluates every experiment using both a frequentist z-test and a Bayesian false-positive risk estimate. The verdict is shown in plain English — here is what each one means and what you should do.
Insufficient data
Not enough visitors to draw any reliable conclusion. The experiment is running correctly — you just need more traffic. Check back in a few days.
→ Wait. Do not approve or reject yet.
Flat
No statistically significant difference between control and variant. The proposed change did not move conversions in either direction.
→ Reject. Shipping a flat result adds complexity with no benefit. ACO will propose a different approach next cycle.
Borderline win
The variant is winning (p < 0.05) but the false positive risk is meaningful at this threshold. Per Kohavi: a p-value between 0.01 and 0.05 has a ~20% chance of being noise given typical experiment failure rates.
→ Use your judgement. If the change makes intuitive sense and the effect is modest, approving is reasonable. If the effect looks surprising, wait for more data or run a replication.
Significant win
The variant shows a statistically significant improvement (p < 0.01). The Bayesian false positive risk is low given your campaign's historical failure rate.
→ Approve. Check the Bayesian FP risk shown — if it's below 10%, this is a strong result.
Significant loss
The variant is significantly worse than the control. The proposed change is hurting conversions.
→ Reject immediately. The variant is already live — every hour it stays active costs you conversions.
Twyman's warning
The result is statistically significant, but the effect size is unusually large (>50% relative lift). Per Twyman's Law: "Any figure that looks interesting or unusual is usually wrong." This often indicates a data pipeline bug, bot traffic, or a misconfigured conversion URL.
→ Investigate before approving. Check your snippet is not double-counting conversions, and verify the conversion URL path is correct.
Invalid SRM
Sample Ratio Mismatch — the observed traffic split differs significantly from the configured split. The conversion data cannot be trusted because the randomization is broken.
→ Reject. Common causes: a caching layer serving the page before the snippet loads, a bot inflating one arm, or the snippet tag missing from some pages. Fix the root cause and re-run the experiment.
How long should I run an experiment?
Run until you have at least 1,000 visitors per arm and a clear verdict. Stopping earlier risks acting on random noise.
ACO shows a Sample Ratio Mismatch (SRM) warning if the observed traffic split doesn't match the configured split. An SRM usually means a bot, a caching layer, or the snippet loading inconsistently. Always investigate an SRM before approving — the conversion data may be corrupted.
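An SRM check for a two-arm experiment is a standard one-degree-of-freedom chi-squared test on the observed visitor counts. A sketch, assuming a helper name and shape of our own choosing; a statistic above roughly 3.84 corresponds to p < 0.05 at one degree of freedom:

```javascript
// Sketch of a two-arm SRM check (one degree of freedom). Function name
// and parameter shape are ours, not ACO's API. A statistic above ~3.84
// suggests the observed split doesn't match the configured one.
function srmChiSquared(observedControl, observedVariant, expectedControlShare) {
  const total = observedControl + observedVariant;
  const expControl = total * expectedControlShare;
  const expVariant = total * (1 - expectedControlShare);
  return (observedControl - expControl) ** 2 / expControl +
         (observedVariant - expVariant) ** 2 / expVariant;
}
```

For example, 600 vs. 400 visitors under a configured 50/50 split gives a statistic of 40, far beyond the 3.84 cutoff, so the experiment would be flagged.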
A/B Testing & Statistics
ACO uses the installed snippet to split traffic client-side — no server changes, no Cloudflare configuration needed. Every visitor is deterministically bucketed using MurmurHash3, so the same person always sees the same experience.
Traffic splitting
When an experiment is activated, the snippet fetches the variant config from ACO's edge (globally cached, sub-5 ms). It hashes visitorId + experimentId to assign the visitor to control or variant. The assignment is sticky across sessions (stored in a first-party cookie). The DOM patch is applied before rendering — no visible flicker.
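Deterministic bucketing can be sketched as follows. ACO uses MurmurHash3; the FNV-1a hash below is a short stand-in, and the function names are ours. The key property is that the same visitorId and experimentId always map to the same arm, with no server-side state.

```javascript
// Illustrative sketch. ACO uses MurmurHash3; FNV-1a is substituted here
// to keep the example short. Function names are ours.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // 32-bit FNV prime multiply
  }
  return h >>> 0;
}

// Same visitorId + experimentId always yields the same bucket (0-99),
// so assignment is sticky by construction.
function assignArm(visitorId, experimentId, variantPercent) {
  const bucket = fnv1a(visitorId + ':' + experimentId) % 100;
  return bucket < variantPercent ? 'variant' : 'control';
}
```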
How significance is calculated
Every 15 minutes, ACO aggregates conversion counts and runs two checks:
- Z-test (frequentist): two-proportion z-test to compute a p-value. ACO requires p < 0.01 for a "significant win" (not 0.05) to reduce false positive risk.
- Bayesian false positive risk: given your campaign's historical experiment failure rate, what is the true probability this result is noise? Shown alongside the p-value.
- SRM check: chi-squared test to detect corrupted data. If triggered, the experiment is flagged invalid.
- Twyman's Law: if the effect is implausibly large (>50% relative lift), ACO flags it for investigation before any action.
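The first two checks can be sketched in a few lines. This is a textbook two-proportion z-test plus the Kohavi-style false-positive-risk formula, with α, power, and prior failure rate as explicit inputs — an illustration under stated assumptions, not ACO's exact code.

```javascript
// Abramowitz-Stegun polynomial approximation of the normal CDF for x >= 0;
// adequate for a sketch.
function normalCdf(x) {
  const t = 1 / (1 + 0.2316419 * x);
  const d = Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI);
  const poly = t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 +
               t * (-1.821255978 + t * 1.330274429))));
  return 1 - d * poly;
}

// Two-proportion z-test on pooled conversion rates.
function zTest(convControl, nControl, convVariant, nVariant) {
  const pC = convControl / nControl;
  const pV = convVariant / nVariant;
  const pPool = (convControl + convVariant) / (nControl + nVariant);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nControl + 1 / nVariant));
  const z = (pV - pC) / se;
  const p = 2 * (1 - normalCdf(Math.abs(z))); // two-sided p-value
  return { z, p };
}

// Of all results that reach significance, what fraction are noise,
// given a prior experiment failure rate? (Kohavi's false positive risk.)
function falsePositiveRisk(priorFailureRate, alpha = 0.05, power = 0.8) {
  const falsePositives = priorFailureRate * alpha;
  const truePositives = (1 - priorFailureRate) * power;
  return falsePositives / (falsePositives + truePositives);
}
```

Plugging in an 80% prior failure rate with α = 0.05 and 80% power gives a false positive risk of 20%, which matches the figure quoted for young SaaS campaigns elsewhere in these docs.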
Bayesian Multi-Armed Bandit
When you run multiple experiments across campaigns, ACO uses Thompson Sampling to allocate budget across cycles — spending more on campaign setups that are producing results and less on those that are plateauing. This is separate from per-experiment significance; it operates at the campaign level over time.
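Thompson Sampling itself is simple to sketch. The code below is illustrative, not ACO's internals: each campaign gets a Beta posterior over its win rate, one draw is taken from each, and the next cycle's budget goes to the highest draw. The integer-parameter Beta sampler uses the order-statistic trick (the a-th smallest of a+b-1 uniforms).

```javascript
// Illustrative sketch of Thompson Sampling, not ACO's internals.
// For integer a, b: a Beta(a, b) draw is the a-th smallest of a+b-1 uniforms.
function sampleBetaInt(a, b) {
  const u = Array.from({ length: a + b - 1 }, Math.random).sort((x, y) => x - y);
  return u[a - 1];
}

// Each campaign keeps a Beta(wins+1, losses+1) posterior over its success
// rate; the next cycle goes to the campaign with the highest sampled value.
function pickCampaign(campaigns) {
  let best = null;
  let bestDraw = -1;
  for (const c of campaigns) {
    const draw = sampleBetaInt(c.wins + 1, c.losses + 1);
    if (draw > bestDraw) {
      bestDraw = draw;
      best = c.id;
    }
  }
  return best;
}
```

Because draws are random, a plateauing campaign still gets occasional budget — that residual exploration is the point of Thompson Sampling versus a pure greedy allocation.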
Analytics Integrations
Connect your analytics platform to give ACO real user behavior data before each cycle. ACO fetches pageviews, top events, and observed friction points — and includes them as context when generating hypotheses. This grounds suggestions in what your actual users are doing, not just what the page structure suggests.
PostHog
Connect your PostHog project to give ACO pageview counts, top custom events, and most-clicked elements for your target URL. ACO uses HogQL queries via your Personal API key — read-only, no write permissions required. Works with PostHog Cloud and self-hosted instances.
Configure in your campaign settings → Analytics integration.
Provider: PostHog
Personal API key: phx_... (Settings → Personal API keys → Read events scope)
Project ID: 12345 (Settings → General → Project ID)
Self-hosted URL: https://posthog.mycompany.com (leave blank for cloud)
Mixpanel
Connect your Mixpanel project to pull pageviews, unique visitors, and top business events for your target URL. ACO uses your Project Secret for HTTP Basic auth — this is a server-side credential, never exposed to visitors.
Configure in your campaign settings → Analytics integration.
Provider: Mixpanel
Project Secret: (Settings → Project Settings → Project Secret)
Project ID: (Settings → Project Settings → Project ID)
Analytics integrations are optional. ACO works without them — the hypothesis engine uses page structure and your aco.md. Analytics data makes hypotheses more targeted by surfacing where real users are dropping off.
Experiment Detail — Field Guide
The experiment detail view shows everything you need to make a decision. Here is what each field means.
Hypothesis
The Cialdini principle applied and the AI's reasoning for why this change should increase conversions. Read this critically — if the reasoning doesn't make sense for your audience, reject the experiment even if the statistics look good.
Control / Variant
Playwright screenshots of your live production page in control and variant state, taken before any real visitor saw the change. A pixel diff image highlights what changed visually. If the variant screenshot looks broken or substantially different from what you expected, reject regardless of stats.
Visual similarity
Percentage of pixels that are identical between control and variant (0–100%). ACO automatically rejects variants below 85% similarity before they ever go live. A passing score doesn't mean the change looks right — always review the screenshots yourself.
GitHub PR
A pull request in an ACO-owned repo showing control.html vs variant.html as a line-by-line diff. Click to verify the exact copy change in context — useful when the screenshot doesn't show the full text clearly.
Proposed change
The exact text that was changed: original wording and replacement. ACO always makes minimal, single-element edits — never rewrites multiple sections in one experiment.
Visitors / Conversions
Raw traffic data per arm, updated every 15 minutes. If both arms have zero conversions after significant traffic, your conversion URL may be misconfigured — check it matches the exact path visitors land on after completing the goal.
p-value
Two-proportion z-test result. ACO requires p < 0.01 (not 0.05) for a significant win verdict — this reduces the false positive rate from ~20% to ~5% at industry-average experiment failure rates.
Bayesian FP risk
Given your campaign's historical experiment failure rate, the estimated probability that this result is a false positive. At 80% failure rate (typical for young SaaS), a p < 0.05 result still has ~20% false positive risk. ACO shows this number so you can weigh it alongside the p-value.
Verdict
Plain-English summary: insufficient data, flat, borderline win, significant win, significant loss, Twyman's warning, or invalid SRM. See the Reading Your Results section for what each verdict means and what to do.
AI cost
Exact LLM cost in USD for generating this hypothesis. Shown per experiment and summed at campaign level so you can track total AI spend.
Self-Hosting (CLI)
The ACO CLI is open source (Apache-2.0) and runs without a SaaS subscription. It generates hypotheses, applies diffs, runs visual regression locally, and produces a results log you can review manually.
Install and run
npm install -g @aco/cli
# Run a single conservative cycle against your page
aco run --url https://yoursite.com --conservative
# Run an audit (no changes — just a hypothesis report)
aco audit --url https://yoursite.com
Required environment variables
# AI provider — at least one required
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-... # fallback if Anthropic unavailable
# Budget guard (default $2.00)
ACO_MAX_USD_PER_RUN=0.50
The SaaS dashboard adds: A/B testing via snippet, real traffic stats, approval workflow, GitHub audit trail, Bayesian significance evaluation, and analytics integration. The CLI is best for generating and reviewing proposed changes locally before committing to a live test.
Frequently Asked Questions
Can I see exactly what ACO changed on my page?
Yes — two ways. First, the experiment detail view shows the exact text that was replaced (original and replacement). Second, the GitHub PR link shows a full page diff (control.html vs variant.html) with every line around the change for context. You don't need to give ACO access to your own repo.
Does ACO need access to my codebase or Git repository?
No. ACO delivers variants through a JS snippet you add to your site once. It applies changes as DOM text patches in the visitor's browser — no access to your source code, Git repository, or hosting provider is required.
Does ACO need access to my Cloudflare account?
No. Traffic splitting is handled entirely by the ACO snippet running in the visitor's browser. You do not need to route your domain through Cloudflare or configure any worker routes.
What happens when I reject an experiment?
The variant is deactivated immediately — all visitors revert to the control within seconds. The experiment is recorded as discarded in your campaign history. ACO avoids proposing the same change pattern again within this campaign. There is no partial rollback or gradual ramp-down.
How much traffic do I need to run meaningful experiments?
A rough minimum: 5,000 monthly visitors. Below that, only very large effects (>20% relative lift) are detectable before significance is drowned in noise. With 5,000–20,000 monthly visitors, set traffic allocation to 10–20% to reach significance faster. Above 50,000 monthly visitors, the default 5% is fine.
How long does an experiment take to reach a verdict?
It depends on your traffic and conversion rate. A campaign with 20,000 monthly visitors, a 2% conversion rate, and 5% traffic allocation will have ~1,000 visitors and ~20 conversions per arm after one month. That's usually enough for a borderline verdict. Double the traffic allocation to halve the time.
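The arithmetic from that example, written out (the helper name is ours; it computes the variant arm only, under the stated assumptions):

```javascript
// Worked example from the paragraph above (illustrative helper, name is ours).
function variantArmAfterOneMonth(monthlyVisitors, conversionRate, trafficAllocation) {
  const visitors = monthlyVisitors * trafficAllocation;  // visitors bucketed into the variant arm
  const conversions = visitors * conversionRate;         // expected conversions in that arm
  return { visitors, conversions };
}

// 20,000 monthly visitors, 2% conversion rate, 5% allocation
// gives roughly 1,000 variant-arm visitors and ~20 conversions per month.
```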
The variant has been live for two weeks and still shows 'insufficient data'. What should I do?
Check three things: 1) Is the snippet loading on your page? (check for the __aco_vid cookie in DevTools). 2) Is your conversion URL correct? (visit the thank-you page manually and check the Network tab for ACO events). 3) Do you have enough traffic? Consider increasing traffic allocation to 15–20% to accumulate data faster.
How much does each cycle cost?
One cycle using Claude Sonnet typically costs $0.02–$0.08, depending on page complexity. The exact token spend is shown per experiment and summed per campaign. Your daily budget setting caps spend automatically.
Can I run ACO against a staging environment?
Yes. Set the campaign URL to your staging URL and install the snippet on staging. When you're ready for production, update the campaign URL and install the snippet on production. Campaign history and settings carry over.
Does ACO work with Next.js, Shopify, Webflow, or other platforms?
Yes — any platform that lets you add a <script> tag to the page head. ACO patches visible text in the DOM and does not know or care about your framework or CMS. See the platform notes in the Snippet section for specifics.
Is the snippet token a secret?
No. The token in the snippet URL (camp_abc123) identifies which campaign's variants to serve. It is safe to commit to your repository, include in page source, or share publicly. It cannot be used to write data to your campaign or access sensitive information.
What license does the ACO CLI use?
The ACO CLI is open source under the Apache License, Version 2.0. You are free to use, modify, and distribute it — including commercially — with attribution. The SaaS platform (dashboard, API, managed infrastructure) is proprietary software owned by BlocWeave. See the /license page for full details.
Ready to start optimizing?
Get early access and run your first cycle today.