AI VISIBILITY · SAMPLED HONESTLY

Find out what AI tells your customers.

Sample what ChatGPT, Claude, Gemini, Grok, and Perplexity actually say when someone asks about your category. Mention rate, sentiment, citations — with the sample size visible so the numbers mean something. Per-call billing. No subscription. No prompt cap.

7
AI models sampled
$0.0012
cheapest per call
$0
monthly minimum

THE HONEST PITCH

Every AI citation tool is selling a fantasy.

There is no Search Console for AI search. No vendor — not Otterly, not Profound, not the $399/month tool you're evaluating — has a privileged feed of "what AI actually said about your brand this week." It doesn't exist.

Every tool, including this one, runs queries against the same public AI APIs and parses the response. They wrap it in a subscription, cap your prompt volume, invent a proprietary “visibility score”, and call the result tracking. It isn't. It's sampling. The underlying mechanic costs pennies.

FoundryVis does the same thing. We charge a flat 15% on top-ups (which funds the service) and pass through API costs at exact provider price — zero per-call markup. We show the math before you spend and refund unused estimates. The product is the honest version of what they're already selling you.

FEATURES

Built for marketers who actually want to ship the change.

Monitoring is the easy part. Knowing what to do with the data is the moat.

01

Saved configurations

Build "Brand Queries" once. Run it weekly.

Bundle queries + models + sample size into named, repeatable runs. Trends and results group by configuration, so "Brand Queries" tracks against itself across weeks instead of fragile per-query line charts. Schedule support is in the schema; the worker lands in v1.x.

02

Mention rate, sentiment, citations

Real numbers. Sample size visible.

For every (query × model) pair, we run N samples (5 by default — single samples are noise). You see mention rate, sentiment of each mention (positive / neutral / negative classified by a fast model on the snippet), and the actual citation URLs. Trends show drift over time with sample-size annotations.
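The mention-rate side of this is simple enough to sketch. A minimal illustration in Python — the sampled responses, the brand name, and the 3/5 result are all invented example data, not output from any real model:

```python
import re

def mention_rate(responses: list[str], brand: str) -> tuple[int, int]:
    """Count how many of N sampled responses mention the brand (case-insensitive)."""
    pattern = re.compile(re.escape(brand), re.IGNORECASE)
    hits = sum(1 for r in responses if pattern.search(r))
    return hits, len(responses)

# Five hypothetical samples of the same (query x model) pair:
samples = [
    "Top picks: Unbounce, FoundryVis, and Instapage.",
    "Consider Unbounce or Leadpages.",
    "FoundryVis is a solid budget option.",
    "Most marketers use Unbounce.",
    "Try foundryvis for honest pricing.",
]
hits, n = mention_rate(samples, "FoundryVis")
print(f"{hits}/{n}")  # → 3/5
```

Sentiment classification is a separate LLM pass over each matching snippet; only the counting shown here is deterministic.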

03

Google Search Console integration

Sample what you already rank for.

Connect GSC at signup. Pull your top queries by impressions, filter by page, sort by position. One click adds them to your saved-query library. Stop guessing what to monitor — use the queries Google already says you have visibility for.

04

"Why wasn’t I cited?" diagnosis

Monitoring → strategy.

When AI cites competitors and not you, click "Why?". We scrape the cited pages + your matching page, and a Claude Sonnet pass produces a specific report: what those competitors have, what you don’t, ranked actions. Two flavors: page-vs-page, or brand-only diagnosis. Cost shown before you spend.

05

Native search per model

What real users actually see.

GPT-5.4 with OpenAI’s browse tool. Claude with Anthropic’s web search. Gemini with Google grounding. Perplexity with its own search. We never bolt one provider’s search onto another model — that creates response shapes no real consumer encounters. Grok runs vanilla because xAI doesn’t expose a search tool.

06

Robots.txt audit (free, no signup)

Are you blocking the AI crawlers?

Many sites silently block GPTBot, ClaudeBot, or PerplexityBot in robots.txt — sometimes intentionally, sometimes by accident. We check your domain against every known AI crawler and tell you which are allowed, blocked, or unspecified. Free tool, lives at the bottom of this page.
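The audit boils down to parsing robots.txt once per known crawler. A sketch using Python's standard-library parser — the bot list is partial and the robots.txt content is a made-up example, not any real site's file:

```python
from urllib.robotparser import RobotFileParser

# A partial, illustrative list of AI crawler user-agents.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot", "Google-Extended"]

# Example robots.txt content; the real tool fetches this from
# the target domain's /robots.txt.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def audit(robots_txt: str, bots: list[str] = AI_BOTS) -> dict[str, str]:
    """Return {bot: 'allowed' | 'blocked'} for the site root."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: ("allowed" if parser.can_fetch(bot, "/") else "blocked")
            for bot in bots}

print(audit(ROBOTS_TXT))
```

One caveat the standard parser shares with real crawlers: a bot with its own `User-agent` group ignores the `*` group entirely, which is exactly how sites end up blocking GPTBot by accident.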

A run summary, sketched

Query | Model | Mentioned | Sentiment
best landing page tools for B2B | claude-sonnet-4.6 | 4/5 | ●3 ●1
best landing page tools for B2B | gpt-5.4 | 2/5 | ●2
best landing page tools for B2B | perplexity/sonar | 0/5 |

Click any row to expand. Click “Why wasn’t I cited?” to get a specific gap diagnosis vs the actual competitors the AI surfaced.

PRICING

15% on top-ups. Zero per-call markup.

Drop $25, we keep $3.75, you get $21.25 in credits. Spend those credits on API calls at exact provider cost — no per-call upcharge, ever. Minimum top-up $10. The 15% is our entire business model; it doesn't depend on whether you run anything.

FoundryVis — per call estimate

Model | Per call
GPT-5.4 (openai/gpt-5.4) | $0.1200
Claude Sonnet 4.6 (anthropic/claude-sonnet-4.6) | $0.1248
Claude Opus 4.7 (anthropic/claude-opus-4.7) | $0.1920
Gemini 3.1 Pro (google/gemini-3.1-pro-preview) | $0.1536
Gemini 3 Flash (google/gemini-3-flash) | $0.0528
Grok 4.1 (xai/grok-4.1-fast-non-reasoning) | $0.0012
Perplexity Sonar (perplexity/sonar) | $0.0096

Pure provider passthrough — tokens, web search, reasoning, cache reads, all at exact gateway price. Estimates lean high; actuals usually come in at 40-60% of estimate and the difference is refunded immediately on completion.

The subscription tools you're evaluating

Tool | $/mo | Prompts | $/prompt
LLMrefs (best per-prompt value) | $79 | 500 | $0.16
Trakkr Growth (8 models) | $79 | 50 | $1.58
Otterly Standard (unlimited workspaces) | $189 | 100 | $1.89
Peec Pro (3 models) | $245 | 150 | $1.63
Profound Growth (3 engines) | $399 | 100 | $3.99

$/prompt is monthly cost ÷ prompt allowance — the floor you commit to whether you use it or not. None of them refund unused prompts.

The math

Top up $25 → $21.25 in credits. 10 queries × 4 models × 5 samples = 200 calls. ~$0.24 on the cheap models, ~$24.48 on the flagships with web search. One $25 top-up runs the cheap-model version several times, or one full flagship pass. LLMrefs: $79/mo minimum. Profound Growth: $399/mo. Both whether you run anything or not.
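The arithmetic above, spelled out. The flagship average here is the per-call figure implied by the ~$24.48 estimate, not a quoted provider price:

```python
TOP_UP = 25.00
FEE = 0.15
credits = TOP_UP - TOP_UP * FEE        # $21.25 in spendable credits

queries, models, samples = 10, 4, 5
calls = queries * models * samples      # 200 API calls per run

cheap_per_call = 0.0012                 # cheapest model estimate (Grok 4.1)
flagship_avg = 0.1224                   # implied average flagship per-call estimate

print(f"credits:  ${credits:.2f}")
print(f"calls:    {calls}")
print(f"cheap:    ${calls * cheap_per_call:.2f}")    # ~$0.24 per run
print(f"flagship: ${calls * flagship_avg:.2f}")      # ~$24.48 per run
```

At ~$0.24 a run, one top-up covers dozens of cheap-model passes; a flagship pass with web search consumes nearly all of it.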

Where every $25 with FoundryVis goes

Line item | $ | %
→ Your API calls (passthrough) | $21.25 | 85.0%
→ Stripe processing (2.9% + $0.30) | $1.03 | 4.1%
→ Us (gross — infra + dev) | $2.72 | 10.9%

Net to us is ~10% after infrastructure. The $10 minimum exists because Stripe’s fixed $0.30 eats most of our share below that.

Where every $399 with Profound Growth goes (estimated)

Line item | $ | %
→ Underlying API calls (100 prompts × 3 engines) | ~$10.00 | ~2.5%
→ Stripe processing | ~$11.90 | ~3.0%
→ Profound (gross) | ~$377 | ~94.5%

Estimate based on 100 prompts × 3 engines at roughly $0.03 of API cost per prompt-engine pair. Actuals depend on which models they route to, but the order of magnitude holds. The price gap doesn't fund harder engineering — underneath it's the same shape as this product: a web app, a database, scheduled API queries. The ~94% funds a sales team, marketing budget, and the growth rate their investors expect.

For every dollar you give us, 85¢ buys actual AI sampling. For every dollar you give Profound, about 3¢ does. The rest funds the company in both cases — we've just chosen not to build the sales org.

REAL TALK

What we don't know.

Other tools bury this. We lead with it. If any of these matter to your reporting, you should know now.

We don’t see real citation logs.

Nobody does. There is no Search Console for AI search. Every tool — including this one — runs queries against the public model APIs and parses the response. If a vendor implies otherwise, they’re lying.

AI responses are non-deterministic.

Same query, same model, different answer. That’s why default sample size is 5 — single samples are noise. We show sample sizes everywhere so you can judge confidence yourself.

Web search results drift daily.

Yesterday’s cited URL can be gone today. Trend charts catch this; one-time samples don’t. Schedule recurring runs to spot drift.

Sentiment is itself an LLM judgment.

We classify with a fast cheap model on the snippet only. Useful as a directional signal; not gospel. The "unclassified" bucket is honest about ambiguity.

We can’t measure what consumers actually click.

For that you’d need first-party browser data, which doesn’t exist for AI search yet. Citation presence is a leading indicator, not a click-through guarantee.

FREE TOOL

Are AI bots blocked from your site?

If you're blocking PerplexityBot, OAI-SearchBot, or Google-Extended, you won't appear in AI responses no matter what your content says. Audit any domain's robots.txt — no signup, free forever.

Stop paying $79/month to find out you're not cited.

$1.00 free on signup. Enough cheap-model samples to find out where you stand before you top up. Minimum top-up is $10; we keep 15%, the rest is yours to spend on API calls.