AI VISIBILITY · SAMPLED HONESTLY
Sample what ChatGPT, Claude, Gemini, Grok, and Perplexity actually say when someone asks about your category. Mention rate, sentiment, citations — with the sample size visible so the numbers mean something. Per-call billing. No subscription. No prompt cap.
THE HONEST PITCH
There is no Search Console for AI search. No vendor — not Otterly, not Profound, not the $399/month tool you’re evaluating — has a privileged feed of "what AI actually said about your brand this week." It doesn’t exist.
Every tool, including this one, runs queries against the same public AI APIs and parses the response. They wrap it in a subscription, cap your prompt volume, invent a proprietary “visibility score”, and call the result tracking. It isn’t. It’s sampling. The underlying mechanic costs pennies.
FoundryVis does the same thing. We charge a flat 15% on top-ups, which funds the service, and pass through API costs at exact provider price with zero per-call markup. We show the math before you spend and refund the unused portion of every estimate. The product is the honest version of what they're already selling you.
FEATURES
Monitoring is the easy part. Knowing what to do with the data is the moat.
Build "Brand Queries" once. Run it weekly.
Bundle queries + models + sample size into named, repeatable runs. Trends and results group by configuration, so "Brand Queries" tracks against itself across weeks instead of fragile per-query line charts. Schedule support is in the schema; the worker lands in v1.x.
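Conceptually, a saved run is just a named bundle of queries, models, and a sample count. A minimal sketch in TypeScript; field names are illustrative, not the actual FoundryVis schema:

```ts
// Hypothetical shape of a saved, repeatable run. Field names are
// illustrative, not the actual schema.
interface RunConfig {
  name: string;            // e.g. "Brand Queries"
  queries: string[];       // prompts to sample
  models: string[];        // gateway model IDs from the price table below
  samplesPerPair: number;  // N samples per (query x model); default 5
  schedule?: string;       // cron-style; the worker that runs it lands in v1.x
}

const brandQueries: RunConfig = {
  name: "Brand Queries",
  queries: ["best landing page tools for B2B"],
  models: ["anthropic/claude-sonnet-4.6", "openai/gpt-5.4"],
  samplesPerPair: 5,
  schedule: "0 9 * * 1", // weekly, Mondays 09:00
};
```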
Real numbers. Sample size visible.
For every (query × model) pair, we run N samples (5 by default — single samples are noise). You see mention rate, sentiment of each mention (positive / neutral / negative classified by a fast model on the snippet), and the actual citation URLs. Trends show drift over time with sample-size annotations.
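Mechanically, that is a loop: N independent calls per pair, a mention check, and a snippet-level sentiment pass. A sketch under assumed helpers; `callModel`, `classifySentiment`, and `snippetAround` are hypothetical stand-ins, not our API:

```ts
// Sampling sketch: N independent calls per (query x model) pair.
type Sentiment = "positive" | "neutral" | "negative" | "unclassified";

declare function callModel(
  model: string,
  query: string
): Promise<{ text: string; citations: string[] }>;
declare function classifySentiment(snippet: string): Promise<Sentiment>;
declare function snippetAround(text: string, brand: string): string;

async function sampleOnce(query: string, model: string, brand: string) {
  const r = await callModel(model, query); // one non-deterministic sample
  const mentioned = r.text.toLowerCase().includes(brand.toLowerCase());
  const sentiment: Sentiment = mentioned
    ? await classifySentiment(snippetAround(r.text, brand)) // snippet only
    : "unclassified";
  return { mentioned, sentiment, citations: r.citations };
}

async function samplePair(query: string, model: string, brand: string, n = 5) {
  const samples = await Promise.all(
    Array.from({ length: n }, () => sampleOnce(query, model, brand))
  );
  const mentions = samples.filter((s) => s.mentioned).length;
  return { mentionRate: `${mentions}/${n}`, samples }; // e.g. "4/5"
}
```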
Sample what you already rank for.
Connect GSC at signup. Pull your top queries by impressions, filter by page, sort by position. One click adds them to your saved-query library. Stop guessing what to monitor — use the queries Google already says you have visibility for.
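Under the hood this is one call to the public Search Console API's searchAnalytics.query endpoint. A hedged sketch, assuming an OAuth access token with read access is already in hand; dates and row limit are illustrative:

```ts
// One POST to Search Console's searchAnalytics.query endpoint.
// Error handling omitted for brevity.
async function topQueriesByImpressions(siteUrl: string, token: string) {
  const endpoint =
    "https://www.googleapis.com/webmasters/v3/sites/" +
    encodeURIComponent(siteUrl) +
    "/searchAnalytics/query";
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      startDate: "2025-01-01",
      endDate: "2025-03-31",
      dimensions: ["query"],
      rowLimit: 100,
    }),
  });
  const data: { rows?: { keys: string[]; impressions: number }[] } =
    await res.json();
  // The API sorts by clicks by default, so re-sort by impressions here.
  return (data.rows ?? []).sort((a, b) => b.impressions - a.impressions);
}
```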
Monitoring → strategy.
When AI cites competitors and not you, click "Why?". We scrape the cited pages + your matching page, and a Claude Sonnet pass produces a specific report: what those competitors have, what you don’t, ranked actions. Two flavors: page-vs-page, or brand-only diagnosis. Cost shown before you spend.
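The pipeline is simple to state: fetch readable text from the cited pages and your own, then hand everything to a strong model with a gap-analysis instruction. A sketch with hypothetical helpers and illustrative prompt wording:

```ts
// "Why?" report sketch. fetchReadableText and askModel are hypothetical
// stand-ins for scraping and the Claude Sonnet pass.
declare function fetchReadableText(url: string): Promise<string>;
declare function askModel(model: string, prompt: string): Promise<string>;

async function gapReport(citedUrls: string[], yourUrl: string) {
  const competitors = await Promise.all(citedUrls.map(fetchReadableText));
  const yours = await fetchReadableText(yourUrl);
  const prompt = [
    "These pages were cited by an AI answer; this brand's page was not.",
    ...competitors.map((text, i) => `--- Cited page ${i + 1} ---\n${text}`),
    `--- Uncited page ---\n${yours}`,
    "List what the cited pages cover that the uncited page lacks,",
    "then rank concrete actions by likely impact on getting cited.",
  ].join("\n\n");
  return askModel("anthropic/claude-sonnet-4.6", prompt);
}
```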
What real users actually see.
GPT-5.4 with OpenAI’s browse tool. Claude with Anthropic’s web search. Gemini with Google grounding. Perplexity with its own search. We never bolt one provider’s search onto another model — that creates response shapes no real consumer encounters. Grok runs vanilla because xAI doesn’t expose a search tool.
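Conceptually, each model ID maps to its own provider's search tool, or to nothing. The identifiers below are indicative only; exact tool names vary across provider API versions, so treat these as sketches rather than canonical request bodies:

```ts
// Indicative mapping of model IDs to their provider's native search tool.
const nativeSearch: Record<string, object | null> = {
  "openai/gpt-5.4": { type: "web_search" },              // OpenAI browse tool
  "anthropic/claude-sonnet-4.6": {
    type: "web_search_20250305",                         // Anthropic web search tool
    name: "web_search",
  },
  "google/gemini-3.1-pro-preview": { googleSearch: {} }, // Gemini grounding
  "perplexity/sonar": null,                              // search is built into the model
  "xai/grok-4.1-fast-non-reasoning": null,               // vanilla: no search tool exposed
};
```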
Are you blocking the AI crawlers?
Many sites silently block GPTBot, ClaudeBot, or PerplexityBot in robots.txt — sometimes intentionally, sometimes by accident. We check your domain against every known AI crawler and tell you which are allowed, blocked, or unspecified. Free tool, lives at the bottom of this page.
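The check itself is small: fetch /robots.txt and inspect each crawler's user-agent group. A deliberately naive sketch that only handles blanket `Disallow: /` rules; a real checker also handles path rules and wildcards:

```ts
const AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Google-Extended"];

async function auditRobots(domain: string) {
  const res = await fetch(`https://${domain}/robots.txt`);
  if (!res.ok) return Object.fromEntries(AI_CRAWLERS.map((b) => [b, "unspecified"]));
  const lines = (await res.text()).split("\n").map((l) => l.trim().toLowerCase());

  const status: Record<string, string> = {};
  for (const bot of AI_CRAWLERS) {
    const idx = lines.indexOf(`user-agent: ${bot.toLowerCase()}`);
    if (idx === -1) {
      status[bot] = "unspecified";
      continue;
    }
    // Collect this bot's directives until the next user-agent line.
    const group: string[] = [];
    for (let i = idx + 1; i < lines.length && !lines[i].startsWith("user-agent:"); i++) {
      group.push(lines[i]);
    }
    status[bot] = group.includes("disallow: /") ? "blocked" : "allowed";
  }
  return status;
}
```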
A run summary, sketched
| Query | Model | Mentioned | Sentiment | Snippet |
|---|---|---|---|---|
| best landing page tools for B2B | claude-sonnet-4.6 | 4/5 | ●3 ●1 | |
| best landing page tools for B2B | gpt-5.4 | 2/5 | ●2 | |
| best landing page tools for B2B | perplexity/sonar | 0/5 | — | |
Click any row to expand. Click “Why wasn’t I cited?” to get a specific gap diagnosis vs the actual competitors the AI surfaced.
PRICING
Drop $25, we keep $3.75, you get $21.25 in credits. Spend those credits on API calls at exact provider cost — no per-call upcharge, ever. Minimum top-up $10. The 15% is our entire business model; it doesn’t depend on whether you run anything.
| Model | Model ID | Per call |
|---|---|---|
| GPT-5.4 | openai/gpt-5.4 | $0.1200 |
| Claude Sonnet 4.6 | anthropic/claude-sonnet-4.6 | $0.1248 |
| Claude Opus 4.7 | anthropic/claude-opus-4.7 | $0.1920 |
| Gemini 3.1 Pro | google/gemini-3.1-pro-preview | $0.1536 |
| Gemini 3 Flash | google/gemini-3-flash | $0.0528 |
| Grok 4.1 | xai/grok-4.1-fast-non-reasoning | $0.0012 |
| Perplexity Sonar | perplexity/sonar | $0.0096 |
Pure provider passthrough — tokens, web search, reasoning, cache reads, all at exact gateway price. Estimates lean high; actuals usually come in at 40-60% of estimate and the difference is refunded immediately on completion.
| Tool | Notes | /mo | Prompts | $/prompt |
|---|---|---|---|---|
| LLMrefs | best per-prompt value | $79 | 500 | $0.16 |
| Trakkr Growth | 8 models | $79 | 50 | $1.58 |
| Otterly Standard | unlimited workspaces | $189 | 100 | $1.89 |
| Peec Pro | 3 models | $245 | 150 | $1.63 |
| Profound Growth | 3 engines | $399 | 100 | $3.99 |
$/prompt is monthly cost ÷ prompt allowance — the floor you commit to whether you use it or not. None of them refund unused prompts.
The math
Top up $25 → $21.25 in credits. 10 queries × 4 models × 5 samples = 200 calls. That's ~$0.24 with every call routed to the cheapest model, or ~$24.48 on a flagship mix with web search. One $25 top-up runs the cheap-model version many times over, or one full flagship pass. LLMrefs: $79/mo minimum. Profound Growth: $399/mo. Both bill whether you run anything or not.
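The same arithmetic as a sketch, using the per-call rates from the price table above. The flagship mix shown is one four-model combination that reproduces the ~$24.48 figure; rates are published estimates, and actuals are metered lower:

```ts
// Per-call estimate rates from the price table above (dollars).
const perCall: Record<string, number> = {
  "openai/gpt-5.4": 0.12,
  "anthropic/claude-sonnet-4.6": 0.1248,
  "anthropic/claude-opus-4.7": 0.192,
  "google/gemini-3.1-pro-preview": 0.1536,
  "google/gemini-3-flash": 0.0528,
  "xai/grok-4.1-fast-non-reasoning": 0.0012,
  "perplexity/sonar": 0.0096,
};

function estimateRun(queries: number, models: string[], samples = 5) {
  const calls = queries * models.length * samples;
  const cost = queries * samples * models.reduce((s, m) => s + perCall[m], 0);
  return { calls, cost: Number(cost.toFixed(2)) };
}

// One four-model flagship mix that lands on the quoted figure:
estimateRun(10, [
  "openai/gpt-5.4",
  "anthropic/claude-sonnet-4.6",
  "anthropic/claude-opus-4.7",
  "google/gemini-3-flash",
]); // { calls: 200, cost: 24.48 }

// The same 200 calls routed entirely to the cheapest model:
estimateRun(10, Array(4).fill("xai/grok-4.1-fast-non-reasoning"));
// { calls: 200, cost: 0.24 }
```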
Where every $25 with FoundryVis goes
| Where it goes | $ | % |
|---|---|---|
| → Your API calls (passthrough) | $21.25 | 85.0% |
| → Stripe processing (2.9% + $0.30) | $1.03 | 4.1% |
| → Us (gross: infra + dev) | $2.72 | 10.9% |
Net to us is ~10% after infrastructure. The $10 minimum exists because Stripe’s fixed $0.30 eats most of our share below that.
Where every $399 with Profound Growth goes (estimated)
| Where it goes | $ | % |
|---|---|---|
| → Underlying API calls (100 prompts × 3 engines) | ~$10.00 | ~2.5% |
| → Stripe processing | ~$11.90 | ~3.0% |
| → Profound (gross) | ~$377 | ~94.5% |
Estimate based on 100 prompts × 3 engines × ~5 samples: roughly 1,500 calls at well under a cent each in average API cost. Actuals depend on which models they route to, but the order of magnitude holds. The price gap doesn't fund harder engineering; underneath it's the same shape as this product: a web app, a database, scheduled API queries. The ~94% funds a sales team, a marketing budget, and the growth rate their investors expect.
For every dollar you give us, 85¢ buys actual AI sampling. For every dollar you give Profound, about 3¢ does. The rest funds the company in both cases — we’ve just chosen not to build the sales org.
REAL TALK
Other tools bury this. We lead with it. If any of these matter to your reporting, you should know now.
We don’t see real citation logs.
Nobody does. There is no Search Console for AI search. Every tool — including this one — runs queries against the public model APIs and parses the response. If a vendor implies otherwise, they’re lying.
AI responses are non-deterministic.
Same query, same model, different answer. That’s why default sample size is 5 — single samples are noise. We show sample sizes everywhere so you can judge confidence yourself.
Web search results drift daily.
Yesterday’s cited URL can be gone today. Trend charts catch this; one-time samples don’t. Schedule recurring runs to spot drift.
Sentiment is itself an LLM judgment.
We classify with a fast cheap model on the snippet only. Useful as a directional signal; not gospel. The "unclassified" bucket is honest about ambiguity.
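For the shape of that judgment: one short prompt to a cheap model, snippet only, with anything off-script falling into "unclassified". Prompt wording and model choice here are illustrative, not the production classifier:

```ts
// Illustrative sentiment pass. askModel is a hypothetical gateway helper;
// the model ID and prompt are examples, not the actual classifier.
declare function askModel(model: string, prompt: string): Promise<string>;

async function classifySnippet(snippet: string, brand: string): Promise<string> {
  const raw = await askModel(
    "google/gemini-3-flash",
    `How does this passage characterize ${brand}? ` +
      `Answer with exactly one word: positive, neutral, or negative.\n\n${snippet}`
  );
  const label = raw.trim().toLowerCase();
  // Anything the model can't place goes to the honest bucket.
  return ["positive", "neutral", "negative"].includes(label) ? label : "unclassified";
}
```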
We can’t measure what consumers actually click.
For that you’d need first-party browser data, which doesn’t exist for AI search yet. Citation presence is a leading indicator, not a click-through guarantee.
FREE TOOL
If you’re blocking PerplexityBot, OAI-SearchBot, or Google-Extended, you won’t appear in AI responses no matter what your content says. Audit any domain’s robots.txt — no signup, free forever.
$1.00 free on signup. Enough cheap-model samples to find out where you stand before you top up. Minimum top-up is $10; we keep 15%, the rest is yours to spend on API calls.