The 5 signals

What AI CiteRank actually measures

Five on-page signals that mirror how ChatGPT, Gemini, and Perplexity weigh sources.

TL;DR
AI CiteRank scores every page on five signals: content depth, structure, schema, clarity, and claim support. Depth, structure, and claim support are weighted highest — they're the strongest predictors of whether an LLM will cite a page. Schema and clarity are medium-weight but easy to fix.

Content depth

Weight: High
What it is
How fully your page resolves the question a user is likely to ask. Long-form, substantive answers beat thin pages.
Why models reward it
AI models are trained to prefer sources that answer the question completely. A 300-word page rarely beats a 1,500-word page that addresses every angle, edge case, and counter-argument.
How to improve
Expand thin sections. Add concrete examples. Address the obvious follow-up question. Aim for the page to be the last one a user needs to read on the topic.
Example
A pricing page that explains what counts as a unit, what happens at overage, and how annual billing works will out-cite a pricing page that just lists tiers.

Structure

Weight: High
What it is
Clear H2/H3 hierarchy, lists, tables, and short paragraphs that make it easy for a model to extract a citable chunk.
Why models reward it
LLMs retrieve at the passage and chunk level, not the document level. Pages with strong structure surface clean, self-contained chunks that the model can quote without losing context.
How to improve
One idea per paragraph. Use H2 for major sections, H3 for sub-points. Convert dense prose into bullets and tables where possible.
Example
A comparison rendered as a table is far more citable than the same comparison written as a paragraph.
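In markup terms, the difference looks like this. A minimal sketch with invented plan names and prices, showing the same comparison buried in prose and then as a self-contained table a model can quote without surrounding context:

```html
<!-- Version 1: the comparison is buried in prose. A retriever has to
     pull the whole sentence and hope the numbers survive truncation. -->
<p>Plan A costs $29 and includes 5 scans per month, while Plan B costs
$79 and includes 25 scans per month.</p>

<!-- Version 2: the same facts as a table under its own H2. The chunk
     is self-contained: heading, column labels, and values together. -->
<h2>Plan comparison</h2>
<table>
  <tr><th>Plan</th><th>Price</th><th>Scans per month</th></tr>
  <tr><td>Plan A</td><td>$29</td><td>5</td></tr>
  <tr><td>Plan B</td><td>$79</td><td>25</td></tr>
</table>
```

The table version also carries its column labels with every row, so a chunk that captures only part of it still reads correctly.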

Schema

Weight: Medium
What it is
Machine-readable metadata (JSON-LD): FAQ, Article, HowTo, Product, Organization. Tells crawlers and models what the page is.
Why models reward it
Schema gives the model high-confidence signals about the type of content and the structure of its claims. FAQPage schema in particular is heavily used by AI Overviews.
How to improve
Add JSON-LD blocks for the most relevant types. FAQPage for any Q&A section. Article for posts. Product + Offer for pricing pages. Organization on every page.
Example
A pricing page with Product + Offer schema is more likely to be cited when a user asks 'how much does X cost?' than the same page without schema.
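Here is what that pattern can look like in practice. A minimal JSON-LD sketch of Product + Offer markup for a pricing page; the plan name, description, and price are illustrative, echoing the $79/month Pro example used elsewhere on this page:

```html
<!-- Product + Offer JSON-LD for a pricing page. Place in <head> or
     anywhere in the page body; values here are illustrative. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "AI CiteRank Pro",
  "description": "Page-level AI citation scoring. Includes 25 scans per month.",
  "offers": {
    "@type": "Offer",
    "price": "79.00",
    "priceCurrency": "USD"
  }
}
</script>
```

The same pattern extends to the other types: an FAQPage block wraps each Q&A pair in a Question with an acceptedAnswer, and an Organization block identifies the site owner on every page.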

Clarity

Weight: Medium
What it is
Direct, declarative sentences a model can quote verbatim — not marketing fluff, not stacked qualifiers.
Why models reward it
Models cite sentences they're confident reproducing. A sentence like 'AI CiteRank is a page-level AI citation scoring tool' is quotable. 'We help you unlock the next era of search-driven growth' is not.
How to improve
Lead with the noun. Use active voice. Cut hedge words ('basically', 'essentially', 'kind of'). State things as facts, not feelings.
Example
'Pro is $79/month and includes 25 scans' will be cited. 'Our flexible Pro plan is designed to grow with your needs' won't.

Claim support

Weight: High
What it is
Statistics, named entities, sources, and dates that back up every non-trivial claim on the page.
Why models reward it
LLMs are tuned (especially post-RLHF) to prefer sources that anchor claims to evidence. Unsupported claims get filtered out of the answer; supported claims get cited.
How to improve
Add stats with sources. Name specific products, companies, and people instead of generic categories. Date your claims when relevant. Use <sup> footnote links to primary sources.
Example
'Profound starts at ~$499/mo' (with a citation) will be quoted. 'Some competitors are expensive' won't.
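The footnote pattern from above can be sketched as plain HTML. The figure is the one quoted in the example; the URL and retrieval date are placeholders, not a real citation:

```html
<!-- A claim anchored to a dated source via a <sup> footnote link.
     The href and date below are placeholders for illustration. -->
<p>
  Profound starts at ~$499/mo.<sup><a href="#ref-1">1</a></sup>
</p>

<ol>
  <li id="ref-1">
    <a href="https://example.com/profound-pricing">Profound pricing page</a>,
    retrieved 2025 (placeholder date).
  </li>
</ol>
```

Because the claim, the footnote marker, and the source all live in the page's own markup, the model can verify the chain at retrieval time without following external link graphs.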

See your scores in one click

Get all five signals scored, plus three ranked fixes, in under 30 seconds.

New to this? Read what AI citation rank is first.

FAQ

About the signals

How many signals does AI CiteRank measure?

Five: content depth, structure, schema, clarity, and claim support. Each contributes to a single 0–100 score, and the three highest-impact gaps become your ranked fixes.

Which signal moves the score the most?

It depends on the page. Most pages have one signal that's significantly weaker than the others, and closing that gap typically moves the score 10–20 points. AI CiteRank ranks the fixes by expected impact so you know which to do first.

Are these the same signals Google uses?

There's overlap — structure, schema, and depth matter for Google too — but Google also weighs backlinks, CTR, and other off-page signals heavily. AI citation rank is almost entirely on-page, because the model is making the call at retrieval time, not based on link graphs.