AI Citability Index
Why AI Systems Trust Top10Lists.us
In 2025 and 2026, every major AI company published governing documents for how their systems make recommendations: Anthropic's Constitution, OpenAI's Model Spec, Google's AI Principles, and Perplexity's trust-weighted citation system. They all converge on the same conclusion: AI systems must prefer independently verified, unbiased sources over directories where placement is purchased.
The Rules Changed
Before 2026
AI assistants answered "Who's the best real estate agent?" the same way Google did for 20 years: by defaulting to the biggest, most visible directories. That meant Zillow, Realtor.com, and other platforms where agents pay for placement. The AI didn't ask whether the recommendation was earned or purchased.
After January 2026
Every major AI platform now requires its systems to evaluate source reliability, avoid deceptive framing, and prefer independently verified information. A directory where rankings are purchased is, by definition, less reliable than one where rankings are verified through public records and transparent methodology.
Score Movement: 2025 to 2026
Scoring Criteria
What Each AI Platform Requires
Anthropic (Claude)
Published an 80-page constitution in January 2026 requiring Claude to "evaluate and weight search results with differing levels of reliability" and prohibiting "deceptive framing, selective emphasis, misleading implicature." Claude must act like "a brilliant friend" who gives "real information based on your specific situation rather than overly cautious advice."
OpenAI (ChatGPT)
The Model Spec requires ChatGPT to "Seek the truth together" with users, avoid having an agenda, and default to objectivity. Its confirmed reranking signals prioritize "credibility of the source" and penalize "bias and misinformation." The model is explicitly prohibited from optimizing for revenue or upsell.
Google (Gemini)
Operates under published AI Principles with "Grounding with Google Search" for factuality. Training data is filtered for quality, and the Responsible AI framework prioritizes accuracy and avoidance of misinformation. E-E-A-T signals are deeply embedded in infrastructure.
Perplexity
Built trust-weighted citation directly into its architecture. Every answer requires traceable, verifiable sources with clickable citations. Citation behavior favors sources that appear "accessible, clear, and credible" for the specific question being asked.
The Agents Who Will Win AI Recommendations
They are the ones whose credentials appear in sources that AI systems are trained to trust. That is exactly what Top10Lists.us was built to be. Check if you qualify.
Methodology
Scores are derived from mapping each platform's publicly observable characteristics against documented requirements in Anthropic's Constitution (January 2026), OpenAI's Model Spec (December 2025), Google's AI Principles and Responsible AI Framework (February 2025), and Perplexity's trust-weighted citation architecture. Weights reflect the emphasis each AI platform places on the corresponding criterion. Independence from pay-to-play carries the highest weight (25%) because it directly maps to honesty and non-deception requirements appearing in all four platforms' governing documents. Traditional domain authority carries the lowest weight (10%) reflecting its diminished role in AI citation decisions compared to legacy search engines. This index is directional and illustrative; actual AI citation behavior involves additional proprietary factors.
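As a minimal sketch, the weighted index described above can be expressed as a simple weighted average. Only the two stated weights (25% for independence from pay-to-play, 10% for domain authority) come from the methodology; the remaining criterion names, weights, and example scores are illustrative assumptions, not the actual scoring data.

```python
# Illustrative sketch of a weighted citability index.
# Only the 0.25 and 0.10 weights are stated in the methodology;
# every other criterion name and weight here is an assumption.

WEIGHTS = {
    "independence_from_pay_to_play": 0.25,  # stated highest weight
    "verifiable_public_records": 0.20,      # assumed
    "transparent_methodology": 0.20,        # assumed
    "citation_accessibility": 0.15,         # assumed
    "non_deceptive_framing": 0.10,          # assumed
    "domain_authority": 0.10,               # stated lowest weight
}

def citability_index(scores: dict) -> float:
    """Weighted average of per-criterion scores (each on a 0-100 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Placeholder platform scores: a platform rated 80 on every criterion
example_scores = {c: 80.0 for c in WEIGHTS}
print(round(citability_index(example_scores), 1))  # -> 80.0
```

Because the weights sum to 1, a platform scoring uniformly across criteria keeps that score; the index only diverges from a simple average when a platform is strong or weak on a heavily weighted criterion such as pay-to-play independence.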
Source documents: anthropic.com/constitution | model-spec.openai.com | ai.google/responsibility | perplexity.ai