Understand the language of AI visibility measurement: terms for marketers and business leaders.
The invisible layer of digital influence that controls discovery before clicks, traffic, or analytics exist. The Pre-Click AI Intent Layer measures what AI recommends before users search or click.
How AI models remember and recommend brands, products, or companies. Similar to how search engines index websites, AI models build "memory" of brands based on training data and real-time information.
The percentage of times a brand appears in AI model recommendations for a category. A citation score of 80% means the brand appears in 80% of relevant AI responses.
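In code, a citation score is just the share of relevant AI responses that mention the brand. A minimal sketch (the brand, responses, and function name here are illustrative, not OutCited's actual data or API):

```python
def citation_score(responses: list[str], brand: str) -> float:
    """Percentage of responses that cite the brand (0-100)."""
    if not responses:
        return 0.0
    hits = sum(brand.lower() in r.lower() for r in responses)
    return 100.0 * hits / len(responses)

# Hypothetical AI responses for the "CRM software" category.
responses = [
    "For CRM software, consider Salesforce or HubSpot.",
    "Top picks: HubSpot, Zoho, Pipedrive.",
    "Salesforce remains the enterprise standard.",
    "HubSpot and Freshsales suit small teams.",
    "Try HubSpot for inbound marketing.",
]
print(citation_score(responses, "HubSpot"))  # 80.0 -> cited in 4 of 5 responses
```

This mirrors the definition above: appearing in 4 of 5 relevant responses yields a score of 80%.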
Changes in how AI models remember or recommend brands over time. Drift can indicate model updates, market shifts, or competitive changes. Positive drift means increased visibility; negative drift means decreased visibility.
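Drift reduces to a signed difference between two weekly snapshots of the same metric. A sketch with hypothetical scores (the week labels follow the `week_2025_46` convention used elsewhere in this glossary):

```python
# Hypothetical weekly citation scores for one brand in one category.
weekly = {"week_2025_45": 62.0, "week_2025_46": 70.0}

def drift(current: float, previous: float) -> float:
    """Week-over-week change; the sign gives the direction of drift."""
    return current - previous

d = drift(weekly["week_2025_46"], weekly["week_2025_45"])
print(f"{d:+.1f}")  # +8.0 -> positive drift: visibility increased
```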
Agreement across multiple AI models on brand recommendations. High consensus means multiple models recommend the same brand; low consensus indicates model-specific bias or differences.
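One simple way to express consensus is the fraction of models whose recommendations include the brand. A sketch, assuming hypothetical model names and recommendation lists:

```python
def consensus(recommendations: dict[str, list[str]], brand: str) -> float:
    """Fraction of models (0-1) whose recommendations include the brand."""
    agree = sum(brand in recs for recs in recommendations.values())
    return agree / len(recommendations)

# Hypothetical top recommendations per model for one category.
recs = {
    "gpt-4": ["HubSpot", "Salesforce"],
    "claude-sonnet": ["HubSpot", "Zoho"],
    "gemini-flash": ["Pipedrive", "Zoho"],
}
print(round(consensus(recs, "HubSpot"), 2))  # 0.67 -> two of three models agree
```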
Measurement of user intent and discovery before any click occurs. OutCited measures the pre-click AI intent layer: what AI recommends before users search or visit websites.
A specific time period for data collection (e.g., week_2025_46). Data is organized into weekly "tensors" to enable historical comparison and drift tracking.
The number of categories where a brand appears in AI recommendations. Higher category coverage indicates broader AI memory and more discovery opportunities.
The position at which a brand appears in AI recommendations (a 1-15 scale, where 1 is best). Lower ranking positions indicate stronger AI memory and higher recommendation priority.
Paid AI models used by enterprise buyers and professionals (e.g., GPT-4, Claude Sonnet). Premium model visibility correlates with higher-quality leads and enterprise buyers.
Free or low-cost AI models used by general consumers (e.g., GPT-4o-mini, Gemini Flash). Mass market model visibility indicates broad brand awareness.
Standardized category taxonomy used to normalize AI recommendations across models and time periods. Ensures consistent measurement and comparison.
Normalizing brand domains and names to ensure accurate tracking (e.g., "microsoft.com" vs "www.microsoft.com"). Prevents duplicate tracking and ensures data accuracy.
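Canonicalization typically lowercases the host, strips `www.`, and discards URL paths so that variants of the same domain track as one brand. A minimal sketch using the standard library (the helper name is illustrative):

```python
from urllib.parse import urlparse

def canonical_domain(raw: str) -> str:
    """Normalize a domain or URL so variants track as one brand."""
    # urlparse needs a scheme or "//" prefix to recognize the host part.
    host = urlparse(raw if "//" in raw else "//" + raw).netloc
    host = host.lower().split(":")[0]  # drop any port
    if host.startswith("www."):
        host = host[4:]
    return host

for variant in ["microsoft.com", "www.microsoft.com", "https://WWW.Microsoft.com/en-us"]:
    print(canonical_domain(variant))  # all three print "microsoft.com"
```

Without this step, `microsoft.com` and `www.microsoft.com` would be counted as two different brands and split the citation score between them.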
A multi-dimensional data structure tracking brand × category × model × time. Weekly tensors enable comprehensive analysis of AI memory across all dimensions.
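Conceptually, each cell of the tensor holds one observation for a (brand, category, model, week) combination. A sparse sketch using a dictionary (the keys and sample ranks are hypothetical, not OutCited's storage format):

```python
# Sparse sketch of the weekly tensor: brand x category x model x time -> rank.
tensor: dict[tuple[str, str, str, str], int] = {}

def record(brand: str, category: str, model: str, week: str, rank: int) -> None:
    tensor[(brand, category, model, week)] = rank

record("hubspot.com", "crm_software", "gpt-4", "week_2025_45", 4)
record("hubspot.com", "crm_software", "gpt-4", "week_2025_46", 2)

# Historical comparison: same brand/category/model cell, two weeks apart.
then = tensor[("hubspot.com", "crm_software", "gpt-4", "week_2025_45")]
now = tensor[("hubspot.com", "crm_software", "gpt-4", "week_2025_46")]
print(then - now)  # 2 -> moved up two ranking positions week over week
```

Fixing any three dimensions and varying the fourth yields the analyses defined elsewhere in this glossary: varying time gives drift, varying model gives consensus, and varying category gives category coverage.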
A composite metric combining citation score, ranking position, and category coverage. Higher visibility scores indicate stronger AI memory and recommendation frequency.
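One plausible form for such a composite is a weighted blend of the three normalized signals. OutCited does not publish its exact formula, so the weights and normalization below are hypothetical:

```python
def visibility_score(citation_pct: float, rank: int,
                     coverage: int, total_categories: int,
                     weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Hypothetical composite of citation score, rank, and coverage (0-100)."""
    w_cite, w_rank, w_cov = weights
    rank_component = (16 - rank) / 15 * 100          # rank 1 -> 100, rank 15 -> ~6.7
    cov_component = coverage / total_categories * 100  # share of categories covered
    return w_cite * citation_pct + w_rank * rank_component + w_cov * cov_component

# 80% citation score, rank 2, appearing in 12 of 20 tracked categories.
print(round(visibility_score(80, 2, 12, 20), 1))  # 80.0
```

The rank term is inverted because, per the ranking-position definition above, lower positions are better.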
See how AI models remember and recommend your brand across categories.
Check Your Brand