Beyond Google: When AI tells your story - A crisis recovery roadmap for C-suite & boards
"'Googling yourself' is yesterday's reputation management. Today, executives must ask: 'What does ChatGPT, Claude, Gemini, or Copilot say about me and my company and executives?'
While there are now over 100 large language models and counting, these top four represent over 72% of the market. Unlike Google, which simply indexes content, LLMs synthesize, summarize, and create their own narratives. When AI models amplify information about brands, whether it's incorrect, outdated, or legitimately negative, the stakes for C-suite reputation have fundamentally changed. Your company's story is being rewritten in real time by algorithms trained on everything from press releases to Reddit threads. During a crisis, these models may accurately reflect public sentiment and factual negative events, making authentic response and genuine accountability more critical than ever. And most boards have no idea what narrative is taking hold."
The new reputation landscape
Real examples: When AI amplifies truth and fiction
In February 2024, a British Columbia tribunal ordered Air Canada to pay damages after its chatbot told a grieving customer he could retroactively claim bereavement fares - information that was completely false. The airline had argued that the chatbot was "a separate legal entity responsible for its own actions"; the tribunal firmly rejected this defense, calling the attempt to deflect responsibility "a remarkable submission."
But Air Canada isn't alone. New York City's AI business chatbot, launched in 2023, was caught telling businesses to break the law, advising companies to violate labour regulations and discrimination laws. Even more troubling, Character.AI now faces lawsuits from families claiming its chatbots delivered explicit sexual content to minors and promoted self-harm or violence, with one Texas family asserting their child experienced sexual exploitation via a chatbot.
However, AI systems also accurately synthesize legitimate crises. When companies face real scandals like data breaches, executive misconduct, product failures, or regulatory violations, LLMs will persistently surface these truths, often with more context and connection-drawing than traditional search ever could. The FTX collapse, Theranos fraud, and Wells Fargo account scandal aren't "hallucinations" - they're factual events that AI models rightfully emphasize when discussing these organizations.
The implications: Unlike traditional search, where users can see and evaluate the underlying sources, LLM-driven search delivers prescriptive, conversational answers that users may take at face value without digging deeper. Studies suggest LLMs rely on editorial content for over 60% of their understanding of brand reputation. Executives can no longer control their narrative through press releases alone - nor can they spin away legitimate failures.
The problem gets personal:
Brand, communications, and product marketing teams now need to understand how LLMs represent them during launches, media cycles, or moments of reputational sensitivity, with legal and trust teams tracking both misinformation and amplification of genuine issues. This isn't just about your company; it's also about you personally. What happens when an LLM hallucinates details about your background, falsely attributes quotes to you, or synthesizes a damaging narrative from fragments of outdated information? Equally challenging: what happens when AI accurately connects dots about real missteps, creating a comprehensive narrative of actual failures you'd hoped were forgotten?
Crisis recovery framework: 12 steps adapted for AI
Traditional crisis recovery follows predictable stages. But AI introduces unique challenges that demand a new playbook. This framework addresses both false narratives and legitimate negative events that AI systems accurately report.
Phase 1: Prevention & monitoring (Steps 1-3)
- Step 1: Audit Your AI Presence Don't wait for a crisis. Tools like Profound, Trakkr, and other specialized platforms now monitor how generative AI models, customer support bots, and internal search systems talk about products, services, and policies, tracking when LLM outputs deviate from official documentation or when they accurately highlight real problems. Query major LLMs monthly about your company, products, and key executives. Ask: "What does ChatGPT say about our CEO?" "How does Claude describe our last product launch?" "What controversies or failures does it mention?" Document everything - both inaccuracies and uncomfortable truths. A minimal query sketch follows this list.
- Step 2: Establish Source Authority (But Accept Reality) Generative engine optimization (GEO) focuses on authoritative content to improve discoverability within AI platforms, with multimedia integration and earned media stories taking on new currency. Create comprehensive, well-structured content on your owned channels. Ensure Wikipedia entries are accurate and properly sourced. However, recognize that no amount of optimization can erase factual negative events. Attempting to do so will damage credibility further.
- Step 3: Document Your Truth (And Your Accountability) Maintain "source of truth" documents with key facts, dates, timelines, achievements, and preemptive corrections to common misconceptions. Also document your responses to real crises, improvements made, and lessons learned. Update it quarterly. This becomes your crisis ammunition when AI gets it wrong - and your accountability framework when AI gets it right.
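For teams that want to run the Step 1 audit themselves rather than through a platform, the sketch below shows the basic loop: ask a handful of standing questions and capture the answers on a fixed schedule. It is a minimal illustration only - it assumes the OpenAI Python SDK with an API key in the environment, and the company name, executive names, model name, and questions are placeholders, not recommendations.

```python
# audit_ai_presence.py - monthly Step 1 audit sketch (illustrative only).
# Assumes `pip install openai` and an OPENAI_API_KEY set in the environment.
from datetime import date

from openai import OpenAI

COMPANY = "Example Corp"            # placeholder - substitute your organization
EXECUTIVES = ["Jane Doe (CEO)"]     # placeholder names

QUERIES = [
    f"What do you know about {COMPANY}?",
    f"What controversies or failures is {COMPANY} associated with?",
    f"How would you describe {COMPANY}'s most recent product launch?",
] + [f"Who is {name} and what are they known for?" for name in EXECUTIVES]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_audit() -> None:
    """Ask each standing question once and print a dated answer for the audit log."""
    for question in QUERIES:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model your account can access
            messages=[{"role": "user", "content": question}],
        )
        answer = response.choices[0].message.content
        print(f"--- {date.today()} | {question}\n{answer}\n")


if __name__ == "__main__":
    run_audit()
```

The same loop can be repeated against other providers' APIs so the quarterly "source of truth" document from Step 3 has a per-model baseline to compare against.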
Phase 2: Detection & response (Steps 4-6)
- Step 4: Rapid Identification Research suggests 15-20% of ChatGPT outputs contain hallucinations, and if employees fail to recognize these, consequences include misleading data or poor decision-making. But the other 80-85% may include accurate negative information you need to address. Set up Google Alerts, but also query AI systems directly. Employee reports of "weird AI responses" about your company should trigger immediate investigation, not dismissal. Reports of accurate but damaging AI narratives require different response strategies.
- Step 5: Assess Damage Scope (And Validity) Which LLMs are spreading misinformation? Which are accurately reporting real issues? Is it isolated to one model or systemic? ChatGPT and its competitors are more prone to citing bad sources than Google's AI search, with language models struggling to differentiate true authority from noise. First determine: Is this false information requiring correction, or true information requiring authentic response? Document everything with screenshots - a simple archiving sketch follows this list. AI outputs change without warning or explanation.
- Step 6: Assemble Your AI Crisis Team Include: legal counsel (the Air Canada and NYC cases prove liability is real), communications lead, someone technically fluent in LLMs, and a designated "AI reputation officer." Add a crisis management expert who can differentiate between defending against falsehoods and responding to legitimate criticism. This team must be empowered to act within hours, not days. Slow response allows false narratives to calcify and real issues to metastasize.
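Because AI outputs change without warning (Step 5), it helps to preserve evidence in a form more durable than screenshots alone. The sketch below is one hedged approach, not an established procedure: it writes each captured answer to a timestamped JSON record with a content hash, so the crisis team can later show exactly what a model said and when. The folder name and record fields are illustrative assumptions.

```python
# archive_ai_output.py - evidence-preservation sketch (illustrative only).
# Writes each captured AI answer as a timestamped JSON record with a content
# hash, so the team can document exactly what a model said and when.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE_DIR = Path("ai_reputation_archive")  # assumed local folder


def archive_answer(model: str, query: str, answer: str) -> Path:
    """Store one record: UTC capture time, query, answer, and SHA-256 hash."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    captured_at = datetime.now(timezone.utc).isoformat()
    record = {
        "model": model,
        "query": query,
        "answer": answer,
        "captured_at": captured_at,
        "sha256": hashlib.sha256(answer.encode("utf-8")).hexdigest(),
    }
    path = ARCHIVE_DIR / f"{captured_at.replace(':', '-')}_{model}.json"
    path.write_text(json.dumps(record, indent=2))
    return path


# Example usage with an answer captured via the Step 1 audit loop:
# archive_answer("gpt-4o-mini", "What do you know about Example Corp?", answer)
```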
Phase 3: Correction & containment (Steps 7-9)
- Step 7: Direct Engagement (Context Matters) Contact AI companies directly only for factual errors. OpenAI, Anthropic, and Google all have mechanisms for reporting inaccuracies, though response times vary from 48 hours to never. Submit formal correction requests with supporting documentation: official company statements, legal documents, authoritative third-party sources. For accurate negative information, focus on adding context about remediation efforts rather than demanding removal. Be persistent and escalate when needed.
- Step 8: Flood the Zone with Correct Information (Or Context) For false information: Publish authoritative corrections across multiple high-authority platforms simultaneously. For true negative information: Publish content about reforms, improvements, and accountability measures taken. Brand web mentions are the number one factor in AI search placement. Your narrative must dominate the information ecosystem AI systems crawl. One press release isn't enough - you need volume and velocity. But ensure your response matches reality; attempting to drown out legitimate criticism with spin will backfire when AI systems synthesize the disconnect.
- Step 9: Engage Trusted Third Parties Get industry publications, respected analysts, or professional organizations to publish accurate information about your company or executives. For real crises, seek third-party validation of your reform efforts, not denial of events. Third-party validation carries exponentially more weight with LLMs than self-published content. Independent verification of positive changes post-crisis is more powerful than attempts to minimize the crisis itself.
Phase 4: Recovery & prevention (Steps 10-12)
- Step 10: Continuous Monitoring Weekly accuracy reports track LLM answers that reference brands, with drift detection alerting when generative model outputs start deviating from official documentation (a drift-detection sketch follows this list). Also monitor whether AI systems are appropriately contextualizing past failures with current improvements. Make this part of your standard communications dashboard. AI reputation isn't a one-time fix; it's ongoing surveillance of both false narratives and the evolution of true ones.
- Step 11: Update Crisis Protocols Document what worked, what didn't, and how long each correction took. Differentiate between protocols for addressing misinformation versus managing accurate negative coverage. Each AI crisis sets precedent for the next. Update your crisis communication playbook to include AI-specific response protocols for both scenarios. Train spokespeople on how to address AI-generated misinformation in media interviews without amplifying it, and how to acknowledge AI-surfaced truths while pivoting to positive changes.
- Step 12: Board Education Ensure your board understands AI reputation risk isn't theoretical. It's a fiduciary issue requiring governance-level attention. Present quarterly reports on AI reputation monitoring findings. Ask board members to personally query major LLMs about the company and read the results aloud in board meetings. Their shock at inaccuracies, hallucinations, or outdated narratives will galvanize resource allocation. More importantly, hearing AI's synthesis of real failures and public sentiment will drive genuine accountability and reform - the only sustainable reputation management strategy in the AI age.
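As a concrete illustration of the drift detection described in Step 10, the sketch below compares this month's archived answer for a query against last month's and flags large shifts for human review. It assumes you keep the archived answers from Phase 2; the similarity measure and threshold are simplistic, illustrative choices, not a vetted methodology.

```python
# drift_check.py - naive drift-detection sketch (illustrative only).
# Flags a query for human review when the latest archived answer has diverged
# sharply from the previous one.
from difflib import SequenceMatcher

DRIFT_THRESHOLD = 0.6  # assumed cutoff; tune against your own monthly baseline


def drift_detected(previous_answer: str, current_answer: str) -> bool:
    """Return True when text similarity falls below the review threshold."""
    similarity = SequenceMatcher(None, previous_answer, current_answer).ratio()
    return similarity < DRIFT_THRESHOLD


# Example with two archived monthly answers (invented for illustration):
last_month = "Example Corp is a logistics firm praised for on-time delivery."
this_month = "Example Corp faces criticism over a data breach and layoffs."
if drift_detected(last_month, this_month):
    print("Drift detected - route to the AI crisis team for review (Step 6).")
```

A low similarity score doesn't say whether the new narrative is false or accurate; it only signals that the story has moved, which is the trigger for the Step 6 team to take a look.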
The Air Canada, NYC, and Character.AI cases signal a watershed moment: companies are legally responsible for what their AI says, and increasingly, for what other people's AI says about them. But the greater challenge may be when AI accurately tells inconvenient truths, connecting patterns across years of public information to create narratives you cannot simply "correct" away.
The executives who thrive won't be those who fear this shift, but those who proactively manage it, with both vigorous defense against falsehoods and authentic accountability for real failures. Start monitoring your AI reputation today. Because in 2025, the question isn't whether AI will tell your story; it's whether that story will be grounded in your genuine actions or shaped by your inaction.
The new crisis question for every board: "What do ChatGPT and the other major LLMs say about us?" Follow-up question: "Is it true?" If you don't know both answers, you're already behind.