Yesterday, I had the benefit of attending the CIPR East Anglia conference in Cambridge, which was titled, “Can you be trusted? How to navigate misinformation and communications integrity in the digital age.”
This was a fantastic event, with lots of insights and data to explain how and why misinformation is spreading so rapidly. It provided a safe space where we, as comms professionals, could discuss how to take appropriate action to anticipate and mitigate those issues.
I found it interesting from the perspective of my PR and copywriting services, because it shows the value of remaining in control of your narrative. I can understand why many small businesses in Suffolk are chasing keywords, dabbling in AI-generated content and hoping for quick wins, because we’re all time-strapped and under more budget constraints than ever before.
But right now, it’s clear that misinformation spreads faster than facts, which is why we need to think carefully about where this approach could backfire. If misinformation takes hold around your business, it can damage trust, ruin your reputation, and even put customer safety at risk.
What are the hidden risks of misinformation?
During the conference, Shayoni Lynn, CEO and Founder, Lynn Global, pointed out that misinformation is now a “systemic risk” for businesses, considered more urgent than extreme weather conditions, state-based armed conflict and even cyber espionage.
You might think that this is a bit OTT, especially if you’re a small to medium-sized business with a tight group of stakeholders. But it became increasingly clear during the discussions that misinformation isn’t just a problem for politicians or big tech. It affects everyone, including small businesses.
False claims, misquotes, and misleading content can erode customer confidence, especially when shared in private forums or local community groups. In sectors like healthcare, policing and science, misinformation has real-world consequences. For SMEs, it can mean lost sales, damaged relationships, and long-term brand harm.
Why do people share misinformation?
“Facts do not change minds. Framing and storytelling do.”
Shayoni Lynn during the CIPR East Anglia conference, 16 September 2025
This was one of the discussions I was most interested in. If you follow political discourse, you might attribute misinformation or ‘fake news’ to conspiracists, bad actors, bots and political figures.
But what surprised me was Shayoni Lynn’s comment that believing misinformation comes down to our belief systems. We’re willing to accept information without critically thinking about whether it’s true or not, especially if that information comes from our family and friends. We believe information when it comes from people like us, because it feels validated.
As keynote speaker Sarah Roberts, Head of Digital Comms in the NHS, said, “People share information because they think it could be interesting if it were true. They don’t necessarily care if it is actually true.”
Sarah pointed out that one simple way to tackle this is to prioritise metadata hygiene on our websites.
We need to make sure that our SEO is accurate, our accessibility is considered, and our content is indexable. That way, it becomes easier for accurate information to be found in Google searches and within AI overviews. The more work we do to ensure that we’re actively promoting genuine, factual information (rather than chasing clickbait headlines and substance-light content), the easier it will become to get the right messages across to our audiences.
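If you want to spot-check your own metadata hygiene, the basics can even be automated. Here’s a minimal sketch (not a production auditing tool — the `MetadataAudit` class and `audit` function are illustrative names I’ve made up) that scans a page’s HTML for a title, a meta description and image alt text, using only Python’s standard library:

```python
from html.parser import HTMLParser


class MetadataAudit(HTMLParser):
    """Collects basic 'metadata hygiene' signals from an HTML page:
    a <title>, a meta description, and alt text on images."""

    def __init__(self):
        super().__init__()
        self.title = None
        self.meta_description = None
        self.images_missing_alt = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content")
        elif tag == "img" and not attrs.get("alt"):
            # Missing or empty alt text hurts accessibility and indexing.
            self.images_missing_alt += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data.strip()


def audit(html: str) -> list[str]:
    """Return a list of hygiene warnings for one page."""
    parser = MetadataAudit()
    parser.feed(html)
    warnings = []
    if not parser.title:
        warnings.append("missing <title>")
    if not parser.meta_description:
        warnings.append("missing meta description")
    elif len(parser.meta_description) > 160:
        # ~160 characters is a commonly cited snippet length, not a hard rule.
        warnings.append("meta description over ~160 characters")
    if parser.images_missing_alt:
        warnings.append(f"{parser.images_missing_alt} image(s) without alt text")
    return warnings
```

Running `audit()` over each page of a site gives a quick to-do list of fixes, which is exactly the kind of housekeeping that helps accurate information surface in search.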
Why human-led content matters now more than ever
“Without human oversight, AI risks misinformation in a way that is dystopian.”
Paul Hutchinson, Bedford Independent, during the CIPR East Anglia conference, 16 September 2025
We all know that AI tools are incredibly powerful, but they’re only as good as the data they’re fed. Without human oversight, they can amplify bias, inaccuracies, or tone-deaf messaging. Human-led content needs to be written with empathy, nuance, and local context.
That’s why businesses still hire freelance copywriters: they’re relying on experts to build trust and credibility.
After all, it’s not just about what you say, but how and why you say it.
During his presentation, Robin Punt, Director of External Affairs, Engagement & Communications, Assistant Chief Officer at Essex Police, talked about the challenges that the police face when it comes to misinformation and false narratives. He pointed out that traditional communication techniques are struggling to keep up with the speed and reach of social media, and that large language models (e.g. Grok and ChatGPT) can inherit biased or false data, which is why organisations need to feed them accurate, reputationally positive content.
Robin pointed out that video content is now more powerful than ever because of its storytelling power. But he noted that too many businesses rely on automated subtitles without taking the time to check that the captions are correct and aren’t inadvertently fuelling misinformation. Clearly, from an accessibility perspective, checking captions is crucial, but the idea that mistranscribed words, phrases and colloquialisms could be contributing to false narratives is something I hadn’t previously considered.
This doesn’t mean that AI should be avoided. In fact, AI is fantastic when used for social listening, sentiment analysis, and anomaly detection. But it must always have human oversight to ensure that your content (whether it’s your website landing pages, blog posts, social media output, or press activity) is not contributing towards the spread of false information.
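To make the anomaly-detection point concrete: the simplest version of social listening is watching daily mention volumes for unusual spikes, which often signal a rumour taking off. The sketch below is a toy illustration of that idea, not any particular tool’s method — `flag_spikes`, the seven-day window and the 3-sigma threshold are all assumptions of mine:

```python
from statistics import mean, stdev


def flag_spikes(daily_mentions: list[int], window: int = 7,
                threshold: float = 3.0) -> list[int]:
    """Return indices of days where mention volume is anomalously high
    compared with the preceding `window` days (a simple z-score test)."""
    flagged = []
    for i in range(window, len(daily_mentions)):
        baseline = daily_mentions[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            # Perfectly flat baseline: flag any increase at all.
            if daily_mentions[i] > mu:
                flagged.append(i)
        elif (daily_mentions[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged
```

A flagged day isn’t a verdict, just a prompt for a person to look at what’s being said — which is the human oversight the speakers kept coming back to.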
Rethinking your content strategy could be crucial to tackling misinformation
“Telling the truth takes detail and nuance. Telling a lie is much quicker.”
Athena Dinar MCIPR, Head of Media/Deputy Head of Communications, British Antarctic Survey, during the CIPR East Anglia conference, 16 September 2025
Until now, I’ve been a big advocate for basing your content strategy on “what do your customers want to know?”
I’ve firmly believed that genuinely helpful content is less about chasing keyword rankings, jumping on the latest Google trends or targeting whatever search terms are surfacing in ChatGPT. Instead, I’ve often suggested that clients keep a note of the questions customers commonly ask and use those as a clear starting point.
But what came out of yesterday’s conference was the need to also start questioning, “Where can things go wrong?”
All of the keynote speakers highlighted the need to focus on pre-bunking rather than rebuttals when it comes to communication strategies.
This means anticipating how information can be twisted and addressing it before it appears, along with clear, proactive explanations of who’s doing what and why. By filling those knowledge gaps, you can prevent rumours from starting in the first place.
What questions do we need to be continually asking ourselves?
These are just a few of the key takeaways that I picked up from the conference and will be bringing into my day-to-day work to help my clients overcome issues relating to misinformation.
- Are you pre-bunking potential misunderstandings before they arise?
- Are your captions, headlines, and summaries accurate, and can you find ways to prevent manipulation?
- Is there consistency between your message content and your on-the-ground delivery? This is partly a customer service issue, but a mismatch here can also result in misinformation and a poor reputation.
- Have you checked your metadata hygiene (SEO, accessibility, indexability) to ensure that your correct information is higher in the search rankings/AI overviews than false content?
- Are you using owned platforms (e.g. your website) and employee voices to build long-term trust in what you are saying? Can you use case studies and testimonials or original research reports to provide evidence of what you are saying?
- Are you framing facts in stories and contexts your audience already trusts? Are you providing detailed speaker biographies to accompany every media response to explain who your commentator is and why they are qualified to comment on this topic?
I’m excited to learn from this and work with SMEs to help them tackle misinformation.
This is a big issue that doesn’t just affect large brands or public sector organisations. Misinformation is something that we all collectively need to tackle, whether it’s refusing to click on obvious clickbait headlines or correcting someone when they are clearly spreading false information.
If you’re a small business, this isn’t about becoming a misinformation expert. But it is time to recognise that there are clear risks involved in using generative AI for your content and that chasing keyword traffic rather than prioritising your customers’ wants/needs might not be the best use of your limited time and budget.
By prioritising accurate, evidence-based storytelling, managing customer expectations, and pre-empting potential miscommunications, you’ll be much better placed to maintain a strong, trusted reputation.
Which, let’s face it, is what we all want!