Disinformation Is a Management Risk: How Leaders Should Read the Web Today
Disinformation is no longer a fringe issue confined to politics or “bad actors on social media”. It has become a structural feature of the modern information environment: cheap to produce, fast to distribute, and increasingly difficult to distinguish from legitimate reporting at a glance.
The World Economic Forum has repeatedly ranked misinformation and disinformation among the most significant short-term global risks, precisely because they erode trust and accelerate polarisation. (World Economic Forum)
The Reuters Institute’s Digital News Report reflects the same direction of travel: audiences are worried about what is true online, and confidence is under pressure. (reutersinstitute.politics.ox.ac.uk)
The cost of verification
For boards and executive teams, the practical conclusion is not that “the web is broken”; it is that the cost of verification has shifted. The organisation can no longer assume that stakeholders will encounter information in a clean, linear way through reputable outlets. They will encounter fragments: screenshots, short clips, “explainer” threads, anonymous accounts, and algorithmic amplification. Some of it will be inaccurate. Some will be deliberately misleading, and some will be true but framed to provoke a false conclusion.
In that environment, disinformation becomes a business risk with three direct consequences: reputational volatility, operational distraction, and decision degradation.
Why the disinformation problem is accelerating
Two forces are changing the scale and sophistication of false narratives. The first is automation. Coordinated networks—often a blend of fake accounts, compromised accounts and high-volume amplification—can quickly manufacture the appearance of consensus.
Platforms themselves describe and report on “coordinated inauthentic behaviour” as a strategic manipulation technique, and they continue to disrupt networks that attempt to game public debate. (transparency.meta.com)
The second force is content industrialisation. Generative AI does not create disinformation by itself, but it makes it dramatically easier to produce endless variations of persuasive content: headlines, posts, comments, images, and “supporting” narratives tailored to different audiences and languages.
The result is not simply more fake content but more believable content deployed at a higher volume. Recent reporting has shown how AI manipulation is undermining historically trusted visual sources, such as satellite imagery, by lowering the barrier to convincing fakes. (ft.com)
The net effect for business is simple: false narratives can form and spread faster than traditional correction cycles, and they often land first where your stakeholders actually spend attention.
What boards should assume about today’s information environment
A boardroom-level posture starts with realism. First, not all disinformation will look false. Increasingly, the more effective narratives are those that use partial truth, real images placed in the wrong context, or authentic documents selectively quoted.
Second, the early “signals” of a narrative are rarely found in formal news; they show up in social feeds, private groups, and fast-moving commentary ecosystems.
Third, in a high-pressure moment, stakeholders reward confidence and speed. That dynamic creates a vulnerability: disinformation thrives when uncertainty is high and official clarity is slow.
Regulators are treating this as systemic, not incidental. The EU’s Digital Services Act places “risk mitigation” obligations on very large online platforms, explicitly including risks linked to disinformation and manipulation. (European Commission)
This matters for companies because it signals a broader shift: disinformation is being recognised as a structural hazard in the digital economy, not merely a communications nuisance.
How to recognise bot amplification and coordinated behaviour
Executives do not need to become investigators, but they do need pattern recognition. The most useful lens is not whether one account “looks fake”. The stronger signal is whether a cluster behaves unnaturally.
In practice, coordinated amplification often reveals itself by its timing and sameness. A post attracts an unusually rapid wave of comments within minutes, often repeating identical phrasing, hashtags, or links. The comments do not engage with the substance; they push a script.
The accounts are not verified and frequently have thin histories—limited original posting, generic bios, recycled profile imagery, and a follower graph that does not look organic. The activity pattern can also be instructive: accounts posting at all hours, at high volume, with minimal tone variance, as if the purpose were not conversation but reach.
Meta’s own description of coordinated inauthentic behaviour centres on precisely this strategic intent: networks that manipulate debate through fake identities and coordinated actions rather than genuine participation. (transparency.meta.com) A boardroom takeaway follows: when you see “sudden consensus”, treat it as a hypothesis, not evidence.
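For teams that want to operationalise the timing-and-sameness heuristic, it can be sketched in a few lines. The fragment below is illustrative, not a production detector: the input format, the five-minute window, and both thresholds are assumptions chosen for the example.

```python
from collections import Counter
from datetime import datetime

def burst_score(comments, window_seconds=300):
    """Fraction of comments arriving within window_seconds of the first one."""
    times = sorted(datetime.fromisoformat(c["time"]) for c in comments)
    if not times:
        return 0.0
    first = times[0]
    in_window = sum(1 for t in times if (t - first).total_seconds() <= window_seconds)
    return in_window / len(times)

def sameness_score(comments):
    """Share of comments repeating the single most common normalised text."""
    texts = [" ".join(c["text"].lower().split()) for c in comments]
    if not texts:
        return 0.0
    return Counter(texts).most_common(1)[0][1] / len(texts)

def looks_coordinated(comments, burst_threshold=0.8, sameness_threshold=0.5):
    """A hypothesis flag, not proof: a rapid, near-identical wave of comments."""
    return (burst_score(comments) >= burst_threshold
            and sameness_score(comments) >= sameness_threshold)
```

Real monitoring tools combine many more signals (account age, follower graphs, posting cadence), but the principle is the same: flag clusters whose behaviour is statistically unnatural, then investigate before concluding.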
How to evaluate whether a story is false, misleading, or simply unverified
For leaders, the objective is not to achieve philosophical certainty. It is to avoid acting on misinformation.
The fastest discipline is source-first thinking. If a claim is only carried by a single obscure site, a screenshot without provenance, or an account that does not disclose identity or credentials, it should be classified as unverified by default. In high-risk moments, the question is not “could this be true?” but “what is the primary evidence, and who stands behind it?”
A second discipline is evidence tracing. Strong reporting points to documents, named sources, public filings, direct quotes, and verifiable data. Weak reporting relies on dramatic language, unnamed insiders, and certainty without evidence.
A third discipline is context checking. A significant portion of misinformation is not fabricated media but real media used deceptively: old footage presented as new, a clip cut to remove key context, an image from a different location, or a headline that reverses the meaning of the underlying story.
Finally, cross-verification matters. For any claim that could materially affect decisions—share price, customer trust, regulatory exposure, or employee behaviour—leaders should confirm via at least two independent reputable sources or a primary authority such as a regulator, court records, official company statement, or a credible investigative outlet. If verification is not possible quickly, downgrade certainty and manage the situation as “unconfirmed”, not “true”.
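The cross-verification rule lends itself to an explicit default. The sketch below is a hypothetical illustration only: the source format and the `reputable` flag are assumptions, and judging reputability is the genuinely hard part that no code solves.

```python
def claim_status(sources, primary_authority=False):
    """Apply the 'two independent reputable sources or a primary authority'
    default described above; anything short of that stays unconfirmed."""
    # Count distinct reputable outlets, not total articles, so two stories
    # from the same outlet do not count as independent confirmation.
    independent_outlets = {s["outlet"] for s in sources if s.get("reputable")}
    if primary_authority or len(independent_outlets) >= 2:
        return "verified"
    return "unconfirmed"
```

The useful discipline is the default itself: a claim starts as "unconfirmed" and is promoted only by evidence, never by volume or repetition.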
What businesses should do before a disinformation moment arrives
The strongest corporate defence is not a clever rebuttal. It is preparedness. Boards should treat disinformation as part of the crisis landscape: not a separate category, but a multiplier that can accelerate any incident.
Preparedness begins with official truth channels. Stakeholders need to know where the organisation speaks with authority, whether that is a newsroom page, a press section, verified social accounts, or a predefined incident hub. When organisations have no obvious “single source of truth”, disinformation fills the gap.
The second layer is a monitoring system designed to detect early signals. This is not about vanity metrics; it is about detecting abnormal narrative formation: sudden spikes, repeated links, clustering of identical language, or targeted attacks on leadership. The earlier an organisation spots coordination, the more options it has to respond calmly rather than reactively.
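As a sketch of what "detecting abnormal narrative formation" can mean in practice, the fragment below flags a mention spike against a rolling baseline and measures how concentrated a conversation is on a single link. The data shapes and the z-score threshold are illustrative assumptions, not a monitoring product.

```python
from collections import Counter
from statistics import mean, stdev

def spike_alert(hourly_mentions, z_threshold=3.0, baseline_hours=24):
    """Flag the latest hour if mentions exceed the rolling baseline
    mean by more than z_threshold standard deviations."""
    if len(hourly_mentions) <= baseline_hours:
        return False  # not enough history to establish a baseline
    baseline = hourly_mentions[-baseline_hours - 1:-1]
    latest = hourly_mentions[-1]
    # Floor the deviation at 1.0 so a perfectly flat baseline does not
    # turn every tiny fluctuation into an alert.
    return latest > mean(baseline) + z_threshold * max(stdev(baseline), 1.0)

def repeated_link_ratio(posts):
    """Share of all shared URLs accounted for by the single most repeated
    one; high values suggest scripted link-pushing rather than discussion."""
    links = [url for p in posts for url in p.get("urls", [])]
    if not links:
        return 0.0
    return Counter(links).most_common(1)[0][1] / len(links)
```

Commercial listening platforms do this at far greater sophistication; the point of the sketch is that "abnormal" is always defined relative to the organisation's own baseline, which is why monitoring must be running before the crisis, not stood up during it.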
The third layer is response discipline. When disinformation appears, the aim is to reduce ambiguity, not to argue. Leaders should correct the false claim plainly, state what is known and what is being verified, and keep the organisation aligned on a single narrative and set of facts. Over-explaining, repeating the false framing, or escalating emotionally can inadvertently amplify the story.
The boardroom conclusion
Disinformation is not simply “online noise”. It is a trust risk that can become operational within hours. The best defence is a verification culture combined with crisis-grade readiness: clear truth channels, early detection of coordinated behaviour, and disciplined response that privileges clarity over volume.
In an era where audiences are increasingly sceptical and the information supply is increasingly polluted, competitive advantage shifts to organisations that can do one thing consistently: speak with credibility at speed, and prove it. For that, you are best served with Lighthouse PR at your side.
——
About the Author
Steve Gardiner (exec MBA) is a senior marketing and commercial leader at Lighthouse PR, bringing global experience from Accenture, Electronic Arts, Virgin Media, Telekom, and Etisalat. Latterly, as VP Business at Etisalat, he was responsible for $1.8B in revenue.
Today, Steve applies his strategic, marketing, and growth expertise to support Lighthouse PR clients as part of the agency’s service offering.