Breton's megaphone enforcement is no way to tackle disinformation
EU Commissioner Thierry Breton is taking an aggressive public approach to social media platforms’ failure to tackle disinformation about the Israel-Hamas conflict. Breton’s approach may help boost his public profile – but it risks undermining more useful steps in the fight against disinformation, writes Zach Meyers.
It is hard to blame the European Commission for flexing its muscles against X (still better known as ‘the platform formerly known as Twitter’). After cutting its content moderation team, the platform has increasingly struggled to deal with terrorist content, violence, hate speech and disinformation.
In recent days that problem has spiralled out of control. Countless doctored or out-of-context images purporting to be from the current Israel-Hamas conflict have gone viral. But the Commission – and in particular, EU Commissioner Thierry Breton – has taken steps against X which risk undermining the fight against disinformation.
The EU has a powerful new tool, the Digital Services Act, to hold large social media platforms to account. The new law only came into force in August. It requires platforms to take down illegal content quickly and to mitigate the risks their services can cause. It is no surprise that the European Commission wants to demonstrate publicly that the law has teeth.
In recent days, Breton has therefore publicly written to X’s owner, Elon Musk, and to other large platforms. He complains not only about illegal content but also about the fact that fake or manipulated content is circulating on social media platforms. And he has followed up with a formal information request, a precursor to a potential enforcement investigation.
One problem with Breton’s approach is that fake or manipulated content may be awful, but it is not illegal. The EU deliberately chose not to require platforms to censor so-called “lawful but awful” content. That was a sensible decision. Requiring such censorship would intrude on freedom of expression – which includes the right to express views even if they are misleading, incorrect, or abhorrent.
It would create a murky grey zone of material which is not illegal offline but cannot be posted online. It would empower authorities to make politically charged decisions about the line between “legitimate opinion” and “fake news”. By treating the mere existence of awful content on X as a problem, the European Commission risks brushing aside these dangers and legitimising online censorship.
A second problem is that publicly shaming platforms is unlikely to encourage good behaviour. Tackling disinformation is far more difficult and intractable than simply removing material. Taking content down, by itself, is ineffective: it simply drives more of the worst content to smaller, less accountable, and less regulated platforms.
The EU’s laws recognise that we do not yet know exactly how best to tackle disinformation. Researchers still argue about the best alternative measures – such as tweaking algorithms, adding health warnings, or making polarising content more difficult to share. The right tools will have to evolve as the nature of disinformation changes.
The EU sensibly requires platforms to put systems and resources in place to mitigate the risks their services create – it does not dictate which specific steps platforms must take. For this approach to work, however, there needs to be a degree of trust between the big platforms and the Commission enforcing the law. Platforms will need to trial different approaches to see what works best – and should be open with the Commission about their approaches and the results.
Platforms are unlikely to do this if they fear that EU enforcers are more worried about looking tough than about acting fairly and predictably. Breton’s rush to publicly “name and shame” platforms at the first opportunity – while pointing only to the mere availability of bad content, rather than its reach or effects – is deeply harmful to the EU law’s long-run success in tackling disinformation.
His approach is unlikely to change the behaviour of an already recalcitrant Elon Musk. But it may make more responsible platforms think twice about how openly to engage with EU officials. It may also encourage them to focus on taking content down rather than on developing more effective approaches.
It is hard to imagine that Elon Musk is today fulfilling his legal responsibilities, given X’s arbitrary and unpredictable policy changes and its hollowed-out content moderation and public safety teams. But by jumping to publicly shame companies, and taking heavy-handed enforcement steps at the first opportunity, EU enforcers are not living up to their responsibilities either.
Zach Meyers is a research fellow at the Centre for European Reform where he works on EU competition policy, particularly in the digital sector.