Social media platforms need to step up: EU wants to stamp out fake news

Back in 2018, platforms such as Google, Facebook, Twitter, and TikTok pledged to do better when it comes to disinformation. How did they fare? Not quite well enough, it seems. And so the EU is stepping in again to motivate social media platforms in this important fight. This time, the European Commission proposes stricter rules that require platforms to double down on internet trolls and fake news. It’s even more necessary now than it was three years ago: this time, human lives are at stake.

EU wants to end the anti-vaccination campaign on social media

Just recently, several Western influencers revealed that a suspicious PR agency with ties to Russia had contacted them. The offer was quite tempting: in return for spreading conspiracy theories about the Pfizer vaccine, they would be generously rewarded.

Whether this really was an attempted manipulation or not, the European Commission believes that a “massive anti-vaccination campaign” is ongoing. As a result, social media platforms need to step up.

Misinformation, in general, is one of the defining problems of our age. Social media platforms have allowed it to influence elections and nudge the public towards impactful decisions. But vaccination is a whole other story: no country can sustain repeated lockdowns, exhausted doctors, and wasted resources.

“The coronavirus pandemic has been accompanied by a massive infodemic,” EU foreign policy chief Josep Borrell said at a press conference. “Disinformation in times of the coronavirus can kill. We have a duty to protect our citizens by making them aware of false information — and expose the actors responsible for engaging in such practices.”

What will change? 

As DW reported, officials are determined to put a stop to fake news as a revenue source. “We need to see stronger commitments by online platforms, the entire advertising ecosystem, and networks of fact-checkers,” EU industry chief Thierry Breton said. At the same time, the EU doesn’t want to cross the line and limit free speech on social media, nor does it intend to enforce censorship.

The Guardian reports that the revised code of practice will still be voluntary. However, companies can expect fines of up to 6% of their annual revenue if they don’t remove illegal content. The code will apply from 2022, and its final form will be shaped in cooperation with the social media platforms. So far, both Facebook and Twitter have publicly supported the Commission’s efforts.

Among the newly introduced proposals is a guideline on algorithmic accountability and transparency. Companies won’t have to reveal their algorithms’ source code. Instead, they will need to prove that they are actively preventing, removing, and blocking fake news.

“We would like them to embed the fact-checking into their system,” said EU Commission vice-president Vera Jourova. That means there will be no single “arbiter of truth”, and the companies themselves won’t be deciding what is and isn’t disinformation. The Commission will cooperate with several organizations to provide oversight. Another step will be to limit political advertising, as the EU wants to send a clear signal that foreign interference in elections within the member states will not be tolerated. Jourova said it herself: “I don’t want elections to be a competition in dirty methods. We saw enough with Cambridge Analytica and the Brexit referendum.”

These plans could set a course for the future, in which the fight against fake news will surely intensify. At least in democratic societies that don’t want internet trolls deciding their fate.
