According to fact checkers, no. “Our experience, together with scientific evidence,” they wrote in the letter sent to YouTube last year, “tells us that surfacing verified information is more effective than deleting content. It preserves freedom of expression while recognizing the need for additional information to reduce the risks of harm to life, health, safety and democratic processes. And considering that most YouTube views come from its own recommendation algorithm, YouTube should also ensure that it is not actively promoting misinformation among its users or recommending content from untrustworthy channels.”
An approach that goes by the name of pre-bunking. “When readers are aware that the link they are about to open leads to an unreliable site, they will read with greater caution and think twice before sharing that content and giving it further visibility and reach,” explain Virginia Padovese and Giulia Pozzi of NewsGuard.
A code against disinformation
Politics turned its spotlight on fake news long ago, pushing companies toward forms of self-regulation. In 2018, a group of companies active in Europe (including those behind the main platforms) signed a Code of Practice on Disinformation with the aim of curbing manipulation attempts. The signatories of the revised and strengthened version presented in 2022 include Meta, Google, Microsoft, TikTok and Twitter.
Following the indications from Brussels, in recent days the signatories have launched a Transparency Centre in which they must account for what they are doing to comply, providing EU citizens, researchers and NGOs with a single database from which to access and download information online. “With these reports,” reads a note from the Commission, “the platforms provide, for the first time, comprehensive information and initial data, such as the value of advertising revenue prevented from reaching disinformation actors; the number or value of political ads accepted and labeled, or rejected; cases of manipulative behavior detected (i.e. the creation and use of fake accounts); and information on the impact of fact-checking, including at Member State level.”
Foreign interference
Brussels keeps pressing on a theme evidently central to Ursula von der Leyen’s thinking. In recent days, the EU High Representative for Foreign Affairs Josep Borrell announced that the Union will deploy disinformation experts in many of its diplomatic missions to counter the campaigns of Russia and China. Borrell stated that Russia’s investments in disinformation have exceeded what the EU has deployed in countermeasures.
The problem, the politician underlined, is also fighting fake news spread in less widely spoken languages: “Many people don’t understand English, so to reach them we will have to speak their language,” as well as being present on the media they use. The speech accompanied the presentation of the first Report on Foreign Information Manipulation and Interference threats. It introduces a new acronym, FIMI (Foreign Information Manipulation and Interference): according to the definition provided by Brussels, it describes “a non-illegal pattern of behavior that threatens or has the potential to negatively impact political values, procedures and processes”. The implicit reference is to the Qatargate scandal. The key expression, the one that complicates everything, is “non-illegal”: a gray area in which many actors, from politicians to lobbyists to secret services, thrive.