Social Media Disinformation a Present Day Threat to Democracy

by Wall Street Rebel - Michael London | 08/24/2022 9:20 AM

"Don't Look Up." Disinformation is an old phenomenon, but social media has made it more sophisticated and weaponized it into a common political tactic. Facebook, Google, Twitter, and YouTube fight internet disinformation, yet it is still a present-day threat to peace and stability.


The spread of false information online has become a problem for the U.S. and the rest of the world over the past few decades. Domestic and foreign adversaries increasingly use it to disrupt democratic processes, erode democratic norms, and undermine public trust in institutions. We just received a sobering example: misinformation contributed to the senseless deaths of tens of thousands of Americans from Covid, even with multiple vaccines as accessible as a trip to the pharmacy. In the United States, as of August 24, 2022, there have been 1,066,416 Covid deaths, with 294 new deaths reported the previous day. Yet people still resist a scientifically proven vaccine that saves lives, and many of those same people demonize science and lead others around them to do the same.

Malicious people have been using propaganda and disinformation to trick and mislead the public for decades. However, disinformation online can spread in today's communication network quickly, cheaply, and widely across social media platforms, making it a complex problem to solve. The internet and social media platforms are now used as "weapons" to confuse, anger, and split up civil society.

As another election season approaches in the United States, social media corporations are preparing themselves for the onslaught of political disinformation that will inevitably accompany it. These corporations, such as TikTok and Facebook, are promoting a set of election tools and strategies that appear to be quite similar to the methods they used in prior years.

Watchdogs for disinformation warn that even though many of these programs are helpful — especially efforts to push credible information in multiple languages — the strategies have proven insufficient in previous years and may not be sufficient to combat the wave of falsehoods that are being pushed during this election season.

The following outlines the anti-disinformation strategies being used by Facebook, TikTok, Twitter, and YouTube:


In a blog post published last week, Facebook's parent company Meta's president of global affairs, Nick Clegg, said that this year's strategy would be "basically consistent with the rules and safeguards" from 2020.

A post may bear a warning label if one of Facebook's ten fact-checking partners in the United States determines that it is inaccurate or partially false. These labels may compel users to click through a banner that reads "false information" before viewing the post's content. After users complained in 2020 that the labels were "overused," Mr. Clegg said they would be applied to posts about the integrity of the midterm elections in a more "targeted and planned way."

Additionally, Facebook will increase its efforts to stop harassment and threats directed at poll workers and election authorities. Disinformation experts claimed that the company showed a heightened interest in screening content that could result in actual physical violence after the attack on the U.S. Capitol building on January 6.

Facebook considerably expanded its election team, to more than 300 people, in response to the 2016 election. Mark Zuckerberg, Facebook's chief executive, took a personal interest in preserving the legitimacy of elections.

However, Meta's focus has shifted since the 2020 election. The company has dispersed its election personnel and hinted that CrowdTangle, a tool that helps monitor false information on Facebook, will be shut down sometime after the midterms. Mr. Zuckerberg is now concentrating on building the metaverse and fending off fierce competition from TikTok.


Eric Han, TikTok's head of safety in the United States, said in a blog post outlining the company's midterm plans that it would continue the fact-checking program it ran in 2020, under which certain videos cannot be recommended until third-party fact-checkers verify that their information is accurate. The company also launched its election information portal, which tells voters how to register and provides other election-related information, six weeks earlier than it did in 2020.

Despite this, there is already ample evidence that misleading information and rumors flourished on the platform throughout the primary elections.

Mr. Lehrich predicted that TikTok would be a major conduit for rumors this cycle, adding that the platform's short video and audio clips are harder to moderate, allowing "vast amounts of misleading information to go undetected and spread virally."

TikTok has said that its moderation efforts will focus on preventing creators from being paid to post political content that violates the company's guidelines. TikTok has never allowed political advertising or sponsored political content on its platform, but the company said some users tried to circumvent or ignore those restrictions during the 2020 election. A company representative said TikTok will begin reaching out directly to talent management agencies to explain its policies and procedures.

Watchdog organizations that monitor the spread of disinformation have criticized the company for its lack of transparency about the sources of the videos it hosts and about the effectiveness of its moderation methods. Industry researchers have also asked for more tools to investigate the platform and the information it hosts, similar to the access other companies provide.

This year, the company will begin sharing some data with "selected researchers," according to a statement from TikTok's chief operating officer, Vanessa Pappas.


In a blog post outlining its plans for the 2022 midterm elections, Twitter announced that it would bring back its Civic Integrity Policy, a set of rules adopted in 2018 and applied in the run-up to elections around the world. Under the policy, warning labels, comparable to those used by Facebook, will once again be placed on tweets containing inaccurate or misleading information about elections, voting, or election integrity; the labels point users toward accurate information or additional context. Labeled tweets are excluded from the company's recommendation algorithms, so they will not be suggested or amplified, and the company can also remove tweets that are deceptive or wrong.

According to the company, the redesigned labels produced a 17 percent increase in clicks for additional information, and tweets carrying the updated labels drew more engagement, including replies and retweets, than those that did not.

The strategy highlights Twitter's attempt to reduce inaccurate content without necessarily removing tweets or banning users.

This approach may help the company navigate the difficult free-speech questions that have dogged social media platforms in their efforts to curb the spread of misleading material. Elon Musk, Tesla's chief executive, made objections over freedom of expression a central theme of his offer to purchase the company earlier this year.


In contrast to most other major internet platforms, YouTube has not made its 2022 election disinformation plan public, maintaining near-total silence about its approach.

Mr. Sanderson said that YouTube has been absent from these discussions, which appears to fit its usual public relations strategy: "don't say anything, and no one will notice."

In March, Google, YouTube's parent company, published a blog post describing its efforts to remove videos that mislead voters and to promote authoritative information through YouTube's recommendation engine. In a separate blog post aimed at content creators, Google explained how channels can be penalized with "strikes" for spreading certain types of false information; a channel that receives three strikes within ninety days is removed from the platform.

The online video-streaming giant has played a significant role in spreading false political information, providing early support to conspiracy theorists like Alex Jones, who was eventually banned from the platform. It has taken a firmer stand against medical misinformation, announcing in September of last year that it would remove videos and accounts that shared inaccurate information about vaccines, a move that ultimately led it to ban several prominent conservative personalities.

In January, more than eighty fact-checkers working for independent groups worldwide signed a statement warning that YouTube's platform was being "weaponized" to promote voter-fraud conspiracy theories and other forms of election misinformation.

Ivy Choi, a spokeswoman for YouTube, said in a statement that the platform's election team had been meeting for several months to prepare for the midterm elections. She added that the platform's recommendation engine was "continuously and prominently surfacing midterms-related content from authoritative news sources and limiting the spread of harmful misinformation."


[Video: Why people fall for misinformation - Joseph Isaac]





