
Shedding light on LinkedIn’s enforcement of health misinformation policies: A pilot study

Posted on: 2022-04-26

Summary of findings

The career- and business-oriented social media platform LinkedIn has been largely absent from the public conversation around health misinformation, compared to more informal platforms like Facebook and Twitter. While LinkedIn’s policies state that it prohibits “content directly contradicting guidance from leading global health organizations and public health authorities”, anecdotal user reports paint a different picture. To determine whether LinkedIn enforces its policies on health misinformation, we obtained a sample of 61 posts, in English or French, identified as containing COVID-19 vaccine misinformation, and flagged them through LinkedIn’s reporting mechanism. We found that more than two-thirds of these posts received no moderation despite violating LinkedIn’s policies. Our findings indicate a significant discrepancy between LinkedIn’s public stance on health misinformation and its actual enforcement of its policies.

Introduction

The conversation around dangerous misinformation on social media platforms has centered heavily on Facebook, YouTube, Twitter, Instagram, and even the relatively new player TikTok. Curiously, LinkedIn, the career- and business-oriented social media platform that launched in 2003, has been spared the same level of scrutiny.

However, LinkedIn has proved to be vulnerable to misinformation, like other platforms. For starters, its Transparency Report for the first half of 2021 reported “record-high engagement and conversations”, accompanied by “more content that violates our policies”, including misinformation.

Figure 1. The number of LinkedIn posts removed for violating LinkedIn’s Professional Community Policies and User Agreement. Graph is from LinkedIn’s Transparency Report for the six-month period between January 1 and June 30, 2021.

As LinkedIn’s own data shows, misinformation made up the majority of content removed from the platform, with harassment or abusive content a close second (Figure 1). However, the platform provided only the number of posts removed for content violations, not the total number of reports filed for such content or the total number of posts made on the platform during that period.

This isn’t unique to LinkedIn; as researchers noted, Facebook also doesn’t disclose the total amount of content posted on its platform. The absence of “a meaningful denominator”, as Ethan Zuckerman, professor and director of the MIT Center for Civic Media, highlighted, makes it difficult to place the available data in context: a denominator is essential for understanding the true scope of the problem and the effectiveness of interventions. To illustrate, 10,000 removals for misinformation means something very different on a platform hosting one million posts than on one hosting one billion.

In a public statement published in March 2020, LinkedIn said that “information contradicting guidance from leading global health organizations and public health authorities [is not] allowed on the platform [including] making unsupported claims about the virus’s origins or posts that downplay the seriousness of the pandemic, as well as baseless treatments or cures”.

The same is also stated in LinkedIn’s Professional Community Policies:

Do not share content in a way that you know is, or think may be, misleading or inaccurate, including misinformation or disinformation […] We may prevent you from posting content from sites that are known to produce or contain misinformation. Do not share content that directly contradicts guidance from leading global health organizations and public health authorities.

But it appears to be a different story on the ground. Some users told the French daily Le Monde that LinkedIn moderators allowed reported posts contradicting such guidance to remain on the platform, at odds with the platform’s public stance on health misinformation.

What are we to make of the conflicting accounts by LinkedIn’s management and LinkedIn users? Without data, it is hard to come to any kind of conclusion. Unlike Facebook and Twitter, which have been the subject of multiple academic studies, LinkedIn has yet to undergo a similar level of objective examination. Indeed, we were unable to find a published study or analysis about misinformation on LinkedIn. The lack of such information impedes our ability to recognize potential problems on the platform and consequently undermines the ability to implement solutions.

As a first step towards addressing the knowledge gap about the state of health misinformation on LinkedIn, we decided to conduct an experiment on the platform, focusing on COVID-19 vaccine misinformation. The aim of this experiment was to determine whether LinkedIn abides by its own policies and moderates health misinformation on the platform, and if not, whether the failure to enforce its policies affects only a minority of posts or is, in fact, widespread and systemic.

Methods

Starting from 23 February 2022, we searched for relevant posts with the keywords “covid vaccine” in LinkedIn’s search function, applying the filters “Last 24 hours” and “Top match”. We conducted nine searches in total. In each search, we screened the first 50 search results for posts containing misinformation about COVID-19 vaccines. It should be noted that LinkedIn search results sometimes display duplicate posts; duplicates were not counted towards the first 50 results we collected per search.
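To make the screening procedure concrete, the sketch below illustrates the deduplication step in Python. This is a minimal illustration under our own assumptions: the “post_id” field and the function name are hypothetical, as our screening was performed manually rather than through an API.

```python
# Minimal sketch of the per-search screening step: keep the first 50
# unique results, skipping the duplicate posts that LinkedIn's search
# sometimes displays. The "post_id" field and function name are
# illustrative; the actual screening in this study was done manually.

def first_unique_results(results, limit=50):
    """Return the first `limit` results with distinct post IDs."""
    seen_ids = set()
    unique = []
    for post in results:
        if post["post_id"] in seen_ids:
            continue  # duplicate: not counted towards the 50-result limit
        seen_ids.add(post["post_id"])
        unique.append(post)
        if len(unique) == limit:
            break
    return unique
```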

At the end of our evaluation period (21 March 2022), we had identified 61 posts containing COVID-19 vaccine misinformation out of the 450 results screened across the nine searches. Almost 80% of the posts in our sample were in English, with the remainder in French.

We used a LinkedIn account with zero connections to conduct the searches. LinkedIn’s algorithm likely selects which posts to display based on a user’s location, connections, line of work, and other interests. By using an account with zero connections, we attempted to minimize the bias the algorithm could introduce, although it is unlikely that we avoided all biases. For example, the account’s location was set to France, which is likely why French-language posts also appeared in search results. Accounts set to other locations where another language is dominant might receive different results.

As far as possible, we preserved the posts by archiving them. In a handful of instances this wasn’t possible, as attempts to archive a post led to LinkedIn’s login page instead. In such cases, we took screenshots of the posts. Each post was reported for misinformation using LinkedIn’s built-in reporting mechanism, and we then recorded the moderation decision.

LinkedIn provides users with options to report a post for various reasons (see below), including misinformation. As shown in the screenshot, LinkedIn explicitly states that it “prohibits false content or information, including news stories, that present untrue facts or events as though they are true or likely true” and “content directly contradicting guidance from leading global health organizations and public health authorities”.

Figure 2. Options that appear when reporting a LinkedIn post.

In our experiment, we observed four possible outcomes from reporting a post for misinformation, summarized below in Figure 3. The left-most panel shows the initial message that LinkedIn sends to a user after a report. The middle panel shows the four different responses:

  • Response 1: Sent to a user when LinkedIn finds a post to be not in violation of its policy. To the right of Response 1 is a possible follow-up, in which LinkedIn’s Trust and Safety Team reverses the initial decision (Response 1), finds the post to be in violation of the platform’s policies after all, and removes the post.
  • Response 2: Sent when a post is found in violation of the policy, leading to removal.
  • Response 3: Shown when no report is available. According to LinkedIn, this can be because the post has already been deleted or because the user isn’t “authorized” to view the report. It’s unclear exactly what kind of authorization is needed, given that the outcome of moderation is normally communicated to the user (Responses 1 and 2).
  • Response 4: No follow-up from LinkedIn.

Figure 3. Possible outcomes after reporting a post for misinformation on LinkedIn.

Results

Among the 61 posts we examined, 47 (77%) shared an article. Notably, several of these posts linked to websites with a reputation for regularly publishing health misinformation, including the anti-vaccine organization Children’s Health Defense and websites known for publishing conspiracy theories and misinformation, such as The Gateway Pundit and Daily Expose.

We also made efforts to identify common narratives in our sample of 61 posts. While a single post can contain a mishmash of multiple claims, certain health misinformation themes emerged with frequency. The most common theme was the misinterpretation of vaccine adverse event reports as standalone evidence of vaccine side effects (28% of posts).

This popular anti-vaccine trope predates COVID-19 vaccination by several years. As Health Feedback discussed in this Insight article, the problem with this interpretation is that it oversimplifies causality. Multiple factors must be considered to determine whether an adverse event is actually caused by a vaccine. While temporality (the adverse event must come after vaccination, not before) is one such factor, it isn’t sufficient evidence on its own to establish causality.

The next most common theme was that the COVID-19 vaccines are deadly (18%), either because they make people more vulnerable to COVID-19 (the reverse is true) or because of side effects like myocarditis. While authorities have acknowledged that myocarditis is a potential side effect of COVID-19 mRNA vaccines, the overall benefits of the COVID-19 vaccines still outweigh their risks.

Indeed, calling into question the risk-benefit ratio of COVID-19 vaccination (15%) was another popular theme. But the U.S. Centers for Disease Control and Prevention, as well as the World Health Organization, are clear on the matter: the benefits of COVID-19 vaccination outweigh its risks. The false claim that COVID-19 vaccines modify DNA (they don’t) also made several appearances.

Looking at the breakdown of moderation outcomes in our sample, we found that more than two-thirds of posts (68.9%) weren’t considered to be in violation of LinkedIn’s Professional Community Policies, despite the fact that these posts contained misinformation about COVID-19 vaccines, breaking LinkedIn’s rule that users must not publish content contradicting guidance from public health authorities. A further 3.3% of posts were initially found not to be in violation of LinkedIn’s policies, but the moderation decision was later reversed: the posts were found to be in violation after all and were removed. It’s unclear what prompted the moderator(s) to review the initial decision.

Figure 4. A pie chart showing the outcome of moderation for 61 posts, comprising posts in either English or French. More than two-thirds of the posts were not considered to be in violation of LinkedIn’s Professional Community Policies, even though they contained COVID-19 vaccine misinformation that contradicted guidance from public health authorities.

Fewer than 15% of the posts in our sample were found by LinkedIn to violate its own policies, while for 5.7% of posts no moderation report was available or the post had been deleted. Finally, for 8.2% of the posts, we received no follow-up from LinkedIn.

We did not observe any correlation between a specific moderation outcome and a particular theme of misinformation. However, the outcomes paint a picture of confusion. For instance, the three posts below all contain the false claim that vaccine adverse event reports recorded by a German insurer indicate alarming safety issues with the COVID-19 vaccines (Figure 5). But only one of these posts (shown on the left) was considered to have violated LinkedIn policies and was removed, while the other two were inexplicably not considered to be in violation.

Figure 5. Screenshot of three posts that falsely claimed a German health insurer found evidence of “COVID vaccine injuries”. All three posts share a link to an article by the anti-vaccine organization Children’s Health Defense. Only one post (shown on the left) was considered to be in violation of LinkedIn policy and removed; the other two weren’t considered to be in violation.

Conclusion

Our findings show that LinkedIn’s moderation process is largely hit-and-miss in upholding its own policies on health misinformation. Despite its claim that misinformation has no place on its platform, its decisions suggest otherwise: the majority of posts containing COVID-19 vaccine misinformation in our sample evaded enforcement. Furthermore, enforcement, when it does occur, is inconsistent. Reporting posts that contain the same false or misleading claim can produce completely opposite outcomes from moderators.

We also observed that the majority of posts involved article shares, and that several of these posts shared content from websites already established as outlets of health misinformation, such as Children’s Health Defense. One step towards improving the detection of health misinformation on the platform could therefore be scanning posts for content from such websites, for instance by checking the URLs shared in posts against a list of known misinformation outlets, as sketched below.
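To illustrate what such URL-based detection might look like, here is a minimal sketch in Python, assuming a platform maintains a blocklist of domains known to publish health misinformation. The blocklist entries, the URL pattern, and the function name are illustrative assumptions on our part, not a description of LinkedIn’s actual systems.

```python
# Minimal sketch: flag a post for human review if it links to a domain
# on a blocklist of known misinformation outlets. The entries below are
# illustrative; a real blocklist would be curated and kept up to date.
import re
from urllib.parse import urlparse

BLOCKLISTED_DOMAINS = {
    "childrenshealthdefense.org",
    "thegatewaypundit.com",
}

URL_PATTERN = re.compile(r"https?://\S+")

def flag_for_review(post_text: str) -> bool:
    """Return True if the post links to any blocklisted domain."""
    for url in URL_PATTERN.findall(post_text):
        host = urlparse(url).netloc.lower()
        # Match the domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in BLOCKLISTED_DOMAINS):
            return True
    return False
```

A post flagged this way would not need to be removed automatically; simply routing it to a human moderator would already improve on the inconsistent outcomes we observed.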

Over the course of this experiment, we also realized that another aspect of moderation of potential interest is how effectively moderation is applied to English-language posts compared to posts in other languages. The sample we collected in this analysis isn’t large enough to reliably demonstrate differences in moderation decisions between posts in English and French, for instance. This could be a focal point for future studies.

Indeed, the language barrier has proved to be a significant obstacle to effective content moderation on platforms. For example, the human rights non-profit group Avaaz observed that only about 30% of COVID-19 misinformation in Spanish and Italian was flagged as misinformation on Facebook, compared to 70% of COVID-19 misinformation in English. This suggests that non-Anglophone audiences are potentially under-served by measures aimed at reducing exposure to misinformation.

In summary, we hope that our experiment sheds some light on the state of health misinformation on LinkedIn and provides misinformation researchers with a stepping stone to delve deeper into a platform that has, until now, been sidelined in the public conversation about web platforms’ role in combating misinformation.

