Dismislab’s New Study Reveals YouTube Running Ads on Misinformation Videos
YouTube, a dominant platform in Bangladesh, significantly shapes news consumption and entertainment, but concerns persist about its role in spreading and monetizing misinformation. A study by Dismislab, Digitally Right’s disinformation research unit, identified 700 unique Bangla misinformation videos that had been fact-checked by independent organizations and were still live on YouTube as of March 2024.
The study, titled “Misinformation on YouTube: High Profits, Low Moderation,” shows that about 30% of these misinformation videos, excluding Shorts, displayed advertisements, generating profit for the platform and posing reputational risks for advertisers. The ads appeared on 165 videos with a combined 37 million views and came from 83 different brands, one-third of them foreign companies targeting the Bangladeshi audience. About 16.5% of the channels posting these videos were YouTube-verified; a few were known media outlets, but most were content creators in genres such as entertainment, education, and sports, often posing as news providers.
The misinformation centered primarily on political (25%), religious, sports, and disaster-related topics, with some channels repeatedly spreading false information. Researchers reported all 700 videos to YouTube, but only a fraction (25 of the 700, under 4%) received any action, such as removal or an age restriction, highlighting gaps in YouTube’s enforcement of its own policies.
The research identifies the following key issues with YouTube’s moderation and policies:
- YouTube’s policies are limited: they are often vague and inadequate.
- The policies provide some examples but often state that violations are “not limited to” these instances, without specifying what else is prohibited, leaving the moderation process unclear.
- Not all misinformation necessarily needs to be removed, but users should be made aware of potentially false or misleading claims. Other platforms, such as Facebook and Twitter, label misinformation based on user reports or third-party fact-checking organizations; YouTube does not do this extensively.
- YouTube’s automated systems often fail to detect misinformation that violates its policies. Moreover, these systems cannot reliably detect the same false content when it reappears on other channels, even after it has been banned or removed in response to community reports.
The research also examined what actions YouTube took against the reported content. While many of the videos explicitly violated policies and others were borderline, the reporting was carried out mainly to understand how YouTube reviews user-reported content. Advertisers and experts interviewed for this research expressed disappointment over ad placements on misinformation content and emphasized the urgent need for YouTube to strengthen its moderation capabilities and provide better transparency and control options.