Research: AI Adoption in Bangladeshi Newsrooms 2024

Swift technological change is driving a global transformation in journalism, fundamentally altering how news is gathered and disseminated, and Artificial Intelligence (AI) has emerged as a defining factor in reshaping the field. Many newsrooms worldwide are leveraging AI for efficient content creation and expanded audience engagement. From automating routine tasks to providing insights through data analysis, AI use is growing because of its efficiency and speed. The global media industry is in transition as AI tools reshape traditional newsroom practices and storytelling methods.

However, the adoption of AI in journalism brings unique challenges too, including concerns about misinformation, bias in AI tools, and fears around job security. The ethical use of AI also remains a concern. Despite these challenges, if implemented ethically and strategically, AI has the potential to act as a powerful ally in overcoming systemic issues and reshaping journalism into a more resilient and responsive domain.

The Media Resources Development Initiative (MRDI), in collaboration with Digitally Right Limited (DRL) and support from The Asia Foundation (TAF), conducted a comprehensive study on the adoption of Artificial Intelligence (AI) in Bangladeshi newsrooms.

The study explores the knowledge, use, adoption, and future demands of AI in Bangladeshi newsrooms and assesses the policies and practices necessary for its ethical and responsible integration. With a view to identifying the needs and gaps that must be addressed for greater AI adoption in the sector, the report offers actionable recommendations for fostering responsible AI integration, with emphasis on AI literacy, ethical practices, and sustainable implementation for quality journalism in Bangladesh.

The study reveals a striking disconnect between potential and practice in AI adoption in Bangladeshi newsrooms. While journalists use AI tools for basic tasks like writing, editing, and translation, they are reluctant to pursue deeper integration. At the individual level, this reluctance stems not just from technological skepticism but from lack of access and knowledge, deep-rooted cultural inertia, and fears about job security. At the institutional level, resistance to change, lack of strategic vision, and limited organizational support hamper the broader application of AI in newsrooms.

These barriers are intensified by insufficient investment in training, a lack of clear policies on AI usage, and worries about maintaining journalistic integrity, all of which prevent the successful integration of AI technologies. Notably, newsrooms prioritize immediate output over systematic implementation, resulting in shallow engagement with AI’s capabilities.

Dismislab’s New Study Reveals YouTube Running Ads on Misinformation Videos

YouTube, a dominant platform in Bangladesh, significantly influences news consumption and entertainment, but concerns about its role in spreading and monetizing misinformation persist. A study by Dismislab, Digitally Right’s disinformation research unit, identified 700 unique Bangla misinformation videos fact-checked by independent organizations and still present on YouTube as of March 2024.

The study, titled “Misinformation on YouTube: High Profits, Low Moderation,” shows that about 30% of these misinformation videos, excluding Shorts, displayed advertisements, thereby generating profit for the platform and posing reputational risks for the advertisers. These ads appeared on 165 videos, which accumulated 37 million views and featured ads from 83 different brands, one-third of them foreign companies targeting the Bangladeshi audience. 16.5% of the channels posting these videos were YouTube-verified; a few were known media outlets, but most were content creators across genres like entertainment, education, and sports, often pretending to be news providers.

Misinformation primarily centered on political (25%), religious, sports, and disaster-related topics, with some channels repeatedly spreading false information. Researchers reported all 700 videos to YouTube, but only a fraction (25 out of 700) received action, such as removal or age restrictions, highlighting gaps in YouTube’s enforcement of its own policies.

The research identified the following key issues with YouTube’s moderation and policies:

  • YouTube’s policies are often vague and inadequate.
  • The policies provide some examples but often state that violations are not limited to these instances, without specifying what else is not permitted, rendering the moderation process unclear.
  • It is not always necessary to remove all misinformation; however, users should be made aware of potentially false or misleading claims. Other platforms, such as Facebook and Twitter, label misinformation based on user reports or third-party fact-checking organizations. YouTube does not do this extensively.
  • YouTube’s automated systems often fail to detect a variety of misinformation that violates its policies. Furthermore, these systems cannot reliably detect the same false content on other channels, even after it has been banned or removed in response to community reports.

The research also observed what actions YouTube took against the reported content. While many videos explicitly violate policies and others are less clear-cut, the reporting was carried out mainly to understand how YouTube reviews user-reported content. Advertisers and experts interviewed for this research expressed disappointment over ad placements on misinformation content, emphasizing the urgent need for YouTube to enhance its moderation capabilities and provide better transparency and control options.

Research Papers from the Tech Policy Fellowship 2022

With the rapid rise of the internet and the adoption of new technologies in everyday life, technology-related laws and policies are emerging at home and around the world. To understand the trends in these policies and their role and impact on society, Digitally Right launched the Bangladesh Tech Policy Fellowship 2022. The programme aimed to equip citizens with the essential skills and knowledge to become advocates for a free and open digital space.

The fellowship spanned six months, during which fellows from diverse backgrounds in academia, law, media, and civic advocacy engaged in thorough training and virtual deep-dives and produced research papers under expert supervision.

In their research, the fellows focused on different areas of concern. For instance, one paper examined today’s youth’s interaction with the internet and their awareness of the rights and liabilities that come with it, while another focused on how Rohingya refugees are using the internet and the difficulties they face in doing so. A third found that existing election laws are insufficient to deal with the risks stemming from the digital space.

Two of our fellows attempted to assess the impact of the draft Data Protection Bill, which is set to make data localization mandatory, on internet-driven businesses. Another timely paper flags concerns about Facebook’s Bangla content moderation given the language’s ethno-linguistic diversity.

We hope this publication and the reports by our fellows contribute to existing knowledge and inform arguments, discussions, and debates for future actions in the tech policy space.

Read the papers here.

Hits and Misses: An Examination of Meta’s Political Ad Policy Enforcement in Bangladesh

As Bangladesh gears up for its upcoming parliamentary elections, a study by Digitally Right has laid bare significant shortcomings in Meta’s political advertising enforcement. The study, titled “Hits and Misses: An Examination of Meta’s Political Ad Policy Enforcement in Bangladesh,” published today (Monday), finds that various active political ads lacking required disclaimers managed to slip through Meta’s detection systems.

The study also finds issues of over-enforcement, with non-political ads facing inaccurate categorization that disproportionately affects commercial entities. This inadvertent mislabeling, ranging from product promotions to employment services, prompts questions about the efficacy of Meta’s ad classification algorithms.

It reveals instances of incomplete or vague information in advertisers’ disclaimers that fall short of Meta’s transparency standards and hint at potential gaps in the platform’s verification processes, leaving users in the dark about the sources funding political advertisements.

Political advertisements significantly impact democratic processes by shaping public perceptions of political systems and leadership, and they function as the most dominant form of communication between voters and candidates before elections. Social media has transformed electoral campaigns, enabling political actors to engage vast audiences through advertisements.

In January 2023, Bangladesh had 44.7 million social media users, representing 34.5% of the population aged 18 and above, as reported by Data Reportal. Among these users, Facebook stood out with the highest number, accounting for 43.25 million. Among the major platforms, only Meta, in its Ad Library, offers disclosures on ads related to elections, politics, and social issues — which this study terms political ads. This allows an opportunity to scrutinize the identity of advertisers, the amounts spent, and the content of such ads.

The study analyzed detected political ads and related disclaimer information available in the Meta Ad Library over a one-year period and ran keyword searches to identify active but undetected political ads.

Political but Undetected

Under-enforcement undermines the very purpose of political ad transparency: it represents a failure of the transparency system and means significant portions of political advertising on the platform go unrecorded.

The study finds 50 active ads with clear political messages, almost half of which came straight from political figures and parties, highlighting under-enforcement. Over 90% of the undetected ads prominently displayed political party names or political figures in the text, and 72% included photos of political leaders or party symbols. These ads were found through keyword searches conducted over just four days within the research period.
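The keyword-search approach described above can be sketched roughly as follows. This is an illustrative sketch only: the keyword list, the ad texts, and the matching logic are hypothetical examples, not the study’s actual search terms or method.

```python
# Illustrative sketch of keyword-based flagging of potentially political ads.
# The keyword set and sample ads below are hypothetical, not from the study.

POLITICAL_KEYWORDS = {"party", "election", "vote", "candidate", "mp"}

def is_potentially_political(ad_text: str) -> bool:
    """Return True if the ad text contains any keyword from the watchlist."""
    words = {w.strip(".,!?").lower() for w in ad_text.split()}
    return bool(words & POLITICAL_KEYWORDS)

ads = [
    "Vote for our candidate in the upcoming election!",
    "Summer sale: 20% off all electronics this week.",
]

flags = [is_potentially_political(a) for a in ads]  # [True, False]
```

A real audit would also need Bangla-language keywords and fuzzy matching; as the report’s findings on terms like “Minister” and “winner” suggest, naive keyword matching produces both misses and false positives.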

Screenshots from the report

According to the findings, identical ads were identified as political when shared by certain pages but not when posted by others. Political demands featured on ostensibly unrelated pages bypassed detection in at least five cases. The study also finds delayed responses in identifying and flagging political ads, with instances of ads running for prolonged periods without disclaimers. For instance, an ad conveying a political message remained undetected for a staggering 372 days despite accumulating a million impressions.

Advertisers Evading Transparency

The study analyzed 314 “paid for by” disclaimers available in the Ad Library and found that nine advertisers, including a mayoral election candidate and two politicians, didn’t submit any required transparency information yet were allowed to display political ads.

Meta’s advertising guidelines require advertisers to provide phone numbers, email addresses, websites, and location addresses that are functional and correct at all times. However, the study finds that 80% of the 314 disclaimers had ambiguous or insufficient address information, with 47% using only the district name. Only 17% had complete and operational addresses, and 58% used a Facebook page URL in lieu of a website.

Screenshots from the report

“Paid for by” disclaimers are at the core of ad transparency and crucial for users: they provide essential information about who is funding and supporting a particular advertisement, allowing voters to understand the interests and affiliations behind the messages they encounter. The study’s findings imply that once disclaimer information is provided, Meta makes inadequate effort to verify its “functionality” and “correctness.”

Non-Political but Detected

This study analyzed 1,420 advertisement samples from the Ad Library posted by pages in non-political categories and found that about 25% of the advertisements from non-political pages (i.e., commercial, news and media, and other categories) were incorrectly detected as political. The highest rate of false positives (i.e., ads erroneously identified as political), at 43%, occurred in the ‘commercial’ pages category, indicating that it was the most adversely impacted by over-enforcement.
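For clarity, the false-positive rate used here is simply the share of non-political ads that the platform flagged as political. The sketch below illustrates the arithmetic; the raw count of 355 is a hypothetical example consistent with the stated 25% rate, not a figure from the report.

```python
# Sketch of the false-positive rate calculation described above.
# The count of misdetected ads is hypothetical; the report states rates
# (25% overall, 43% for commercial pages), not raw per-category numbers.

def false_positive_rate(misdetected: int, total: int) -> float:
    """Share of non-political ads incorrectly detected as political."""
    return misdetected / total

# e.g. if 355 of the 1,420 sampled non-political ads were flagged:
overall = false_positive_rate(355, 1420)  # 0.25
```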

Screenshots from the report

The study identified mis-detections where ads from commercial pages owned by political figures were inaccurately labeled as political. Even simple product promotions from companies owned by political figures faced incorrect categorization.

Ads promoting the sale of guidebooks, textbooks, novels, and services related to employment opportunities, studying abroad, and visa applications were detected as political, seemingly because they were considered to touch on social issues, highlighting the need for a more precise classification of social issues for Bangladesh.

Commercial pages encountered challenges stemming from keyword-related issues, where seemingly innocuous terms like ‘Minister’ triggered the misclassification of electronic appliance advertisements and marriage matchmaking services as political ads.

Keywords such as ‘winner’ and references to specific events further contributed to the misclassification of ads, underscoring the complexities in accurately categorizing content on the platform.

Recommendations

This study’s findings underline crucial areas requiring attention and improvement to ensure transparency in online political advertising in Bangladesh.

Recommendations from the study, informed by expert interviews, highlight the necessity of Election Commission-mandated regulations, regular keyword audits by digital platforms, and stronger collaboration among stakeholders for effective oversight.

Here are some of the key recommendations:

  • Social media platforms such as Meta must conduct regular audits of keywords tailored to the Bangladeshi context to ensure accurate classification of political ads, and collaborate with stakeholders, including the Election Commission (EC), election watchdogs, researchers, and civil society, for context-specific insights and evaluations of the effectiveness of these audits.
  • To foster transparency, the Election Commission or government must mandate that all social media platforms disclose political ads data specific to Bangladesh, with policies on what information should be made available and how.
  • To clarify regulations for online advertising and campaigning in Bangladesh, the Election Commission (EC) guidelines must explicitly include provisions for overseeing online political campaigns and allow parties to self-disclose all their official pages.
  • Meta should collaborate with local stakeholders to establish and publicly disclose a comprehensive list of relevant social issues specific to Bangladesh.
  • Meta must display and archive not only political but all ads uniformly, across all jurisdictions, as it does for the EU to comply with the Digital Services Act.
  • Social media platforms, including Meta, must invest in more human reviewers with specific knowledge of the local political landscape and language to ensure the effective review of political ads.

This study is an integral component of Digitally Right’s ongoing commitment to exploring the intersection of technology and its impact on society. Digitally Right, a research-focused company at the convergence of media, technology, and rights, provides essential insights and solutions to media, civil society, and industry to help them adapt to a rapidly changing information ecosystem.

The full report is available here.

Disinformation Risk Assessment: The Online News Market in Bangladesh

Digitally Right has partnered with the Global Disinformation Index (GDI) to launch a pioneering report titled “Disinformation Risk Assessment: The Online News Market in Bangladesh”. The report provides insights into the dangers of disinformation in Bangladesh’s media industry, based on a study of 33 news domains.

The report was unveiled during an online event on March 28th, 2023, which included presentations from the Research Director at the Global Disinformation Index, and Digitally Right’s Founder, Miraj Ahmed Chowdhury. The event featured Shafiqul Alam, Bureau Chief at AFP, Dhaka, Ayesha Kabir, Head of English, Prothom Alo, Talat Mamun, Executive Director, Channel 24, and Saiful Alam Chowdhury, Associate Professor, Dhaka University, who shared their insights on the report.

The report presents GDI’s findings on disinformation risks in the media market of Bangladesh, based on a study of 33 news domains, and aims to provide an overview of the media market as a whole and its strengths and vulnerabilities.

The assessment found that all 33 domains had a medium to high risk of disinforming their users, including respected sites known for their independent news coverage. Sixteen sites had a high disinformation risk rating, and half of the sample had a medium risk rating. However, no site performed so poorly as to earn a maximum risk rating.

According to the findings, the main source of disinformation risk on Bangladeshi media sites is the lack of transparent operational checks and balances. While all sites scored strongly in presenting unbiased, neutral, and accurate articles, 28 sites had no accuracy policy on their websites. Most sites lacked policies for editorial checks and balances, including post-publication corrections, comment moderation, byline information, fact-checking, and sourcing, as well as clarity on funding and ownership structures.

The report highlights operational shortcomings that are hampering trust and transparency in the media industry. Recommendations made in the report urge news sites to adopt, and make transparent, universal standards of good journalistic practice, such as publishing beneficial ownership and funding information, maintaining a corrections policy, and publishing byline policies and sourcing guidelines.

The findings show that many of the operational issues afflicting Bangladeshi websites can easily be fixed by adopting, and making transparent, the universal standards of good journalistic practice agreed upon by the Journalism Trust Initiative.

This risk-rating framework for Bangladesh provides crucial information to enable decision-makers to stem the tide of money that incentivizes and sustains disinformation, and the report aims to encourage greater transparency and accountability in Bangladesh’s media landscape.

Download the report here.