DRAPAC 2024 Highlights: Digitally Right Navigates AI, Disinformation, and Data Protection Challenges

EngageMedia hosted the 2024 edition of the Digital Rights in the Asia-Pacific (DRAPAC) Assembly in Taipei, Taiwan, from August 18 to 19, 2024. The gathering brought together civil society representatives, journalists, activists, tech experts, and others from around the world to forge alliances in response to evolving digital threats in the Asia-Pacific. The two-day event featured a diverse mix of plenary sessions, capacity-building workshops, digital security labs, and solidarity events.

Of the 43 sessions, Digitally Right presented in four and facilitated one, spanning two plenary sessions, two workshops, and a roundtable discussion. Three representatives from Digitally Right took part in these five sessions: Miraj Chowdhury, Managing Director; Tohidul Islam, Research Officer; and Aditi Zaman, Programme Officer.

Miraj Chowdhury is speaking at the opening plenary on harnessing digital rights resilience in the Asia-Pacific

On the first day of the event, Digitally Right was part of four sessions, including the opening plenary on harnessing digital rights resilience in the Asia-Pacific, where Miraj Chowdhury spoke alongside representatives from Nepal, Sri Lanka, and Cambodia. He then facilitated a collaborative roundtable discussion among civil society organizations (CSOs) on deepening their involvement in the governance of artificial intelligence (AI) technologies, with the aim of creating a roadmap for a more inclusive and transparent AI governance framework. Some participants argued that AI governance policies should be framed ethically, with accountability mechanisms in place, while others believed that existing laws should be applied to AI governance rather than new laws and policies being drafted. The consensus, however, was that whatever policies and laws are used, the governance of AI should be multistakeholder.

Digitally Right also ran two separate workshops on the first day. The first examined how social media fuels disinformation and what can be done to prevent or minimize it. During this session, Tohidul Islam presented his research titled “Misinformation On YouTube: High Profits, Low Moderation”. Presented as a regional case study, the research illustrates how both content creators and YouTube profit from the proliferation of misinformation on the platform. The study found that about 30% of 700 unique fact-checked Bangla misinformation videos, excluding Shorts, displayed advertisements, thereby generating revenue for the platform. The presentation gave participants a clear picture of exactly how monetisation on social media platforms encourages misinformation. Afterwards, participants were divided into four groups for a breakout session to develop strategies for tackling the issue. An interesting observation from this breakout session was that the participants, who came from different parts of Asia, could not agree on a single strategy: for instance, some thought government control over social media governance would be helpful, while others were against it. The workshop garnered strongly positive responses from attendees.

Tohidul Islam is presenting his research on how social media fuels disinformation

Meanwhile, Miraj Chowdhury was running another workshop, which aimed to engage women- and youth-led digital rights communities across South Asia in sharing their ideas for inclusive digital technology advocacy. As a panelist in this session, Mr. Chowdhury explored tech policy awareness among women and youth in South Asia and discussed the unique challenges identified in a research study, which Digitally Right is part of, conducted in India, Bangladesh, Sri Lanka, and Nepal. ImpactNet, a platform designed to bridge collaboration gaps, was introduced during the session, and participants were given a walkthrough of its use. They were encouraged to contribute and provide input on how to maximize the use of ImpactNet.

Aditi Zaman is speaking at a panel discussion on personal data protection governance in the age of AI

On the second day of DRAPAC, Digitally Right took part in a panel discussion on personal data protection governance in the age of artificial intelligence. Aditi Zaman was a panelist in this session alongside representatives from the Philippines, Cambodia, and the Maldives. The session covered existing data protection legal frameworks and recent developments in national AI strategies within South Asia. Representatives from the Maldives and Cambodia shed light on how their countries approach data protection in the absence of specific data protection regulation. In contrast, both the Philippines and Bangladesh have regulations covering personal data protection and AI. Ms. Zaman discussed the recent changes to, and challenges of, the Draft Personal Data Protection Act of Bangladesh and its potential impact on the upcoming Draft National AI Policy, particularly within the country's new political context.

AI and Human Rights: Concerns Raised at Draft National AI Policy Workshop

On April 29, Digitally Right, in collaboration with ICNL, held a workshop focused on legislative issues titled “Understanding the Draft National AI Policy.” The session was attended by 17 participants, including representatives from media outlets, environmental organizations, human rights defenders, and gender activists. The workshop aimed to raise awareness, strengthen advocates’ capacity, encourage in-depth discussions, and develop strategies to influence changes to the Draft Policy. Shabnam Mojtahedi, Legal Advisor for Digital Rights at ICNL, delivered a presentation on the Draft AI Policy and its implications, followed by a discussion on strategies for engaging with the government on the issue. Participants also shared their individual concerns regarding the Policy.

During the workshop, ICNL’s presentation highlighted common concerns about AI and its potential impact on human rights. Shabnam then discussed best practices for consulting and engaging with the government on AI-related issues. She also pointed out that the lack of a globally accepted definition of AI has resulted in the Draft Policy being vague and broad. Since no jurisdiction has established standalone AI laws, there are no clear precedents for treating AI as a separate legal issue.

Participants at the workshop agreed that the government should base the AI Policy on international best practices, with civil society organizations (CSOs) playing a key role in bridging the gap between the government and global partners. They emphasized the need for effective and diverse engagement in the policy process. It was noted that the Draft Policy lacked impact assessment, monitoring, and evaluation mechanisms, which need to be addressed. While national security carve-outs are common in AI policies worldwide, participants stressed the importance of strong oversight in this area to prevent data abuse.

Following the presentation and Q&A session, participants held an open discussion to identify strategies for persuading the government to amend the Draft AI Policy. Three strategies emerged. The first is to file a Right to Information (RTI) request seeking details on which CSO representatives were involved in the consultation and drafting process, and to challenge the legality of the Policy, as it is required by law to be drafted in Bangla rather than English. The second is to rally public support to delay the Policy's implementation, eventually pushing for its repeal on the grounds that it is not functional. Some participants suggested pursuing both strategies simultaneously.

The third strategy is to create a national alliance of CSOs and AI experts. This alliance would engage with grassroots communities, gather their feedback, and present it to international forums.