Islamabad, (16 August 2022): Bytes for All (B4A), in collaboration with Minority Rights Group International (MRG), has launched a report, “Expression vs Hate Speech: A case study of Pakistani digital media”, which focuses on the prevailing practices of Pakistani digital media vis-à-vis online hate speech against religious minorities. The study concludes that Pakistani digital media outlets fall short in regulating users’ hateful commentary on their platforms, which often results in incitement to violence targeting vulnerable faith-based groups.

The core objective of this report is to stir a debate among media practitioners, in both digital and legacy media, on this important issue, and to generate greater attention to it and concrete steps to counter it. Journalism is a profession bound by its ethical code of conduct and by global good practices. These ethics call upon media outlets as organisations, and journalists as individuals, to refrain from any expression that can cause harm to society, including intolerance or incitement to hostility, discrimination or violence directed at or incited against vulnerable groups, including religious minorities.

The case studies featured in this report are a few examples from a Pakistani digital media sphere filled with hatred for minority and sectarian faith-based groups; countless similar conversations were tracked on many mainstream digital media platforms. Even when digital media groups produce content intended to positively highlight the issues of religious minorities, that content may unknowingly and unintentionally become a trigger for hate speech. The report calls upon digital media groups to be sensitized to this effect.

Official Pakistani digital media platforms such as Dekhlo, Dawn.com, Native Media, Naya Daur, Daily Pakistan, and many others that use mainstream social media companies as channels for reaching the masses, engaging the public and promoting their businesses have a direct responsibility to discourage and remove hateful conversations on their pages that can incite violence or promote discrimination or hostility against vulnerable groups.

Mainstream social media platforms, including Facebook, Twitter, YouTube, TikTok, Snack Video, and others, undoubtedly bear a huge responsibility for moderating hate speech on their platforms. However, given the sheer volume of content generated on these platforms, and in hundreds of local, regional, and vernacular languages, moderating and removing hateful content against vulnerable groups, such as faith-based groups, becomes a collective responsibility of the social media giants and their users.

To access this report, please click here

End
