By Tenkeu Hilmelda and Benedicta Oyedayo
Bullying, Harassment and Other Forms of Technology-Facilitated Gender-Based Violence (TFGBV)
Social media was once a game-changer for human rights organizations, activists, and civil society groups, providing a powerful platform for connection, advocacy, and mobilization. It created space for historically excluded communities, including persons with disabilities, Indigenous peoples, refugees and migrants, people of African descent, LGBTQI+ individuals, and sex workers, to share their voices and build networks. These platforms enabled the rapid spread of information and amplified local struggles on a global scale. However, as social media has evolved, so too have the risks. Today, technology-facilitated gender-based violence (TFGBV), bullying, and harassment have become urgent crises, disproportionately impacting marginalized communities, particularly women, girls, and sexual and gender-diverse individuals with intersecting identities.
TFGBV takes many forms, including doxxing, non-consensual image sharing, deepfake pornography, cyberstalking, and targeted harassment. Social media platforms, rather than being safe spaces for discourse and activism, have increasingly become breeding grounds for this violence. Harmful content spreads unchecked, misinformation thrives, and inadequate content moderation has allowed an atmosphere of hostility to take root. For instance, in September 2024, Initiative Tile, a Côte d’Ivoire-based civil society organization (CSO) funded by the Feminist Opportunities Now project, released a statement condemning online propaganda against the LGBTQI+ community in Côte d’Ivoire spread under the hashtag “woubi” (a stigmatizing term used to refer to gay men).
For those already at risk of discrimination, this unchecked violence ultimately leads to silencing, self-censorship, and the retreat of marginalized voices from digital spaces.
The Growing Threat of the Manosphere and Digital Radicalization
One of the most alarming developments in online spaces is the rise of the "manosphere," a network of online groups that perpetuate misogyny, misogynoir, anti-feminism, and patriarchal control. These forums, which span social media platforms, websites, and discussion boards, not only promote harmful ideologies but also serve as recruitment hubs for extremist groups that seek to roll back women’s rights and the rights of communities based on their cultural and historical background, ethnicity, disability, sexual identity and/or orientation.
There is an increasing body of evidence that connects online misogyny to the process of radicalization, suggesting that opposition to gender equality is often a tactic used by violent extremist groups to further their cause. These digital environments not only encourage attacks on women and sexual and gender-diverse individuals but also play a critical role in the silencing and further subjugation of these communities, pushing them to the margins of both online and offline spaces.
Specifically, Twitter (now X) has historically been a pivotal platform for human rights organizations, civil society organizations (CSOs), activists, and advocates. It enabled rapid dissemination of information, mobilization of supporters, and global amplification of local struggles. For instance, during Iran's 2009 post-election protests, activists utilized Twitter to share real-time updates, circumventing state-controlled media and bringing international attention to their cause.
Today, the platform is being exploited by extremist groups to disseminate radical ideologies and target marginalized communities.
The Role of AI and Regressive Policies in Digital Suppression
The rapid advancement of artificial intelligence (AI) has also contributed to the worsening of online safety for marginalized groups. AI-driven algorithms have been criticized for amplifying biases, disproportionately silencing marginalized voices, and prioritizing sensationalized, often harmful, content for engagement. As the UN Office of the High Commissioner for Human Rights (OHCHR) warns, “AI is not neutral, it carries the biases of those who created it, which often means further marginalization of vulnerable communities.”
Rather than serving as spaces for resistance, digital platforms have increasingly become tools of repression, where activists face doxxing, digital harassment, and state-led crackdowns simply for speaking out. This is further compounded by regressive laws and policies that leverage these platforms for surveillance and suppression. In many countries, governments are tightening restrictions on LGBTQI+ individuals, women's rights, and other marginalized communities. As a result, social media has become a double-edged sword for human rights defenders: while it provides opportunities for connection and advocacy, it also exposes them to heightened risks of surveillance, harassment, and even arrest.
The Search for Safe Digital Spaces
What then?
The migration to alternative platforms like Bluesky reflects a broader transformation in digital activism. Beyond seeking decentralization and enhanced safety, this move highlights the urgent need for human rights organizations to rethink their digital strategies in an increasingly polarized online world.
This transition presents both opportunities and challenges. While alternative platforms offer promise, they also require rebuilding communities, re-establishing networks, and navigating new digital terrains. However, ensuring the continued fight for dignity, equity, and justice for all requires embracing this shift away from platforms rife with surveillance, censorship, and digital violence.
As human rights organizations and activists adapt to this evolving landscape, one thing remains clear: the need for safer, more inclusive digital spaces is more pressing than ever. The future of online advocacy will depend on how effectively these spaces can be reclaimed, protected, and used as instruments of resistance and liberation.