By Kasia Uzieblo, PhD, David S. Prescott, LICSW, & Kieran McCartan, PhD
Scrolling through my
X-feed recently, I was struck by an alarming trend: videos of Andrew Tate
bragging about (sexually) abusing women have suddenly become an almost daily
intrusion. His unashamed glorification of (sexual) violence and misogyny is a
stark reminder of how far platforms like X and Facebook have strayed from their
responsibility to maintain safe, accountable digital spaces.
But Tate’s presence is
just the tip of the iceberg. These platforms have become fertile ground for
hate speech, disinformation, and outright incitements to (sexual) violence. And
now, with Facebook’s decision to stop fact-checking posts, the spiral toward
unchecked harmful content feels even steeper.
The amplification of
hate
Hate speech and
violent rhetoric thrive on engagement-driven algorithms. The more outrageous
and provocative the content, the more likely it is to be amplified. Andrew
Tate’s incendiary videos are a perfect example: their shock value ensures
likes, shares, and comments, propelling them further into people’s feeds—even
those who want no part of it.
There is a fear that digital
platforms serve as breeding grounds for harmful ideologies that spill over into
offline behavior. Studies have indeed shown links between violent (sexualized)
media, tolerance of violence, and real-world violence (e.g., Burnay et al.,
2021; Müller & Schwarz, 2019; Sengupta et al., 2024). Dehumanizing
language—especially against women, minorities, and other marginalized
groups—normalizes abuse and lowers the psychological barriers to committing (sexual)
violence. The result? A culture in which acts of aggression, including sexual
violence, are trivialized or even celebrated.
Platforms’ negligence
is complicity
When Facebook abandons
its fact-checking program, it opens the floodgates to disinformation. This
isn’t just about misleading political ads; it’s about the broader erosion of
truth. Lies about sexual assault victims, false narratives blaming survivors,
and denial of systemic issues: all of these contribute to a hostile environment
that discourages survivors from speaking out and emboldens perpetrators.
X, under its current
leadership, has only exacerbated the problem. By rolling back content
moderation and reinstating previously banned accounts known for spreading hate
and violence, the platform sends a clear message: hateful and violent speech is not
only tolerated but prioritized.
The mental health
fallout
For those subjected to
this onslaught of hate, violence, and misogyny, the psychological toll is substantial.
Survivors of sexual violence, already navigating the trauma of their
experiences, are re-traumatized by the proliferation of content that mocks,
discredits, or outright denies their pain (e.g., Andreasen, 2020). Witnessing
the normalization of such violence can trigger anxiety, depression, and
feelings of hopelessness, not just for survivors but for anyone committed to
creating a safer world.
Young people, who are
among the heaviest users of these platforms, are particularly vulnerable.
Repeated exposure to misogynistic and (sexually) violent content can distort
their understanding of relationships, sexual behavior, and consent, while also
normalizing harmful behavior. The long-term consequences for their mental health
and societal values are staggering. As a parent, it’s nearly impossible to keep
up with the ever-changing landscape of social media and its addictive nature.
Even with vigilance, shielding children from harmful content feels like a
losing battle when algorithms prioritize engagement over safety. The sheer
volume of toxic material can slip through the cracks, making it harder to guide
young people and protect their mental well-being and development. This
challenge underscores the need for systemic changes in how these platforms
operate, but also in how we cope with these platforms.
A shared
responsibility
We can no longer treat
platforms like X and Facebook as neutral tools. Their algorithms, policies, and
decisions actively shape our culture and, by extension, our realities. As an acquaintance of mine recently remarked dryly when I pointed out
the falsehoods about violence on social media: “we live in different realities.”
Of course, platforms
bear an immense responsibility and must primarily implement the necessary
protective measures. We have highlighted this in previous blogs as well. But
forgive my cynicism: the likelihood of this happening seems particularly slim
these days.
It is therefore
especially important that we take on our role and responsibility. Educating
ourselves and our children about the dangers of online violent speech is
crucial. Schools should integrate digital literacy and respectful (sexual) education
into their curricula, empowering young people to critically evaluate content
and reject harmful narratives. Parents, friends, and community members must
remain alert and proactive, recognizing signs and stepping in when necessary. We
have highlighted these needs in previous blogs, but unfortunately the message
does not yet seem to be reaching the general public and relevant settings
sufficiently. For instance, sound sex education in schools still too often
provokes protests, in Belgium but also elsewhere. Education in digital literacy has had a slow and fragmented start
in Belgium, and I suspect the same holds elsewhere. In short, we must
continue to highlight these needs and encourage parents, schools and other
relevant services to invest sufficiently in this area.
Daring to speak out is
equally important. By challenging hateful rhetoric—whether online or in
personal conversations—we can create ripples of change, however small. Silence
in the face of hate and violence only enables it. Together, we must foster a
culture of accountability, empathy, and courage that counters the toxic
dynamics of today’s digital platforms.
Also, scientists should not be silent on
the sidelines. In an era where research findings are increasingly dismissed as
mere opinion or a “different reality”, discussing the evidence-based links between
hateful and violent speech and (sexually) violent behavior has become fraught
with difficulty. This skepticism undermines efforts to address systemic
problems, as factual evidence is often overshadowed by emotional or ideological
debates. To combat this, it is essential to keep promoting a broader
understanding of how research is conducted and why its findings matter. But academics
must also continue to raise these practices and issues with
the general public and with policymakers. Focusing solely on scientific publications
that do not reach these target groups, I fear, will not make a significant
difference.
Addressing the issue of online hate and (sexual) violence requires not
just individual action but also robust policy interventions. Europe, for
instance, must strengthen its stance on regulating digital platforms. The EU’s Digital Services Act is a step in the right direction, but enforcement is
key. Policymakers must ensure that tech companies are held accountable for the
content they host, with significant penalties for failures to act against hate
speech and disinformation. It is also of utmost importance that such
initiatives continue to resist the growing (financial) power of these
platforms. By standing firm and pushing for comprehensive, enforceable
policies, Europe and hopefully other regions can set a global example in
creating a safer digital space.
Where
do we go from here?
The current trajectory
of these platforms is unsustainable and deeply harmful. If we fail to hold them
accountable, we risk normalizing a culture where hate and violence, including
sexual violence, are not just accepted but celebrated. This is not merely a
digital issue; it is a societal one, with real-world implications for safety,
justice, and mental health.
Andrew Tate’s presence in my feed is a symptom of a much larger problem, but it’s also a wake-up call. We must demand better: from tech companies, from policymakers, and from ourselves. Because every time we scroll past hate and (sexual) violence, every time we let disinformation slide, we become complicit in its proliferation.
References
Andreasen, M. B.
(2020). ‘Rapeable’ and ‘unrapeable’ women: the portrayal of sexual violence in
Internet memes about #MeToo. Journal of Gender
Studies, 30(1), 102–113. https://doi.org/10.1080/09589236.2020.1833185
Burnay, J., Kepes, S.,
& Bushman, B.J. (2021). Effects of violent and nonviolent sexualized media
on aggression-related thoughts, feelings, attitudes, and behaviors: A
meta-analytic review. Aggressive Behavior, 48(1), 1-26. https://doi.org/10.1002/ab.21998
Müller, K., &
Schwarz, C. (2019). Fanning the Flames of Hate: Social Media and Hate Crime. Journal
of the European Economic Association, 19(4), 2131-2167. https://doi.org/10.1093/jeea/jvaa045
Sengupta, N. K.,
Hammond, M. D., Deak, C. K., & Malhotra, R. S. (2024). Ambivalent Sexism
and Tolerance of Violence Against Women in India. Psychological
Science, 35(7), 712-721. https://doi.org/10.1177/09567976241254312