More and more Latin American women are finding nude images of themselves online that were produced by an AI image generator. Beyond the shame, they feel powerless, since some countries lack adequate laws to protect them against offenses of this sort.

In this article, we’ll look at key statistics about digital violence against Latinas, share victims’ real stories, and explain how women in different countries can take legal action to protect themselves.

What Is Digital Violence?

Digital violence is an umbrella term, so its legal definition may vary depending on the region you are in. Mexico City’s Law on Women's Access to a Life Free of Violence treats digital violence as a modality of violence against women and defines it in the following terms:

“Digital violence [...] is any act carried out through the use of printed materials, email, telephone messages, social networks, internet platforms, or any technological means, by which real or simulated images, audios or videos of intimate sexual content of a person are obtained, exposed, distributed, disseminated, exhibited, reproduced, transmitted, commercialized, offered, exchanged and shared, without her consent; that violates the integrity, dignity, intimacy, freedom, private life of women or causes psychological, economic or sexual harm both in the private and public spheres, as well as moral harm, both to women and their families. It manifests itself in pressure, persecution, stalking, harassment, coercion, humiliation, discrimination, threats or deprivation of liberty or life because of gender.”

Even though the law dates back to 2008, the process from the moment an illegal AI deepfake is identified to the moment it is taken down from a site is still slow and cumbersome. That delay lets the image keep circulating, which can deepen the victim’s distress.

Statistics on Digital Violence Against Women

According to a 2021 report on digital violence against women in Mexico City, 40 percent of women there face online harassment or sexual advances, mostly from anonymous profiles. The numbers are truly shocking to read: in a single day, between 15,000 and 20,000 gender-based hate messages are posted. Even though what happens on social media is not an exact reflection of everyday human interaction, the data speaks for itself. We Latinas live under a regime of violence that slut-shames us at every turn.

But what happens when we find pictures of ourselves that aren’t really us? According to Sensity AI, a company dedicated to detecting hyperrealistic fake videos, the amount of AI deepfake content available online doubles every six months and has accumulated 134 million views. 95% of those videos are pornographic, and in nine out of ten cases the victims depicted are women.

Children are also among the main victims of these crimes. Most of them are “nudified” by means of an AI image generator and depicted in scenes of sexual torture and rape. According to a study conducted by the Internet Watch Foundation (IWF), almost 3,000 AI-generated images of child abuse were found on a single dark web site.

The rapid spread of AI-generated images is taking the world by storm, and no woman is safe from this type of fabrication. Even Rosalía became a victim after a stranger published a fake nude of the singer on social media. In response, she spoke out in Spanish:

“A woman's body is not public property, it is not a commodity for your marketing strategy. Those photos were edited and you created a false narrative around me when I don't even know you. There is such a thing as consent,” she wrote on Twitter, demanding respect and acknowledgement.

But you don’t have to be famous to find a nude picture of yourself made with an AI picture generator. In San Juan, Argentina, a group of 15 young women found these types of images on Telegram, a platform known for guaranteeing its users’ anonymity. As soon as they became aware of the offense, they reported it to the Center for Addressing Domestic and Gender-Based Violence (UFI CAVIG, by its Spanish acronym). According to a local newspaper, the victims claim that the two alleged perpetrators copied photos of their faces, pasted them onto other women’s nude images, and then uploaded the results. There is an ongoing investigation, but Argentina’s Penal Code doesn’t specifically contemplate this type of crime.

Legislation against Digital Violence throughout Latin America

What can women do to shelter themselves from these gender-based attacks? According to El País, Colombia is one of the worst countries for women in terms of its current legislation, since there is no specific law protecting its female citizens from crimes that target them specifically. The same is true in countries like Nicaragua and Venezuela, where producing or distributing non-consensual sexual images is not a criminal offense unless it is linked to other crimes.

Luckily, Mexico has stepped up and become a pioneer in digital legislation thanks to the Olimpia Law. In force since 2018, this law pays homage to Olimpia Coral Melo, who was herself a victim of the non-consensual distribution of a sexual video. For those found guilty, the law establishes penalties of up to six years in prison and a hefty fine. Argentina’s lawmakers followed this example and passed their own Olimpia Law in 2023. That legislation addresses crimes that violate the sexual privacy of all individuals and penalizes the distribution of any type of intimate content, including AI deepfake videos.

Nonetheless, the justice system moves at a slower pace than its legal framework. In the City of Buenos Aires, 3,511 reports of digital crimes were filed between 2021 and 2022, yet only 1% of them reached trial, and only half of those trials ended in a conviction.

Advantages of AI in the Fight against Digital Violence

However, not everything AI does is detrimental to women: it can also be a powerful tool to fight back against digital violence. In Panama City, an online chatbot named Sara was created to prevent and assess violence of all kinds against women of all ages. This was a joint initiative of the United States Agency for International Development (USAID) and the United Nations Development Programme (UNDP). Through artificial intelligence and machine learning, the chatbot learns how to better protect victims’ identities and guide them through the corresponding legal process.

According to a 2023 report from the United Nations Industrial Development Organization (UNIDO), the UNDP has also launched an AI-based gender digital violence tracker covering Uruguay and Colombia. The tool scans social media content to detect hate comments against women and girls. In 2022 alone, it flagged almost 10,000 offensive tweets in Uruguay.

Even though the Internet can be a hostile environment to navigate, it can also bring together victims of digital violence from around the world. Behind our screens, we women might feel too ashamed to speak up and bring our case to justice, but it’s important to stress that we are not alone and that the State has the obligation to offer us assistance. AI image generators are only acceptable as long as nobody’s rights are compromised.

If you are or have been a victim of any type of gender-based violence, you can contact the following hotlines depending on the country you are in:

  • United States. Nationwide number 1-800-799-SAFE (7233), TTY 1-800-787-3224, or (206) 518-9361 (video phone only, for Deaf callers).
  • Mexico. Support for women in situations of violence through the Línea Sin Violencia: 800 10 84 053.
  • Argentina. Línea 144.
  • Colombia. Línea #155, for guidance for women victims of gender-based violence.
  • Chile. Guidance hotline on violence against women (Fono Orientación): 1455.
  • Peru. Línea 100.