In the 2018 movie “AI Rising,” a man’s wishes seem to come true, although he is initially skeptical. On an interminable voyage on a spacecraft to the star system of Alpha Centauri, he is accompanied by a single companion: an AI-powered female robot whose sole function, it appears, is to have near-constant sex with him.
Today, the depiction of the perfect AI lover who can never say “no” is being duplicated in a more tangible, and potentially far more harmful, way: in the creation of AI-generated young women on Instagram and Facebook.
Although they are not touted as sex objects per se, their origins as manufactured products make them endlessly malleable, with commentators focusing, for instance, on the changing size of their breasts.
“AI girls,” images of young women created entirely through artificial intelligence, are now among the top “influencers” on these sites: they draw the most views, and in some cases their creators are allegedly paid to feature them in advertisements. The most popular has almost 3 million followers.
Clear and present danger
The harm here, beyond the objectification and commodification of women that already pervade television and other forms of advertising, is that these images feed self-esteem problems among young people on these platforms, particularly around body image.
Recent media reports have documented lawsuits brought by dozens of state attorneys general against Meta, the parent company of both platforms, for failing to protect teenagers.
Beyond the most grievous issue, the trafficking of young people on these sites, other media reports have linked teen suicides to images and tools employed there.
A handful of courses
Still, studies of combating such pernicious influence are rare, perhaps reflecting a hesitation in the United States to address the problem proactively.
Only a handful of universities offer courses in social media literacy.
This is both ironic, given the outcry in the media against the catastrophic influence of social media on teenagers, and devastating, given the growing number of instances of self-harm associated with teenage use of the sites.
One approach
A study conducted in Spain, however, suggests that raising awareness of how artificial intelligence drives social media can be strikingly effective.
Titled “The Power of Beauty or the Tyranny of Algorithms. How do Teens Understand Body Image on Instagram?” the study was published in 2021 by researchers at the University of Barcelona.
Although several years old, it is particularly instructive as courses in social media literacy are struggling to gain a foothold.
Brainwashed
Participants in the study, all teenage girls, initially showed little to no awareness of the power of AI-driven social media images to shape, and reshape, even their beliefs.
At the start of the study, before seeing any images, they were asked to predict how they would judge the appeal of a given influencer, a young woman appearing on the site.
Overwhelmingly, they said their judgment would rest on the influencer’s beliefs, hobbies, and other abstract, values-based factors.
After they interacted with the site, however, their criteria for rating the influencers changed dramatically.
Having viewed and engaged with the young women depicted on screen, the majority now based their ratings almost exclusively on looks: the influencer’s appearance and style.
Filter bubbles and self-image
Later, however, they were gradually introduced to how AI functions on these websites.
They learned that filter bubbles create a distorted picture of reality for each user.
A participant began by being shown an assortment of images, but as soon as she started interacting with any one of them, the AI-powered algorithm began selecting comparable images and eliminating all others.
The more she interacted with an image, the more data the algorithm had to home in on her “preferences” and deliver only images that matched them.
A user’s “preferences,” however, may be driven as much by fear or longing as by anything else.
For instance, a user afraid of becoming overweight may be drawn to images of teenagers who appear thin. The more she interacts with such images, the more data the AI has to work with, and it will eventually infer this preference.
The end result? She will be fed only images of impossibly thin, starvation-level models.
Since these become the only images she sees, they create a “bubble,” also called a “filter bubble,” in which reality becomes distorted, according to the study. A minimal simulation of this feedback loop appears below.
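To make the mechanism concrete, here is a toy sketch of such a feedback loop in Python. Everything in it is an assumption for illustration: a made-up catalog in which each image is described only by the body size it depicts, and a hypothetical user who always engages with the thinnest image in her feed. It is not Meta’s actual recommender, just the narrowing dynamic the study describes.

```python
# Toy simulation of a filter-bubble feedback loop.
# All names and numbers are illustrative, not any platform's real system.
import random

# Each image is reduced to one attribute for the sketch: the body size it
# depicts, on a scale from 1 (extremely thin) to 10.
CATALOG = [{"id": i, "body_size": random.randint(1, 10)} for i in range(500)]

def recommend(history, catalog, feed_size=10):
    """Rank images by similarity to what the user engaged with before."""
    if not history:
        # Cold start: the user is shown a random assortment.
        return random.sample(catalog, feed_size)
    avg = sum(img["body_size"] for img in history) / len(history)
    # Closest matches to past engagement win; everything else is filtered out.
    return sorted(catalog, key=lambda img: abs(img["body_size"] - avg))[:feed_size]

def engage(feed):
    """A hypothetical user afraid of weight gain taps the thinnest image."""
    return min(feed, key=lambda img: img["body_size"])

history = []
for round_num in range(5):
    feed = recommend(history, CATALOG)
    history.append(engage(feed))
    print(f"round {round_num}: feed body sizes =",
          sorted(img["body_size"] for img in feed))
```

Run for a few rounds, the printed feed narrows from a spread of body sizes to nothing but the thinnest images: the algorithm never asks why the user taps what she taps, it only amplifies the pattern.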
As a result, the user becomes virtually brainwashed into believing this is how reality appears, or ought to appear, and that she, of course, is the odd one out.
This can lead to tremendous self-esteem issues; it is a confidence killer.
Identifying shame triggers
When the participants went through a training in which they learned how filter bubbles work, were encouraged to analyze them and their possible effects on users, and then shared and discussed their findings with others, however, they developed a critical awareness of AI’s effect on their self-esteem.
The work is reminiscent of shame researcher Brené Brown, who argues that we must identify the sources of our shame triggers and then share them with trusted friends who will empathize with our distress. Only then, in a group or with a friend, can one deconstruct the forces that triggered the shame.
The Spanish study appears to be among the first to bring this kind of inquiry to the souped-up, AI-driven world of social media, particularly as teenage girls experience it.
The elements of the education
The study showed that education in social media literacy could develop critical thinking and evaluation of “images and narratives focused on body appearance, bodily ideals, and body pressure narratives.”
A second goal is to foster creativity so that users create their own content, which cements their critical awareness of harmful narratives.
Teenagers should also be trained in “functional skills,” meaning a basic understanding of how the technology works.
Other parts of a curriculum should include practice in scrutinizing social media content and its producers’ intent, as well as in developing awareness of content that could damage self-esteem.