Instagram’s algorithms suggested a stream of sexualized content to adult users who primarily follow preteen and teen influencer accounts, according to an investigation by The Wall Street Journal published Monday.
A June WSJ investigation revealed that Instagram’s algorithms enable child predators to connect with each other by promoting content to users with similar interests. This time, the WSJ set up several adult test accounts that followed young gymnasts, cheerleaders and influencers and found that Instagram’s algorithms suggested sexually explicit videos alongside questionable content featuring children.
The test accounts were also shown ads for companies such as Walmart and Pizza Hut immediately after sexually explicit videos, according to the WSJ. One Walmart ad was reportedly shown to a test account after a video of a woman exposing her genital area.
Many companies require that their advertising not run next to sexual or explicit content, according to the WSJ.
When the test accounts also followed other users who followed similar accounts, the WSJ found, the suggested content grew more risqué, including a video of a young clothed girl touching her torso and another of a child mimicking a sex act.
The test accounts were also allegedly served ads for dating apps, massage parlors offering “happy endings” and cybersex chatbots, according to the WSJ. An ad for the dating app Bumble appeared before a video of a person stroking a life-size latex doll and after a video of a girl, her face digitally obscured, lifting her shirt above her stomach.
One ad urging users to visit Disneyland was followed by a video of a woman initiating sexual acts with her father, according to the WSJ. Another ad, for the erectile dysfunction medication company Him, appeared shortly after a reel of a woman in a sexual position with a link to the “full video.”
Meta told the Daily Caller News Foundation that the results of the WSJ’s investigation were a “manufactured experience” that does “not represent what billions of people around the world see every single day” on Instagram.
“We don’t want this kind of content on our platforms and brands don’t want their ads to appear next to it,” a Meta spokesperson said in a statement. “We continue to invest aggressively to stop it – and report every quarter on the prevalence of such content, which remains very low. Our systems are effective at reducing harmful content and we’ve invested billions in safety, security and brand suitability solutions.”
Disneyland, Walmart, Bumble, Him and Pizza Hut did not immediately respond to the DCNF’s request for comment.