‘Down Syndrome’ Filters Being Used to Promote Sexual Content

Social media users are encountering videos with text overlays such as, “Is Down syndrome a deal breaker for you?” or, “Would you date a girl with Down syndrome?”

But the girl in the video isn’t real, and she doesn’t have Down syndrome.

A disturbing trend is emerging in which creators use an AI filter to make it look as though they have Down syndrome, then post content that is suggestive and sexual in nature. These accounts include links to monetized subscription pages where people can pay for explicit content.

National Down Syndrome Society (NDSS) President and CEO Kandi Pickard told Newsweek: “We have been monitoring the surge of fake online social media accounts utilizing AI to impersonate individuals with Down syndrome, and we are disappointed to see that this is growing in popularity.”

From the rise in this content to the risks it poses, Newsweek spoke to experts to find out more.

What Is the ‘AI Down syndrome’ Filter and Why Is It Being Used?

Down syndrome itself is a genetic condition caused when abnormal cell division results in an extra full or partial copy of chromosome 21, according to the Mayo Clinic. This extra genetic material causes the developmental changes and physical features associated with Down syndrome.

So why are people using an AI filter to make it seem as if they have Down syndrome? A quick social media search turns up hundreds of videos of people using the “AI Down syndrome” filter. In April of this year, searches for terms including “AI Down syndrome” and “Down syndrome girl” spiked on Google, according to Google Trends data. Searches including “Down syndrome fitness girl” and “Down syndrome beautiful girl” also became popular on social media.

Social media accounts seen by Newsweek show creators using the filter to create highly suggestive content, with links to pages on OnlyFans, an online content-sharing platform that hosts all kinds of material but is particularly popular among creators of explicit content.

A spokesperson for OnlyFans told Newsweek over email that “none of the accounts referred to are using AI to filter or fake images on OnlyFans. Our Terms of Service are clear that AI generated images of this kind would not be allowed on the platform.”

Accounts also include links to Telegram groups, a social media and instant messaging service known for its strong emphasis on privacy. One such group on Telegram had more than 20,000 members, according to the outlet The News Movement.

Newsweek has reached out to Telegram for comment.

The language of these posts is often centered on exploiting a fetishization of Down syndrome and has sexual undertones. Captions include, “Bro I never again on tinder she has Down syndrome,” “Why can’t I be sexy with Down syndrome,” and “Does Down syndrome change how you feel about me?”

What started with a handful of accounts has quickly snowballed as the trend surges in popularity. In most cases, these accounts do not disclose that they are using AI.

What Are the Risks of This Content?

“This type of content and reactions around it simultaneously sexually fetishizes people with Down syndrome while it also positions Down syndrome and sexual desirability as incompatible, and thus a site to laugh at,” Dr. Amy Gaeta, a research associate at the University of Cambridge, told Newsweek via email.

“There is nothing immoral or embarrassing about sex work,” Gaeta said, but “to use the image of another subject is using that subject position as a shield and means of profit. This is the commodification of disability.”

Dr. Emily Springer, a responsible AI leader, told Newsweek that the trend represents “a deeply troubling intersection of ableism, sexual exploitation and the unethical use of generative AI technologies.”

“When able-bodied creators use generative AI filters to make themselves appear as if they have Down syndrome, especially in a sexualized way, they are engaging in a form of representational harm,” Springer said. “These creators do not live with the systemic, social, and material realities of having a disability. Instead, they adopt an AI-modified ‘presentation of disability’ that reduces complex identities to fetishized caricatures for profit.”

Disability, Stigma and Stereotypes

It is important to remember that disability is a spectrum, and advocates continually challenge stereotypes associated with conditions like Down syndrome, with notable examples including the 2024 “Assume That I Can” campaign. Having a disability does not mean an individual is inherently vulnerable or unable to live a fulfilling life.

Dr. Aisha Sobey, a research fellow at the University of Cambridge, told Newsweek: “Socially, we often view people with Down syndrome as inherently vulnerable even though many people with Down syndrome can enjoy fulfilling sexual lives.”

However, Gaeta told Newsweek: “This hypersexualization of women with Down syndrome can put actual women with Down syndrome at greater risk for epistemic, symbolic and material harm by propagating false images.”

Disabled women face disproportionately high rates of abuse, according to the CDC, which found that, as of 2024, an estimated two in five (39 percent) female victims of rape had a disability at the time of the rape.

“While we, as a society, should be more open to seeing people with disabilities as also being able to be sexual beings, the expectations of bodies created by these fakes only highlight the fetishized and misogynistic views of women that often stem from development of products that have involved few diverse voices,” Sobey told Newsweek.

Pickard told Newsweek: “Individuals with Down syndrome are the only people who should be speaking about what it’s like to have Down syndrome.”

What Responsibility Do Developers Bear?

As AI becomes more prevalent, so too do concerns about deepfakes and deepfake porn. According to Stateline, as of 2023, 98 percent of deepfake videos online were pornographic in nature.

“Deepfakes and filters are well noted to pose massive epistemic risks. Social media platforms and AI developers have a responsibility to ban the use of some such ‘deepfake’ filters in certain contexts when real harm can be done,” Gaeta told Newsweek.

“The line must be drawn when someone can wear another’s disability, identity and/or subjecthood as a costume,” Gaeta added. “Platforms and developers have a moral responsibility to listen to disabled women, among other marginalized groups, about what good representation means to them, what a safe internet looks like for them, and what sexual expression means and looks like for them.”

Dr. Eleanor Drage, a senior research fellow at the University of Cambridge, echoed this, telling Newsweek: “People will stop at nothing to use taboo for profit.” She added that this is an “example of how tech companies and regulators are failing us through weak controls on deepfakes.”

For Springer, this content prompts questions about the future of AI. “The current AI revolution is remaking social expectations in real time. This is not merely a technological shift; it is a cultural one,” she said, highlighting that it is important to ask not just, “Can we do this with AI?” but, “Should we, and who benefits and who bears the cost if we do?”
