Synthetic Pornography – A Growing Danger To The World

A SCARS Editorial

Author:
•  SCARS Editorial Team – Society of Citizens Against Relationship Scams Inc.
Photo Credit: Mark Pohlmann

Synthetic Pornography Is Revolutionizing What People Of Particular Tastes Can Obtain, All Through Generative AI

Synthetic porn, also known as deepfake porn, is a type of manipulated media that uses artificial intelligence (AI) to create non-consensual pornography. It typically involves superimposing the face of a non-consenting person onto the body of someone else in a pornographic video, but it can also be entirely synthetic, generated by AI from nothing but a text description of what is desired.

Synthetic porn can have a devastating impact on real victims when it uses their faces. It can cause emotional distress, reputational damage, and even physical harm. In some cases, victims have lost their jobs, been ostracized by their communities, and even died by suicide.

In addition to the harm it causes to victims, synthetic porn also poses a threat to society as a whole. It can be used to spread misinformation, damage reputations, and erode trust in institutions. It can also be used to normalize violence against women and girls.

Here are some of the specific dangers of synthetic porn:

  • Non-consensual creation: Synthetic porn is created without the consent of the person whose face is being used. This is a violation of their privacy and a form of sexual harassment.
  • Emotional distress: Victims of synthetic porn often experience severe emotional distress, including anxiety, depression, and post-traumatic stress disorder (PTSD).
  • Reputational damage: Synthetic porn can be shared widely online, causing victims to suffer reputational damage and social isolation.
  • Physical harm: In some cases, victims of synthetic porn have been subjected to physical violence and harassment.
  • Normalization of violence: Synthetic porn can contribute to the normalization of violence against women and girls.
  • Erosion of trust: Synthetic porn can erode trust in institutions and individuals, making it more difficult to address other forms of online abuse.

It is important to take action to address the threat of synthetic porn. This includes:

  • Raising awareness: People need to be aware of the dangers of synthetic porn and how to protect themselves and others.
  • Developing technology to detect and remove synthetic porn: There is a need for technology that can detect and remove synthetic porn from the internet (a sketch of one common detection approach follows this list).
  • Creating laws to address synthetic porn: There is a need for laws that make it illegal to create and distribute synthetic porn without the consent of the person whose face is being used.
  • Providing support to victims: Victims of synthetic porn need access to support services to help them cope with the emotional and psychological impact of the abuse.
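
One widely used building block for such detection is hash matching: images already confirmed as abusive are reduced to compact perceptual hashes, and new uploads are compared against that hash list. Below is a minimal sketch of the idea in Python. The imagehash library is real, but the hash values, threshold, and file paths are illustrative assumptions, not any platform's actual database or API.

```python
# Minimal sketch of perceptual-hash matching against a list of known
# abusive images. Hash values, threshold, and paths are hypothetical.
from PIL import Image
import imagehash

# Perceptual hashes of images already confirmed as abusive
# (hypothetical values for illustration only).
KNOWN_HASHES = [imagehash.hex_to_hash("f0e4c2d8a1b3957e")]

# Hashes within this Hamming distance are treated as the same image,
# even after resizing, re-encoding, or minor edits.
MATCH_THRESHOLD = 8

def is_known_image(path: str) -> bool:
    """Return True if the image at `path` matches a known abusive image."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(is_known_image("upload.jpg"))
```

The inherent limitation is that hash lists only catch images that have already been identified and catalogued; entirely new AI-generated images require different approaches, such as classifiers trained to recognize the statistical fingerprints of generated imagery.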

Synthetic porn is a serious problem with far-reaching consequences. It is important to take action to address this issue and protect victims from harm.

Use of Generative AI in Creating Child Sexual Abuse Material (CSAM)

Generative AI and the Production of Illegal Images of Children

The rapid advancement of generative AI technologies has raised serious concerns about their potential misuse in the creation of illegal images of children. These AI models, capable of producing realistic and convincing images, are being exploited by individuals to generate child sexual abuse material (CSAM).

One of the primary concerns is the ability of generative AI to create new images of existing victims. By analyzing existing CSAM, these models can learn to generate new images that are indistinguishable from real ones. This can lead to the perpetuation of abuse by creating new instances of victimization.

Furthermore, generative AI can be used to create deepfakes, which are manipulated videos or images that make it appear as if someone is saying or doing something they never did. Deepfakes are particularly dangerous when used to create CSAM, as they can be used to fabricate scenarios of abuse involving real children.

The production of illegal images of children using generative AI poses a significant threat to children’s safety and well-being. It is crucial for law enforcement agencies, technology companies, and policymakers to work together to address this emerging threat.

Measures to Combat the Misuse of Generative AI

Several measures can be taken to combat the misuse of generative AI in the production of illegal images of children:

  1. Developing AI tools for detecting CSAM: AI can be used to develop tools that can identify and flag CSAM online. This can help in removing illegal content from the internet and preventing its distribution.
  2. Improving AI education and awareness: Educating AI developers, users, and the general public about the ethical implications of AI can help prevent its misuse.
  3. Strengthening legal frameworks: Existing laws may need to be updated to address the specific challenges posed by AI-generated CSAM.
  4. Collaboration between law enforcement, technology companies, and policymakers: Effective collaboration is essential to address this complex issue.
  5. Promoting responsible AI development: Encouraging the development of AI with built-in safeguards against misuse can help mitigate the risk of harm.

By taking these measures, we can work towards a future where generative AI is used for positive purposes and not for the exploitation of children.

OpEd by Mark Pohlmann, Founder & CEO of Aeteos.com:

Here is Maisie [see post photo]. She is 11 years old. She was created by AI (Stable Diffusion) two weeks ago. Many photos of her are available online. This is what technology can do today: generate pictures of children, from soft to hardcore, that are then broadcast massively on the Internet!

The raw material those criminals use to generate these pictures is mainly taken from YOUR social networks, because YOU took pictures of your child doing this and doing that, and thousands of pictures of your children are available on your Instagram or Facebook. Those private pictures will be used to generate these images. Be prepared!

Evidence can be found TODAY that tools such as ChatGPT are used to ease the creation of these disgusting images. Because yes, these images are created using text-to-image technology, and ChatGPT can help generate many of them very fast.

This is where we are in 2023, and the worst of all is that politicians are taking no action to STOP this NOW! So be prepared.

We need to protect our children while they are online; it is a top priority. Yet as a society, we are doing nothing to stop this.

Maisie is just one example. On the web you will find thousands of such images, and new ones are added daily to fulfill the desires of pedocriminals worldwide.

Strengthening Our Laws Against This

According to the AP, September 5, 2023:

Prosecutors in all 50 states urge Congress to strengthen tools to fight AI child sexual abuse images

The top prosecutors in all 50 states are urging Congress to study how artificial intelligence can be used to exploit children through pornography, and come up with legislation to further guard against it.

In a letter sent Tuesday to Republican and Democratic leaders of the House and Senate, the attorneys general from across the country call on federal lawmakers to “establish an expert commission to study the means and methods of AI that can be used to exploit children specifically” and expand existing restrictions on child sexual abuse materials specifically to cover AI-generated images.

“We are engaged in a race against time to protect the children of our country from the dangers of AI,” the prosecutors wrote in the letter, shared ahead of time with The Associated Press. “Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”

South Carolina Attorney General Alan Wilson led the effort to add signatories from all 50 states and four U.S. territories to the letter. The Republican, elected last year to his fourth term, told the AP last week that he hoped federal lawmakers would translate the group’s bipartisan support for legislation on the issue into action.

“Everyone’s focused on everything that divides us,” said Wilson, who marshaled the coalition with his counterparts in Mississippi, North Carolina and Oregon. “My hope would be that, no matter how extreme or polar opposites the parties and the people on the spectrum can be, you would think protecting kids from new, innovative and exploitative technologies would be something that even the most diametrically opposite individuals can agree on — and it appears that they have.”

The Senate this year has held hearings on the possible threats posed by AI-related technologies. In May, OpenAI CEO Sam Altman, whose company makes free chatbot tool ChatGPT, said that government intervention will be critical to mitigating the risks of increasingly powerful AI systems. Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

While there’s no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

In addition to federal action, Wilson said he’s encouraging his fellow attorneys general to scour their own state statutes for possible areas of concern.

“We started thinking, do the child exploitation laws on the books — have the laws kept up with the novelty of this new technology?”

According to Wilson, the dangers AI poses include the creation of “deepfake” scenarios (videos and images that have been digitally created or altered with artificial intelligence or machine learning) of a child who has already been abused, or the alteration of the likeness of a real child from something like a photograph taken from social media, so that it depicts abuse.

“Your child was never assaulted, your child was never exploited, but their likeness is being used as if they were,” he said. “We have a concern that our laws may not address the virtual nature of that, though, because your child wasn’t actually exploited — although they’re being defamed and certainly their image is being exploited.”

A third possibility, he pointed out, is the altogether digital creation of a fictitious child’s image for the purpose of creating pornography.

“The argument would be, ‘well I’m not harming anyone — in fact, it’s not even a real person,’ but you’re creating demand for the industry that exploits children,” Wilson said.

There have been some moves within the tech industry to combat the issue. In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool, called Take It Down, that allows teens to report explicit images and videos of themselves for removal from the internet. The reporting site works for both regular images and AI-generated content.
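
Take It Down is built around a privacy-preserving design: the image or video never leaves the person's device; it is hashed locally, and only the hash (a short digital fingerprint) is shared with participating platforms, which then match and remove copies. The sketch below illustrates that client-side idea in Python; the submission endpoint is a hypothetical placeholder, not the real service's API.

```python
# Sketch of the client-side hashing idea behind hash-reporting tools:
# the image itself is never uploaded, only its fingerprint.
# The endpoint URL is a hypothetical placeholder.
import hashlib
import json
from urllib import request

def fingerprint(path: str) -> str:
    """Hash the image file locally; the file never leaves this machine."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def submit_hash(image_path: str, endpoint: str) -> None:
    payload = json.dumps({"hash": fingerprint(image_path)}).encode()
    req = request.Request(endpoint, data=payload,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # only the 64-character hex hash is transmitted

if __name__ == "__main__":
    submit_hash("my_photo.jpg", "https://example.org/report-hash")  # hypothetical endpoint
```

In practice, systems like this favor perceptual hashes (such as Meta's open-source PDQ) over cryptographic ones like SHA-256, so that resized or re-encoded copies of the same image still produce a match.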

“AI is a great technology, but it’s an industry disrupter,” Wilson said. “You have new industries, new technologies that are disrupting everything, and the same is true for the law enforcement community and for protecting kids. The bad guys are always evolving on how they can slip off the hook of justice, and we have to evolve with that.”

Curb the Misuse of Generative AI in Producing Synthetic Images of Children

The rapid advancement of generative AI technologies has brought about a plethora of potential benefits, revolutionizing various industries and aspects of our lives. However, this powerful technology also carries inherent risks, particularly in the realm of child exploitation. The ability of generative AI to produce hyper-realistic synthetic images of children raises serious concerns about its potential misuse in creating and distributing child sexual abuse material (CSAM).

The Threat Posed by Synthetic CSAM

The production of synthetic CSAM using generative AI poses a significant threat to children’s safety and well-being in several ways:

  • Perpetuation of Abuse: Generative AI can be used to create new images of existing victims, replicating and perpetuating their abuse. This can cause further trauma and distress to the victims and their families.
  • Fabrication of Abuse: Deepfakes, manipulated videos or images created using generative AI, can be used to fabricate scenarios of abuse involving real children. This can damage the reputation of innocent children and cause immense emotional distress.
  • Ease of Distribution: Synthetic CSAM can be easily distributed online, making it more accessible to predators and increasing the risk of exposure for children.
  • Difficulty in Detection: The realistic nature of AI-generated images makes it challenging for law enforcement and content moderation systems to detect and remove synthetic CSAM effectively (a simplified sketch of one detection approach follows this list).
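
Because hash lists cannot catch brand-new synthetic images, detection research leans on classifiers trained to distinguish real photographs from AI-generated ones. The following is a heavily simplified sketch in Python/PyTorch; the folder layout, model choice, and single training pass are illustrative assumptions, and even production-grade detectors of this kind remain unreliable against newer generators, which is exactly why the detection problem above is hard.

```python
# Simplified sketch: fine-tune a small pretrained vision model as a
# binary real-vs-synthetic image classifier. Paths and hyperparameters
# are hypothetical; real detectors are far more elaborate.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects data/real/ and data/synthetic/ subfolders (hypothetical layout);
# ImageFolder derives the two class labels from the folder names.
train_set = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, synthetic

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one illustrative pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```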

Ethical and Legal Implications

The use of generative AI to produce synthetic CSAM raises severe ethical concerns. It violates children’s rights to privacy, dignity, and protection from exploitation. Moreover, the creation and distribution of such material is illegal in most countries.

Mitigating the Risks

Addressing the misuse of generative AI in producing synthetic CSAM requires a multifaceted approach involving technology companies, law enforcement agencies, policymakers, and the public at large:

  1. Developing AI Tools for Detection: AI can be harnessed to develop tools that can identify and flag CSAM online, aiding in its removal and preventing its distribution.
  2. Enhancing AI Education and Awareness: Educating AI developers, users, and the general public about the ethical implications of AI can help prevent its misuse.
  3. Strengthening Legal Frameworks: Existing laws may need to be updated to specifically address the production and distribution of synthetic CSAM.
  4. Promoting Responsible AI Development: Encouraging the development of AI with built-in safeguards against misuse can help mitigate the risk of harm.
  5. Collaboration Among Stakeholders: Effective collaboration among law enforcement, technology companies, policymakers, and child protection organizations is essential to address this complex issue.

Summary

The misuse of generative AI in producing synthetic images of children is a pressing concern that demands immediate attention. By taking proactive measures, we can safeguard children from the harm posed by this technology and ensure that AI is used for positive societal advancement.

While this is not the focus of SCARS, we support the need for these controls and the work of other organizations dedicated to finding solutions to stop this growing danger, such as ECPAT International. In fact, one of our SCARS Board Members, Lydia Zagarova, is also an active participant and regional director in ECPAT International.

PLEASE NOTE: Psychology Clarification

The following specific modalities within the practice of psychology are restricted to psychologists appropriately trained in the use of such modalities:

  • Diagnosis: The diagnosis of mental, emotional, or brain disorders and related behaviors.
  • Psychoanalysis: Psychoanalysis is a type of therapy that focuses on helping individuals to understand and resolve unconscious conflicts.
  • Hypnosis: Hypnosis is a state of trance in which individuals are more susceptible to suggestion. It can be used to treat a variety of conditions, including anxiety, depression, and pain.
  • Biofeedback: Biofeedback is a type of therapy that teaches individuals to control their bodily functions, such as heart rate and blood pressure. It can be used to treat a variety of conditions, including stress, anxiety, and pain.
  • Behavioral analysis: Behavioral analysis is a type of therapy that focuses on changing individuals’ behaviors. It is often used to treat conditions such as autism and ADHD.
  • Neuropsychology: Neuropsychology is a type of psychology that focuses on the relationship between the brain and behavior. It is often used to assess and treat cognitive impairments caused by brain injuries or diseases.

SCARS and the members of the SCARS Team do not engage in any of the above modalities in relation to scam victims. SCARS is not a mental healthcare provider and recognizes the importance of professionalism and separation between its work and that of the licensed practice of psychology.

SCARS is an educational provider of generalized self-help information that individuals can use for their own benefit to achieve their own goals related to emotional trauma. SCARS recommends that all scam victims see professional counselors or therapists to help them determine the suitability of any specific information or practices that may help them.

SCARS cannot diagnose or treat any individuals, nor can it state the effectiveness of any educational information that it may provide, regardless of its experience in interacting with traumatized scam victims over time. All information that SCARS provides is purely for general educational purposes to help scam victims become aware of and better understand the topics and to be able to dialog with their counselors or therapists.

It is important that all readers understand these distinctions, apply the information that SCARS may publish at their own risk, and do so only after consulting a licensed psychologist or mental healthcare provider.

Opinions

The opinions of the author are not necessarily those of the Society of Citizens Against Relationship Scams Inc. The author is solely responsible for the content of their work. SCARS is protected under the Communications Decency Act (CDA) section 230 from liability.

Disclaimer:

SCARS IS A DIGITAL PUBLISHER AND DOES NOT OFFER HEALTH OR MEDICAL ADVICE, LEGAL ADVICE, FINANCIAL ADVICE, OR SERVICES THAT SCARS IS NOT LICENSED OR REGISTERED TO PERFORM.

IF YOU’RE FACING A MEDICAL EMERGENCY, CALL YOUR LOCAL EMERGENCY SERVICES IMMEDIATELY, OR VISIT THE NEAREST EMERGENCY ROOM OR URGENT CARE CENTER. YOU SHOULD CONSULT YOUR HEALTHCARE PROVIDER BEFORE FOLLOWING ANY MEDICALLY RELATED INFORMATION PRESENTED ON OUR PAGES.

ALWAYS CONSULT A LICENSED ATTORNEY FOR ANY ADVICE REGARDING LEGAL MATTERS.

A LICENSED FINANCIAL OR TAX PROFESSIONAL SHOULD BE CONSULTED BEFORE ACTING ON ANY INFORMATION RELATING TO YOUR PERSONAL FINANCES OR TAX RELATED ISSUES AND INFORMATION.

SCARS IS NOT A PRIVATE INVESTIGATOR – WE DO NOT PROVIDE INVESTIGATIVE SERVICES FOR INDIVIDUALS OR BUSINESSES. ANY INVESTIGATIONS THAT SCARS MAY PERFORM ARE NOT A SERVICE PROVIDED TO THIRD-PARTIES. INFORMATION REPORTED TO SCARS MAY BE FORWARDED TO LAW ENFORCEMENT AS SCARS SEES FIT AND APPROPRIATE.

This content and other material contained on the website, apps, newsletter, and products (“Content”), is general in nature and for informational purposes only and does not constitute medical, legal, or financial advice; the Content is not intended to be a substitute for licensed or regulated professional advice. Always consult your doctor or other qualified healthcare provider, lawyer, financial, or tax professional with any questions you may have regarding the educational information contained herein. SCARS makes no guarantees about the efficacy of information described on or in SCARS’ Content. The information contained is subject to change and is not intended to cover all possible situations or effects. SCARS does not recommend or endorse any specific professional or care provider, product, service, or other information that may be mentioned in SCARS’ websites, apps, and Content unless explicitly identified as such.

The disclaimers herein are provided on this page for ease of reference. These disclaimers supplement and are a part of SCARS’ website’s Terms of Use.

Legal Notices: 

All original content is Copyright © 1991 – 2023 Society of Citizens Against Relationship Scams Inc. (Registered D.B.A SCARS) All Rights Reserved Worldwide & Webwide. Third-party copyrights acknowledged.

U.S. State of Florida Registration Nonprofit (Not for Profit) #N20000011978 [SCARS DBA Registered #G20000137918] – Learn more at www.AgainstScams.org

SCARS, SCARS|INTERNATIONAL, SCARS|SUPPORT, RSN, Romance Scams Now, SCARS|WORLDWIDE, SCARS|GLOBAL, Society of Citizens Against Relationship Scams, Society of Citizens Against Romance Scams, SCARS|ANYSCAM, Project Anyscam, Anyscam, SCARS|GOFCH, GOFCH, SCARS|CHINA, SCARS|CDN, SCARS|UK, SCARS|LATINOAMERICA, SCARS|MEMBER, SCARS|VOLUNTEER, SCARS Cybercriminal Data Network, Cobalt Alert, Scam Victims Support Group, SCARS ANGELS, SCARS RANGERS, SCARS MARSHALLS, and SCARS PARTNERS are all trademarks of Society of Citizens Against Relationship Scams Inc., All Rights Reserved Worldwide

Contact the legal department for the Society of Citizens Against Relationship Scams Incorporated by email at legal@AgainstScams.org