The Dark Side of Generative AI

By Frances Zelazny, Co-Founder & CEO, Anonybit | Strategic Advisor

The Dark Side of Innovation: Identity Theft, Fraud and the Rise of Generative AI

In recent years, advances in generative artificial intelligence (Generative AI) have revolutionized many fields, making remarkable progress in creating strikingly realistic content. We have all been amazed by the possibilities and progress. It is no wonder that within two months of its launch in late November 2022, ChatGPT had 100 million monthly active users, a milestone that took Instagram two and a half years to reach and TikTok nine months.

At the same time, one cannot ignore the privacy and societal implications these technologies raise. For example, while many alarm bells are going off around the rise of voice cloning and deep fakes, some artists are embracing voice cloning, offering to split royalties with anyone who produces a successful song using their voice. From a security standpoint, we already understand where many of the dangers lurk. Data is limited, but there are growing reports of AI-powered scams that use cloned audio clips tied to ransom demands. The tools to do this are remarkably accessible: some providers offer free trial periods and then charge monthly fees as low as $9.99 for their service, hardly a barrier for an enterprising cybercriminal.

Understanding Deep Fakes, Identity Theft, and Generative AI

Deep fakes are AI-generated images, videos, or audio that convincingly manipulate or replace a person’s likeness in existing content, often making it challenging to discern between real and fake. The driving force behind many deep fakes is a class of Generative AI models called generative adversarial networks (GANs), which consist of two neural networks: a generator, responsible for creating realistic content, and a discriminator, tasked with differentiating between real and fake. Trained against each other, both networks improve over time, leading to the generation of increasingly authentic and deceptive deep fakes.
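The adversarial loop described above can be sketched in a few lines. The following is a toy illustration only, not a real deep fake pipeline: a one-dimensional linear "generator" learns to imitate a Gaussian distribution while a logistic "discriminator" tries to tell real samples from generated ones. All parameters and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: samples from a 1-D Gaussian the generator tries to imitate.
    return rng.normal(4.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, starts far from the real data
w, c = 0.1, 0.0   # discriminator: d(x) = sigmoid(w*x + c), outputs P(x is real)

lr, n = 0.01, 64
for step in range(2000):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    z = rng.normal(0, 1, n)
    fake, real = a * z + b, real_batch(n)
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Gradients of binary cross-entropy with respect to (w, c).
    w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0, 1, n)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean((d_fake - 1) * w * z)
    b -= lr * np.mean((d_fake - 1) * w)

print(f"generator now produces mean ~ {b:.2f}")
```

After training, the generator's offset drifts toward the mean of the real data, which is the same dynamic, scaled up enormously, that lets GANs produce convincing faces and voices.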

The problem from an identity theft point of view is twofold: a deep fake can present as a legitimate person, and it can be armed with enough information to act like that person. This means that deep fakes used to create synthetic identities, drive impersonation and account takeover attacks, and exacerbate money laundering schemes will be far more effective when combined with legitimate information that bypasses security controls. By generating fake documents, images, or even video footage of individuals, and pairing them with real Social Security numbers, credit card numbers, and other sensitive information, attackers will find it easier than ever to impersonate people and commit fraud.

Even before the advent of Generative AI, fraud trends were heading in an alarming direction. According to market research firm Javelin Strategy & Research, identity fraud-related losses reached $43 billion in 2022, and the latest data breach figures from the Identity Theft Resource Center put this year on pace for another record. While traditional methods of identity theft relied primarily on hacking databases or phishing emails, Generative AI introduces an even more insidious element. If the numbers look bad now, the fraud prevention problem will only grow exponentially worse unless we address the root cause of how we manage identity and cybersecurity.

Simply put, the root cause of fraud boils down to two primary elements:

  1. Personal data is stored inside central honeypots that are impossible to protect;
  2. We allow the use of this data for access into networks and personal accounts.

Beyond the data breaches themselves, the problems manifest through phishing attacks, fake websites, stolen one-time passcodes (OTPs), and other well-known fraud techniques. Passwordless authentication is poised to address some of these challenges, but for the most part the solutions on the market are disjointed and hard for enterprises to integrate and deploy, leaving fraudsters plenty of room to operate successfully. Witness the whopping 84% increase in walk-in check cashing fraud in the last year, and the contact center channel remaining a favorite for fraudsters to reset accounts, change account details, and take out new loans. The point is that securing the digital channel with passwordless approaches alone is nowhere near enough to combat the problem.

5 Steps to Combating the Risks of AI-Generated Identity Theft

Before we get into how to combat AI-generated identity fraud, it is important to appreciate the situation we are in. Generative AI may not necessarily introduce new types of attacks; what it will do is make fraudsters far more effective at the ones they already run. Bland advice like “fight bad AI with good AI,” “make sure you have good multi-factor authentication,” or “it is critical to enhance awareness” will ultimately do little to combat the problem.

As stated earlier, to make a dent in fraud prevention, all stakeholders will need to rethink how we manage identity and cybersecurity risks. No industry and no individual is immune.

Here are 5 concrete steps that can be taken:

  1. Eliminate central honeypots of personal data: Using newer Privacy Enhancing Technologies (PETs) such as Zero-Knowledge Proofs and Multi-Party Computation, it is possible to fully protect and secure personal data of all types, including biometrics, transaction data, health data, and other sensitive information. Much concurrent discussion centers on verifiable credentials and ensuring individual control over the use and transfer of personal information, but the fact remains that many use cases still require enterprises to manage large amounts of personal data, and it is important that this data is secured in the best possible manner.
  2. Ensure a consistent, persistent biometric across the user journey: Today’s identity management systems are disjointed. While digital onboarding continues to grow rapidly, many organizations do not store the data collected, for fear of a data breach. This puts any downstream authentication activity at risk, with fraudsters using stolen information to bypass controls. Securely storing personal data, especially the user biometrics collected during onboarding, allows the enterprise to close the gaps that attackers currently exploit.
  3. Use liveness detection to ensure the “realness” of the biometric presented: Since the advent of biometric technologies, there has been the threat of gummy fingers, photo and video presentation attacks, and other techniques to trick the biometric system. A class of technologies called liveness detection has been developed to detect these attacks, and as a result, today’s leading providers report nearly 100% success rates against such presentation attacks.
  4. Apply injection detection techniques to make sure a session has not been compromised: The biometrics industry has been reporting increasing attacks that use emulators to spoof device metadata and digitally inject biometric data, now reportedly five times more frequent than traditional presentation attacks. This pattern is well known in the fraud prevention space, which has developed countermeasures that combine advanced device fingerprinting with methods such as velocity checks, collecting a combination of data points that provide confidence in the integrity of a session.
  5. Augment static authentication mechanisms with dynamic fraud prevention and risk detection to enhance accuracy and maintain a good user experience: A question that comes up frequently with biometrics is the impact on user experience. With adaptive authentication, low-risk activities can be made less burdensome, while security measures are stepped up when high risk is detected; this can include raising the biometric authentication threshold and/or requiring more than one biometric modality to be presented.
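To make step 1 concrete: one of the PETs mentioned, Multi-Party Computation, commonly builds on secret sharing, where a record is split into random shares held by different parties so that no single store, and no single breach, reveals anything on its own. Below is a minimal sketch of additive secret sharing; the function names and the choice of prime modulus are illustrative assumptions, not any particular product's API.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic is modulo a large prime

def split(secret: int, n: int) -> list:
    """Split a secret into n additive shares.

    Any n-1 shares are uniformly random and reveal nothing;
    only the full set reconstructs the secret.
    """
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def combine(shares: list) -> int:
    """Recombine all shares to recover the secret."""
    return sum(shares) % PRIME

# Example: split a (fake) account number across three custodians.
shares = split(123456789, 3)
print(combine(shares))  # recovers 123456789 only with all three shares
```

The design point is that an attacker who breaches one custodian gets a share that is statistically indistinguishable from random noise, which is precisely the "no central honeypot" property the step calls for.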
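Steps 4 and 5 can likewise be sketched together: a sliding-window velocity check flags unusually rapid attempts on an account, and that risk signal drives an adaptive threshold for the biometric match score. The class names, thresholds, and window sizes below are illustrative assumptions, not a vendor implementation.

```python
from collections import deque

class VelocityCheck:
    """Flag accounts with too many authentication events in a sliding window."""

    def __init__(self, max_events: int, window_s: float):
        self.max_events = max_events
        self.window_s = window_s
        self.events = {}  # account_id -> deque of event timestamps

    def record(self, account_id: str, now: float) -> bool:
        """Record an event; return True if the account now looks suspicious."""
        q = self.events.setdefault(account_id, deque())
        q.append(now)
        while q and now - q[0] > self.window_s:  # drop events outside the window
            q.popleft()
        return len(q) > self.max_events

def match_threshold(risk: float) -> float:
    """Raise the required biometric match score as session risk (0..1) grows."""
    base, ceiling = 0.80, 0.98
    return min(ceiling, base + risk * (ceiling - base))

def authenticate(score: float, risk: float) -> bool:
    """Accept a biometric match score against the risk-adjusted threshold."""
    return score >= match_threshold(risk)
```

With this shape, a match score of 0.85 passes for a low-risk session but fails once velocity or device signals push risk toward 1.0, at which point the flow can step up to a second modality, which is exactly the adaptive behavior step 5 describes.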

Identity Theft, Deep Fakes, and Generative AI: The Discussion Continues

Generative AI offers remarkable potential for innovation, but we must stay vigilant about its dark side. As technology evolves, so do the tactics of cybercriminals; the important thing to note is that we are dealing with a recognizable playbook, and we have the tools to meet the challenge. By adopting proactive fraud prevention and strong authentication measures and fostering a culture of awareness, we can harness the full potential of Generative AI while protecting ourselves from its misuse. Together, we can create a safer digital landscape for everyone.

ScamsNOW!

The News & Commentary Magazine about Scams, Fraud & Cybercrime from the SCARS Institute

2025 SCARS Institute 11 Years of Service

In 2025 the SCARS Institute will enter its 11th year of supporting scam victims worldwide. Please let us know how we can better help you. Thank you for supporting our organization. SCARS Institute © 2024 www.AgainstScams.org





Statement About Victim Blaming

Some of our articles discuss various aspects of victims. This is both about better understanding victims (the science of victimology) and their behaviors and psychology. This helps us to educate victims/survivors about why these crimes happened and not to blame themselves, better develop recovery programs, and help victims avoid scams in the future. At times, this may sound like blaming the victim, but it does not blame scam victims; we are simply explaining the hows and whys of the experience victims have.

These articles, about the Psychology of Scams or Victim Psychology – meaning that all humans have psychological or cognitive characteristics in common that can either be exploited or work against us – help us all to understand the unique challenges victims face before, during, and after scams, fraud, or cybercrimes. These sometimes talk about some of the vulnerabilities the scammers exploit. Victims rarely have control of them or are even aware of them, until something like a scam happens, and then they can learn how their mind works and how to overcome these mechanisms.

Articles like these help victims and others understand these processes and how to help prevent them from being exploited again or to help them recover more easily by understanding their post-scam behaviors. Learn more about the Psychology of Scams at www.ScamPsychology.org

SCARS INSTITUTE RESOURCES:

IF YOU HAVE BEEN VICTIMIZED BY A SCAM OR CYBERCRIME

♦ If you are a victim of scams, go to www.ScamVictimsSupport.org for real knowledge and help

♦ Enroll in SCARS Scam Survivor’s School now at www.SCARSeducation.org

♦ To report criminals, visit https://reporting.AgainstScams.org – we will NEVER give your data to money recovery companies like some do!

♦ Sign up for our free support & recovery help at https://support.AgainstScams.org

♦ Join our WhatsApp Chat Group at: https://chat.whatsapp.com/BPDSYlkdHBbDBg8gfTGb02

♦ Follow us on X: https://x.com/RomanceScamsNow

♦ Follow us and find our podcasts, webinars, and helpful videos on YouTube: https://www.youtube.com/@RomancescamsNowcom

♦ SCARS Institute Songs for Victim-Survivors: https://www.youtube.com/playlist…

♦ See SCARS Institute Scam Victim Self-Help Books at https://shop.AgainstScams.org

♦ Learn about the Psychology of Scams at www.ScamPsychology.org

♦ Dig deeper into the reality of scams, fraud, and cybercrime at www.ScamsNOW.com and www.RomanceScamsNOW.com

♦ Scam Survivor’s Stories: www.ScamSurvivorStories.org

♦ For Scam Victim Advocates visit www.ScamVictimsAdvocates.org

♦ See more scammer photos on www.ScammerPhotos.com

You can also find the SCARS Institute on Facebook, Instagram, X, LinkedIn, and TruthSocial

Psychology Disclaimer:

All articles about psychology and the human brain on this website are for information & education only

The information provided in this and other SCARS articles are intended for educational and self-help purposes only and should not be construed as a substitute for professional therapy or counseling.

Note about Mindfulness: Mindfulness practices have the potential to create psychological distress for some individuals. Please consult a mental health professional or experienced meditation instructor for guidance should you encounter difficulties.

While any self-help techniques outlined herein may be beneficial for scam victims seeking to recover from their experience and move towards recovery, it is important to consult with a qualified mental health professional before initiating any course of action. Each individual’s experience and needs are unique, and what works for one person may not be suitable for another.

Additionally, any approach may not be appropriate for individuals with certain pre-existing mental health conditions or trauma histories. It is advisable to seek guidance from a licensed therapist or counselor who can provide personalized support, guidance, and treatment tailored to your specific needs.

If you are experiencing significant distress or emotional difficulties related to a scam or other traumatic event, please consult your doctor or mental health provider for appropriate care and support.

Also read our SCARS Institute Statement about Professional Care for Scam Victims – click here

If you are in crisis, feeling desperate, or in despair, please call 988 or your local crisis hotline.
