The Dark Side of Generative AI

By Frances Zelazny, Co-Founder & CEO, Anonybit | Strategic Advisor

The Dark Side of Innovation: Identity Theft, Fraud and the Rise of Generative AI

In recent years, advances in artificial intelligence, and Generative AI in particular, have revolutionized various fields, making remarkable progress in creating strikingly realistic content. We have all been amazed by the possibilities and progress. It is no wonder that within two months of its launch in late November 2022, ChatGPT had 100 million monthly active users, a milestone that took Instagram two and a half years to reach and TikTok nine months.

At the same time, one cannot ignore the privacy and societal implications these technologies raise. While many alarm bells are going off around the rise of voice cloning and deep fakes, some artists are embracing voice cloning, offering to split royalties with anyone who produces a successful song using their cloned voice. From a security standpoint, we already understand where many of the dangers lurk. Data is limited, but there are growing reports of AI-powered scams in which cloned audio clips are tied to ransom demands. The tools to do this are highly accessible: some providers offer free trial periods and then charge monthly fees as low as $9.99, hardly a barrier for an enterprising cybercriminal.

Understanding Deep Fakes, Identity Theft, and Generative AI

Deep fakes are AI-generated images, videos, or audio that convincingly manipulate or replace a person’s likeness in existing content, often making it difficult to discern real from fake. The driving force behind many deep fakes is a generative adversarial network (GAN), which consists of two neural networks: a generator, responsible for creating realistic content, and a discriminator, tasked with differentiating between real and fake. As the two networks train against each other, both improve, leading to the generation of increasingly authentic and deceptive deep fakes.
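
To make the generator/discriminator dynamic concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. It uses a toy two-dimensional data distribution rather than images, and all layer sizes, learning rates, and helper names are assumptions for the example, not anyone’s actual deep fake pipeline.

```python
# Minimal GAN training loop (illustrative only): a generator learns to mimic
# a toy data distribution while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

latent_dim = 16  # size of the random noise vector fed to the generator

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2)
)
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for real training data (face images, in an actual deep fake).
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

for step in range(1000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))

    # 1. Train the discriminator to score real data 1 and generated data 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator into scoring fakes 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    g_opt.step()
```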

From an identity theft point of view, the problem is twofold: a deep fake can present as a legitimate person, AND it can be armed with the information needed to act like that person. Deep fakes used to create synthetic identities, drive impersonation and account takeover attacks, and exacerbate money laundering schemes will be far more effective when equipped with legitimate information to bypass security controls. By generating fake documents, images, or even video footage of individuals, and combining them with real Social Security numbers, credit card numbers, and other sensitive data, criminals will find it easier to impersonate people and commit fraud.

Even before the advent of Generative AI, fraud trends were heading in an alarming direction. According to market research firm Javelin Strategy & Research, identity fraud produced $43 billion in losses in 2022, and the latest data breach numbers from the Identity Theft Resource Center suggest we are on pace for another record-breaking year. While traditional identity theft relied primarily on hacking databases or phishing emails, Generative AI introduces an even more insidious element. If the numbers look bad now, the fraud prevention problem will only get exponentially worse unless we address the root cause of how we manage identity and cybersecurity.

Simply put, the root cause of fraud boils down to two primary elements:

  1. Personal data is stored inside central honeypots that are impossible to protect;
  2. We allow the use of this data for access into networks and personal accounts.

Besides the data breaches themselves, the problem manifests through phishing attacks, fake websites, stolen one-time passcodes (OTPs), and other well-known fraud techniques. Passwordless authentication is poised to address some of these challenges, but most of the solutions on the market are disjointed, enterprises find them hard to integrate and deploy, and fraudsters are left with plenty of room to operate, as witnessed by a whopping 84% increase in walk-in check-cashing fraud in the last year and by the contact center channel remaining a favorite venue for fraudsters to reset accounts, change account details, and take out new loans. The point is that securing the digital channel with passwordless approaches alone is nowhere near enough to combat the problem.

5 Steps to Combating the Risks of AI-Generated Identity Theft

Before we get into how to combat AI-generated identity fraud, it is important to appreciate the situation we are in. Generative AI may not necessarily create new types of attacks; what it will do is make fraudsters even more effective in their work. Bland prescriptions like “fight bad AI with good AI,” “make sure you have good multi-factor authentication,” or “it is critical to enhance awareness” will ultimately do nothing to combat the problem.

As stated earlier, making a dent in fraud prevention will require all stakeholders to rethink how identity and cybersecurity risks are managed. No industry and no individual is immune.

Here are 5 concrete steps that can be taken:

  1. Eliminate central honeypots of personal data: Using privacy-enhancing technologies (PETs) such as zero-knowledge proofs and multi-party computation (MPC), it is possible to protect and secure personal data of all types, including biometrics, transaction data, health data, and other sensitive information. Much discussion is taking place concurrently about verifiable credentials and ensuring individuals control the use and transfer of their personal information, but the fact remains that there are plenty of use cases where enterprises will need to manage large amounts of personal data, and it is important that this data is secured in the best possible manner. A minimal secret-sharing sketch follows this list.
  2. Ensure a consistent, persistent biometric across the user journey: Today’s identity management systems are disjointed. While digital onboarding continues to grow exponentially, many organizations do not store the data collected during onboarding for fear of a data breach. This puts any downstream authentication activity at risk, with fraudsters using stolen information to bypass controls. Securely storing that data, especially the user biometrics collected in the process, allows the enterprise to close the gaps that attackers currently exploit.
  3. Use liveness detection to ensure the “realness” of the biometric presented: Since the advent of biometric technologies, there has been the threat of gummy fingers, photo and video presentation attacks, and other techniques for tricking biometric systems. A class of technologies called liveness detection has been developed to detect these attacks, and today’s leading providers report success rates approaching 100% in detecting such presentation attacks.
  4. Apply injection detection techniques to make sure a session has not been compromised: The biometrics industry has been reporting increasing attacks that use emulators to spoof device metadata and digitally inject biometric data; by some industry accounts, these injection attacks are now five times more frequent than traditional presentation attacks. The fraud prevention space knows this pattern well and has developed countermeasures that combine advanced device fingerprinting with methods such as velocity checks, collecting a combination of data points that together provide confidence in the integrity of a session. A simple velocity-check sketch appears after this list.
  5. Augment static authentication mechanisms with dynamic fraud prevention and risk detection to enhance accuracy and maintain a good user experience: A question that comes up a lot with biometrics is the impact on user experience. With adaptive authentication, low-risk activities can be made less burdensome, while detected high risk triggers stepped-up security measures; this can include raising the biometric matching threshold and/or requiring more than one biometric modality for authentication, as in the adaptive-threshold sketch below.
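
To ground the first step, here is a minimal sketch of additive secret sharing, one of the simplest building blocks behind multi-party computation. It is illustrative only: the field modulus, the share count, and the split/reconstruct helpers are assumptions for the example. The point is that each share can be held by a different party or store, so no single breached database constitutes a honeypot.

```python
# Illustrative additive secret sharing: a secret is split into random shares
# so that no single store -- and hence no single breached "honeypot" --
# reveals anything on its own.
import secrets

PRIME = 2**127 - 1  # field modulus; all arithmetic is done mod this prime

def split(secret: int, n: int = 3) -> list[int]:
    """Split `secret` into n shares; any n-1 of them look uniformly random."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only the full set of shares recovers the secret."""
    return sum(shares) % PRIME

ssn = 123456789  # toy stand-in for a sensitive value
shares = split(ssn)
assert reconstruct(shares) == ssn       # all shares together recover it
assert reconstruct(shares[:-1]) != ssn  # a strict subset is just noise
```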
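
As a sketch of the velocity checks mentioned in step 4, the snippet below counts authentication attempts per device fingerprint in a sliding time window and flags bursts. The window size, attempt limit, and function names are hypothetical; a real system would fuse this signal with device fingerprinting, IP reputation, and injection detection before scoring a session.

```python
# Illustrative velocity check (hypothetical helper names): flag a session
# when too many attempts arrive from one device fingerprint in a short window.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look-back window
MAX_ATTEMPTS = 5       # attempts allowed per device within the window

_attempts: dict[str, deque] = defaultdict(deque)

def record_and_check(device_fingerprint: str, now: float | None = None) -> bool:
    """Record one attempt; return True if the session looks suspicious."""
    now = time.time() if now is None else now
    q = _attempts[device_fingerprint]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:  # drop attempts outside the window
        q.popleft()
    return len(q) > MAX_ATTEMPTS
```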
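
And as a sketch of the adaptive authentication described in step 5, the snippet below maps a session risk score to a biometric match threshold and, at high risk, a second-modality requirement. All thresholds and names here are illustrative assumptions, not any vendor’s actual policy.

```python
# Illustrative risk-adaptive biometric policy (all thresholds are assumptions):
# low-risk sessions get a standard match threshold; high-risk sessions require
# a stricter match score plus a second biometric modality.
from dataclasses import dataclass

@dataclass
class AuthPolicy:
    match_threshold: float        # minimum biometric match score to accept
    require_second_modality: bool

def policy_for(risk_score: float) -> AuthPolicy:
    """Map a 0-1 session risk score to an authentication policy."""
    if risk_score < 0.3:
        return AuthPolicy(match_threshold=0.80, require_second_modality=False)
    if risk_score < 0.7:
        return AuthPolicy(match_threshold=0.90, require_second_modality=False)
    return AuthPolicy(match_threshold=0.97, require_second_modality=True)

def authenticate(match_score: float, risk_score: float,
                 second_modality_ok: bool = False) -> bool:
    policy = policy_for(risk_score)
    if policy.require_second_modality and not second_modality_ok:
        return False
    return match_score >= policy.match_threshold
```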

Identity Theft, Deep Fakes, and Generative AI: The Discussion Continues

Generative AI offers remarkable potential for innovation, but we must be vigilant about its dark side. As technology evolves, so do the tactics of cybercriminals; the important thing to note is that we are dealing with a recognizable playbook, and we have the tools to meet the challenge. By adopting proactive fraud prevention, deploying strong authentication, and fostering a culture of awareness, we can harness the full potential of Generative AI while protecting ourselves from its misuse. Together, we can create a safer digital landscape for everyone.

Opinions

The opinions of the author are not necessarily those of the Society of Citizens Against Relationship Scams Inc. The author is solely responsible for the content of their work. SCARS is protected under the Communications Decency Act (CDA) Section 230 from liability.

Disclaimer:

SCARS IS A DIGITAL PUBLISHER AND DOES NOT OFFER HEALTH OR MEDICAL ADVICE, LEGAL ADVICE, FINANCIAL ADVICE, OR SERVICES THAT SCARS IS NOT LICENSED OR REGISTERED TO PERFORM.

IF YOU’RE FACING A MEDICAL EMERGENCY, CALL YOUR LOCAL EMERGENCY SERVICES IMMEDIATELY, OR VISIT THE NEAREST EMERGENCY ROOM OR URGENT CARE CENTER. YOU SHOULD CONSULT YOUR HEALTHCARE PROVIDER BEFORE FOLLOWING ANY MEDICALLY RELATED INFORMATION PRESENTED ON OUR PAGES.

ALWAYS CONSULT A LICENSED ATTORNEY FOR ANY ADVICE REGARDING LEGAL MATTERS.

A LICENSED FINANCIAL OR TAX PROFESSIONAL SHOULD BE CONSULTED BEFORE ACTING ON ANY INFORMATION RELATING TO YOUR PERSONAL FINANCES OR TAX RELATED ISSUES AND INFORMATION.

SCARS IS NOT A PRIVATE INVESTIGATOR – WE DO NOT PROVIDE INVESTIGATIVE SERVICES FOR INDIVIDUALS OR BUSINESSES. ANY INVESTIGATIONS THAT SCARS MAY PERFORM ARE NOT A SERVICE PROVIDED TO THIRD PARTIES. INFORMATION REPORTED TO SCARS MAY BE FORWARDED TO LAW ENFORCEMENT AS SCARS SEES FIT AND APPROPRIATE.

This content and other material contained on the website, apps, newsletter, and products (“Content”), is general in nature and for informational purposes only and does not constitute medical, legal, or financial advice; the Content is not intended to be a substitute for licensed or regulated professional advice. Always consult your doctor or other qualified healthcare provider, lawyer, financial, or tax professional with any questions you may have regarding the educational information contained herein. SCARS makes no guarantees about the efficacy of information described on or in SCARS’ Content. The information contained is subject to change and is not intended to cover all possible situations or effects. SCARS does not recommend or endorse any specific professional or care provider, product, service, or other information that may be mentioned in SCARS’ websites, apps, and Content unless explicitly identified as such.

The disclaimers herein are provided on this page for ease of reference. These disclaimers supplement and are a part of the SCARS website’s Terms of Use.

Legal Notices: 

All original content is Copyright © 1991 – 2023 Society of Citizens Against Relationship Scams Inc. (Registered D.B.A. SCARS) All Rights Reserved Worldwide & Webwide. Third-party copyrights acknowledged.

U.S. State of Florida Registration Nonprofit (Not for Profit) #N20000011978 [SCARS DBA Registered #G20000137918] – Learn more at www.AgainstScams.org

SCARS, SCARS|INTERNATIONAL, SCARS|SUPPORT, RSN, Romance Scams Now, SCARS|INTERNATION, SCARS|WORLDWIDE, SCARS|GLOBAL, Society of Citizens Against Relationship Scams, Society of Citizens Against Romance Scams, SCARS|ANYSCAM, Project Anyscam, Anyscam, SCARS|GOFCH, GOFCH, SCARS|CHINA, SCARS|CDN, SCARS|UK, SCARS|LATINOAMERICA, SCARS|MEMBER, SCARS|VOLUNTEER, SCARS Cybercriminal Data Network, Cobalt Alert, Scam Victims Support Group, SCARS ANGELS, SCARS RANGERS, SCARS MARSHALLS, and SCARS PARTNERS are all trademarks of Society of Citizens Against Relationship Scams Inc., All Rights Reserved Worldwide

Contact the legal department for the Society of Citizens Against Relationship Scams Incorporated by email at legal@AgainstScams.org