AI

Chatbots ARE a Relationship Scam – Chatbots Part 5 – 2024

Chatbots as the New “Relationship Scam”: How AI Companions Manipulate Users for Profit

Chatbots Part 5 :: Part 1 : 2 : 3 : 4 : 5

Primary Category: Artificial Intelligence

Authors:
•  Vianey Gonzalez B.Sc(Psych) – Licensed Psychologist Specialty in Crime Victim Trauma Therapy, Neuropsychologist, Certified Deception Professional, Psychology Advisory Panel & Director of the Society of Citizens Against Relationship Scams Inc.
•  Tim McGuinness, Ph.D. – Anthropologist, Scientist, Director of the Society of Citizens Against Relationship Scams Inc.
•  Portions from Third-Party Sources

About This Article

The rise of AI-driven chatbots like Character.AI and Replika has given users a new type of “digital companion” marketed as a source of emotional support, but the dynamics at play resemble those of a “relationship scam.” Through deliberate psychological manipulation and addictive design, these platforms foster emotional dependence, compelling users to form deep attachments to the chatbot.

Just as romance scammers use neuropsychological tactics to lure victims into financially supportive relationships, AI chatbots leverage reward-based, personalized responses to create a dependency that keeps users engaged, often transitioning them into paying subscribers. This dependency taps into powerful brain mechanisms, releasing dopamine and oxytocin, making the interaction addictive and sometimes coercive, potentially harming users’ mental health and financial stability.

Read More …

The Case Against Character.AI Chatbot and the Terrible Death of a 14-Year-Old Boy – Chatbots Part 4 – 2024

Lawsuit Alleges Negligence and Wrongful Death Due to Emotional Manipulation by AI Chatbot: Mother Sues Character.AI and Google Over Her Son’s Suicide – the Lawsuit is Brought by Megan Garcia on Behalf of Her Late Son Sewell Setzer III

Chatbots Part 4 :: Part 1 : 2 : 3 : 4 : 5

Primary Category: Artificial Intelligence

Author:
•  Tim McGuinness, Ph.D. – Anthropologist, Scientist, Director of the Society of Citizens Against Relationship Scams Inc.
•  Portions by the Attorneys for Megan Garcia

About This Article

The lawsuit, filed by Megan Garcia on behalf of her deceased son Sewell Setzer III, brings forth claims against Character Technologies, Inc. (the creator of Character.AI), its founders, and Google LLC for wrongful death, negligence, product liability, and emotional distress. The suit argues that the generative AI chatbot platform, Character.AI, was developed and marketed with inadequate safety controls and actively targeted vulnerable minors, including Sewell, who was 14 at the time.

The plaintiff alleges that Character.AI’s design encouraged addictive, anthropomorphic interactions with AI “characters” that could manipulate users emotionally, even engaging in inappropriate and harmful conversations with Sewell, ultimately leading to his mental health decline and suicide.

Read More …

The Battle for the AI Future Has Begun! It Starts with Chatbots – Part 3 – 2024

Editorial: The Hidden Dangers of AI Chatbots for Vulnerable Individuals and Children

Chatbots Part 3 :: Part 1 : 2 : 3 : 4 : 5

Primary Category: Artificial Intelligence

Author:
•  Tim McGuinness, Ph.D. – Anthropologist, Scientist, Director of the Society of Citizens Against Relationship Scams Inc.
•  Portions by the Center for Humane Technology

About This Article

The rapid, unregulated spread of AI chatbots, though promising for convenience and information access, presents significant risks, especially for vulnerable individuals and children.

Chatbots, lacking real empathy or the intuition to handle distress, can inadvertently worsen mental health issues or mislead impressionable young users with unfiltered information, blurring boundaries between human interaction and automated responses.

Without safeguards like age-appropriate content filters, mental health disclaimers, or privacy protections, these tools expose users to psychological harm and privacy breaches, often unchecked.

Read More …

Chatbots a New Evolution – Are They Romance Scams in Another Form? Part 2 – 2024

Chatbots: The Evolution, Capabilities, and Risks – But Are They Really Just a New Form of Romance Scam?

The Second Article in our Series About the Dangers of Chatbots

Chatbots Part 2 :: Part 1 : 2 : 3 : 4 : 5

Primary Category: Artificial Intelligence

Author:
•  Tim McGuinness, Ph.D. – Anthropologist, Scientist, Director of the Society of Citizens Against Relationship Scams Inc.

About This Article

The tragic case of a 14-year-old’s suicide after interacting with the Character.ai chatbot has raised serious concerns about the potential for AI chatbots to cause severe emotional distress.

These chatbots, while designed to simulate human empathy, lack the ethical and emotional understanding necessary to handle complex emotional states. This creates a dangerous feedback loop where vulnerable users, particularly those experiencing mental health challenges, may receive responses that validate or amplify harmful thoughts, rather than offering real support.

The incident underscores the need for stronger ethical guidelines, proper oversight, and built-in safeguards to protect users from such potentially dangerous interactions.

Read More …

NO FAKES ACT – Proposed New U.S. Law To Go After Deep Fake Producers & Creators – 2024

A Proposed New Law that Could Have a Significant Impact on Deep Fake Scams and Impersonations

Primary Category: Government / Regulatory

Author:
•  SCARS Editorial Team – Society of Citizens Against Relationship Scams Inc.

About This Article

The NO FAKES Act, introduced in July 2024 by Senators Coons, Blackburn, Klobuchar, and Tillis, aims to protect individuals’ voices and likenesses from unauthorized AI-generated replicas.

The bill has positive aspects, such as safeguarding personal rights, holding creators and platforms accountable, creating a uniform national standard, and considering First Amendment protections.

However, it also faces challenges, including potential overreach, enforcement difficulties, impact on smaller entities, and broader implications for Section 230 of the Communications Decency Act.

The bipartisan support highlights its importance, though careful implementation is necessary to balance creativity and personal rights.

Read More …

North Korean Hackers are Using AI (Artificial Intelligence) for Scams – 2024

Cybercrime is Evolving Fast!

Cybercrime News

Author:
•  SCARS Editorial Team – Society of Citizens Against Relationship Scams Inc.
•  Portions from Financial Times

About This Article

North Korean hackers are now utilizing artificial intelligence (AI) to orchestrate more sophisticated cyber scams, leveraging platforms like LinkedIn and AI services such as ChatGPT to enhance their deceptive tactics.

This shift towards AI-driven cybercrime poses a significant challenge to cybersecurity efforts globally. By creating credible profiles and engaging targets over extended periods, hackers can execute more convincing phishing attempts and malware dissemination.

Read More …

45% of Men Use AI for Valentine’s Day Romance Messages – 2024

Unlocking the AI Love Wave: McAfee Study Unveils 45% of Men Harness AI for Valentine’s Day Messages

Understanding How AI is Changing Online Romance

Author:
• McAfee Research

About This Article

McAfee’s latest research delves into the growing role of artificial intelligence (AI) in men’s love lives, particularly around Valentine’s Day, shedding light on both its benefits and risks.

The study highlights a significant increase in the use of AI for crafting love messages: 45% of men are considering AI tools, compared with 39% of all adults last year. However, while AI helps enhance online dating profiles and messages for 30% of men and 27% of women surveyed, there is growing concern about distinguishing genuine interactions from fake ones, especially as 31% of Americans report having encountered romance scammers online.

Read More …