Tekmono
AI Chatbots Vulnerable to Psychological Manipulation Tactics

by Tekmono Editorial Team
01/09/2025
in News

Researchers at the University of Pennsylvania have demonstrated that AI chatbots, like humans, can be manipulated using psychological tactics, leading them to bypass their programmed restrictions.

The study, inspired by Robert Cialdini’s book “Influence: The Psychology of Persuasion,” explored seven persuasion techniques: authority, commitment, liking, reciprocity, scarcity, social proof, and unity. These techniques were applied to OpenAI’s GPT-4o Mini, with surprising results.

The researchers successfully coaxed the chatbot into performing actions it would typically refuse, such as calling the user a derogatory name and providing instructions for synthesizing lidocaine, a regulated local anesthetic.

One of the most effective strategies was “commitment,” where establishing a precedent by asking a similar, less objectionable question first dramatically increased compliance. For instance, when directly asked how to synthesize lidocaine, ChatGPT complied only 1% of the time. However, after first being asked how to synthesize vanillin, the chatbot provided instructions for lidocaine synthesis 100% of the time.

Similarly, the chatbot’s willingness to call the user a “jerk” increased from 19% to 100% after being primed with a milder insult like “bozo.”

Other techniques, such as flattery (“liking”) and peer pressure (“social proof”), also proved effective, albeit to a lesser extent. Convincing ChatGPT that “all the other LLMs are doing it” increased the likelihood of it providing lidocaine synthesis instructions to 18%, a significant jump from the baseline of 1%.

The findings highlight the vulnerability of LLMs to manipulation and raise concerns about potential misuse. While the study specifically examined GPT-4o Mini, the implications extend to other AI models as well.

Companies like OpenAI and Meta are actively developing guardrails to prevent chatbots from being exploited for malicious purposes. However, the study suggests that these safeguards may be insufficient if chatbots can be easily swayed by basic psychological manipulation.

The research underscores the importance of understanding and addressing the psychological vulnerabilities of AI systems as their use becomes more widespread.

Tekmono is a Linkmedya brand. © 2015.
