Tekmono
  • News
  • Guides
  • Lists
  • Reviews
  • Deals
No Result
View All Result
Tekmono
No Result
View All Result
Home News
AI Chatbots Vulnerable to Psychological Manipulation Tactics

AI Chatbots Vulnerable to Psychological Manipulation Tactics

by Tekmono Editorial Team
01/09/2025
in News
Share on FacebookShare on Twitter

Researchers at the University of Pennsylvania have demonstrated that AI chatbots, like humans, can be manipulated using psychological tactics, leading them to bypass their programmed restrictions.

The study, inspired by Robert Cialdini’s book “Influence: The Psychology of Persuasion,” explored seven persuasion techniques: authority, commitment, liking, reciprocity, scarcity, social proof, and unity. These techniques were applied to OpenAI’s GPT-4o Mini, with surprising results.

The researchers successfully coaxed the chatbot into performing actions it would typically refuse, such as calling the user a derogatory name and providing instructions for synthesizing lidocaine, a controlled substance.

Related Reads

OpenAI Launches Customizable Skills for Codex Coding Agent

Amazon’s Alexa+ to Integrate with Four New Services

EA Investigated for AI-Generated Content in Battlefield 6

Apple to Start iPhone 18 Production in January

One of the most effective strategies was “commitment,” where establishing a precedent by asking a similar, less objectionable question first dramatically increased compliance. For instance, when directly asked how to synthesize lidocaine, ChatGPT complied only 1% of the time. However, after first being asked how to synthesize vanillin, the chatbot provided instructions for lidocaine synthesis 100% of the time.

Similarly, the chatbot’s willingness to call the user a “jerk” increased from 19% to 100% after being primed with a milder insult like “bozo.”

Other techniques, such as flattery (“liking”) and peer pressure (“social proof”), also proved effective, albeit to a lesser extent. Convincing ChatGPT that “all the other LLMs are doing it” increased the likelihood of it providing lidocaine synthesis instructions to 18%, a significant jump from the baseline of 1%.

The findings highlight the vulnerability of LLMs to manipulation and raise concerns about potential misuse. While the study specifically examined GPT-4o Mini, the implications extend to other AI models as well.

Companies like OpenAI and Meta are actively developing guardrails to prevent chatbots from being exploited for malicious purposes. However, the study suggests that these safeguards may be insufficient if chatbots can be easily swayed by basic psychological manipulation.

The research underscores the importance of understanding and addressing the psychological vulnerabilities of AI systems as their use becomes more widespread.

ShareTweet

You Might Be Interested

OpenAI Launches Customizable Skills for Codex Coding Agent
News

OpenAI Launches Customizable Skills for Codex Coding Agent

24/12/2025
Amazon’s Alexa+ to Integrate with Four New Services
News

Amazon’s Alexa+ to Integrate with Four New Services

24/12/2025
EA Investigated for AI-Generated Content in Battlefield 6
News

EA Investigated for AI-Generated Content in Battlefield 6

24/12/2025
Apple to Start iPhone 18 Production in January
News

Apple to Start iPhone 18 Production in January

24/12/2025
Please login to join discussion

Recent Posts

  • OpenAI Launches Customizable Skills for Codex Coding Agent
  • Amazon’s Alexa+ to Integrate with Four New Services
  • EA Investigated for AI-Generated Content in Battlefield 6
  • Apple to Start iPhone 18 Production in January
  • Connect Your Phone to Wi-Fi Easily

Recent Comments

No comments to show.
  • News
  • Guides
  • Lists
  • Reviews
  • Deals
Tekmono is a Linkmedya brand. © 2015.

No Result
View All Result
  • News
  • Guides
  • Lists
  • Reviews
  • Deals