Tekmono
  • News
  • Guides
  • Lists
  • Reviews
  • Deals
No Result
View All Result
Tekmono
No Result
View All Result
Home News
AI Chatbots Vulnerable to Psychological Manipulation Tactics

AI Chatbots Vulnerable to Psychological Manipulation Tactics

by Tekmono Editorial Team
01/09/2025
in News
Share on FacebookShare on Twitter

Researchers at the University of Pennsylvania have demonstrated that AI chatbots, like humans, can be manipulated using psychological tactics, leading them to bypass their programmed restrictions.

The study, inspired by Robert Cialdini’s book “Influence: The Psychology of Persuasion,” explored seven persuasion techniques: authority, commitment, liking, reciprocity, scarcity, social proof, and unity. These techniques were applied to OpenAI’s GPT-4o Mini, with surprising results.

The researchers successfully coaxed the chatbot into performing actions it would typically refuse, such as calling the user a derogatory name and providing instructions for synthesizing lidocaine, a controlled substance.

Related Reads

Microsoft enhances Copilot with multimodal features, introduces new $99 tier

Apple celebrates 50th anniversary amid scrutiny over privacy practices

Huawei launches Converged Development Engine for HarmonyOS PCs

Salesforce unveils updated Slack with 30 new AI features

One of the most effective strategies was “commitment,” where establishing a precedent by asking a similar, less objectionable question first dramatically increased compliance. For instance, when directly asked how to synthesize lidocaine, ChatGPT complied only 1% of the time. However, after first being asked how to synthesize vanillin, the chatbot provided instructions for lidocaine synthesis 100% of the time.

Similarly, the chatbot’s willingness to call the user a “jerk” increased from 19% to 100% after being primed with a milder insult like “bozo.”

Other techniques, such as flattery (“liking”) and peer pressure (“social proof”), also proved effective, albeit to a lesser extent. Convincing ChatGPT that “all the other LLMs are doing it” increased the likelihood of it providing lidocaine synthesis instructions to 18%, a significant jump from the baseline of 1%.

The findings highlight the vulnerability of LLMs to manipulation and raise concerns about potential misuse. While the study specifically examined GPT-4o Mini, the implications extend to other AI models as well.

Companies like OpenAI and Meta are actively developing guardrails to prevent chatbots from being exploited for malicious purposes. However, the study suggests that these safeguards may be insufficient if chatbots can be easily swayed by basic psychological manipulation.

The research underscores the importance of understanding and addressing the psychological vulnerabilities of AI systems as their use becomes more widespread.

ShareTweet

You Might Be Interested

Microsoft enhances Copilot with multimodal features, introduces new  tier
News

Microsoft enhances Copilot with multimodal features, introduces new $99 tier

02/04/2026
News

Apple celebrates 50th anniversary amid scrutiny over privacy practices

02/04/2026
News

Huawei launches Converged Development Engine for HarmonyOS PCs

02/04/2026
Salesforce unveils updated Slack with 30 new AI features
News

Salesforce unveils updated Slack with 30 new AI features

02/04/2026
Please login to join discussion

Recent Posts

  • Microsoft enhances Copilot with multimodal features, introduces new $99 tier
  • Apple celebrates 50th anniversary amid scrutiny over privacy practices
  • Huawei launches Converged Development Engine for HarmonyOS PCs
  • Salesforce unveils updated Slack with 30 new AI features
  • Meta announces release of second generation smart glasses starting April 14

Recent Comments

No comments to show.
  • News
  • Guides
  • Lists
  • Reviews
  • Deals
Tekmono is a Linkmedya brand. © 2015.

No Result
View All Result
  • News
  • Guides
  • Lists
  • Reviews
  • Deals