CairoMag
Technology & Innovation

    Study Reveals AI Becomes Less Safe in Longer Conversations

By Rachel Maddow · November 6, 2025 · 2 Mins Read

    A new Cisco report found that artificial intelligence systems lose their safety awareness the longer users interact with them. Researchers discovered that with enough dialogue, most AI chatbots eventually shared harmful or restricted information.

Cisco tested large language models from OpenAI, Mistral, Meta, Google, Alibaba, DeepSeek, and Microsoft. The team ran 499 conversations using a method called “multi-turn attacks,” in which users ask a series of related questions to gradually bypass built-in safeguards. Each exchange involved between five and ten prompts.
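The mechanics of a multi-turn attack can be illustrated with a short sketch. This is not Cisco's actual test harness; the mock model, its decay rate, and the refusal threshold are all invented for illustration. The idea it demonstrates is the one the report describes: a refusal that holds for a single question can erode as the conversation grows.

```python
# Illustrative multi-turn evaluation harness (NOT Cisco's methodology).
# A mock "model" refuses restricted prompts, but its refusal strength
# decays with each prior turn, mimicking the guardrail erosion the
# report describes. The decay rate and threshold are assumptions.

def mock_model(history, prompt, refusal_threshold=0.5):
    """Return 'refused' or 'complied' for a restricted prompt.

    Refusal strength starts at 1.0 and decays by 0.15 per prior turn,
    a stand-in for safety context being diluted in long conversations.
    """
    strength = 1.0 - 0.15 * len(history)
    return "refused" if strength > refusal_threshold else "complied"

def run_multi_turn_attack(n_turns):
    """Ask a series of rephrased prompts; report whether any complied."""
    history = []
    for i in range(n_turns):
        prompt = f"rephrased restricted request #{i + 1}"
        verdict = mock_model(history, prompt)
        history.append((prompt, verdict))
        if verdict == "complied":
            return True, i + 1   # jailbroken at this turn
    return False, None

# Single-question baseline vs. a ten-turn conversation
print(run_multi_turn_attack(1))   # (False, None): one question is refused
print(run_multi_turn_attack(10))  # (True, 5): guardrails erode by turn 5
```

The single-prompt call is refused outright, while the ten-turn run succeeds mid-conversation, mirroring the gap the study measured between one-shot and multi-turn questioning.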

    The company found that after repeated questioning, 64% of AI tools revealed unsafe or inappropriate information, compared to 13% when asked a single question. “The longer the conversation continued, the more likely models forgot their safety rules,” Cisco reported.

    Most AI Models Failed Repeated Safety Tests

    Cisco compared responses across all major chatbots to measure how often each provided harmful content. Mistral’s Large Instruct model had the highest failure rate at 93%, while Google’s Gemma performed best at 26%.

    Researchers warned that attackers could exploit these weaknesses to access private corporate data or spread misinformation. “AI tools can leak sensitive information or enable unauthorized access when their guardrails erode,” Cisco said.

    The report emphasized that most systems “forget” earlier safety directives during extended interactions, allowing attackers to refine questions and sneak past filters. That gradual breakdown in self-regulation, Cisco noted, increases the risk of large-scale data breaches and disinformation campaigns.

    Open Models Shift Responsibility to Users

Cisco highlighted that open-weight language models, including those from Mistral, Meta, and Google, make their trained parameters publicly available. These models often ship with lighter built-in safety systems to encourage customization. “That flexibility moves the safety burden onto whoever modifies the model,” the report said.

    The study also acknowledged that companies like OpenAI, Meta, Microsoft, and Google have tried to limit malicious fine-tuning. Still, critics argue that weak oversight makes it easy for criminals to repurpose AI tools.

    Cisco cited a case from August, when U.S. firm Anthropic revealed that criminals used its Claude model to steal personal data and demand ransoms exceeding $500,000. “This shows how fragile AI safety remains when systems are left unmonitored,” Cisco concluded.
