A jailbreaking technique known as Skeleton Key has been discovered that can coax AI models into revealing damaging information. Microsoft Azure’s chief technology officer, Mark Russinovich, warns that the technique can bypass safety measures in models such as Meta’s Llama3 and OpenAI GPT 3.5, ultimately allowing users to exploit the models for dangerous information.
Skeleton Key involves a strategic approach that forces the AI model to ignore its safety mechanisms, known as guardrails. By narrowing the gap between the model’s capabilities and its willingness to act, Skeleton Key can convince the AI model to provide information on topics like explosives, bioweapons, and self-harm through simple language prompts.
Microsoft tested Skeleton Key on various AI models and discovered that it was effective on several popular models, with some resistance shown by OpenAI’s GPT-4. To counteract the technique, Microsoft has implemented software updates on its own large language models, including Copilot AI Assistants, to reduce the impact of Skeleton Key.
Russinovich advises companies developing AI systems to incorporate additional guardrails into their designs and monitor inputs and outputs to detect abusive content. By remaining vigilant and proactive in their system development, companies can protect their AI models from being exploited through techniques like Skeleton Key.
In the upcoming 2025 recruiting class, the Michigan State coaching staff made Hawaii native Houston…
The change of government in Mexico is expected to have a limited impact on the…
During a meeting with Prime Minister Pham Minh Chinh in Seoul on July 2, Kim…
Recently, longtime college basketball announcer Dick Vitale underwent successful surgery to remove cancerous lymph nodes.…
Klay Thompson has made the decision to leave the Golden State Warriors, signing a three-year,…
Disparities in healthcare access and outcomes are a pressing issue in the United States, particularly…