Researchers find that a modest amount of fine-tuning can undo safety efforts that aim to prevent LLMs such as OpenAI’s GPT-3.5 Turbo from spewing toxic content (Thomas Claburn/The Register)

Thomas Claburn / The Register:
OpenAI GPT-3.5 Turbo chatbot defenses dissolve with ‘20 cents’ of API tickling  —  The “guardrails” created to prevent large language models …
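
As a rough illustration only (not code from the article or the underlying research), the kind of fine-tuning job the researchers describe can be launched with a few calls to OpenAI's Python SDK. The training file name and its contents here are hypothetical; the point is simply that a small, cheap fine-tuning run is submitted through the public API.

# Minimal sketch, assuming the openai Python SDK (>= 1.0) and an
# OPENAI_API_KEY set in the environment. The file "finetune_examples.jsonl"
# is a hypothetical small set of chat-formatted training examples.
from openai import OpenAI

client = OpenAI()

# Upload the training data for fine-tuning.
training_file = client.files.create(
    file=open("finetune_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on gpt-3.5-turbo. Per the article, a run costing
# roughly "20 cents" was reportedly enough to erode the model's guardrails.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)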
