A paper by Anthropic outlines how LLMs can be coerced into generating responses to potentially harmful requests.