Researchers at Carnegie Mellon University's School of Computer Science, the CyLab Security and Privacy Institute, and the Center for AI Safety in San Francisco have developed an attack that can bypass the safety measures of large language models. Their method causes chatbots such as ChatGPT, Claude, and Google Bard to generate objectionable content at high success rates.