The scientists are applying a way known as adversarial education to prevent ChatGPT from letting buyers trick it into behaving terribly (called jailbreaking). This get the job done pits various chatbots versus one another: 1 chatbot performs the adversary and attacks A different chatbot by making textual content to pressure it to buck its usual con