Research paper https://arxiv.org/abs/2307.11760
Large Language Models "Understand" and Can Be Enhanced by Emotional Stimuli
Yeah, sure. They "understand" that emotionally salient phrases in their training data are associated with a different kind of candor, but I don't find their experiments a compelling demonstration that emotional reasoning has emerged in LLMs.
Are people not already exploiting emotional language to get around guard rails?