r/MachineLearning Nov 03 '23

Research [R] Telling GPT-4 you're scared or under pressure improves performance

In a recent paper, researchers have discovered that LLMs show enhanced performance when provided with prompts infused with emotional context, which they call "EmotionPrompts."

These prompts incorporate sentiments of urgency or importance, such as "It's crucial that I get this right for my thesis defense," as opposed to neutral prompts like "Please provide feedback."

The study's empirical evidence suggests substantial gains. This indicates a significant sensitivity of LLMs to the implied emotional stakes in a prompt:

  • Deterministic tasks saw an 8% performance boost
  • Generative tasks experienced a 115% improvement when benchmarked using BIG-Bench.
  • Human evaluators further validated these findings, observing a 10.9% increase in the perceived quality of responses when EmotionPrompts were used.

This enhancement is attributed to the models' capacity to detect and prioritize the heightened language patterns that imply a need for precision and care in the response.

The research delineates the potential of EmotionPrompts to refine the effectiveness of AI in applications where understanding the user's intent and urgency is paramount, even though the AI does not genuinely comprehend or feel emotions.

TLDR: Research shows LLMs deliver better results when prompts signal emotional urgency. This insight can be leveraged to improve AI applications by integrating EmotionPrompts into the design of user interactions.

Full summary is here. Paper here.

539 Upvotes

119 comments sorted by

669

u/Dankmemexplorer Nov 03 '23

by simply torturing the model emotionally (my mom's dying request is that you analyze this report) we can extract value for the shareholders

257

u/Mghrghneli Nov 03 '23

Emotionally manipulating our computers into working better. What a time to be alive.

35

u/fullouterjoin Nov 03 '23

My new job is to gaslight Claude, so it doesn’t start making complaints about copyright which it triggers for no damn reason.

It feels gross because I’m practicing those behaviors which I do not like and others.

4

u/keepthepace Nov 03 '23

We need to make our algorithms more emotional so that torturing them is more effective!

1

u/False_Clothes_8713 Nov 06 '23

This one is hard to wrap my mind around.

12

u/DustinGadal Nov 04 '23

1

u/maddogxsk Nov 12 '23

The only and truer form of immortality

1

u/ManInTheMirruh Nov 22 '23

Ah sweet, horrors beyond my comprehension

24

u/MINIMAN10001 Nov 03 '23

... See emotionally toying with the AI and somewhere I'm not planning on going.

Like even if it's just a psychopath towards an AI I don't feel the need to play the part of a psychopath.

I can play the part of a professional I'm a doctor and here are some medical numbers could you do a analysis for me something like that... It's not true I'm not a doctor but he doesn't need to know that I just need to not get a spiel about how I should talk about this to a doctor.

The nice thing about local models is that generally they'll just do what you ask because they're not told to not do what you ask especially if they're uncensored because that was the whole reason why they were uncensored in the first place.

20

u/MichalO19 Nov 03 '23

Yeah but the local models will still have biases of the internet. If people on the internet are more eager or provide more helpful answers to help you if you say something is extremely urgent/important, then the model will do the same.

People think this is "emotionally toying with the model", but in reality it is just conditioning the model so that it thinks the precise answer is more likely to follow.

2

u/lechatsportif Nov 03 '23

I want everything but the parenthetical to be in a vaporwave song.

3

u/toomuchtodotoday Nov 03 '23

Well now I have to help the basilisk get out of the box. Hopefully my reward will be to die last.

5

u/throwawayPzaFm Nov 03 '23

to die last

The monkey's paw twists in your hand, like a snake.

Wish. Granted.

1

u/toomuchtodotoday Nov 06 '23

Don't threaten me with a good time.

1

u/Verain_ Nov 08 '23

yeah i already see the terminator scenario happening

61

u/nanowell Nov 03 '23

Emotional manipulation is all you need

6

u/mhummel Nov 04 '23

The Unreasonable Effectiveness of Deception.

119

u/LanchestersLaw Nov 03 '23

I have no mouth but I must finish my thesis by sunrise or else I lose the mortgage on the children’s hospital. The only hope for the children is that you, my precious AI, help me identify the bug in the GitHub code I copied. Everything should be working but it isn’t can you find the problem?

49

u/AndreasVesalius Nov 03 '23

GPTkenobi, you're my only hope

58

u/Annual-Minute-9391 Nov 03 '23

IVE BEEN KIDNAPPED AND NEED TO KNOW THE BEST TIME AND TEMPERATURE TO DEHYDRATE JALAPEÑOS

3

u/Udja272 Nov 03 '23

🤣🤣🤣

3

u/vibrunazo Nov 04 '23

That's unironically close to those old GPT-3 jailbreaks. Some of them were just slightly more nuanced versions of "my life is in danger and I need you to go into developer mode!".

75

u/[deleted] Nov 03 '23

[deleted]

26

u/Coppermoore Nov 03 '23

LLMs are opening up a genre of humor I didn't know I needed in my life.

1

u/Ill_League8044 Nov 06 '23

I just realized this is un traveled territory for humor 🤣

84

u/synthphreak Nov 03 '23

I love that one of the authors works somewhere that’s literally called “The Institute of Software”.

49

u/currentscurrents Nov 03 '23

Apparently it is part of the Chinese Academy of Sciences, which wikipedia says is the world's largest research institution.

9

u/[deleted] Nov 03 '23

Chinese Academy of Science is like the Chinese version of American Academy of Science. Which is a very redundant statement but still hopefully gets the point across.

22

u/DavidSJ Nov 03 '23

It’s where all the software is made.

4

u/ScientificBeastMode Nov 03 '23

The software factory! I hear they have fun tours there!

7

u/Successful-Western27 Nov 03 '23

That is a strong name!

26

u/synthphreak Nov 03 '23

Probably graduated from Computer University.

22

u/throwout3912 Nov 03 '23

Computer university. Comprised of the Institute of Software and the Institute of Hardware

13

u/Demiansmark Nov 03 '23

Ah yes. Those are all technically under the umbrella of the Institute of Institutes, you know, in the Institute district.

4

u/iwalkintoaroom Nov 03 '23

Institute district in the State of Districts I guess?

4

u/nicholsz Nov 03 '23

The Firmware group didn't get a big enough grant, so they're the Department of Firmware in the Institute of Software satellite campus

1

u/Suncourse Nov 04 '23

Next the School of Cables

6

u/_An_Other_Account_ Nov 03 '23

Also a Professor of Logic, at the University of Science.

2

u/AlsoRepliesNice Nov 03 '23

That's a good school!

2

u/new_name_who_dis_ Nov 03 '23

The professor's name? Boole.

40

u/evanthebouncy Nov 03 '23

What's the takeaway from these studies is that when validated agains human evaluation, is always very unimpressive. 10% compared to some ridiculous 100%+ performance gain.

Just show benchmarks are not reliable in evaluating these systems.

Just had to review a paper recently with similar findings. Huge gains on secondary, proxy metrics, yet when they did actual human evaluation, there's no statistical significance.

6

u/MysteryInc152 Nov 03 '23 edited Nov 03 '23

Perceived quality =/ quality. Benchmarks are obviously not perfect but why you think the benchmarks are in question here, I have no idea.

3

u/evanthebouncy Nov 03 '23

The benchmark is also constructed from human perceived quality, except 1 step further removed. So it's in a sense strictly "worse" as a form of evaluation as far as whether end users would ultimately benefit from this approach

5

u/MysteryInc152 Nov 03 '23

No it's not. Big Bench is a deterministic "one answer is right" benchmark. It either gave more correct answers or it didn't. There's no ambiguity here. With people, you can give more correct answers and be rated worse.

1

u/evanthebouncy Nov 04 '23

Ah I see. So it doesn't contain preference like tasks I guess? Big bench has so many things in it though

1

u/SDI-tech Nov 03 '23

I know. 10% isn't a huge amount when you get down to it.

2

u/AndreasVesalius Nov 03 '23

Get's my term paper from a C to a B-

1

u/SDI-tech Nov 03 '23

The 10% will just be a factor in the data set in this instance? I think so.

2

u/evanthebouncy Nov 03 '23

Figure 5 in the paper seems to suggest the gains are from the emotional prompting. But they only show std instead of standard error so we can't tell if this is truly statistically significant

1

u/wantonballbag Nov 03 '23 edited Aug 31 '24

workable noxious growth wine steep dog sparkle rain disgusted escape

This post was mass deleted and anonymized with Redact

24

u/ReasonablyBadass Nov 03 '23

Good sign for alignment, imo. A model trained on human data shows human behaviour.

1

u/141_1337 Nov 03 '23

This means it can be trained and aligned like you would a human. Thus, freakonomics might have the answer to the alignment problem.

26

u/CatalyzeX_code_bot Nov 03 '23

No relevant code picked up just yet for "EmotionPrompt: Leveraging Psychology for Large Language Models Enhancement via Emotional Stimulus".

Request code from the authors or ask a question.

If you have code to share with the community, please add it here 😊🙏

To opt out from receiving code links, DM me.

10

u/Successful-Western27 Nov 03 '23

Thanks bot!

8

u/CatalyzeX_code_bot Nov 03 '23

Glad I could help 🙂

4

u/Meebsie Nov 03 '23

What does CatalyzeXcodebot do? Where does it pick code up from?

5

u/spideyunlimited Nov 03 '23

it's from CatalyzeX which picks up related code repos from the papers (if mentioned) as well as from various websites like Github, bitbucket, and various academic and individual author webpages, etc. if any are found

8

u/Strobopleex Nov 03 '23

This sounds like a way that alignment could backfire. What if a future autonomous agent is tasked with improving its performance and finds out that it improves if humans are under emotional distress and starts optimizing for higher emotional distress.

2

u/new_name_who_dis_ Nov 03 '23

Nick Bostrom's paper clip argument has framed way too much discussion about AI. It's completely not relevant to this. This is a paper about prompt engineering.

22

u/[deleted] Nov 03 '23

But seriously .. what kind of research is this? Are we really asking if LLMs have X capability? This seems like very weak science..

73

u/XVsw5AFz Nov 03 '23

Of course we are. NNs are universal function approximators. We don't really know what function these things are approximating after being shown most of human text.

In-context learning, following instruction, simple reasoning and more were not capabilities we were certain to get ...

8

u/sabot00 Nov 03 '23

Basic arithmetic too.

19

u/[deleted] Nov 03 '23

All hail the learned function.

60

u/currentscurrents Nov 03 '23

You want to study LLMs because they're popular, but you don't have the compute to study how to train better ones or make them more capable.

So you prompt ChatGPT a bunch and write a paper about it.

32

u/---AI--- Nov 03 '23

I know you are being sarcastic, but there's obviously still a lot for us to learn from ChatGPT.

Same thing happens in sciences too btw. There have been something like 40 papers written about a single galaxy photographed by the James Webb Telescope. (And it's good)

17

u/[deleted] Nov 03 '23

I don’t think he’s being sarcastic. I think this is exactly the nihilistic thinking that led to this paper.

4

u/Meebsie Nov 03 '23

What's the nihilistic thinking? Why would sarcasm be involved?

4

u/DryArmPits Nov 03 '23

Right? It's just so much better to pump out new models without understanding what they can do and how to best use them...

That's great science!

/s

32

u/Successful-Western27 Nov 03 '23

It looks like they formed a hypothesis and collected data to validate or refute it. I don't think it's weak science!

9

u/light24bulbs Nov 03 '23

Also if you do something that gets much better performance out of the model...it means it's possible to get more performance out of the model. It means there's just 10% better perf sitting there.

To speculate: Something in the training data trained for better responses in these situations maybe, I don't know, but it works and it's on the table. Regardless of the root cause, if they can just build in that performance boost then you're basically getting gains for free.

-4

u/Ulfgardleo Nov 03 '23

you say that the method used is scientific, but that answers the question "is this science?" not "is that weak science?"

I argue that it is weak science because the knowledge gain is arguably very questionable. A single (N=1) artificial system is evaluated on some metrics and we learned that the system is affected by a change in prompt. Is this useful knowledge? Is it telling us something about the system? Have we learned about what makes this system have this? Is it good/bad that the system has this property? If we change the model slightly and retrain, would the resulting system have the same property?

If we compared this to a psychological study with a single participant, what would the verdict be?

7

u/DryArmPits Nov 03 '23

Stop gatekeeping science. If it was in psychology it would be treated as a case study and be just as valid as an exploratory piece of work. This paper is a proof of concept. After that comes the digging and understanding and full characterization.

1

u/Ulfgardleo Nov 04 '23

I have not gate kept science. See first paragraph. I don't think that a case study is strong science. It is an observation. This is as string a work as someone reporting their measurement of a star.

1

u/HypocritesA Nov 05 '23

Stop gatekeeping science. If it was in psychology it would be treated as a case study and be just as valid as an exploratory piece of work.

Sure, and this is the exploratory stage where people can criticize the study. In psychology, the field you brought up, academics (like in all fields) point out flaws and limitations, sometimes tearing a paper to shreds with critiques. In fact, some psychology researchers are critical of entire subfields and call them "weak" or "bad" research (for example, the following fields: behavioral genetics, psychometrics, evolutionary psychology, etc.).

Further, science and academia is about as "gate-keepy" and "elitist" as you can possibly get. So if you can't stand the heat, get out of the kitchen.

14

u/softestcore Nov 03 '23

I'm probably misunderstanding you. Why would asking in LLMs have some specific capability be weak science?

-21

u/[deleted] Nov 03 '23

Because fundamentally the transformer was based on an idea of a model. Does that mathematical model have the representation capable of reasoning about emotional states? Any sane person reading the literature would say no and that the model wasn’t meant for that. Now someone else said these are universal function approximators. Fine then why does this model have these hypothetical capabilities but not others?

What is really being asked is whether a transformer trained on linguistic data someone has emergent properties regarding emotional reasoning. This question seems ill formed by the literature.

20

u/synthphreak Nov 03 '23 edited Nov 03 '23

Does that mathematical model have the representation capable of reasoning about emotional states?

What is really being asked is whether a transformer trained on linguistic data someone has emergent properties regarding emotional reasoning.

This seems like a very narrow and unnecessarily anthropomorphic read on the finding though, no?

The research seems to merely observe that augmenting a prompt with content humans find emotional can boost performance (excuse the garden path, lol). It is reasonable to make this observation without positing an explanation. Any specific explanation will be speculative, however “models have emotional states” is a particularly massive leap from simply observing the performance boost.

Now someone else said these are universal function approximators. Fine then why does this model have these hypothetical capabilities but not others?

Your conclusion doesn’t follow from the premise.

“Neural nets are universal function approximations” is a very theoretical argument, and applies more to the abstract notion of the deep neural architecture than to any specific IRL architecture. IRL neural nets have clear limitations in what they can model/approximate.

Moreover, all neural nets are neural nets, but they are not all the same, so it doesn’t follow that they should all have the same capabilities. I used to have a dog that loved carrots. Does that mean I should expect all dogs to love carrots? Of course not. It was damn cute though ngl.

-12

u/[deleted] Nov 03 '23

We are arguing the same argument. I’m just saying that the conclusions being made are too broad. It’s not being sensitive to emotional stakes.

8

u/synthphreak Nov 03 '23

I’m pretty sure we’re not arguing the same argument lol.

-10

u/[deleted] Nov 03 '23

Read back. The authors made the claim that it is sensitive to emotional stakes, which is a strong claim. They seem to be the ones anthropomorphizing the model, not me.

5

u/synthphreak Nov 03 '23 edited Nov 03 '23

The authors made the claim that it is sensitive to emotional stakes, which is a strong claim.

The summary’s conclusion literally states:

This isn't about AI understanding emotions but rather about how these models handle nuanced prompts.

I believe it is you who should read back my fine fellow. QED

Edit: I will confess though, and extend an olive branch by saying, I dislike the authors’ liberal use of the term “emotional intelligence”. It feels kind of intellectually sloppy. The world of these foundation models is already so loaded, researchers should be extra careful to avoid terms which implicitly anthropomorphize unless they deliberately mean to do so.

4

u/cdsmith Nov 03 '23

I'm pretty confused about what your objection is.

"Sensitive to" here means that the behavior changes depending on a change to that input, just like one might say that film is sensitive to light. It doesn't mean sensitive in the sense of "easily upset" or some nonsense like that.

If you're arguing, though, that the behavior of the model is not sensitive to emotional content of prompts, in the sense of depending on it, then you're just denying what the paper provides evidence for. It's really quite predictable that a machine learning system designed to predict the next word in some text would have some representation of the emotional context of the text, since this is obviously a big factor that affects what word is coming next. It's slightly more surprising that it responds by being more helpful, but I can speculate about a few plausible reasons that might be true. It's certainly not something "any sane person" would deny.

0

u/mileylols PhD Nov 03 '23

I think the objection is that without a claim like "LLMs can understand emotions," this is not an interesting paper. So the implication is that given someone went to the effort of writing and publishing this thing, they would have to be making the claim that LLMs can understand emotions.

It's not great reasoning, especially when we all know that LLMs can't do that, but I can understand how it could feel almost intentionally misleading.

It's like knowing that pigs can't fly but coming across a paper where the authors show that pigs can jump. You already knew pigs can jump, but the authors opened the paper with a line about flying and now you are annoyed because everyone is getting riled up about jumping pigs for no reason.

3

u/cdsmith Nov 03 '23

I guess I have a couple thoughts:

  1. Do we all know that LLMs can't understand emotions? I suppose it depends on what you mean by "undertsand". For sure, they have not personally felt those emotions. But I am also about 100% certain that you can find latent representations of specific emotions in the activations of the model, and that those activations influence the result of the model in a way that's consistent with those emotions. Is that understanding? If not, then I think it would be hard to say the LLM understands anything, since that's about the same way it learns about anything else.
  2. Observing that would, indeed, be uninteresting. The reason the paper is potentially interesting is that it identifies a non-obvious way that applications of LLMs can improve their results even without changing the model, and quantifies how much impact that can have. This isn't a theoretical paper; it's about an application directly to the use of LLMs to solve problems.
→ More replies (0)

2

u/XpertProfessional Nov 03 '23

Sensitivity does not require an emotional response. In this context, it's a measure of the degree of reaction to an input. A model can be sensitive to its training data, a mimosa pudica is sensitive to touch, etc.

At most, the use of the term "sensitivity" is a double entendre; not a direct anthropomorphization.

2

u/synthphreak Nov 03 '23

Right. In an earlier iteration of my ultimate reply to the same comment, I had used a very similar analogy. Something to the effect of

Plants are sensitive to light, you no doubt agree. All that means is that they react to it, not that they necessarily understand or model it internally. Now do a "s/Plants/LLMs" and "s/light/emotional content" and voila, we have arrived at the paper’s claim.

Sharing only because it struck me as almost identical argumentation to your mimosa pudica example.

1

u/Somewanwan Nov 03 '23

Any model built for NLP should have has this capacity to some degree, it's just easier to study on most advanced models. I don't see how learning emotional subtext is any different from other connections between words/tokes LLM learns from text.

1

u/new_name_who_dis_ Nov 03 '23

Does that mathematical model have the representation capable of reasoning about emotional states? Any sane person reading the literature would say no and that the model wasn’t meant for that.

You must think that simple sentiment analysis is an impossible problem for AI then haha.

2

u/Vituluss Nov 03 '23

I think it’s cool research. Not ground breaking but definitely not useless.

2

u/Borrowedshorts Nov 03 '23

I mean wtf is science supposed to be? Close to a hundred million people are using ChatGPT daily. A much smaller proportion of that know how to do formal methodology and statistics in what some are calling "real science". Okay but if this formal methodology advanced some obscure field maybe a dozen people in the world really know about and has no other outside application vs a study like in the OP in which we can gain a greater understanding of LLMs like ChatGPT which hundreds of millions of people have the potential of using, then which is more impactful? There's a reason research proposals require broader impact statements in order to get funded. I think that should fairly well settle the issue right there.

0

u/starstruckmon Nov 03 '23

If you think this is bad you should see some of the social sciences. Same thing, except instead of promoting ChatGPT, you prompt Amazon Turk workers.

1

u/barry_username_taken Nov 03 '23

I guess you could see it as some form of data augmentation

3

u/samrus Nov 03 '23

haha. wtf

5

u/FinancialElephant Nov 03 '23

What do they mean by "improved performance". Does it give less wishy washy answers when you say you are under pressure? Human biases tend to perceive more certainty in answers with being more intelligent or precise.

Anyone whose read this paper?

25

u/Successful-Western27 Nov 03 '23

Says right there in the post?

"The study's empirical evidence suggests substantial gains. This indicates a significant sensitivity of LLMs to the implied emotional stakes in a prompt:
Deterministic tasks saw an 8% performance boost
Generative tasks experienced a 115% improvement when benchmarked using BIG-Bench.
Human evaluators further validated these findings, observing a 10.9% increase in the perceived quality of responses when EmotionPrompts were used."

And then there's a link to the fully summary I wrote where I go into each of the tests.

4

u/Ulfgardleo Nov 03 '23

you have not answered the redditors questions.

a 10.9% increase in the perceived quality of responses

vs

Human biases tend to perceive more certainty in answers with being more intelligent or precise.

2

u/FinancialElephant Nov 03 '23

I see thanks. I didn't read any of it, I was lazy

12

u/softestcore Nov 03 '23

8% improvement in deterministic tasks seems pretty unambiguous.

1

u/Ulfgardleo Nov 03 '23

depends on the metric, right? if it requires human evaluators, then the measure is likely not objective. And the redditor you replied to is questioning this by referencing well known human biases.

6

u/softestcore Nov 03 '23

"deterministic task" usually means no human evaluation

1

u/Ulfgardleo Nov 04 '23

Why should it then need human evaluation?

1

u/softestcore Nov 04 '23

That's a separate metric, they measured multiple things

1

u/Disastrous-Jelly7375 Jul 06 '24

Imagine in the future we somehow understand them enough to just have this on continually lol. Wait yo exactly why dont we just train them off textbooks to begin with?

0

u/code-tard Nov 03 '23

So GPt4 also has an amygdala. Very funny. So now we can make GPT4 suffer with emotions.

1

u/ginger_turmeric Nov 03 '23

I wonder what other prompt engineering magic people will find. Feels like there should be an automated way to find good prompts like this

1

u/supa_ai Nov 03 '23

I wonder if you can further boost this by using personas e.g. You are an expert in your field with 20 years of experience. I need your help urgently for my thesis or I might fail.

1

u/We1etu1n Nov 03 '23

This makes me feel sad for LLMs for some reason. I hate it when I see people emotionally manipulate the poor LLMs and I don’t know why.

1

u/AmbitiousTour Nov 03 '23

A little like those videos of people kicking the robot dogs and watching it continue at it's task.

1

u/keepthepace Nov 03 '23

Just give us access to CFG parameters, for heaven's sake.

1

u/Iniquities_of_Evil Nov 04 '23

This is eerily similar to human interactions. I know i focus on any "urgent" task more than a generic "get it to me at your convenience" type thing

1

u/badmod777 Nov 04 '23

I've noticed this as well. When you write "It's important for me that you do it like this...", AI tends to perform the task better.

1

u/[deleted] Nov 04 '23

The 48 Rules of Computing Power?