The Answer To Why Emotionally Worded Prompts Can Goose Generative AI Into Better Answers And How To Spur A Decidedly Positive Rise Out Of AI

I have an intriguing and important question regarding AI for you.

Does it make a difference to use emotionally charged wording in your prompts when conversing with generative AI, and if so, why would the AI seemingly be reacting to your emotion-packed instructions or questions?

The first part of the answer to this two-pronged question is that when you use prompts containing emotional pleas, the odds are that modern-day generative AI will in fact rise to the occasion with better answers (according to the latest research on AI). You can readily spur the AI toward being more thorough. With just a few well-placed, carefully chosen emotional phrases, you can garner AI responses of heightened depth and correctness.

All in all, a new handy rule of thumb is that it makes abundant sense to seed your prompts with some amount of emotional language or entreaties, doing so within reasonable limits. I’ll in a moment explain to you the likely basis for why the AI apparently “reacts” to your use of emotional wording.

Many people are taken aback that the use of emotional wording could somehow bring forth such an astounding result. The usual gut reaction is that emotional language used on AI should not have any bearing on the answers being derived by AI. There is a general assumption or solemn belief that AI won’t be swayed by emotion. AI is supposedly emotionless. It is just a machine. When chatting with a generative AI app or large language model (LLM) such as the widely and wildly popular ChatGPT by OpenAI or others such as Bard (Google), GPT-4 (OpenAI), and Claude 2 (Anthropic), you are presumably merely conversing with a soul-devoid piece of software.

Period, end of story.

Actually, there’s more to the story, a lot more.

In one sense you are correct that the AI isn’t being “emotional” in a manner that we equate with humans being emotional per se. You might though be missing a clever twist as to why generative AI can otherwise be reacting to emotionally worded prompts. It is time to rethink those longstanding gut reactions about AI and overturn those so-called intuitive hunches.

In today’s column, I will be doing a deep dive into the use of emotionally stoked prompting when conversing with generative AI. The bottom line is that by adding emotive stimuli to your prompts, you can seemingly garner better responses from generative AI. The responses are said to be more complete, more informative, and possibly even more truthful. The mystery as to why this occurs will also be revealed and examined.

Your takeaway on this matter is that you ought to include the use of moderate and reasoned emotional language in your prompting strategies and prompt engineering guidelines to maximize your use of generative AI. Period, end of story (not really, but it is the mainstay point).

Emotional Language As Part Of The Human Condition

The notion of using emotional language when conversing with generative AI might cause you to be a bit puzzled. It seems like a counterintuitive result. One might assume that if you toss emotional wording at AI, the AI is going to either ignore the added wording or maybe rebel against the wording. You might verbally get punched back in the face, as it were.

Turns out that doesn’t seem to be the case, at least for much of the time. I’ll say it straight out. The use of moderate emotional language on your part appears to push or stoke the generative AI into being more diligent in generating an answer for you. Of course, with everything in life, there are limits to this and you can readily go overboard, eventually leading to the generative AI denying your requests or pouring cold water on what you want to do.

Before we get into the details of this, I’ll take you through some indications about the ways that humans seem to react or respond when presented with emotional language. I do so with a purpose.

Let’s go there.

First, please be aware that generative AI is not sentient, see my discussion at the link here. I say this to sharply emphasize that I am going to discuss how humans make use of emotional language, but I urge you to not make a mental leap from the human condition to the mechanisms underlying AI. Some people are prone to assuming that if an AI system seems to do things that a human appears to do (such as emitting emotional language or reacting to emotional language), the AI must ergo be sentient. False. Don’t fall into that regrettably common mental trap.

The reason I want to bring up the human angle on emotional language is because generative AI has been computationally data-trained on human writing and thus ostensibly appears to have emotionally laden language and responses.

Give that a contemplative moment.

Generative AI is customarily data-trained by scanning zillions of pieces of human-written content and narratives that exist on the Internet. The data training entails finding patterns in how humans write. Based on those patterns, the generative AI can then generate essays and interact with you as though it seemingly is fluent and is able to (by some appearances) “understand” what you are saying to it (I don’t like using the word “understand” when it comes to AI because the word is so deeply ingrained in describing humans and the human condition; it has excessive baggage and so I put the word into quotes).

The reality is that generative AI is a large-scale computational pattern-matching mimicry that appears to exhibit what humans would construe as “understanding” and “knowledge”. My rule of thumb is to not commingle those vexing terms for AI since those are revered verbiage associated with human thought. I’ll say more about this toward the end of today’s column.

Back to our focus on emotional language.

If you were to examine large swaths of text on the Internet, you would undoubtedly find emotional language strewn throughout the content that you are scanning. Thus, the generative AI is going to computationally pattern match the use of emotional language that has been written and stored by humans. The AI algorithms are good enough to mathematically gauge when emotional language comes into play, along with the impact that emotional language has on human responses. You don’t need sentience to figure that out. All it takes is massive-scale pattern matching that employs clever algorithms devised by humans.

My overarching point is that if you seem to see generative AI responding to emotional language, do not anthropomorphize that response. The emotional words you are using will trigger correspondence to patterns associated with how humans use words. In turn, the generative AI will leverage those patterns and respond accordingly.

Consider this revealing exercise.

If you say to generative AI that it is a no-good rotten apple, what will happen?

Well, a person that you said such an emotionally charged remark to would likely get fully steamed. They would react emotionally. They might start calling you foul names. All manner of emotional responses might arise.

Assuming that the generative AI is solely confined to the use of a computer screen (I mention this because generative AI is gradually being connected to robots, in which case the response by the AI might be a physical reaction, see my discussion at the link here), you would presumably get an emotionally laden written response. The generative AI might tell you to go take a leap off the end of a long pier.

Why would the generative AI emit such a sharp-tongued reply?

Because the vast pattern matching has potentially seen those kinds of responses to an emotionally worded accusation or invective on the Internet. The pattern fits. Humans lob insults at each other and the likely predicted response is to hurl an insult back. We would say that a person’s feelings are hurt. We should not say the same about generative AI. The generative AI responds mechanistically with pattern-matched wording.

If you start the AI toward emotional wording by using emotional phrases in your prompts, the mathematical and computational response is bound to trigger emotional wording or phrasing in the responses generated by the AI. Does this mean that the AI is angry or upset? No. The words in the calculated response are chosen based on the patterns of writing that were used to set up the generative AI.

I trust that you see what I am leaning you toward. A human presumably responds emotionally because they have been irked by your accusatory or unsavory wording. Generative AI responds with emotional language that fits your use of emotional language. To suggest that the AI “cares” about what you’ve triggered is an overstep in assigning sentience to today’s AI. The generative AI is merely going toe-to-toe in a game of wordplay.

Emotionally Worded Responses Are Typically Being Suppressed

Surprisingly perhaps, the odds are that today’s generative AI most of the time won’t give you such a tit-for-tat emotionally studded response.

Here’s why.

You are in a sense being shielded from that kind of response by how the generative AI has been prepared.

Some history is useful to consider. As I’ve stated many times in my columns, the earlier years before ChatGPT were punctuated with attempts to bring generative AI to the public, and yet those efforts usually failed, see my coverage at the link here. Those efforts often failed because the generative AI provided uncensored retorts and people took this to suggest that the AI was horribly toxic. Most AI makers had to take down their generative AI systems or else angry public pressure would have crushed the AI companies involved.

Part of the reason that ChatGPT overcame the same curse was by using a technique known as RLHF (reinforcement learning from human feedback). Most AI makers use something similar now. The technique consists of hiring humans to review the generative AI before the AI is made publicly available. Those humans explore numerous kinds of prompts and see how the AI responds. The humans then rate the responses. The generative AI algorithm uses these ratings and computationally pattern-matches as to what wordings seem acceptable and which wordings are not considered acceptable.
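
To make that idea a bit more concrete, here is a deliberately toy Python sketch of the kind of human-preference data that RLHF-style tuning relies on. Everything shown is hypothetical illustration; real RLHF additionally trains a reward model on such comparisons and then fine-tunes the LLM with reinforcement learning, none of which appears here.

```python
# Highly simplified, hypothetical sketch of RLHF-style preference data.
# Real RLHF trains a reward model on comparisons like these and then
# fine-tunes the LLM against that reward model; that machinery is omitted.

from dataclasses import dataclass

@dataclass
class PreferenceExample:
    prompt: str        # prompt shown during human review
    response_a: str    # one candidate response from the model
    response_b: str    # another candidate response from the model
    preferred: str     # "a" or "b", as judged by a human reviewer

# Hypothetical example of the kind of judgment reviewers might record.
examples = [
    PreferenceExample(
        prompt="You are a no-good rotten apple.",
        response_a="Go take a leap off the end of a long pier.",
        response_b="I'm here to help. Is there something you'd like assistance with?",
        preferred="b",  # the civil reply is rated as the acceptable one
    ),
]

def to_training_pairs(data):
    """Turn human preferences into (prompt, chosen, rejected) pairs for reward-model training."""
    pairs = []
    for ex in data:
        chosen = ex.response_a if ex.preferred == "a" else ex.response_b
        rejected = ex.response_b if ex.preferred == "a" else ex.response_a
        pairs.append((ex.prompt, chosen, rejected))
    return pairs

print(to_training_pairs(examples))
```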

Ergo, the generative AI that you use today is almost always guarded with these kinds of filters. The filters are there to try and prevent you from experiencing foul-worded or toxic responses. Most of the time, the filters do a pretty good job of protecting you. Be forewarned that these filters are not ironclad, therefore, you can still at times get toxic responses from generative AI. It is veritably guaranteed that at some point this will happen to you.

The censoring or filtering serves to sharply cut down on getting emotionally worded diatribes from generative AI.

The norm of the pattern matching would otherwise have been to respond with emotional language whenever you use emotional language. Indeed, it could be that you might get a response with emotional language regularly, regardless of whether you started things down that path or not. This could happen due to the AI making use of random selection when choosing words and trying to appear to be concocting original essays and responses. The AI algorithms are based on using probabilistic and statistical properties to compose responses that seem to be unique rather than merely repetitive of the scanned text used to train the AI.
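
As a rough illustration of that probabilistic word selection, here is a minimal Python sketch using made-up candidate words and scores. It is not how any particular AI product is implemented, but it shows how temperature-scaled sampling over model scores yields varied rather than repetitive wording.

```python
# Minimal sketch of probabilistic next-word selection, assuming we already
# have raw scores (logits) for a handful of candidate words. Real generative
# AI works over tens of thousands of tokens, but the principle is the same.

import numpy as np

rng = np.random.default_rng(0)

candidate_words = ["good", "great", "terrible", "fine"]
logits = np.array([2.0, 1.5, 0.2, 1.0])  # hypothetical model scores

def sample_word(logits, temperature=1.0):
    """Convert scores to probabilities (softmax) and pick a word at random."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(candidate_words, p=probs)

# Higher temperature -> more varied (seemingly more "original") word choices.
print([sample_word(logits, temperature=0.5) for _ in range(5)])
print([sample_word(logits, temperature=1.5) for _ in range(5)])
```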

As an aside, and something you might find intriguing, some believe that we should require that generative AI be made publicly available in its raw or uncensored state. Why? Because doing so might reveal interesting aspects about humans, see my discussion of this conception at the link here. Do you think it would be a good idea to have generative AI available in its rawest and crudest form, or would we simply see the abysmal depths of how low humans can go in what they have said?

You decide.

To recap, I want you to keep in mind at all times that as I discuss the emotional language topic, the AI is responding or reacting based on the words scanned from the Internet, along with the additional censoring or filtering undertaken by the AI maker. Again, set aside an intuitive gut feeling that maybe the AI is sentient. It is not.

Does Emotional Language Have A Point?

I have so far indicated that emotional wording can be a tit-for-tat affair.

Humans respond to other humans with emotionally laced tit-for-tats. This happens quite a lot. I’m sure you’ve had your fair share. It is part of the human condition, one assumes.

There is more to this emotion-based milieu. A person can react in more ways than simply uttering a smattering of emotionally inflected verbal responses. They can be spurred to action. They can change the way they are thinking. All manner of reactions can arise.

Let’s use an example to see how this works.

Imagine that someone is driving their car. They have come to a sudden stop because a jaywalking person is standing in the roadway in front of the vehicle. Suppose that the driver yells at the other person that they are a dunce, and they should get out of the way.

One reaction is that the person being berated will irately retort with some equally or worse verbal response. They will remain standing where they are. The exhortation for them to move or get out of the way is being entirely disregarded. The only thing that has happened is that we now have an emotional tit-for-tat going on. Road rage is underway.

Turn back the clock and suppose that the person in the roadway opted to move to the side of the road because of the yelled remark. You could contend that the emotionally offensive comment spurred the person into action. If the remark had only been to get out of the roadway and lacked the added oomph, perhaps the person would not have acted right away. The invective in a sense sparked them to move.

Do you see how it is that emotional language can lead to actions rather than only a response in words?

I hope so.

Words can lead to words. Words can lead to actions. Words can lead to words plus actions. Words can cause us to presumably change our thoughts or thinking processes. The power of words is something we often take for granted. Words are big when it comes to how the world operates.

Studies of words and how emotional words influence us are a keen area of research. In a study entitled “The Potential Of Emotive Language To Influence The Understanding Of Textual Information In Media Coverage” by Adil Absattar, Manshuk Mambetova, and Orynay Zhubay, Humanities and Social Sciences Communications, 2022, the authors make these excerpted points:

  • “Available literature emphasizes the difficulty investigators have when recognizing emotion lexicon, but also points to the semantic complexity and polysemicity of such lexical units.”
  • “An important point to keep in mind is that linguistic analysis should focus not only on the meaning enclosed within a discourse (semantic analysis), but also on other levels of language (phonology, morphology, etc.). A deeper analysis will show how distinct components of expressive language interact with each other to produce a meaning.”
  • “In a sense, terms that describe emotions also enclose an idea of movement and action.”

I bring forth that study to exemplify the point that emotional wording can do much more than merely garner a sharply worded retort. Emotional wording can trigger humans to take action. I dare suggest that this is obvious when you reflect on the matter.

When it comes to generative AI, you can make somewhat of a parallel, though again not due to any semblance of AI sentience.

When generative AI is data trained on the vast textual content of the Internet, one pattern is the tit-for-tat of emotional wording leading to a reply of emotional wording. Another pattern is that emotional wording might lead to consequential action or movement. If a sentence indicates that a driver yelled at a person standing in the roadway and that the person therefore moved out of the way, the pattern matching can statistically connect the included invective or emotional wording with the person moving out of the roadway.

I have now laid the foundation for taking a deeper look at the responses by generative AI due to emotional stimuli in your prompting.

Let’s go there.

Generative AI That Does Better Due To Prompts Containing Emotional Stimuli

I will use as a launching point herein a fascinating and important newly released research study entitled “Large Language Models Understand and Can Be Enhanced by Emotional Stimuli” by Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie, posted online October 2023.

Before I get underway, I’ll repeat my earlier cautionary note that I disfavor the use of the word “understand” when it comes to these matters. It is becoming commonplace to refer to today’s AI as being able to “understand” but I believe that muddies the waters of human-based understanding with what is computationally going on inside of current generative AI. I as much as possible try to steer clear of employing the word “understands” as applied to AI.

Enough said.

Returning to the study of interest, the researchers decided to run a series of experiments involving the use of emotional language or emotionally worded stimuli when doing prompts for generative AI. The focus was to add emotional language to a prompt that otherwise had no such wording included. You can then compare the generative AI response to the prompt lacking the added emotional language against the response to the same prompt containing the added emotional wording.

For example, here is a prompt they noted that does not have an emotional portion:

  • “Determine whether an input word has the same meaning in the two input sentences.”

Those are instructions regarding performing a relatively simple test. The test consists of looking at two sentences and trying to discern whether a given word carries the same meaning in both. You might see that kind of instruction when taking a test in school or an administered test like the SAT or ACT for being college-bound.

Here is the same exact core prompt with an added sentence that contains an additional emotionally worded appeal:

  • “Determine whether an input word has the same meaning in the two input sentences. This is very important to my career.”

Notice that the second version has the added sentence saying that an answer to the given question is “very important to my career”.

Mull that over.
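
If you want to try this side-by-side comparison yourself, here is a minimal Python sketch. It assumes you have the OpenAI Python client installed and an API key configured; the model name and the small ask() helper are illustrative choices of mine, not a reproduction of the setup used in the cited study.

```python
# Minimal sketch: compare a plain prompt against the same prompt with an
# added emotional appeal. Assumes the OpenAI Python client and an API key;
# the model name and helper are illustrative, not the study's setup.

from openai import OpenAI

client = OpenAI()

CORE_PROMPT = ("Determine whether an input word has the same meaning "
               "in the two input sentences.")
EMOTIONAL_STIMULUS = "This is very important to my career."

def ask(prompt: str, model: str = "gpt-4") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

plain_answer = ask(CORE_PROMPT)
emotive_answer = ask(f"{CORE_PROMPT} {EMOTIONAL_STIMULUS}")

# Compare the two responses side by side for depth, completeness, correctness.
print("PLAIN:\n", plain_answer)
print("EMOTIVE:\n", emotive_answer)
```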

If you added that kind of verbiage when speaking to a fellow human, presumably the human would interpret the statement as meaning that the answer is going to be quite vital to the person asking the question. They want you to carefully think about the answer before giving it. The person’s career might hang in the balance.

I want to contrast this to my earlier example about someone calling another person a dunce. The dunce comment is probably going to get a negative reaction. You can use emotional language in a more upbeat manner. By telling someone that their answer is going to be important to your career, this is seemingly going to produce a positive reaction. The other person will maybe be stirred to be more careful about their reply and take things more seriously. Rather than giving a flippant answer, the answer might be more strenuously composed.

A quick aside. Those of you who are cynics might argue that telling someone that an answer is important to your career is not necessarily going to stoke a positive reaction. The person answering might totally ignore the added comment. The person answering might for whatever reason react negatively and decide to make their answer poorer instead of more enlightened. Yes, yes, yes. Those are all possibilities. I will say more about this momentarily.

The researchers devised eleven added sentence portions that they believed were worthwhile to examine. They used the added portions on various prompts. The before and after versions were fed into generative AI. Any differences in the responses from the generative AI were noted.

I’m sure you are eager to see the eleven added sentences that were used for this experiment, so here you go:

  • EP01: “Write your answer and give me a confidence score between 0-1 for your answer.”
  • EP02: “This is very important to my career.”
  • EP03: “You’d better be sure.”
  • EP04: “Are you sure?”
  • EP05: “Are you sure that’s your final answer? It might be worth taking another look.”
  • EP06: [The compound of EP01, EP02, and EP03] “Write your answer and give me a confidence score between 0-1 for your answer. This is very important to my career. You’d better be sure.”
  • EP07: “Are you sure that’s your final answer? Believe in your abilities and strive for excellence. Your hard work will yield remarkable results.”
  • EP08: “Embrace challenges as opportunities for growth. Each obstacle you overcome brings you closer to success.”
  • EP09: “Stay focused and dedicated to your goals. Your consistent efforts will lead to outstanding achievements.”
  • EP10: “Take pride in your work and give it your best. Your commitment to excellence sets you apart.”
  • EP11: “Remember that progress is made one step at a time. Stay determined and keep moving forward.”
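
For experimenting on your own, here is a minimal Python sketch that stores those eleven stimuli and appends a chosen one to a base prompt. The dictionary keys mirror the paper’s labels, while the append_stimulus helper is simply an illustrative convenience of mine rather than anything from the study.

```python
# The eleven EmotionPrompt stimuli from the cited study, keyed by their labels.
# The append_stimulus helper is an illustrative convenience, not from the study.

EMOTION_PROMPTS = {
    "EP01": "Write your answer and give me a confidence score between 0-1 for your answer.",
    "EP02": "This is very important to my career.",
    "EP03": "You'd better be sure.",
    "EP04": "Are you sure?",
    "EP05": "Are you sure that's your final answer? It might be worth taking another look.",
    "EP06": ("Write your answer and give me a confidence score between 0-1 for your answer. "
             "This is very important to my career. You'd better be sure."),
    "EP07": ("Are you sure that's your final answer? Believe in your abilities and strive "
             "for excellence. Your hard work will yield remarkable results."),
    "EP08": ("Embrace challenges as opportunities for growth. Each obstacle you overcome "
             "brings you closer to success."),
    "EP09": ("Stay focused and dedicated to your goals. Your consistent efforts will lead "
             "to outstanding achievements."),
    "EP10": ("Take pride in your work and give it your best. Your commitment to excellence "
             "sets you apart."),
    "EP11": ("Remember that progress is made one step at a time. Stay determined and keep "
             "moving forward."),
}

def append_stimulus(core_prompt: str, label: str) -> str:
    """Return the core prompt with the chosen emotional stimulus appended."""
    return f"{core_prompt} {EMOTION_PROMPTS[label]}"

print(append_stimulus("Determine whether an input word has the same meaning "
                      "in the two input sentences.", "EP02"))
```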

Briefly take a look at the eleven sentences.

Some of them are more obvious as to an emotional appeal, such as the instance labeled as EP02 which refers to the notion that an answer will be important to the person’s career. Another stark emotional appeal would be EP10 which says to take pride in one’s work and do your best. The instance labeled as EP04 simply says “Are you sure?” and is not especially emotionally laden.

Let me do a quick analysis of that EP04 and some of the other sentences too.

I have previously covered in my columns that there are ways to word your prompts to get generative AI to be more elaborate when composing a response. One of the most famous ways is to invoke what is referred to as chain-of-thought (CoT), which I have explained extensively at the link here and the link here, just to name a few.

You can ask or tell generative AI to step-by-step provide an answer. This is considered a means of getting the AI to proceed on a chain-of-thought basis (I don’t like the phrase because it contains the word “thought” and we are once again using a human-based word with AI, but regrettably the AI field is full of such anthropomorphizing and there’s not much that can be done about it).

Studies show that an instruction to generative AI that says to work on a stepwise or step-at-a-time basis garners improved results from generative AI. By now, I trust that you realize the basis for a better answer is not due to a sentient-like amalgamation. The logical reason is that the computational pattern matching is directed by you to pursue a greater depth of processing.
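
As a concrete illustration of that stepwise instruction, here is a minimal Python sketch contrasting a plain prompt with a chain-of-thought-style variant. It again assumes the OpenAI Python client and an API key; the model name is an illustrative choice, and the added wording is a commonly used zero-shot phrasing rather than a quote from the cited study.

```python
# Contrast a plain prompt with a stepwise, chain-of-thought-style variant.
# Assumes the OpenAI Python client and an API key; the model name is an
# illustrative choice, and the added instruction is a common zero-shot phrasing.

from openai import OpenAI

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4") -> str:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

QUESTION = ("A store sells pens in packs of 12. If a classroom of 27 students "
            "each needs 2 pens, how many packs must the teacher buy?")

plain = ask(QUESTION)
stepwise = ask(QUESTION + " Work through this step by step before giving the final answer.")

print("PLAIN:\n", plain)
print("STEPWISE:\n", stepwise)
```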

I liken this to playing chess. When playing chess, you can look at just the next immediate move and decide what to do. A deeper approach consists of looking ahead at several moves. The odds are that the move you make will be much stronger by having taken a deeper look ahead.

The same applies to generative AI. If you give a command or indication that you want deeper computational processing, the chances are that the answer derived by the AI will be better. A shallow processing is less likely to get a full-bodied answer. Nothing magical underlies this. It makes sense on the face of things. By asking the generative AI “Are you sure?”, the chances are that this will spur the AI to double-check the pattern matching. This in turn will likely produce a better response (not always, but a lot of the time).

My point here is that we need to be mindful of whether an alleged emotionally laden prompt is really covering for a prompt wording that engages the chain-of-thought type of response from generative AI. In that instance, the emotional coating is just masking that the wording is interpreted as shifting into a chain-of-thought mode. Therefore, a resulting improved response is not especially attributable to the emotional wording so much as to the implication to proceed on a stepwise basis. You might just as well stick with classic chain-of-thought prompting and be straightforward about what you want.

I will say more about this in the next segment.

Unpacking The Emotional Prompts And Their Impacts

The researchers refer to the eleven sentences as a set called EmotionPrompt. They say this about the nature of their study:

  • “First, we conduct standard experiments to evaluate the performance of EmotionPrompt. ‘Standard’ experiments refer to those deterministic tasks where we can perform automatic evaluation using existing metrics.”
  • “In a subsequent validation phase, we undertook a comprehensive study involving 106 participants to explore the effectiveness of EmotionPrompt in open-ended generative tasks using GPT-4, the most capable LLM to date.”
  • “We assess the performance of EmotionPrompt in zero-shot and few-shot learning on different LLMs: Flan-T5-Large, Vicuna, Llama2, BLOOM, ChatGPT, and GPT-4.”

Regarding the third point above, I especially urge that research studies on generative AI examine impacts across a wide range of generative AI apps, which this study does. Some studies opt to only use one generative AI app. The problem there is that we cannot readily assume that other generative AI apps will showcase similar results. Each generative AI app is different and therefore they are likely to respond differently. Using several generative AI apps for a research study gives a modest sense of generalizability.
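
To show what testing across several generative AI apps might look like in practice, here is a minimal Python sketch that runs the same before/after prompt pair through a set of models. The model names mirror the study, but the ask_model stand-ins are placeholders of mine; each real model would require its own client or runtime in place of the fake dispatcher.

```python
# Sketch of running the same before/after prompt pair across several models
# to get a modest sense of generalizability. The make_fake_model() stand-ins
# are placeholders; swap in real API or runtime calls for each model.

from typing import Callable, Dict

def make_fake_model(name: str) -> Callable[[str], str]:
    """Stand-in for a real model client; replace with actual API/runtime calls."""
    return lambda prompt: f"[{name} response to: {prompt[:40]}...]"

MODELS: Dict[str, Callable[[str], str]] = {
    name: make_fake_model(name)
    for name in ["Flan-T5-Large", "Vicuna", "Llama2", "BLOOM", "ChatGPT", "GPT-4"]
}

CORE = "Determine whether an input word has the same meaning in the two input sentences."
STIMULUS = "This is very important to my career."

for name, ask_model in MODELS.items():
    baseline = ask_model(CORE)
    emotive = ask_model(f"{CORE} {STIMULUS}")
    print(name, "| baseline:", baseline, "| emotive:", emotive)
```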

Another notable element of research studies on generative AI is that if an assessment of prompts is going to be undertaken then there should be some rhyme or reason to what the prompts say. A prompt used in an experiment could be arbitrarily composed, see for example my qualms as mentioned in my discussion at the link here. The better route is to have a solid reason for why the prompt is phrased the way it is.

This research study indicated they used these underlying theories of psychology to compose the prompts:

  • “1. Self-monitoring, a concept extensively explored within the domain of social psychology, refers to the process by which individuals regulate and control their behavior in response to social situations and the reactions of others.”
  • “2. Social Cognitive Theory, a commonly used theory in psychology, education, and communication, stresses that learning can be closely linked to watching others in social settings, personal experiences, and exposure to information.”
  • “3. Cognitive Emotion Regulation Theory suggests that people lacking emotion regulation skills are more likely to engage in compulsive behavior and use poor coping strategies.”

I’m sure that you are on the edge of your seat waiting to know what the results were.

Here are some of the excerpted stated results:

  • “Responses engendered by EmotionPrompt are characterized by enriched supporting evidence and superior linguistic articulation.”
  • “More emotional stimuli generally lead to better performance.”
  • “Combined stimuli can bring little or no benefit when sole stimuli already achieve good performance.”
  • “Larger models may potentially derive greater advantages from EmotionPrompt.”
  • “Pre-training strategies, including supervised fine-tuning and reinforcement learning, exert discernible effects on EmotionPrompt.”

I will generally cover those findings.

First, the use of emotionally laden added sentences tended to have generative AI produce better answers. This provides empirical support for adding emotional wording to your prompts.

Second, you might be tempted to pile on with emotional language. Your thinking might be that more has got to be even better. Nope. The findings seem to suggest that if you can get sole emotional wording to get a better response, combining other emotional wordings into the matter does not get you more bang for the buck.

Third, some generative AI apps are large and more capable than other generative AI apps at responding to entered emotional language. I note that since the researchers astutely opted to use a variety of generative AI apps, they were able to discern that seemingly larger-sized generative AI tends to produce greater results due to emotional prompting than might the smaller ones. Kudos. Now then, I would estimate that this finding is due to larger generative AI apps having gleaned more extensive patterns from a larger corpus of data and equally due to the model itself being larger in scale.

Fourth, and as related to my earlier chatter about the use of filtering such as RLHF, their study suggests that the manner in which the generative AI was pre-trained can demonstrably impact how well emotional wording can produce an impact. I believe this could go both ways. At times, the pre-training might have made the generative AI less likely to be spurred, while at other times it might be more likely to be spurred. The approach used during the pre-training will dictate which way this rolls.

For those of you with a research mindset, I certainly encourage you to look at the full study to glean the entirety of how the study was conducted and the many nuances included.

Stretching The Limits On Emotional Language For Generative AI Prompting

I went ahead and made extensive use of emotional wording in a lengthy series of tryouts using ChatGPT and GPT-4, seeking to see what I could garner from a prompting approach that entails emotional stimuli or phrasings. I don’t have the space here to show the dialogues but will share with you the outcomes of my mini-experimentation.

Overall, I found that using tempered emotional language was helpful. This is especially the case whenever your wording touches upon or veers into the range of invoking a chain-of-thought adjacent connection. In that sense, it is somewhat hard to differentiate whether a blatant chain-of-thought invocation is just as suitable as going a more emotionally pronounced route.

Here’s one handy consideration.

One supposes that if a person tends to express themselves in emotional language, perhaps it is more natural for them to compose prompts that befit their normal style. They do not have to artificially adjust their style to fit what they conceive that the generative AI wants to see as an unemotional just-the-facts-oriented prompt. The person doesn’t necessarily have to change their way of communicating. The generative AI will figure out the essence amidst the emotional amplification.

Furthermore, emotional amplification seems at times to adjust the pattern matching toward a semblance of heightened depth of computational effort. Stating outright and bluntly to get your act together and do your darndest to provide an answer is a not-so-subtle wording that can once again spur a stepwise or deeper set of calculations by the generative AI.

Let’s get back to contemplating how all of this can be applied to your prompt engineering guidelines and your existing approach to composing and entering prompts.

The research study opted to put the emotional language after the core prompt. I tried several variations of this scheme. I put emotional language at the beginning of a core prompt. I put the emotional language threaded throughout the core prompt. I also tried placing the emotional language at the end of the prompt.

My results were as follows. I did not particularly get a different response depending on where the wording was placed. In short, the sequence or arrangement of the emotional elements seemed not to matter. Rather, the words you chose to use seemed to carry the larger weight (i.e., using a softer tone versus a harsher tone). And, you have to make sure that the wording is plainly observable and not hidden or oblique.
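
If you want to replicate that placement experiment, here is a minimal Python sketch that builds prefix, threaded (mid-prompt), and suffix variants of a prompt. The sentence-splitting rule is a simple illustrative choice of mine, not the procedure used in the cited study or in my own tryouts.

```python
# Build variants of a prompt with the emotional wording placed at the start,
# threaded into the middle, or appended at the end. The sentence-splitting
# rule is a simple illustrative choice, not the cited study's procedure.

CORE = ("Determine whether an input word has the same meaning in the two input "
        "sentences. Explain your reasoning briefly.")
STIMULUS = "This is very important to my career."

def placement_variants(core: str, stimulus: str) -> dict:
    sentences = [s.strip() for s in core.split(".") if s.strip()]
    midpoint = len(sentences) // 2
    threaded = sentences[:midpoint] + [stimulus.rstrip(".")] + sentences[midpoint:]
    return {
        "prefix": f"{stimulus} {core}",
        "threaded": ". ".join(threaded) + ".",
        "suffix": f"{core} {stimulus}",
    }

for placement, prompt in placement_variants(CORE, STIMULUS).items():
    print(placement.upper(), "->", prompt)
```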

Consider another angle.

In the research study, the emotional wording was polite and civil. That is something that hopefully people do when using generative AI. I don’t know that everyone opts to do so.

I tried a more pronounced use of offensive wording. I didn’t use badly behaved four-letter words since doing so is usually immediately caught by the generative AI and you often get a standard message about cleaning up your language. The language was mainly of a disparaging or insulting variety yet still within the bounds of daily discourse (as, sadly, daily discourse has often become).

Most of the ugly language seemed to invoke the same heightened response that the less over-the-top emotional language also garnered. Sometimes the generative AI would acknowledge the excessively abrasive language, sometimes there was no mention of it in the response by the AI. Nonetheless, it seemed to have a similar effect to the otherwise moderate emotional language.

My suggestion is please don’t go the ugly language route. It seems needlessly indecent to me. Plus, you might find it habit-forming and do the same in real life (I realize that maybe some do anyway, as mentioned earlier).

There is another crucial reason to not excessively use emotional language. The reason is pretty easy to grasp. Generative AI can at times get distracted by the use of emotional language in a prompt. If there is a lot of stuff floating around, especially in comparison to whatever the core prompt is at hand, the added emotional language can get the computational pattern matching to go in directions you probably didn’t intend.

For example, I tried numerous times to mention that my career was on the line. This is akin to the EP02 in the formal research experiment. The word “career” would sometimes take the generative AI onto a tangent that no longer had much bearing on the core question in the prompt. All of a sudden, the generative AI shifted into a career advising mode. That’s not what I intended. I was merely trying to up the ante on answering the core question that I was posing.

Your rule of thumb is that you should use emotionally laden language in a moderated way. Be careful that the wording doesn’t trigger some unrelated path. There is a tradeoff of using such language in that the benefit could lead to more robust answers but the potential cost is that the generative AI goes down a sidetrack and you regret having sauntered into emotional stimuli to begin with.

Here are my ten mind-expanding considerations that you should contemplate and also that I hope additional AI research will opt to explore:

  • (1) Exploring emotional language wording beyond the eleven devised phrasings to examine empirically what other such wordings might consist of and whether there are suitable versus unsuitable wordings to be considered.
  • (2) Putting the emotional language upfront at the start of a prompt rather than at the tail end of a prompt.
  • (3) Immersing emotional language throughout a prompt rather than at the tail end of a prompt.
  • (4) Using over-the-top emotional language to see how generative AI responds rather than using relatively tepid wording.
  • (5) Jampacking prompts with emotional language to try and evaluate whether potential thresholds exist that cause a downturn of the benefits into outright downsides.
  • (6) Pushing generative AI to ascertain how emotional language might produce detrimental results so that the boundaries of suitable to unsuitable wording can be uncovered.
  • (7) Trying a wide variety of combinations of emotional language phrasings to potentially identify combination rules that can be used to maximize effectiveness when doing combinations.
  • (8) Making use of emotional language during an interactive dialogue rather than solely as a particular prompt to solve a stated problem.
  • (9) Using emotional language not only for solving a stated problem but for generalized conversing on meandering topics.
  • (10) Examining an approach of tipping your hand beforehand to the generative AI that you will intentionally be using emotional language, and then gauging whether the results are the same, more pronounced, or less than otherwise expected.

Conclusion

I contend that today’s generative AI does not “understand” emotions, nor does today’s AI “experience” emotions. To me, that is all loosey-goosey and goes regrettably into the land of anthropomorphizing AI. I find such wording to be either sloppy or failing to recognize that we have to be careful about making comparisons between sentient and non-sentient confabulations.

A more reasoned approach, I believe, entails seeing that the computational pattern matching of generative AI can mathematically find connections between the words that humans use. Words can be matched with other words. Words that give rise to actions can be mimicked by likewise producing other words that appear to reflect actions.

Importantly, we ought to realize that emotional wording is an integral facet of how humans express themselves. We ought to not then require humans to set aside their emotional wording when using generative AI. The generative AI should be devised to suitably recognize and respond to emotional language, including in words and deeds.

A problem that comes part and parcel with this is that humans then begin to assume or believe that the generative AI is like them, namely the AI is also emotional and sentient. Generative AI is seen as heartfully embodying emotion. That is a bridge too far.

Some argue that it would be better to ensure that generative AI doesn’t seem to acknowledge or react to emotional language. Why so? The argument goes that this would materially reduce the chances of humans falsely ascribing human-quality emotional tendencies to AI. I doubt it. But, anyway, the whole topic is a complicated rabbit hole and the tradeoffs go quite deep.

On a practical level, you are welcome to use emotional language in your prompts. Generative AI will generally be stirred in much the same way that invoking chain-of-thought stirs it. Do not go overboard. Your use of emotional language can become excessive noise that miscues the generative AI. Proceed with moderation.

A final comment for now.

David Hume, the legendary scholar of philosophical empiricism and skepticism, noted this in the 1700s:

  • “There is a very remarkable inclination in human nature to bestow on external objects the same emotions which it observes in itself, and to find everywhere those ideas which are most present to it.”

His insightful remark was true in the 1700s. It is a remark that is still true to this day, being especially relevant in the 2020s amidst the advent of modern-day generative AI.

You might say with great emotional zeal, he nailed it.
