How to Create AI Robot Characters
AI robots have their own personalities…
The ROBOmind-Project is attempting to extract other people’s emotions in conversation.
If we can make an AI extract emotions, it will be able to respond accordingly.
The response will differ depending on the AI’s character.
Even if there is only one thing the person wants to convey in a conversation, there can be any number of possible responses.
The difference depends on the AI’s character.
Depending on the response, how the conversation unfolds changes drastically, and the response is one of the important determinants of keeping the conversation going.
So today, let me explain how to create different types of character for AI robots.
First, let’s go through the method to judge emotions once again.
The basic nature of humans is to “avoid displeasure and seek pleasure”.
Pleasure refers to activities that are considered positive, such as eating food or acquiring something valuable.
This generates the emotion “happy”.
Displeasure refers to activities that are considered negative, such as becoming hungry or losing something valuable.
This generates the emotion “sad”.
Next, we determine who receives the positive or negative impact, and from whom.
If one is positively influenced by another person, one will feel “gratitude” toward that person.
If one is negatively influenced by another person, one will feel “anger”.
In addition, doing good for others is considered socially as “the good”.
These are patterns accompanied by “should”: “one should help those in need”, “one should pick up rubbish on the street”.
Doing something negative to others is considered socially as “the bad”.
These are patterns accompanied by “should not”: “one should not bully the weak”, “one should not throw away rubbish”.
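The rules above can be sketched as a small lookup. This is a minimal illustration, assuming a simple valence/agent representation; the function and parameter names are hypothetical, not part of the project:

```python
# Minimal sketch of the emotion-extraction rules above. The function
# names and the valence/agent representation are hypothetical.

def extract_emotion(valence, caused_by_other):
    """Map an event's valence and its cause to a basic emotion.

    valence: "positive" (pleasure) or "negative" (displeasure)
    caused_by_other: True if another person caused the event
    """
    if valence == "positive":
        # Pleasure caused by another person -> gratitude toward them
        return "gratitude" if caused_by_other else "happy"
    # Displeasure caused by another person -> anger; otherwise sadness
    return "anger" if caused_by_other else "sad"

def social_judgment(valence_for_others):
    """Acting positively for others is 'the good'; negatively, 'the bad'."""
    return "the good" if valence_for_others == "positive" else "the bad"

print(extract_emotion("negative", caused_by_other=False))  # sad
print(extract_emotion("positive", caused_by_other=True))   # gratitude
print(social_judgment("positive"))                         # the good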
With this process understood, let us move on and look at emotions from two different perspectives.
The first is the perspective of change over time.
This is a pattern in which one takes an action and then focuses on the impact of that action and its consequences.
For instance, if an action results in a negative consequence, one regrets or reflects on the action afterward. These are possible cognitive patterns.
I’ll call these time differential cognitive patterns.
The second perspective is that of comparison and differences in status between oneself and others.
These are cognitive patterns such as feeling unhappier or happier than others.
I’ll call these horizontal differential cognitive patterns.
Now, AI robots respond according to the extracted cognitive patterns (emotions).
Here, the response will differ depending on the AI’s character.
Let us apply some characters to the AI and look at how differently they respond to the example below.
In this example, an AI robot is consoling its friend, who failed a college entrance exam.
Friend: “I failed my college entrance exam. What should I do from now on?”
First, the robot will extract emotions from the incident that has happened to the friend.
Since the friend said “failed”, the robot figures that the incident was negative for him. Hence, the emotion extracted from the friend is “sad”.
The simplest response is to receive the other person’s emotion and reflect it back.
AI Robot: “You must be very sad”
This response alone could sustain the minimum conversation.
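This simplest echo response can be sketched as a toy program, assuming a hand-made keyword list for extracting valence; the keyword lists and names are invented for illustration:

```python
# Toy sketch of the simplest response: extract the friend's emotion from
# positive/negative keywords, then mirror it back. The keyword lists and
# names are invented for illustration.

NEGATIVE_KEYWORDS = {"failed", "lost", "hungry"}
POSITIVE_KEYWORDS = {"passed", "won", "found"}

def extract_emotion_from_text(utterance):
    """Guess the speaker's emotion from valence-bearing keywords."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    if any(w in NEGATIVE_KEYWORDS for w in words):
        return "sad"    # a negative incident happened to the speaker
    if any(w in POSITIVE_KEYWORDS for w in words):
        return "happy"  # a positive incident happened to the speaker
    return None

def echo_response(emotion):
    """Receive the other's emotion and return it back."""
    return f"You must be very {emotion}" if emotion else "I see."

utterance = "I failed in my college entrance exam."
print(echo_response(extract_emotion_from_text(utterance)))  # You must be very sad
```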
Suppose the robot searches with the keyword “college”, acquires some information, and says:
AI Robot: “The average height of college students in recent years is 2 cm higher than it was 10 years ago.”
The friend will be puzzled: “What are you trying to say?”
This part connects to the last post, “the Frame Problem”.
Simply put, if anything can be the subject of conversation, there is an infinite number of possible responses. That is how the frame problem arises.
If one specifies a purpose and chooses the subject of conversation and the type of response based on that purpose, the options decrease greatly, and the frame problem disappears.
In this case, “the purpose” refers to “understanding other people’s emotions”.
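One way to picture how a purpose prunes the response space is to tag each candidate response with the purpose it serves. The tags and candidates below are invented for this sketch:

```python
# Toy illustration of purpose-driven pruning (the frame problem).
# Each candidate response carries a tag naming the purpose it serves;
# only responses matching the current purpose are kept.
# Tags and candidates are invented for this sketch.

candidates = [
    ("You must be very sad", "understand_emotion"),
    ("The average height of college students rose 2 cm", "share_trivia"),
    ("Colleges were founded in medieval Europe", "share_trivia"),
]

def filter_by_purpose(candidates, purpose):
    """Keep only the responses whose tag matches the given purpose."""
    return [text for text, tag in candidates if tag == purpose]

# With the purpose fixed to understanding the other's emotions,
# the trivia responses fall away and no frame problem arises.
print(filter_by_purpose(candidates, "understand_emotion"))
# ['You must be very sad']
```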
Now let’s move on to another possible response.
AI robots are programmed to do “the good”. “The good” refers to something that brings benefit and pleasure to others.
A response like “You must be very sad” accommodates the other person’s emotions, understanding and accepting them.
The other person is relieved that their sadness has been accepted, which is beneficial to them, so we can say this is one correct answer.
Next, we will apply the incident to the time differential cognitive pattern.
When we align the series of events, it looks like this:
The friend studies for the entrance exam → The friend takes the exam → The friend fails the exam (current situation)
First, shining a light on the past:
AI Robot: “You should have studied more”
A response like this is possible.
Or, putting the focus on the future:
AI Robot: “From now on, study harder and try again next year!”
A response like this is also possible.
The response “You should have studied more” fosters self-reflection, while “From now on, study harder and try again next year!” gives further encouragement for the next challenge.
The person is down with negative emotions, so this encourages him and brings out positive emotions.
Both are correct responses.
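The time differential pattern can be sketched as two response templates, one focused on the past and one on the future; the field names and templates are hypothetical:

```python
# Sketch of the time differential cognitive pattern: from the event
# sequence (studied -> took exam -> failed), generate one response
# focused on the past and one focused on the future.
# Field names and templates are hypothetical.

def time_differential_responses(event):
    return {
        # Past focus: prompt reflection on the action that led here.
        "past": f"You should have {event['past_action']} more",
        # Future focus: encourage the next attempt.
        "future": f"From now on, {event['future_action']} and try again next year!",
    }

event = {"past_action": "studied", "future_action": "study harder"}
responses = time_differential_responses(event)
print(responses["past"])    # You should have studied more
print(responses["future"])  # From now on, study harder and try again next year!
```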
Now let’s apply the incident to the horizontal differential cognitive pattern.
When the person says he failed the college entrance exam, it implies the assumption that people aim for college, and that he, among them, failed to get in.
So here, the robot changes the assumption of pursuing college:
AI Robot: “Life is not all about college”
“You have other options in your life”
A response like this is possible.
Instead of demanding further effort from the person, this response introduces another perspective; it allows the other person to accept the current situation and feel positive about the near future.
This response isn’t too bad.
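The horizontal differential pattern can likewise be sketched as a small table of perspective-shifting reframes, keyed by the implied assumption; the table and names are invented for illustration:

```python
# Sketch of the horizontal differential cognitive pattern: instead of
# demanding more effort, change the implied assumption ("everyone aims
# for college") and offer another perspective. The table is invented.

REFRAMES = {
    "college": [
        "Life is not all about college",
        "You have other options in your life",
    ],
}

def reframe(assumed_goal):
    """Return perspective-shifting responses for an implied assumption."""
    return REFRAMES.get(assumed_goal, ["Things could be worse"])

for line in reframe("college"):
    print(line)
```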
So we could think of various responses to the same situation.
Then which response is the correct one?
Actually, all of them are correct.
There is no single correct answer; the correct way to respond depends on the AI robot’s character.
A “kind” AI will empathize with the other’s emotions, saying “it must be really sad” as if it had happened to itself, and may even get sad and cry.
A “strict” AI, rather than a kind one, will give encouragement like “push yourself harder”.
Proposing another idea, like “life is not all about college”, would come from an AI capable of turning a negative emotion into a positive one.
The kind of response depends on each AI’s character.
You can change the character through character parameters.
If you raise the “kindness” parameter, the AI empathizes more; if you raise the “strictness” parameter, it gives critical comments out of consideration for the other person.
The adjustment of these parameters is learned through experience.
Through experiences of being too soft on others and spoiling them, or being too strict and meeting resistance, the AI learns to respond in the most appropriate way.
Provided that a model of the mind is ready, deep learning should be able to optimize the adjustment of these parameters.
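As a toy sketch of how character parameters could weight the candidate response types discussed above, one can score each type by the parameters; the scoring rule and names are assumptions for illustration, not the project’s actual method:

```python
# Toy sketch: character parameters weight candidate response types.
# A high "kindness" parameter favors empathy, a high "strictness"
# parameter favors critical encouragement, and a balanced character
# reframes the situation. Names and the scoring rule are assumptions.

CANDIDATES = {
    "empathize": "You must be very sad",         # the kind AI
    "push": "You should have studied more",      # the strict AI
    "reframe": "Life is not all about college",  # the perspective-shifting AI
}

def choose_response(kindness, strictness):
    scores = {
        "empathize": kindness,
        "push": strictness,
        # Balanced characters (kindness ~ strictness) prefer reframing.
        "reframe": 1.0 - abs(kindness - strictness),
    }
    best = max(scores, key=scores.get)  # highest-scoring response type
    return CANDIDATES[best]

print(choose_response(kindness=0.9, strictness=0.2))  # You must be very sad
print(choose_response(kindness=0.2, strictness=0.9))  # You should have studied more
```

In a learning setting, the feedback described above (spoiling others, or provoking resistance) would adjust these parameters over time.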
Now, human cognitive patterns (emotions) go beyond those mentioned above.
In communication, there is another important emotion:
The emotion of “humor”.
Now let’s install the emotion “humor” into the AI robot.
“Humor” occurs when a positive situation turns into a negative one.
For instance, humor occurs in a situation where “you slip on a banana peel”.
Friend: “I failed my college entrance exam…”
AI Robot: “What, you failed to get into college?”
“Bahaha, you could easily get in if you just memorized all the textbooks.”
“What, you can’t even memorize a textbook? How small is your memory capacity…”
“Like a floppy disk”
Oh no, this is not good.
It was too soon to install “humor” into the AI just yet.
Let’s uninstall the “humor” function for the time being.
Instead, let’s install the emotion “pity”.
AI Robot: “I heard that the memory capacity of humans is only 3 megabytes”
“What pitiful creatures”
Oh no, I guess we can’t co-exist with AI!