Abstract
As AI technologies progress, the social acceptance of AI agents, including intelligent virtual agents and robots, is becoming ever more important for broader applications of AI in human society. One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. By empathizing, humans act positively and kindly toward agents, which makes the agents easier to accept. In this study, we focus on self-disclosure from agents to humans in order to increase the empathy that humans feel toward anthropomorphic agents, and we experimentally investigate the possibility that self-disclosure by an agent facilitates human empathy. We formulate hypotheses and experimentally analyze and discuss the conditions under which humans have more empathy toward agents. The experiment used a three-factor mixed design, and the factors were the agents’ appearance (human, robot), self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and empathy before/after a video stimulus. An analysis of variance (ANOVA) was performed using data from 918 participants. We found that the appearance factor had no main effect and that self-disclosure highly relevant to the scenario facilitated human empathy with a statistically significant difference. We also found that no self-disclosure suppressed empathy. These results support our hypotheses. This study shows that self-disclosure is an important characteristic of anthropomorphic agents that helps humans accept them.
Citation: Tsumura T, Yamada S (2023) Influence of agent’s self-disclosure on human empathy. PLoS ONE 18(5): e0283955. https://doi.org/10.1371/journal.pone.0283955
Editor: Roland Bouffanais, University of Ottawa Faculty of Engineering, CANADA
Received: August 24, 2022; Accepted: March 21, 2023; Published: May 10, 2023
Copyright: © 2023 Tsumura, Yamada. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting information files.
Funding: This work was partially supported by JST, CREST (JPMJCR21D4), Japan. This work was also supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2136. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. This does not alter our adherence to PLOS ONE policies on sharing data and materials.
Introduction
Humans live in society and use various tools, and artifacts are sometimes treated as if they were human beings. It is known from the media equation that humans tend to treat artificial objects like humans [1]. However, such artifacts are now pervasive, and humans may not regard all of them as human-like. Indeed, many of AI’s current problems concern the reliability and ethical use of AI technologies. Ryan [2] focused on trust and discussed AI ethics and the issue of people anthropomorphizing AI. He argued that even complex machines like those that use AI should not themselves be viewed as trustworthy; instead, he suggested that we should ensure that the organizations that use AI, and the individuals within those organizations, are trustworthy. AI ethics was also discussed in depth from an applied-ethics perspective in a study by Hallamaa and Kalliokoski [3].
Alongside trust, we often empathize with artificial objects. Artificial objects that we empathize with include cleaning robots, pet-type robots, and anthropomorphic agents that provide services in online shopping and at help desks. These are already used in society and coexist with humans, and their appearance varies depending on the application and usage environment. However, some humans cannot accept these kinds of agents [4–6]. As agents continue to permeate society, they should have characteristics that humans find acceptable.
In this study, we use an agent that is the target of empathy to facilitate human empathy. Tsumura and Yamada [7] previously focused on “(human) empathy” as an attribute that an agent should have in order to be accepted by human society and aimed to create an empathy agent. Here, we focus on how humans improve their relationships with agents. One method is to have humans empathize with the agents: by empathizing, humans act positively toward agents and are more likely to accept them. Various studies have examined linguistic information [8–10], nonverbal information [11–13], situations [14–16], and relationships [17–19] as factors that elicit empathy.
Previous studies have also regarded self-disclosure as important among humans. We therefore thought that self-disclosure would be necessary for anthropomorphic agents to establish relationships with humans. In recent years, agents have been used more and more in human society, and building relationships between humans and agents has become increasingly important. Just as humans have built relationships with each other through self-disclosure, it makes sense for agents to use self-disclosure to deepen their relationships with humans. However, anthropomorphic agents can be pre-designed before being deployed in society, so designing conditions for self-disclosure that are appropriate to the situation is an important factor in improving relationships. We focused on self-disclosure and experimentally examined what types of self-disclosure affect the characteristics of empathy. Moreover, empathy has been studied in both the HAI and HRI fields, but different appearances have been used in each study. Hence, we set appearance as a factor to compare its effect on empathy. For this, human-like and robot-like appearances were prepared and tested as representatives of the HAI and HRI fields.
In this study, we assume that empathy agents influence human empathy. However, we investigate only human empathy toward empathy agents, not the human capacity for empathy, since our focus is on investigating the factors that make agents acceptable to humans. Below, empathy in this study refers to this kind of empathy.
Considering its relevance in previous studies on self-disclosure and empathy, we select self-disclosure as a factor to investigate human empathy toward agents. In this study, we designed agent self-disclosure to investigate what happens when an agent self-discloses. We confirm whether the self-disclosure is perceived by humans as agent self-disclosure in a pre-experiment and investigate whether the designed self-disclosure affects human empathy. We focus on self-disclosure from agents to humans, and we conduct experiments to investigate the relationship between human empathy and agent self-disclosure, as well as self-disclosure that is effective in promoting the characteristics of empathy. At the same time, we investigate the relationship between anthropomorphic agents with different appearances and self-disclosure.
In the remaining sections, we propose our empathy agents for facilitating human empathy. Then, we cover our experiments and the results. Finally, we discuss the results and describe our future work.
Definition of empathy
We consider empathy to be a significant element in being accepted by humans as a member of society. For humans to get along with each other, it is important that they empathize with each other [20, 21].
Empathy and the effects that it has on others have been a focus of research in the field of psychology. Omdahl [22] roughly classifies empathy into three types: (1) affective empathy, which is an emotional response to the emotional state of others, (2) cognitive understanding of the emotional state of others, which is defined as cognitive empathy, and (3) empathy including the above two. Preston and de Waal [23] suggested that at the heart of the empathic response was a mechanism that allowed the observer to access the subjective emotional state of the target. They defined the perception-action model (PAM) and unified the different perspectives in empathy. They defined empathy as three types: (a) sharing or being influenced by the emotional state of others, (b) assessing the reasons for the emotional state, and (c) having the ability to identify and incorporate other perspectives. Olderbak et al. [24] described theoretical and empirical support for the emotional specificity of empathy and presented an emotion-specific empathy questionnaire that assesses affective and cognitive empathy for six basic emotions.
Although we focus on the positive effects of empathy to improve society’s acceptance of AI agents in this study, empathy has been discussed from various aspects in the psychological literature, including its negative effects. Bloom tried to present a neutral view of empathy by introducing not only its positive influences but also its negative ones [25]. He claimed that empathy can act as a moral guide that leads humans to irrational decision-making and can be linked to violence and anger. He also claimed that we can overcome this problem by using conscious, deliberative reasoning and altruistic approaches.
Various questionnaires are used as measures of empathy; we examined two well-known ones. The Ten Item Personality Inventory (TIPI) is used to investigate human personality [26]. Since empathy may be biased by human personality, the TIPI could have been used as a questionnaire in this study’s experiment. The Interpersonal Reactivity Index (IRI), also used in the field of psychology, is used to investigate the characteristics of empathy [27]. Baron-Cohen and Wheelwright [28] reported a new self-report questionnaire, the Empathy Quotient (EQ), for use with adults of normal intelligence. Lawrence et al. [29] investigated the reliability and validity of the EQ and determined its factor structure. Their experimental results showed a moderate association between the EQ subscales and the IRI subscales. The IRI and EQ are widely used measures of empathy in psychology. In particular, studies of empathy using only the IRI are common in psychology and in the HRI and HAI fields, such as the studies by Konrath et al., Shaffer et al., and Perugia et al. Since previous results have shown that the EQ scale is related to the IRI scale, we used the IRI questionnaire, which has fewer questions and allows the four characteristics of empathy to be investigated. We also used the widely used IRI to allow comparison with previous and future research on empathy. In addition, we focused on investigating the impact of each characteristic of empathy, for which the use of the IRI was appropriate based on previous studies.
Empathy in engineering
Empathy has been studied not only in the field of psychology but also in the field of engineering. For example, empathy has received a lot of attention in the field of virtual reality. Bertrand et al. [30] proposed a theoretical analysis of various mechanisms of empathy practice to define a possible framework for the design of empathy training in virtual reality. Herrera et al. [31] compared the short- and long-term effects of traditional and VR viewpoint acquisition tasks. They also conducted experiments investigating the role of technological immersion with respect to different types of intermediaries. Curran et al. [32] investigated empathy by showing a video from the visual perspective of a person watching a virtual reality movie.
Empathy has also been studied in the online environment. Pfeil and Zaphiris [33] performed a qualitative content analysis on 400 messages from a bulletin board on depression to investigate how empathy was expressed and facilitated in online communication. Empathy is also attracting attention in product design, and Bennett and Rosner [34] discussed and investigated a human-centered design process (promise of empathy) in which designers try to understand the target user in order to inform technology development.
Chella et al. [35] discussed self-awareness and inner speech in humans and AI agents and provided an initial proposal for a cognitive architecture for implementing inner speech in robots. While the foundations of internal speech had been investigated primarily in the fields of psychology and philosophy, research in robotics had not yet addressed self-aware behavior. Therefore, after discussing self-awareness and inner speech in humans and AI agents, they proposed the above cognitive architecture. Their approach had an advantage in that a robot’s inner speech could be heard by an external observer, and introspective and self-regulated speech could be detected.
Empathy in human-robot interaction
In other fields studying empathy, humans empathize with artificial objects. In the fields of human-robot interaction (HRI) and human-agent interaction (HAI), empathy between humans and agents and robots is studied. The following research has been conducted in the field of HRI.
Beck et al. [36] studied the effects of changing a robot’s head position on the interpretation of emotional key poses, valence, arousal, and stances. The results supported the idea that body language is an appropriate medium for robots to express emotions. On the basis of the concept of cognitive developmental robotics, Asada [37] proposed “affective developmental robotics” to produce a truer form of artificial empathy. Artificial empathy refers to AI systems (such as companion robots and virtual agents) that can detect emotions and respond empathically. The design of artificial empathy is one of the most essential issues in social robotics, and empathic interaction with the public is necessary for introducing robots into society.
Dumouchel et al. [38] also summarized artificial empathy, discussing the relationship between humans and robots appearing in daily life. They suggested that the human-robot dynamics of emotional relationships need to be considered. Mollahosseini et al. [39] applied a deep neural network-based system for automatically recognizing facial expressions to the speech dialogue of social robots, extending the function beyond voice dialogue to integrate the user’s emotional state into the robot’s reactions.
Several studies on inner speech have been conducted in the HRI field. Pipitone and Chella [40] investigated the potential of considering the inner speech of robots that cooperate with human partners. A domestic situation requiring several functional and moral requirements was simulated in a simple cooperative task. Their study was a novel endeavor, as only a few papers have analyzed the role of inner speech in robots, and most of them were theoretical in nature. Geraci et al. [41] investigated whether a robot’s inner speech affects human trust and anthropomorphism when humans and robots collaborate. The results suggest that a robot’s speech affects human trust and that participants’ perceptions of trust and anthropomorphism toward the robot improved after interacting with it in the experiment.
Empathy in human-agent interaction
In addition, the following research has been done in the field of HAI. McQuiggan et al. [42] proposed a unified inductive framework that models empathy by choosing appropriate parallel or reactive empathic expressions. The framework was used to facilitate empathic behavior suited to run-time situations. Leite et al. [43] conducted a long-term study in an elementary school to present and evaluate an empathy model for social robots aimed at interactions with children over a long period of time. They measured children’s perceptions of social presence, engagement, and social support.
Chen and Wang [44] hypothesized that empathy and anti-empathy were closely related to a creature’s inertial impression of coexistence and competition within a group, and they established a unified model of empathy and anti-empathy. They also presented the Adaptive Empathetic Learner (AEL), an agent training method that enables evaluation and learning procedures for emotional utilities in a multi-agent system. Perugia et al. [45] investigated which personality and empathy traits were related to facial mimicry between humans and artificial agents. They focused on the humanness and embodiment of agents and the influence that they have on human facial mimicry. As a result, mimicry was found to be affected by the embodiment that an agent has, but not by its humanness. It was also correlated with both individual traits indicating sociability and empathy and with traits favoring emotion recognition.
Paiva defined the relationship between humans and empathic agents, called empathy agents in HAI and HRI studies. As a definition of empathy between agents/robots and humans, Paiva characterizes empathy agents in two different ways: as targets of empathy and as empathizers with observers [46–48].
Self-disclosure in psychology
Self-disclosure has also been a focus of research in the field of psychology. Jourard [49] presented the Jourard Self-Disclosure Questionnaire (JSDQ), a self-disclosure classification and questionnaire, with attitudes, opinions, interests, study and work, personality, economy, and body listed as categories. Carpenter and Freese [50] measured the intimacy and internality of self-presentation, Derlega and Berg [51] focused on the association between responsiveness and self-disclosure, and Laurenceau et al. [52] suggested that both self-disclosure and partner responsiveness contribute to the experience of intimacy in interactions.
One line of research related to self-disclosure is the study of inner speech. Morin [53] reviewed past and current literature on the link between self-awareness and inner speech. Among multidimensional views of self-knowledge, he showed that inner speech accounts for half of the linkages between various elements and plays a fundamental role. Morin [54] studied inner speech further, considering that it creates psychological distance between the self and the mental events the self experiences, that it functions as a problem-solving device when the self poses a problem, and that it makes it possible to label internal aspects of the self that are otherwise difficult to recognize objectively. We emphasize that inner speech and imagined interactions (IIs) are not identical and differ in important ways. Therefore, although IIs and inner speech intersect, their overlap is quite limited, so it is possible to investigate one without the other.
Lockwood et al. [55] used self-reported measurements of empathy and apathy motivation in a large sample of healthy people to test whether more empathic people were more motivated. The actual self-disclosure reflected in interpersonal relationships has been investigated in only a few studies; Kreiner and Levi-Belz [56] therefore designed new objective and dynamic measurements to evaluate self-disclosure and stable self-disclosure characteristics. Oh Kruzic et al. [57] focused on how the face and upper-body nonverbal channels contribute individually and jointly via avatars in virtual environments. Lee et al. [58] found that including self-disclosure from chatbots when they interacted with humans improved participants’ perceptions of intimacy and enjoyment. Pan et al. [59] examined how exposure to online support-seeking posts containing different depths of self-disclosure (baseline, peripheral, core) affected the quality (person-centeredness and politeness) of participants’ supportive messages.
Materials and methods
Ethics statement
The protocol was approved by the ethics committee of the National Institute of Informatics (April 13, 2020, No. 1). All studies were carried out in accordance with the recommendations of the Ethical Guidelines for Medical and Health Research Involving Human Subjects provided by the Ministry of Education, Culture, Sports, Science and Technology and the Ministry of Health, Labour and Welfare in Japan. Written informed consent was provided by choosing one option on an online form: “I am indicating that I have read the information in the instructions for participating in this research. I consent to participate in this research.” All participants gave informed consent and were then debriefed about the experimental procedures.
Hypotheses
The purpose of this study was to investigate whether an empathy agent can elicit more human empathy when it performs self-disclosure related to a particular situation in an interaction with a human. In this experiment, three types of self-disclosure topics were prepared (work, hobby, and weather or local area), listed in order of relevance to the situation, a conversation about work. In addition, the agents’ appearances were human-like and robot-like. Meeting this objective is an important condition for humans and agents to cooperate in society. If our hypotheses are supported, this research can help develop agents that are more acceptable to humans.
Based on the above, we considered two hypotheses. Experiments were conducted to investigate these hypotheses.
- H1: Of the three types of self-disclosure from empathy agents (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), high-relevance self-disclosure facilitates empathy the most, and no self-disclosure suppresses empathy.
- H2: In interactions with agents, appearance factors have little impact on the promotion of empathy through self-disclosure.
In the social psychology literature, some papers have reported that the relevance of self-disclosure to the situation significantly influences others’ self-disclosure [60, 61]. Previous studies have also asserted the influence of self-disclosure on empathy [62, 63]. Because these studies pointed this out, and because we think an agent’s self-disclosure can be controlled through its relevance to the situation, we introduced the relevance of the agent’s self-disclosure as an independent variable in this work. As a first step in investigating the relevance of agent self-disclosure, we focused on a few representative levels, introducing high- and low-relevance self-disclosure along with no self-disclosure as a control condition.
There are also reasons for hypothesis H2, which we derived from the following previous studies. Riek et al. [64] investigated robot appearance and showed that there was no significant difference in empathy between humanoid robots and androids. The two types of appearance in our study, human-like and robot, would both be classified as human-like in Riek et al.’s study, and appearance was observed there to have little influence on empathy. Okanda et al. [13] investigated animistic tendencies and empathy with respect to robot appearance and showed that these tendencies may be very similar across three types of appearance: human, dog, and egg-shaped. On the basis of these studies, we also expected the influence of appearance on empathy through self-disclosure to be small. We therefore introduced H2 and tested it experimentally.
Experimental procedure
The experiments were conducted in an online environment; such online environments have already been used as an experimental method [65–67]. The flowchart of this experiment is shown in Fig 1. Participants performed two tasks, which we describe below.
In the first task, the participants were asked to read a simple abstract, prepared in advance in text format, so that they could understand their relationship with the agent. They were only to read it while imagining the agent. After they read the abstract, the empathy that they felt toward the agent was measured with a questionnaire survey. In this task, participants could not judge the agent’s appearance or self-disclosure.
The purpose of the first task was to show a simple abstract without self-disclosure so that the influence of self-disclosure alone could be isolated in the second task, with the influence of the factors measured as the difference in empathy between the two tasks. For this reason, participants also could not see the agent’s appearance in the first task: we wanted any change in empathy after the second task to reflect only the appearance and self-disclosure factors. This design reduced external factors as much as possible, so that the change in empathy before and after the second task could be attributed to the factors.
In the second task, a three-minute video based on the content of the first task was shown to the participants. The agent in the video spoke to the participants silently, through a text box; the videos had no audio because sound might affect the facilitation of empathy. In addition, gestures were performed at the same timing under all conditions. Participants interacted with the agent under one of a total of six conditions combining two factors: appearance (human, robot) and self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure). The no-self-disclosure condition served as the control. The content was identical except for the self-disclosure, making it possible to attribute differences in the promotion of empathy to the differences in self-disclosure. Afterward, as with the first task, the empathy felt toward the agent was measured with a questionnaire survey. After completing all the tasks, participants wrote their impressions of the experiment in a free-form description.
Thus, the independent variables were self-disclosure (high-relevance, low-relevance, and no self-disclosure), the agent’s appearance (human, robot), and before/after stimulation (before, after the video). The dependent variables were human empathy and the human empathic response.
The experiments used a three-factor mixed design. There were two between-participant factors, appearance and self-disclosure, and the within-participant factor was the empathy value before/after the video stimulus, used to measure the change in empathy. The number of levels was two for appearance (human, robot), three for self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and two for stimulation (before, after). Although there were 12 conditions in total, each participant joined only 1 of 6 groups because the stimulation factor was within participants.
Participants
We used Yahoo! Crowdsourcing to recruit participants, and we paid 70 yen (= 0.67 dollars US) to each participant as a reward. We created web pages for the experiments by using Google Forms, and we uploaded the video created for the experiment to YouTube and embedded it.
All participants understood Japanese. There were 1,011 participants in total. However, 32 participants gave inappropriate answers, and their data were excluded as erroneous, leaving a total of 979. Answers were judged inappropriate when the changes in the empathy values before/after the video were the same for all items or when only one item changed [68, 69]. We then assessed the reliability of the questionnaire with Cronbach’s α coefficient, which ranged from 0.7155 to 0.8201 across all conditions.
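The paper reports only the resulting α range, not the computation. As a minimal sketch (not the authors’ actual script), Cronbach’s α for one condition’s 12-item responses can be computed from the standard formula:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of questionnaire items
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```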
For the analysis, the first 153 participants under each of the six conditions, in order of participation, were analyzed. Therefore, the total number of participants used in the analysis was 918. The average age was 45.51 years (S.D. = 11.25), with a minimum of 15 years and a maximum of 86 years. There were 505 males and 413 females.
Questionnaire
In this study, we used a questionnaire related to empathy that has been used in previous psychological studies. To investigate the characteristics of empathy, we modified the Interpersonal Reactivity Index (IRI) into an index for anthropomorphic agents. The main modifications were changing the target’s name and changing the question text to the past tense. In addition, the IRI questionnaire was reduced to 12 items: items that were not appropriate for the experiment were deleted, and similar items were integrated. The same questionnaire was used for both tasks. Since both questionnaires were based on the IRI, the survey used a 5-point Likert scale (1: not applicable, 5: applicable).
The questionnaire used is shown in Table 1. Since Q4, Q9, and Q10 were reverse-scored items, their points were reversed during analysis. Q1 to Q6 related to affective empathy, and Q7 to Q12 related to cognitive empathy. Only the second task had one additional question, shown in Table 1 as BeQ. This item investigated the participants’ empathic response, and they answered it with either yes or no. Participants answered the questionnaire after completing the first task and again after the second task.
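The paper does not state whether item scores were summed or averaged; the sketch below assumes per-scale means, and the index constants simply encode the item groupings described above (Q4, Q9, Q10 reverse-scored on the 1-5 scale; Q1-Q6 affective; Q7-Q12 cognitive):

```python
import numpy as np

REVERSED = [3, 8, 9]       # zero-based indices of reverse-scored items Q4, Q9, Q10
AFFECTIVE = slice(0, 6)    # Q1-Q6
COGNITIVE = slice(6, 12)   # Q7-Q12

def score_iri(responses) -> dict:
    """Score the 12-item modified IRI; responses is (participants x 12), values 1-5."""
    r = np.asarray(responses, dtype=float).copy()
    r[:, REVERSED] = 6 - r[:, REVERSED]          # reverse-code on a 1-5 scale
    return {
        "empathy":   r.mean(axis=1),             # overall empathy (Q1-Q12)
        "affective": r[:, AFFECTIVE].mean(axis=1),
        "cognitive": r[:, COGNITIVE].mean(axis=1),
    }
```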
Agents’ appearance
In this experiment, two types of agent appearance were prepared. The agents were animated with MikuMikuDance (MMD) [https://sites.google.com/view/evpvp/], a software program for animating 3D character models.
Figs 2 and 3 show the robot-like and human-like appearances. The purpose of preparing two appearances was to test one of our hypotheses, namely that appearance factors do not affect the promotion of empathy through self-disclosure. Agent gestures included tilting the arms and neck left and right, and both agents moved at the same timing in the scenario. As for facial expressions, the human agent slightly moved its eyes and mouth, while the robot moved only its eyes.
Part where human-like agent and participants interacted.
Part where robot agent and participants interacted.
Agent’s self-disclosure
In the scenario, each participant chatted with the agent at a cafe during a lunch break, in the role of a colleague from the agent’s workplace. All scenarios started with common content, followed by content that included the agent’s self-disclosure under each condition, and ended with common content. A flowchart of the scenario is shown in Fig 4.
The common scenario involved a conversation about the nature of the agent’s job. Self-disclosure in this experiment referred to personal information (e.g., work, hobby) about the agent. Self-disclosure was classified into the three types above and, as shown in the figure, was defined in accordance with its relevance to the common scenario. Thus, self-disclosure about work was the most relevant. Stories about hobbies were less relevant because they involved self-disclosure unrelated to the common scenario. Finally, to unify the participants’ interaction time, the no-self-disclosure condition was defined as talk about the weather or the local area.
The agents’ self-disclosure was thus classified into three types: high-relevance, low-relevance, and no self-disclosure. All of the content spoken by the agent was confirmed to be neutral by sentiment analysis. The analysis was performed on the entire scenario in Python, and the scores ranged from 0.075 to 0.190. Since the scale ranges from -1 to 1, scores of 0.075 to 0.190 can be classified as neutral.
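The paper names only Python and the [-1, 1] scale, not the sentiment library (and the actual scenario text was Japanese). Purely as an illustrative stand-in, NLTK’s VADER analyzer also yields a compound score in [-1, 1], so a neutrality check of the script could look like this; the threshold and example lines are hypothetical:

```python
# Assumed stand-in analyzer: NLTK's VADER (requires a one-time
# nltk.download("vader_lexicon")); the paper's actual tool is unnamed.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def is_neutral(utterance: str, band: float = 0.2) -> bool:
    """Treat an utterance as neutral if its compound score stays near zero."""
    return abs(analyzer.polarity_scores(utterance)["compound"]) <= band

# Hypothetical lines standing in for the agent's script.
script = [
    "I usually spend my lunch break at this cafe.",
    "The mornings at work tend to be fairly busy.",
]
print(all(is_neutral(line) for line in script))  # True if every line is neutral
```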
The scenarios differed in the content of the agent’s self-disclosure. Since the agent spoke about its own work in the cafe scenario, the content in the high-relevance case was related to work. The low-relevance self-disclosure was related to the agent’s hobby. In the no-self-disclosure case, to match the video length, the agent spoke about trivial topics such as the weather and local area information. All of the videos were about 3 minutes long. All scenarios are described in the S2 File. A manipulation check was performed to ensure that the self-disclosure used in this study worked as expected.
Manipulation check: Relevance of self-disclosure and degree of self-disclosure
We created a common scenario and versions in which each type of self-disclosure was performed under this common scenario (see the S2 File). It was necessary to check that the types we prepared conveyed what we had intended. The manipulation check confirmed that the three types (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure) gave the intended impression in the cafe scenario we used.
As a manipulation check, we conducted an experiment to investigate the relationship between the scenarios and self-disclosure. In order to investigate whether the content of self-disclosure was perceived as self-disclosure by the participants, we also investigated the degree of self-disclosure. We asked the participants to read only the text of the common scenario (scenario 1) and then read the scenario for each self-disclosure condition (scenario 2). Afterward, they answered a questionnaire.
There were two questions (relevance of self-disclosure: Were the two scenarios related to each other?; degree of self-disclosure: How much self-disclosure was in scenario 2?). A 7-point Likert scale was used (1: unrelated / no self-disclosure, 7: related / high self-disclosure). This was a one-factor between-participants experiment with three levels of self-disclosure, analyzed with a one-way between-participants ANOVA.
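The authors ran their analyses in R; as a rough Python analogue under placeholder data, the one-way between-participants ANOVA and the Holm-corrected pairwise comparisons used throughout the paper can be sketched as follows (the group sizes are hypothetical):

```python
from itertools import combinations

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Placeholder 7-point ratings for the three self-disclosure levels.
groups = {
    "high": rng.integers(1, 8, size=52),
    "low":  rng.integers(1, 8, size=51),
    "none": rng.integers(1, 8, size=51),
}

# One-way between-participants ANOVA across the three levels.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise t-tests with Holm's correction for multiple comparisons.
pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for (a, b), p, rej in zip(pairs, p_holm, reject):
    print(f"{a} vs {b}: Holm-adjusted p = {p:.4f}, significant = {rej}")
```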
Manipulation check: Participants.
We used Yahoo! Crowdsourcing JAPAN to recruit participants, and we paid 32 yen (= 0.30 dollars US) to each participant as a reward. We created web pages for the experiment by using Google Forms, and we uploaded the videos created for the experiment to YouTube and embedded them. All participants understood Japanese. There were 154 participants in total. The average age was 44.16 years (S.D. = 9.559), with a minimum of 20 years and a maximum of 63 years. There were 115 males and 39 females.
Manipulation check: Result of analysis.
For multiple comparisons, we used Holm’s multiple comparison test to examine whether the results were significant. The self-disclosure factor was significant for each question [relevance of self-disclosure: F(2,151) = 76.70, p < 0.001; degree of self-disclosure: F(2,151) = 102.44, p < 0.001], so the main effects were investigated. Since the main effects were significant, the results of the multiple comparisons were examined.
The high-relevance self-disclosure condition was found to be the most relevant to the common scenario. The relevance of self-disclosure showed a significant difference between all three levels, and the conditions were related to the common scenario in the order high-relevance self-disclosure (mean = 5.920, S.D. = 1.426) > low-relevance self-disclosure (mean = 3.510, S.D. = 1.870) > no self-disclosure (mean = 2.132, S.D. = 1.359). From this, we found that the relevance levels of self-disclosure in the cafe scenario we designed were perceived as intended.
Next, the degree of self-disclosure under each condition was in the order high-relevance self-disclosure (mean = 5.820, S.D. = 0.9624) > low-relevance self-disclosure (mean = 5.010, S.D. = 0.9693) > no self-disclosure (mean = 2.585, S.D. = 1.550). The maximum rating was 7 points, and the averages for high- and low-relevance self-disclosure were above the midpoint of 4 points, while that for no self-disclosure was below it. From these results, the content of the high- and low-relevance conditions was judged to constitute self-disclosure.
A post-hoc analysis showed that the effect size was 1.008 for the relevance of self-disclosure and 1.165 for the degree of self-disclosure, with a power of 1.000 for both. These values indicate that the manipulations of both relevance to the scenario and degree of self-disclosure were effective. This manipulation check objectively confirmed the relevance and degree of self-disclosure in the cafe scenario we created, and our study was conducted using this scenario.
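The paper does not state which effect-size measure these values are on; assuming they are on Cohen’s f scale (an assumption on our part), the reported power of 1.000 is consistent with a post-hoc power calculation such as:

```python
from statsmodels.stats.power import FTestAnovaPower

# Post-hoc power for a one-way ANOVA: 3 groups, N = 154 total, alpha = .05.
# Treating the reported effect size (1.008) as Cohen's f is an assumption.
power = FTestAnovaPower().power(effect_size=1.008, nobs=154,
                                alpha=0.05, k_groups=3)
print(f"achieved power = {power:.3f}")  # effectively 1.000 at this effect size
```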
Analysis method
We employed an ANOVA for the three-factor mixed design. ANOVA has been used frequently in previous studies and was an appropriate method of analysis for the present study. The between-participant factors were appearance (two levels) and self-disclosure (three levels), and the within-participant factor, before/after video, had two levels.
From the participants’ questionnaires, we investigated how self-disclosure and appearance affected the promotion of empathy as factors that elicit human empathy. The empathy values aggregated in the first and second tasks were used as the dependent variable. For the empathic response, the yes/no answer was replaced with a 1/0 dummy variable, and a two-factor ANOVA was then performed. The statistical software R (ver. 4.1.0) was used for the ANOVAs and multiple comparisons in all analyses in this paper.
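The actual analyses were run in R (ver. 4.1.0). Purely as an illustration of the dummy-coding step, the two-factor between-participants ANOVA on the empathic response could be sketched in Python with statsmodels; the data frame below is a placeholder, not the real data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder long-format data: one row per participant, with the yes/no
# empathic response dummy-coded as 1/0 (the real study had 918 rows).
df = pd.DataFrame({
    "appearance": ["human", "robot"] * 6,
    "disclosure": ["high"] * 4 + ["low"] * 4 + ["none"] * 4,
    "response":   [1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
})

# Two-factor between-participants ANOVA with interaction.
model = ols("response ~ C(appearance) * C(disclosure)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```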
Results
Table 2 shows the results of the ANOVA for the 12-item questionnaire, together with the results for affective empathy (Q1–Q6) and cognitive empathy (Q7–Q12), the two subscales of empathy. The results reported here focus on the simple main effects where the interaction was significant. We also analyzed the empathic response, which was surveyed only after the video was watched. For multiple comparisons, we examined significant differences using Holm’s multiple comparison test.
A significant interaction was found between two factors, self-disclosure and before/after video, for each questionnaire. The results of the interaction are shown in Fig 5, along with the mean and S.D. for each condition. There was no significant interaction between the appearance factor and the self-disclosure factor under any condition. Below, when an interaction was found for an item, main effects are not discussed; when no interaction was found but a main effect was observed, the main effect is reported. We therefore investigated the simple main effects of the self-disclosure and before/after video factors. Table 3 shows the results of the multiple comparisons for the 12-item questionnaire.
Empathy value
The results for overall empathy (Q1–Q12) showed an interaction between the self-disclosure and before/after video factors. The main effects of both factors were also significant, but they are omitted because of the significant interaction between them.
In the multiple comparison, the simple main effect of the self-disclosure factor after watching the video showed significant differences between all three levels, as shown in Fig 6. In addition, the simple main effect of before/after video was significant under every self-disclosure condition. These results suggest that self-disclosure facilitated empathy when its relevance was high and that empathy was suppressed under both low-relevance and no self-disclosure, with no self-disclosure suppressing it the most. The post-hoc analysis confirmed that self-disclosure had an effect on empathy.
Affective empathy
Similarly, the results for affective empathy (Q1–Q6) showed an interaction between the self-disclosure and before/after video factors. The main effects of both factors were also significant, but they are omitted because of the significant interaction between them.
In the multiple comparison, the simple main effect of the self-disclosure factor after watching the video showed significant differences between all three levels, as shown in Fig 7. However, the simple main effect of before/after video was not significant under the high-relevance condition. Under the other self-disclosure conditions, a significant difference was observed before/after the video, with affective empathy being suppressed. This suggests that affective empathy was not suppressed only in the case of high-relevance self-disclosure. The post-hoc analyses confirmed that self-disclosure had an effect on affective empathy.
Cognitive empathy
In addition, the results for cognitive empathy (Q7–Q12) showed an interaction between the self-disclosure and before/after video factors. The main effect of the self-disclosure factor was also significant but is omitted because of the significant interaction.
In the multiple comparison, the simple main effect of the self-disclosure factor after watching the video showed significant differences between all three levels, as shown in Fig 8. However, the simple main effect of before/after video was not significant under the low-relevance condition. Under the other conditions, self-disclosure facilitated cognitive empathy when the relevance was high, and no self-disclosure suppressed cognitive empathy. This suggests that high-relevance self-disclosure facilitated cognitive empathy and that no self-disclosure suppressed it. The post-hoc analyses confirmed that self-disclosure had an effect on cognitive empathy.
Empathic response
Finally, the results for the empathic response showed no interaction between the appearance and self-disclosure factors. The results of the ANOVA on empathic responses are shown in Table 2 (BeQ). The main effects of the appearance factor and the self-disclosure factor were both significant.
The main effect of the appearance factor showed a significant difference between the two levels (human-like: mean = 0.7691, S.D. = 0.4219; robot: mean = 0.7124, S.D. = 0.4531). In the multiple comparison, the main effect of the self-disclosure factor showed a significant difference between high-relevance and no self-disclosure, as shown in Fig 9 (high-relevance: mean = 0.7974, S.D. = 0.4026; low-relevance: mean = 0.7288, S.D. = 0.4453; no self-disclosure: mean = 0.6961, S.D. = 0.4607).
Discussion
Supporting hypotheses
One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents, an idea supported by several previous studies [20, 21]. Human empathy for agents is a necessary component for agents to be used in society. When agents can take an appropriate approach to human empathy, humans and agents can build a trusting relationship.
In this study, experiments were conducted to investigate the conditions necessary for humans to empathize with anthropomorphic agents. We focused on agent appearance and agent self-disclosure as factors that influence human empathy. The purpose was to investigate whether an empathy agent can elicit more human empathy when it makes a self-disclosure related to a particular situation during an interaction. For this purpose, we formulated two hypotheses and analyzed the data obtained from the experiments.
The results supported H1 in that, among the three types of self-disclosure from the empathy agent (high-relevance, low-relevance, no self-disclosure), high-relevance self-disclosure was the most likely to facilitate empathy, and no self-disclosure suppressed it. We had hypothesized that empathy would be facilitated only by high-relevance self-disclosure and suppressed only by no self-disclosure; however, low-relevance self-disclosure also suppressed empathy.
Next, the experiments supported H2 in that, in interactions with agents, appearance factors had little impact on the promotion of empathy through self-disclosure. Appearance and self-disclosure have both been studied in relation to human empathy. There was a reason for the appearances we chose: for both agents, we adopted a body structure similar to that of humans, on the premise that the agents were doing the same work as humans. It should be noted that no interaction between appearance and self-disclosure was observed even though different models were used for appearance while the self-disclosure conditions were held constant.
Comparison with previous studies
Shaffer et al. [9] asked participants to imagine a pregnant woman smoking and to write down the reasons why she smokes. The results showed that participants empathized with the pregnant woman more after writing than before. In our study, instead of writing, the agents self-disclosed, and instead of a pregnant woman, we investigated the impact of the agent’s appearance (human-like or robot) on human empathy. The results showed a trend similar to that of previous studies, as human empathy was affected by the agent’s self-disclosure regardless of appearance.
Pan et al. [59] investigated human impressions of machines at different levels of machine self-disclosure; the machines’ self-disclosure affected the person-centeredness and politeness of the humans’ responses. Our study showed a similar trend in that a promotion of empathy was observed. Similarly, Lee et al. [58] found that participants’ self-disclosure, intimacy, and enjoyment improved when a chatbot self-disclosed. In our study, agent self-disclosure promoted human empathy when it was relevant to the scenario, showing that the impact on empathy changed with the relevance of self-disclosure.
Riek et al. [64] investigated the appearance that a robot needs when interacting with a human. The results showed that a robot was more likely to receive empathy if it had a human-like shape. In our study, human empathy was similarly elicited by both the human-like and robot appearances, presumably because the robot also had a body structure similar to that of a human. However, since this experiment used two human-like appearances, the influence of anthropomorphism must be considered: anthropomorphism can affect interaction with humans and can affect trust [41, 70, 71].
In our study, it was found that the self-disclosure factor promotes human empathy toward an anthropomorphic agent. In addition, an interaction was not observed between the appearance factor and the self-disclosure factor. We believe that this study will be an important one that separates appearance and self-disclosure as separate factors.
Although this study focused on promoting empathy, the results also confirmed that empathy can be suppressed, a result that has not been reported in other studies. By properly using empathy depending on the situation, we think that the impact on humans can be adjusted for anthropomorphic agents introduced into human society in the future.
Empathic response
We discuss the behavioral results as an analysis of the empathic response. In the experiments, participants played the role of observer for the target empathy agent, and observers responded empathetically to the information available from the target. The choice of whether to lend money was treated as empathic response behavior. The analysis of this behavior after watching the video found a significant difference. However, unlike in the other analyses, the effect size was small, so the effect on empathic responses was small in these experiments. We think the behavior was unaffected because the interaction time between the empathy agent and the participants was as short as three minutes.
Limitations
A limitation of this study is that participants interacted with the empathy agents only by watching a video. The current results are limited because the sense of distance differs from that with agents actually introduced into society. We will proceed with research in an environment where participants and anthropomorphic agents actually interact with each other.
In addition, the appearance factor in this study was roughly divided into only two types. If an appearance suited to each situation were prepared, an interaction between the appearance factor and the self-disclosure factor might be observed. However, appearance preferences vary greatly from person to person, and humans themselves do not all share the same appearance. Therefore, anthropomorphic agents should not be judged by a fixed appearance.
How to design the social relationship between a human and an agent to control human empathy toward the agent, and how to design an agent’s decision-making about self-disclosure, are important research issues that we would like to study in the future. Designing an agent that can change its disclosure of personal information depending on its counterpart, as humans do, is also an important issue. If agents could judge when to disclose appropriate information depending on their counterparts, they might be viewed more favorably by humans than if their self-disclosure were pre-designed.
However, since this study investigated the impact on empathy when an agent self-discloses, it was not necessary to design the agent itself to decide when to disclose personal information to humans. Another issue is the relationship between personality and self-disclosure: although this study dealt with agent self-disclosure, it is also necessary to investigate the impact on human empathy of designing an agent’s “personality” characteristics. As future research, we will develop a model that allows agents to make judgments about self-disclosure and investigate whether it is more effective than pre-designed self-disclosure. At the same time, we will focus on the personality of an agent and investigate its relationship with the agent’s self-disclosure.
Conclusion
To address the problem of agents not being accepted by humans, we hope that having humans empathize with agents will allow agents to be used more in human society in the future. This study is an example of how empathy can be facilitated between humans and agents. The experiment used a three-factor mixed design: the two between-participant factors were appearance and self-disclosure, and the within-participant factor was the empathy value before/after the video, used to measure the change in empathy. The number of levels was two for appearance (human, robot), three for self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and two for stimulation (before, after). The dependent variable was the participants’ empathy. As a result, we found that the appearance factor had no main effect and that self-disclosure highly relevant to the scenario facilitated human empathy with a statistically significant difference. We also found that no self-disclosure suppressed empathy and that self-disclosure is important for modulating empathy toward the other party. These results support our hypotheses. Moreover, the empathic response was affected by the appearance and self-disclosure factors. This study is an important example of how human empathy can be directed toward artifacts: agents, which are increasingly used in human society, were found to gain empathy from humans through self-disclosure. In future research, we can develop empathy agents for various situations by considering cases in which a specific element of affective or cognitive empathy is strengthened or weakened.
References
- 1. Reeves B, Nass C. The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places. USA: Cambridge University Press; 1996.
- 2. Ryan M. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Science and Engineering Ethics. 2020;26(5):2749–2767. pmid:32524425
- 3. Hallamaa J, Kalliokoski T. AI Ethics as Applied Ethics. Frontiers in Computer Science. 2022;4.
- 4. Nomura T, Kanda T, Suzuki T. Experimental investigation into influence of negative attitudes toward robots on human-robot interaction. AI Soc. 2006;20:138–150.
- 5. Nomura T, Kanda T, Suzuki T, Kato K. Prediction of Human Behavior in Human–Robot Interaction Using Psychological Scales for Anxiety and Negative Attitudes Toward Robots. IEEE Transactions on Robotics. 2008;24(2):442–451.
- 6. Nomura T, Kanda T, Kidokoro H, Suehiro Y, Yamada S. Why do children abuse robots? Interaction Studies. 2016;17(3):347–369.
- 7. Tsumura T, Yamada S. Agents Facilitate One Category of Human Empathy through Task Difficulty. In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE Press; 2022. p. 22–28.
- 8. Konrath S, Falk E, Fuhrel-Forbis A, Liu M, Swain J, Tolman R, et al. Can Text Messages Increase Empathy and Prosocial Behavior? The Development and Initial Validation of Text to Connect. PLOS ONE. 2015;10(9):1–27. pmid:26356504
- 9. Shaffer VA, Bohanek J, Focella ES, Horstman H, Saffran L. Encouraging perspective taking: Using narrative writing to induce empathy for others engaging in negative health behaviors. PLOS ONE. 2019;14(10):1–16. pmid:31613906
- 10. Tahara S, Ikeda K, Hoashi K. Empathic Dialogue System Based on Emotions Extracted from Tweets. In: Proceedings of the 24th International Conference on Intelligent User Interfaces. IUI’19. New York, NY, USA: Association for Computing Machinery; 2019. p. 52–56.
- 11. Tisseron S, Tordo F, Baddoura R. Testing Empathy with Robots: A Model in Four Dimensions and Sixteen Items. International Journal of Social Robotics. 2015;7(1):97–102.
- 12. Yoshioka G, Takeuchi Y. Inferring Affective States by Involving Simple Robot Movements. In: Proceedings of the 3rd International Conference on Human-Agent Interaction. HAI’15. New York, NY, USA: Association for Computing Machinery; 2015. p. 73–78.
- 13. Okanda M, Taniguchi K, Itakura S. The Role of Animism Tendencies and Empathy in Adult Evaluations of Robot. In: Proceedings of the 7th International Conference on Human-Agent Interaction. HAI’19. New York, NY, USA: Association for Computing Machinery; 2019. p. 51–58.
- 14. O’Connell G, Christakou A, Haffey A, Chakrabarti B. The role of empathy in choosing rewards from another’s perspective. Frontiers in Human Neuroscience. 2013;7:174. pmid:23734112
- 15. Tan XZ, Vázquez M, Carter EJ, Morales CG, Steinfeld A. Inducing Bystander Interventions During Robot Abuse with Social Mechanisms. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. HRI’18. New York, NY, USA: Association for Computing Machinery; 2018. p. 169–177.
- 16. Richards D, Bilgin AA, Ranjbartabar H. Users’ Perceptions of Empathic Dialogue Cues: A Data-Driven Approach to Provide Tailored Empathy. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents. IVA’18. New York, NY, USA: Association for Computing Machinery; 2018. p. 35–42.
- 17. Stephan A. Empathy for Artificial Agents. International Journal of Social Robotics. 2015;7(1):111–116.
- 18. Hosseinpanah A, Krämer NC, Straßmann C. Empathy for Everyone? The Effect of Age When Evaluating a Virtual Agent. In: Proceedings of the 6th International Conference on Human-Agent Interaction. HAI’18. New York, NY, USA: Association for Computing Machinery; 2018. p. 184–190.
- 19. Giannopulu I, Etournaud A, Terada K, Velonaki M, Watanabe T. Ordered interpersonal synchronisation in ASD children via robots. Scientific Reports. 2020;10(1):17380. pmid:33060720
- 20. Gaesser B. Constructing Memory, Imagination, and Empathy: A Cognitive Neuroscience Perspective. Frontiers in Psychology. 2013;3:576. pmid:23440064
- 21. Klimecki OM, Mayer SV, Jusyte A, Scheeff J, Schönenberg M. Empathy promotes altruistic behavior in economic interactions. Scientific Reports. 2016;6(1):31961. pmid:27578563
- 22. Omdahl BL. Cognitive appraisal, emotion, and empathy. 1st ed. New York: Psychology Press; 1995.
- 23. Preston SD, de Waal FBM. Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences. 2002;25(1):1–20. pmid:12625087
- 24. Olderbak S, Sassenrath C, Keller J, Wilhelm O. An emotion-differentiated perspective on empathy with the emotion specific empathy questionnaire. Frontiers in Psychology. 2014;5:653. pmid:25071632
- 25. Bloom P. Against Empathy: The Case for Rational Compassion. HarperCollins; 2016.
- 26. Gosling SD, Rentfrow PJ, Swann WB. A very brief measure of the Big-Five personality domains. Journal of Research in Personality. 2003;37(6):504–528.
- 27. Davis MH. A multidimensional approach to individual differences in empathy. In: JSAS Catalog of Selected Documents in Psychology; 1980. p. 85.
- 28. Baron-Cohen S, Wheelwright S. The empathy quotient: an investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences. J Autism Dev Disord. 2004;34(2):163–175. pmid:15162935
- 29. Lawrence EJ, Shaw P, Baker D, Baron-Cohen S, David AS. Measuring empathy: reliability and validity of the Empathy Quotient. Psychological Medicine. 2004;34(5):911–920. pmid:15500311
- 30. Bertrand P, Guegan J, Robieux L, McCall CA, Zenasni F. Learning Empathy Through Virtual Reality: Multiple Strategies for Training Empathy-Related Abilities Using Body Ownership Illusions in Embodied Virtual Reality. Frontiers in Robotics and AI. 2018;5:26. pmid:33500913
- 31. Herrera F, Bailenson J, Weisz E, Ogle E, Zaki J. Building long-term empathy: A large-scale comparison of traditional and virtual reality perspective-taking. PLOS ONE. 2018;13(10):1–37. pmid:30332407
- 32. Curran MT, Gordon JR, Lin L, Sridhar PK, Chuang J. Understanding Digitally-Mediated Empathy: An Exploration of Visual, Narrative, and Biosensory Informational Cues. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI’19. New York, NY, USA: Association for Computing Machinery; 2019. p. 1–13.
- 33. Pfeil U, Zaphiris P. Patterns of Empathy in Online Communication. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI’07. New York, NY, USA: Association for Computing Machinery; 2007. p. 919–928.
- 34. Bennett CL, Rosner DK. The Promise of Empathy: Design, Disability, and Knowing the “Other”. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI’19. New York, NY, USA: Association for Computing Machinery; 2019. p. 1–13.
- 35. Chella A, Pipitone A, Morin A, Racy F. Developing Self-Awareness in Robots via Inner Speech. Frontiers in Robotics and AI. 2020;7. pmid:33501185
- 36. Beck A, Cañamero L, Bard KA. Towards an Affect Space for robots to display emotional body language. In: 19th International Symposium in Robot and Human Interactive Communication; 2010. p. 464–469.
- 37. Asada M. Towards Artificial Empathy. International Journal of Social Robotics. 2015;7(1):19–33.
- 38. Dumouchel P, Damiano L, DeBevoise M. Living with Robots. Harvard University Press; 2017.
- 39. Mollahosseini A, Abdollahi H, Mahoor MH. Studying Effects of Incorporating Automated Affect Perception with Spoken Dialog in Social Robots. In: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN); 2018. p. 783–789.
- 40. Pipitone A, Chella A. What robots want? Hearing the inner voice of a robot. iScience. 2021;24(4):102371. pmid:33997672
- 41. Geraci A, D’Amico A, Seidita V, Pipitone A, Chella A. Robot’s Inner Speech Effects on Trust and Anthropomorphic Cues in Human-Robot Cooperation. In: Trust, Acceptance and Social Cues in Human-Robot Interaction – SCRITA 2021; 2021. p. 1–6.
- 42. McQuiggan SW, Robison JL, Phillips R, Lester JC. Modeling Parallel and Reactive Empathy in Virtual Agents: An Inductive Approach. In: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems—Volume 1. AAMAS’08. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems; 2008. p. 167–174.
- 43. Leite I, Castellano G, Pereira A, Martinho C, Paiva A. Empathic Robots for Long-term Interaction. International Journal of Social Robotics. 2014;6(3):329–341.
- 44. Chen J, Wang C. Reaching Cooperation Using Emerging Empathy and Counter-Empathy. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. AAMAS’19. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems; 2019. p. 746–753.
- 45. Perugia G, Paetzel M, Castellano G. On the Role of Personality and Empathy in Human-Human, Human-Agent, and Human-Robot Mimicry. In: Wagner AR, Feil-Seifer D, Haring KS, Rossi S, Williams T, He H, et al., editors. Social Robotics. Cham: Springer International Publishing; 2020. p. 120–131.
- 46. Paiva A, Dias J, Sobral D, Aylett R, Sobreperez P, Woods S, et al. Caring for Agents and Agents that Care: Building Empathic Relations with Synthetic Agents. In: Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004). vol. 2; 2004. p. 194–201.
- 47. Paiva A. Empathy in Social Agents. International Journal of Virtual Reality. 2011;10(1):1–4.
- 48. Paiva A, Leite I, Boukricha H, Wachsmuth I. Empathy in Virtual Agents and Robots: A Survey. ACM Trans Interact Intell Syst. 2017;7(3).
- 49. Jourard SM. Self-disclosure: An experimental analysis of the transparent self. John Wiley; 1971.
- 50. Carpenter JC, Freese JJ. Three Aspects of Self-Disclosure as They Relate to Quality of Adjustment. Journal of Personality Assessment. 1979;43(1):78–85. pmid:16367049
- 51. Derlega VJ, Berg JH. Responsiveness and Self-Disclosure. Boston, MA: Springer US; 1987.
- 52. Laurenceau JP, Barrett LF, Pietromonaco PR. Intimacy as an interpersonal process: the importance of self-disclosure, partner disclosure, and perceived partner responsiveness in interpersonal exchanges. J Pers Soc Psychol. 1998;74(5):1238–1251.
- 53. Morin A. Possible Links Between Self-Awareness and Inner Speech: Theoretical Background, Underlying Mechanisms, and Empirical Evidence. Journal of Consciousness Studies. 2005;12(4–5):115–134.
- 54. Morin A. When Inner Speech and Imagined Interactions Meet. Imagination, Cognition and Personality. 2020;39(4):374–385.
- 55. Lockwood PL, Ang YS, Husain M, Crockett MJ. Individual differences in empathy are associated with apathy-motivation. Scientific Reports. 2017;7(1):17293. pmid:29229968
- 56. Kreiner H, Levi-Belz Y. Self-Disclosure Here and Now: Combining Retrospective Perceived Assessment With Dynamic Behavioral Measures. Frontiers in Psychology. 2019;10:558. pmid:30984058
- 57. Oh Kruzic C, Kruzic D, Herrera F, Bailenson J. Facial expressions contribute more than body movements to conversational outcomes in avatar-mediated virtual environments. Scientific Reports. 2020;10(1):20626. pmid:33244081
- 58. Lee YC, Yamashita N, Huang Y, Fu W. “I Hear You, I Feel You”: Encouraging Deep Self-Disclosure through a Chatbot. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery; 2020. p. 1–12.
- 59. Pan W, Feng B, Wingate VS, Li S. What to Say When Seeking Support Online: A Comparison Among Different Levels of Self-Disclosure. Frontiers in Psychology. 2020;11. pmid:32581910
- 60. Cayanus JL, Martin MM. Teacher Self-Disclosure: Amount, Relevance, and Negativity. Communication Quarterly. 2008;56(3):325–341.
- 61. Masur PK. The Theory of Situational Privacy and Self-Disclosure. In: Situational Privacy and Self-Disclosure. Cham: Springer International Publishing; 2019. p. 131–182.
- 62. Brems C. Dimensionality of Empathy and Its Correlates. The Journal of Psychology. 1989;123(4):329–337.
- 63. Higashinaka R, Dohsaka K, Isozaki H. Effects of self-disclosure and empathy in human-computer dialogue. In: 2008 IEEE Spoken Language Technology Workshop; 2008. p. 109–112.
- 64. Riek LD, Rabinowitch TC, Chakrabarti B, Robinson P. Empathizing with robots: Fellow feeling along the anthropomorphic spectrum. In: 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops; 2009. p. 1–6.
- 65. Davis RN. Web-based administration of a personality questionnaire: Comparison with traditional methods. Behavior Research Methods, Instruments, & Computers. 1999;31:572–577. pmid:10633976
- 66. Crump MJC, McDonnell JV, Gureckis TM. Evaluating Amazon’s Mechanical Turk as a Tool for Experimental Behavioral Research. PLOS ONE. 2013;8(3):1–18.
- 67. Okamura K, Yamada S. Adaptive trust calibration for human-AI collaboration. PLOS ONE. 2020;15(2):1–20. pmid:32084201
- 68. Schonlau M, Toepoel V. Straightlining in Web survey panels over time. Survey Research Methods. 2015;9(2):125–137.
- 69. Leiner DJ. Too Fast, too Straight, too Weird: Non-Reactive Indicators for Meaningless Data in Internet Surveys. Survey Research Methods. 2019;13(3):229–248.
- 70. Damiano L, Dumouchel P. Anthropomorphism in Human-Robot Co-evolution. Frontiers in Psychology. 2018;9. pmid:29632507
- 71. Geraci A, D’Amico A, Pipitone A, Seidita V, Chella A. Automation Inner Speech as an Anthropomorphic Feature Affecting Human Trust: Current Issues and Future Directions. Frontiers in Robotics and AI. 2021;8. pmid:33969001