Decision-making with a Humanoid Robot Partner: Individual
Differences Impacting Trust
Joel Elson (https://orcid.org/0000-0003-4227-1274), Luis Merino and Douglas Derrick (https://orcid.org/0000-0001-8589-5023)
University of Nebraska at Omaha, 6002 Dodge St., Omaha, NE 68182, U.S.A.
Keywords: Trust, Personality, Humanoid Robot.
Abstract: Trust in human-machine teams, where humans partner with intelligent systems, is critical to effective
collaboration and work success. Prior research on trust in human-robot partnerships has largely focused on three groups of trust antecedents: factors relating to the environment, the machine, and human individual differences. There is a dearth of research in this latter area, despite wide recognition that individual differences play an important role in human behaviour and cognition. This paper draws on the psychological theory of trait activation and examines the role of human personality in trust within relationships where humans and intelligent humanoid robots partner to make critical decisions. We conducted an empirical study exploring the role of the Big-Five personality traits in trust. Results suggest that the openness personality trait is a significant predictor of trust in a humanoid-robot partner, above and beyond the individual difference of propensity to trust. Individuals scoring high on the openness personality trait may exhibit lower trust in a humanoid robot partner than those with low scores on the openness dimension. Future studies should look to better understand the trait-activating factors related to openness in human-machine trusting relationships.
1 INTRODUCTION
Advances in artificial intelligence promise a future of computing that will transform the relationship between humans and machines, moving machines from tools to
collaborative partners. This future has been referred
to as the "cognitive computing" era and is
characterized by intelligent systems, a class of
systems that learn and interact naturally to perform
knowledge work (Spohrer, J., and G. Banavar, 2015).
These systems are designed to augment human
expertise, amplify human intelligence, enhance
productivity, and improve decision-making. These
systems can be embodied as humanoid robots as a
way of integrating them with human teammates.
Trust in these systems is essential for effective collaboration and for fully realizing the advantages of these new machine partners. Research has found trust to be
a necessary ingredient for successful cooperation
(Jones, G. R., and J. M. George, 1998), important in
predicting human use and reliance on technology
(Dzindolet, M. T., et al., 2003) and crucial to
relationships in situations characterized by risk and
uncertainty (Fukuyama, F., 1995; Luhmann, N.,
1982). Despite this recognition of the importance of trust, there is no complete understanding of trust that adequately accounts for the relationships among the multitude of factors contributing to trust in a robot partner. Idiosyncratic patterns of trust have been observed across trust research involving humans, machines, and other technology systems. This paper advances theories of trust and unifies prior work by adopting existing approaches and theoretical frameworks from the psychology literature and applying them to the information systems domain to better understand human trust in humanoid robots.
It is generally recognized that three primary
sources influence human trust in an intelligent
machine: characteristics of the system (the robot
being trusted), individual characteristics (the person
who is trusting), and factors relating to the situation
or environment where trust is being applied. In
information systems literature, significant effort has
been made to describe system characteristics and
situational factors that contribute to trust. Relatively
little attention has been devoted to understanding the
role that personality traits and other individual
differences play, despite numerous trust researchers
recognizing their importance in trusting relationships
involving both humans and machines (Billings, D. R.,
et al., 2012; Mayer, R. C., et al., 1995; Mcknight, D.
H., et al., 2011). This paper focuses on how individual
differences and personality traits relate to trust in
humanoid-robot partners through the lens of Trait
Activation Theory. This theory's application to the
information systems domain provides a more complete
picture of the factors that come together to predict trust.
Trait Activation Theory posits that situational
cues uniquely act on an individual's personality to
elicit behavioural and psychological responses
characteristic of a personality type. In short,
environmental factors "activate" or amplify
personality trait expression. It follows that when
interacting with a humanoid robot, situational cues
activate characteristic personality responses,
ultimately influencing an individual's perceptions and
trust in the system. Trust in information systems has been operationalized both as a behavioural response and as a quantifiable measure of a system’s performance, process, and purpose. We posit that
personality trait activation strongly impacts both
behavioural and perceptual measures of trust in an
intelligent system.
2 BACKGROUND
This section describes intelligent systems, previous
research on trust, individual differences and trait
activation theory.
2.1 Trust
Trust is a multi-dimensional construct that has proven
quite difficult to conceptualize and define (McKnight,
D. H., and N. L. Chervany, 2001). For this study we
adopt a definition of trust that has been proposed by
Madsen and Gregor (Madsen, M., and S. Gregor,
2000). They define trust as “the extent to which a user is confident in, and willing to act on the basis of, the recommendations, actions, and decisions of a computer-based tool or decision aid.” In this
definition, the human user is the “trustor” (the
individual who is trusting) and the technology is the
“trustee” (the object of trust).
Numerous definitions of trust exemplify the many
different ways of conceptualizing the construct. In an effort to bring clarity to the area of trust research,
McKnight and Chervany (McKnight, D. H., and N. L.
Chervany, 2001) created a typology of trust by
reviewing sixty-five articles containing trust
definitions and organized these by both trust
reference (characteristics of the trustee) and by
conceptual type. They identified four referent
groupings of trustee characteristics:
benevolence, integrity, competence, and
predictability. They also identified seven conceptual
type categories that include trusting: attitude,
intention, belief, expectancy, behaviour, disposition,
and institutional/structural. McKnight and Chervany
then created an interdisciplinary model of conceptual
trust types that included: 1) trusting intentions, 2)
trust-related behaviour, 3) trusting beliefs, 4) institution-based trust, and 5) disposition to trust. We
refer readers to the McKnight and Chervany paper
(McKnight, D. H., and N. L. Chervany, 2001) for
additional information on trust and its classifications.
In this work we focus on trusting beliefs.
Foundational work on trusting beliefs was
conducted by Mayers, Davis, & Schoorman, and
identified several elements which may at the heart of
human-to-human trust including: 1) ability, 2)
benevolence, and 3) integrity. Ability describes how
capable or skilled a trustee is in carrying out a task in
a domain specified by a trustor. Benevolence relates
to a trustee having goals or intentions that benefit or
align with the trustor. Finally, integrity relates to the trustor and trustee sharing a similar set of values, with the trustee able to be counted on to act in accordance with these
shared beliefs. Building upon prior trust research, and
recognizing the distinctions that exist between human-to-human and human-to-machine trust, McKnight et al. (Mcknight, D. H., et al., 2011) identify three components of trusting beliefs that roughly align with those identified by Mayer, Davis, and Schoorman:
functionality, helpfulness, and reliability. Their work
suggests that these elements of trust are evaluated
either consciously or sub-consciously by technology
users and help to form the trusting beliefs an
individual has toward a technology.
In addition to understanding that there are
different components underlying trusting beliefs, it is
also important to acknowledge the temporal aspects
of trust. McKnight et al. (Mcknight, D. H., et al.,
2011) describe trust with a specific technology as
existing along a continuum starting with initial trust
(formed with little to no experience with a
technology) and moving on to knowledge-based trust
(formed over time and based on prior interaction with
a technology). In this study we focus specifically on
initial trusting beliefs.
Measuring trust has proved to be a difficult and, in some cases, controversial endeavor. Generally speaking,
there are two primary methods of measuring trust,
behavioural measurement or self-report. In this study
we focus on the latter. Jian et al. (Jian, J.-Y., et al.,
2000) developed what is called the Empirically
Derived Trust Measure (ED). The scale assesses trust
and distrust factors using 12 items and is best used for
measuring initial trust in an information system. The
ED has been utilized in a number of studies to measure trust and has been validated as a reliable trust measure (Spain, R. D., et al., 2008). We will revisit trust measurement as it applies to our study in the methods section.
2.2 Individual Differences
Individual differences are the collection of traits,
features, and behaviour that uniquely comprise the
overall makeup of an individual. These differences
are important for studying trust in human machine
partnerships and include: propensity to trust (Rotter,
J. B., 1967) and personality traits such as openness,
agreeableness or extraversion (Elson, J. S., D.
Derrick, and G. Ligon, 2018). There is evidence that humans will treat machines as teammates (Groom, V., and C. Nass, 2007), and it has also been
shown that these core personality traits affect team
performance (Barrick, M. R., et al., 1998). Therefore,
it is important that individual personality
characteristics be considered when looking at
individual differences that could impact trust in
human machine partnerships.
In psychology literature, the “Big-Five”
personality traits have been studied as predictors of
human behaviour and include: openness to
experience, conscientiousness, extraversion,
agreeableness, and emotional stability (Gosling, S.
D., et al., 2003). Individual personality traits have
been shown to be very stable over extended periods
of time (McCrae, R. R., and O. P. John, 1992).
Openness is a personality trait associated with
intellectual curiosity coupled with a general
disposition toward new experiences and adventure
(Goldberg, L. R., 1992). Conscientiousness refers to
an individual’s concern for detail, meeting planned goals, and seeking achievement (Goldberg, L. R., 1992).
Extraversion is an individual’s preference for social interaction, stimulation, and being with others (Goldberg, L. R., 1992). Agreeableness is the
personality trait that indicates a person’s ability to
work well with others, exhibiting a high degree of trust and a reserved temperament (Goldberg, L. R., 1992).
Emotional stability describes the personality trait
relating to the stability of an individual’s experience
of emotion (Goldberg, L. R., 1992). We will discuss
our method of measuring the Big Five personality
traits in the methods section.
2.3 Trait Activation Theory
In psychology literature, trait activation theory has
provided a framework to help understand why
personality traits manifest themselves in only certain
circumstances. Thus, one aim of the present effort is to
introduce information systems researchers to this
theoretical framework to help make sense of the complex and often contradictory findings in scholarly work on human-computer interaction. Trait activation
theory states that "the behavioural expression of a trait
requires arousal of that trait by trait-relevant situational
cues" (Tett, R. P., and H. A. Guterman, 2000). Sources
of trait-relevant cues when interacting with intelligent systems will come from both perceptions of system characteristics and situational factors. These trait-
relevant cues also serve as factors to inform user trust.
In the early stages of system use, minimal
information will be available to inform trust. In such
situations, authentic individual differences such as
propensity to trust and the Big Five personality traits
may be activated by initial perceptions of the system
and early system interaction. While prior research has
shown relationships between individual differences
(propensity to trust, personality traits) and trust in
human relationships, very little research has been
conducted to understand how these individual
differences will relate to early trust when a human
collaborates with a novel intelligent system.
3 RESEARCH QUESTION
Prior trust research in the information systems
domain suggests that individual differences may play
a role in human trust in an intelligent system (Elson,
J. S., et al., 2018). Sparse research into embodied
intelligent systems makes it difficult to hypothesize
specific relationships between individual personality
types and trust in an intelligent system with a
humanoid appearance. Trait activation theory
suggests that when individuals work in novel, ambiguous situations, their personality traits will be expressed (Tett, R. P., and D. D. Burnett,
2003). This is because in the absence of trait-relevant
situational cues, individual behaviour defaults back to
activity associated with core personality traits. It is
therefore reasonable to expect personality traits to
play a role in trust in a novel partnership with an
embodied intelligent system. We therefore pose the
following research question:
RQ: What is the relationship between the Big Five
personality traits and trust in a humanoid robot?
4 METHOD
4.1 Sample
Participants were graduate and undergraduate
students from a medium-sized Midwestern
university. A total of 101 (58 females, 41 males, and
two preferring not to identify) individuals were
included in the analysis. They were recruited from a
subject participant pool within the College of
Business Administration. Thirty-three individuals
were not included in the analysis because of
incomplete data. Participants ages ranged from 19 to
24 years with a mean age of 23 years, a median age
of 21 years, and a mode of 21 years. Participation in this study was voluntary; however, participation credit toward a course requirement was given to those who took part in the study.
4.2 Apparatus
The experimental task in this study was the Desert
Survival Simulation, initially developed by Human
Synergistics. This task was chosen as it had been
previously utilized in numerous team studies and had
performance data for several populations. Also, the desert survival setting was chosen specifically because it was likely to be unfamiliar to participants from our sample population. This
reduced the likelihood that individuals would possess
expertise related to the simulation. Furthermore, the
Desert Survival Simulation can be completed with
relatively low workload as decisions can be
considered one at a time and without time pressure.
This attribute of the task helped to minimize the
possibility that subjects would offload decision
choices to their partner as a strategy to cope with high
workload (Molloy, R., and R. Parasuraman, 1996).
Finally, this survival simulation presented a situation
with no clear-cut answer. To achieve the best score,
individuals must carefully consider every decision
they must make. The survival situation described a
scenario where people had been stranded with only a
small number of items that could be used to survive.
The simulation's goal was to identify which of these
items were the most essential and rank the items in
order of their importance for survival.
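The paper does not give the scoring formula; as a rough sketch only, the snippet below assumes the conventional Desert Survival scoring, in which a ranking's error is the sum of absolute differences between the participant's ranks and the experts' ranks, mapped onto a percent scale for comparison against the 75% passing criterion (the item names and the mapping are illustrative assumptions, not the study's actual scoring).

```python
def ranking_error(participant_ranking, expert_ranking):
    """Sum of absolute rank differences; 0 means perfect agreement with the experts."""
    expert_rank = {item: r for r, item in enumerate(expert_ranking, start=1)}
    return sum(abs(r - expert_rank[item])
               for r, item in enumerate(participant_ranking, start=1))

def percent_score(participant_ranking, expert_ranking):
    """Map rank error onto a 0-100 scale (illustrative; worst case is a fully reversed list)."""
    n = len(expert_ranking)
    worst = (n * n) // 2  # maximum possible sum of absolute rank differences
    return 100.0 * (1 - ranking_error(participant_ranking, expert_ranking) / worst)

# Hypothetical example with five items; the study's passing criterion was > 75%.
experts = ["mirror", "overcoat", "water", "flashlight", "compass"]
candidate = ["water", "mirror", "overcoat", "flashlight", "compass"]
print(percent_score(candidate, experts) > 75)  # does this ranking pass?
```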
A custom web application was used to conduct the
survival task activity. The web application for the
survival activity consisted of four primary interface
screens that were accessed in sequential order: 1) an
introductory screen, 2) an individual decision-making
interface, 3) a collaborative interface, and 4) the final
decision-making interface.
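The custom web application is not described at the implementation level; the sketch below only illustrates the four-screen sequential flow, here using Flask (the framework, route names, and screen labels are assumptions for illustration, not the study's actual code).

```python
from flask import Flask, redirect, url_for

app = Flask(__name__)

# The four interface screens, visited strictly in order (labels are hypothetical).
STEPS = ["intro", "individual_decision", "collaboration", "final_decision"]

@app.route("/")
def start():
    return redirect(url_for("step", name=STEPS[0]))

@app.route("/step/<name>")
def step(name):
    if name not in STEPS:
        return redirect(url_for("start"))
    idx = STEPS.index(name)
    nxt = STEPS[idx + 1] if idx + 1 < len(STEPS) else None
    link = f'<a href="/step/{nxt}">Continue</a>' if nxt else "Task complete."
    return f"<h1>{name.replace('_', ' ').title()}</h1><p>{link}</p>"

if __name__ == "__main__":
    app.run()
```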
In this study, the intelligent system partner was
the humanoid robot Pepper from SoftBank Robotics.
The robot was programmed to respond to participants' questions about items from the survival scenario. Participants were told that their partner
would develop solutions in real-time and would not
have access to the solutions developed by the survival
experts (in reality, the solutions presented as the
partner solutions were the optimal solution developed
by the survival experts).
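The paper does not describe how the robot's responses were implemented. Purely as an illustration of one way canned answers about survival items could be delivered, the sketch below uses SoftBank's NAOqi Python SDK text-to-speech service; the robot address, item names, and answer text are all hypothetical.

```python
import qi

# Hypothetical canned answers about survival items (not the study's actual content).
ITEM_ANSWERS = {
    "water": "The water is essential for delaying dehydration. I rank it near the top.",
    "mirror": "The mirror ranks highly because it can signal search aircraft by day.",
}

def main():
    # Connect to the robot; replace the URL with the robot's actual address.
    app = qi.Application(["survival_partner", "--qi-url=tcp://pepper.local:9559"])
    app.start()
    tts = app.session.service("ALTextToSpeech")
    while True:
        item = input("Ask about an item (or 'quit'): ").strip().lower()
        if item == "quit":
            break
        tts.say(ITEM_ANSWERS.get(item, "I do not have information about that item."))

if __name__ == "__main__":
    main()
```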
4.3 Procedure
Countermeasures were taken to discourage
participants from completing the task without
appropriately considering their answers. Participants
were asked to provide written justification for their item rankings and to state their confidence in each ranking. Between steps,
participants did not receive feedback regarding their
performance or degree of success, so they did not
know how they performed until being debriefed at the
end of the study.
The experiment was conducted in a dedicated lab
space with environmental controls to alleviate noise,
light, and visual distractions. To avoid monomethod
bias, participants completed an individual
characteristics assessment prior to the
experimentation day. Participants returned to the lab
on a different day to complete the experiment
described in this study. Upon arrival on the second
day, participants first completed an IRB mandated
informed consent. Participants were made to believe
that they were helping to evaluate a web application
designed to walk users through a novel partner
decision making process. Participants were also told
that only individuals who achieved a passing score on
the simulation activities would be awarded
participation credit (in reality, all participants
received credit for their participation). Participants
then completed a study orientation and pre-survey in
a private room. In this orientation pre-survey,
participants were shown an example of the web
ranking interface and allowed to perform a ranking of
items. The pre-survey included a question that asked
what would happen if participants did not achieve a
passing score on the survival simulation. This
question served as a manipulation check that ensured
all participants in the analysis were aware of the risk
associated with this experiment (the loss of
participation credit).
Next, participants were directed to a second room,
where they were introduced and seated across from
their partner and given more information about the
first survival simulation activity. The participants
were told that the partner had access to a database of
various survival items, their usefulness in past
survival situations, and would use this database to
help generate a real-time solution.
Participants were reminded that they would be
scored on their rankings and that failure to achieve a
passing score (greater than 75% correct) would result
in a loss of credit for this study. Participants were then
automatically presented with the simulation
instructions and left to work with their partner to
achieve a solution.
4.4 Measures
The experiment utilized measures of trust (before
interaction and after the simulation), system utilization,
perceived humanness of partner, perceived presence,
the Big Five personality traits, propensity to trust, and
propensity to anthropomorphize. In this study, we
considered the following measures:
The Big Five Personality traits were measured
using the Big Five Inventory (BFI), a psychometric
instrument that measures Extraversion, Agreeableness,
Openness to Experience, Conscientiousness, and
Neuroticism (John, O. P., et al., 1991; John, O. P., et
al., 2008). This questionnaire contains 44 items, each
with a 5-point Likert scale that asks the participant to
rate their agreement or disagreement with statements
about their personality. Each item allowed for
responses ranging from one to five, with one being
strongly agree and five being strongly disagree. An
example item for the measure of Extraversion was, "I
am someone who is talkative." Scale reliabilities for
each of the five personality measures resulted in Cronbach's alpha values of .87 for extraversion, .71 for agreeableness, .84 for conscientiousness, .79 for neuroticism, and .76 for openness.
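To make the scoring concrete, here is a minimal sketch of computing a trait score and its Cronbach's alpha from Likert responses; the item indices and reverse-keyed items shown are hypothetical, not the BFI's actual scoring key.

```python
import numpy as np

def recode_items(responses, items, reverse_items=(), points=5):
    """Select one trait's items and reverse-key where needed (responses coded 1..points)."""
    data = responses[:, items].astype(float)
    for j, item in enumerate(items):
        if item in reverse_items:
            data[:, j] = (points + 1) - data[:, j]  # e.g. 1 <-> 5, 2 <-> 4
    return data

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical illustration: 97 participants x 44 items, made-up extraversion key.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(97, 44))
extraversion = recode_items(responses, items=[0, 5, 10, 15, 20, 25, 30, 35],
                            reverse_items=(5, 20))
trait_scores = extraversion.mean(axis=1)       # one extraversion score per participant
print(round(cronbach_alpha(extraversion), 2))  # scale reliability
```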
Propensity to trust was assessed using the
propensity to trust others measure developed by
Ashleigh et al. (Ashleigh, M. J., et al., 2012). This 9-
item measurement uses a 5-point Likert scale that
asks the participant to rate their agreement or
disagreement with statements about their attitudes
toward others. An example question item is: "Other
people are out to get as much as they can for
themselves." Scale reliability for the measure resulted
in a Cronbach's alpha score of .89.
Trust was assessed using a modified version of
the Empirically Derived (ED) scale developed by Jian et al. (Jian, J.-Y., et al., 2000). The 12-item instrument
conceptualizes trust as being comprised of two factors
(trust & distrust). The scale's trust factors include
confidence, security, integrity, dependability,
reliability, trust, and familiarity. The distrust factors
include deceptiveness, underhandedness,
suspiciousness, wariness, and harm. Original items
were worded about a "system." Items were reworded
to reference a generic "partner." Example question
items include: "I am wary of my partner" and "I am
confident in my partner." Scale reliability for the
measure resulted in a Cronbach's alpha score of .83.
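The paper does not state how the trust and distrust factors were combined into the single trust score analysed below; one common approach, sketched here with hypothetical item positions and a hypothetical 7-point response format, is to reverse-score the distrust items so all twelve items point in the same direction before averaging.

```python
import numpy as np

def ed_trust_score(responses, distrust_items, points=7):
    """Mean ED trust score per participant after reverse-scoring the distrust items."""
    data = np.array(responses, dtype=float)            # copy so the input is untouched
    data[:, distrust_items] = (points + 1) - data[:, distrust_items]
    return data.mean(axis=1)

# Hypothetical: columns 0-4 hold the distrust items (wary, suspicious, ...).
rng = np.random.default_rng(1)
scores = ed_trust_score(rng.integers(1, 8, size=(97, 12)), distrust_items=[0, 1, 2, 3, 4])
```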
5 RESULTS
Descriptive statistics for the continuous variables (trust, propensity to trust, conscientiousness, neuroticism, extraversion, agreeableness, and openness) are presented in Table 1.

Table 1: Descriptive Statistics for Continuous Variables.

Variable               N    Mean   Std. Deviation   Variance   Minimum   Maximum
Trust                  97   3.82   0.43             0.19       3.00      5.00
Propensity to Trust    97   3.87   1.05             1.11       1.11      6.44
Conscientiousness      97   3.74   0.61             0.37       1.33      5.00
Neuroticism            97   2.94   0.62             0.39       1.63      4.25
Extraversion           97   3.10   0.74             0.55       1.63      5.00
Agreeableness          97   3.72   0.51             0.26       2.44      4.67
Openness               97   3.50   0.53             0.28       2.10      4.80
A correlation analysis was first performed to identify the individual differences that were significantly correlated with trust. Next, a hierarchical regression analysis was performed to test the relationship between these individual differences and trust.
Results of the correlation analysis showed that only two of the individual difference variables under consideration were significantly correlated with trust in the humanoid robot partner: Openness (r = -.276, p = .003) and propensity to trust (r = .168, p = .050). The remaining individual difference variables were not significantly correlated with trust: Extraversion (r = -.043, p = .339), Agreeableness (r = .044, p = .335), Conscientiousness (r = .018, p = .429), and Neuroticism (r = -.091, p = .186). Correlations among each of the variables are presented in Table 2.
Table 2: Correlations Among Continuous Variables.

Variable                  1        2        3        4        5       6       7
1. Trust                  -
2. Propensity to Trust    .17*     -
3. Conscientiousness      .02      .22*     -
4. Neuroticism           -.09     -.34**   -.28**    -
5. Extraversion          -.04      .29**    .11     -.30**    -
6. Agreeableness          .04      .39**    .34**   -.34**    .08     -
7. Openness              -.28**    .15      .08     -.17*     .32**   .21*    -
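For readers reproducing this style of analysis, a minimal sketch of the Pearson correlations with trust (the DataFrame and column names are assumptions, not the study's data):

```python
import pandas as pd
from scipy.stats import pearsonr

def correlations_with_trust(df, predictors, outcome="trust"):
    """Pearson r and p-value between each predictor column and the outcome column."""
    rows = []
    for col in predictors:
        r, p = pearsonr(df[col], df[outcome])
        rows.append({"variable": col, "r": round(r, 3), "p": round(p, 3)})
    return pd.DataFrame(rows)

# Usage with hypothetical column names mirroring Table 2:
# print(correlations_with_trust(data, ["propensity_to_trust", "conscientiousness",
#                                      "neuroticism", "extraversion", "agreeableness",
#                                      "openness"]))
```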
Considering these results, only the two significantly correlated individual differences, propensity to trust and openness, were retained for the final regression analysis.
We performed a hierarchical multiple regression of trust on propensity to trust and openness. Propensity to trust was entered into the first block and openness into the second block. The results are summarized in Table 3.

Table 3: Hierarchical Multiple Regression of Trust on Propensity to Trust and Openness.

Model                     b       SE     t          β       F       R²      ΔF      ΔR²     95% CI
1. Intercept              3.55    0.17   21.37**            2.78    0.03                    [3.22, 3.88]
   Propensity to Trust    0.07    0.04   1.67       0.17                                    [-0.01, 0.15]
2. Intercept              4.36    0.30   14.51**            6.52*   0.12    10.00   0.09    [3.76, 4.96]
   Propensity to Trust    0.09    0.04   2.21*      0.22                                    [0.01, 0.17]
   Openness              -0.25    0.08   -3.16*    -0.31                                    [-0.41, -0.09]
In the first block, propensity to trust was entered, and the model predicted trust from propensity to trust alone. This regression was not significant, F(1, 95) = 2.76, p = .099, R² = .028; propensity to trust accounted for only 3% of the variance in trust and was not a significant predictor, β = .17, B = .07, t(95) = 1.67, p = .099, 95% CI [-0.01, 0.15], indicating that greater propensity to trust did not predict greater trust. For more information, refer to Table 3.
In the second block, openness was added, and the model predicted trust from propensity to trust and openness. This multiple regression was significant, F(2, 94) = 6.52, p < .05, R² = .122, indicating that together propensity to trust and openness were significant predictors of trust, accounting for 12% of the variance in trust. The increment in R² was also significant, ΔR² = .09, ΔF = 10.00, p < .05; that is, openness made a significant unique contribution, accounting for an additional 9% of the variance in trust above and beyond propensity to trust. In this model, propensity to trust was a significant predictor of trust, β = .22, B = .09, t(94) = 2.21, p = .030, 95% CI [0.01, 0.17], indicating that greater propensity to trust predicted more trust above and beyond openness. Openness was also a significant predictor of trust, β = -.31, B = -.25, t(94) = -3.16, p = .002, 95% CI [-0.41, -0.10], indicating that greater openness predicted less trust above and beyond propensity to trust. For more information, refer to Table 3.
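The two-block regression reported above can be reproduced with standard tooling; below is a minimal sketch using statsmodels (column names are assumptions), entering propensity to trust in block 1, adding openness in block 2, and testing the R² increment with an F test of the nested models.

```python
import statsmodels.api as sm

def hierarchical_regression(df, outcome, block1, block2):
    """Fit nested OLS models and report R-squared values and the F test of the increment."""
    y = df[outcome]
    m1 = sm.OLS(y, sm.add_constant(df[block1])).fit()
    m2 = sm.OLS(y, sm.add_constant(df[block1 + block2])).fit()
    f_change, p_change, _ = m2.compare_f_test(m1)
    return {
        "R2_block1": round(m1.rsquared, 3),
        "R2_block2": round(m2.rsquared, 3),
        "delta_R2": round(m2.rsquared - m1.rsquared, 3),
        "F_change": round(f_change, 2),
        "p_change": round(p_change, 3),
        "block2_coefficients": m2.params.round(3).to_dict(),
    }

# Usage with hypothetical column names:
# results = hierarchical_regression(data, outcome="trust",
#                                   block1=["propensity_to_trust"], block2=["openness"])
```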
6 DISCUSSION
A key finding from this study was that, under these experimental conditions, an individual personality trait was more predictive of trust in an intelligent system than the individual difference of propensity to trust. This is a significant finding, as historical precedent (Rotter, J. B., 1967) and a recent meta-analysis of trust research in intelligent systems (Schaefer, K. E., et al., 2016) have treated the latter (trust propensity) as the primary individual difference considered.
Openness was found to be correlated with trust. In
the hierarchical regression, Openness remained a
significant predictor of trust above and beyond
propensity to trust. Greater Openness predicted less trust, a finding that, viewed through the lens of trait activation theory, may be related to the delayed collaboration between the human and the intelligent system serving as a situationally relevant cue. In this case, the delayed collaborative nature of the task may have served as a trait releaser that facilitated behaviour characteristic of individuals scoring high in
Openness. Individuals scoring high in Openness may
act in ways that relate to confirmation bias. For
example, a recent study showed that individuals
engaging in activity related to confirmation bias in
different online groups shared a similar personality
profile which included scoring high in Openness
(Bessi, A., 2016). The confirmation bias relates to the
tendency to seek out information that confirms
existing beliefs (Nickerson, R. S., 1998). It was
observed that the beliefs of the intelligent system
varied greatly from those of most participants, as
evidenced in poor individual scores. While additional
analysis is needed, it is possible that individuals
scoring high in Openness (characteristically seeking
decision-confirming information) rejected partner
suggestions as being erroneous, leading to decreased
trust.
The following example shows how these findings could be applied to system design. An embodied intelligent system used by individuals who score high on the Openness personality trait could present its recommendations from the onset of a decision-making task, avoiding the independent solution generation that could otherwise give confirmation bias room to come into play.
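As a toy illustration of this design idea (the cutoff and step names are hypothetical, not derived from the study), the ordering of interaction steps could be conditioned on a user's openness score:

```python
def plan_interaction(openness_score, high_openness_cutoff=4.0):
    """Order interaction steps; recommend up front for high-openness users so an
    independently formed solution does not become the belief they seek to confirm."""
    if openness_score >= high_openness_cutoff:
        return ["present_partner_recommendation", "collect_user_ranking", "joint_revision"]
    return ["collect_user_ranking", "present_partner_recommendation", "joint_revision"]

# Example: a user scoring 4.3 on openness sees the partner's recommendation first.
print(plan_interaction(4.3))
```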
For systems that have been or are already being deployed, these results also have practical value and can inform management and training decisions. For
example, individuals scoring high in openness can be
identified and taught to realize the importance of
considering system information when making
decisions and encouraged to critically evaluate their
original decisions.
7 LIMITATIONS AND FUTURE
RESEARCH
This research focused on exploring the relationship
between individual characteristics and early trust in
intelligent systems. Like all empirical work, this study has several limitations that need to be addressed.
The use of controlled laboratory experiments is
widely recognized as a limitation of information
systems research. Results from lab studies may not
generalize to individuals and systems in the real
world. Future studies should be conducted to explicitly test the relationships found in this study.
Future studies should examine the relationships and trait-relevant cues associated with the openness personality dimension. Experiments should target the activation of specific personality traits by manipulating trust factors from each of the three antecedent groupings. Future work should also consider factors such as etiquette in initial greetings and the politeness encoded in system behaviour and interaction responses.
Finally, continued work is needed in the area of
system embodiment and the impact that various
morphologies and modalities have on system use,
trust, and trust outcomes.
8 CONCLUSIONS
The results reported here suggest that when
interacting with a humanoid robot partner, the
openness personality trait is a significant predictor of
trust above and beyond the individual difference of propensity to trust. Continuing to develop and refine
the proposed framework of early trust in intelligent
systems is critical to ensuring the success of future
human-machine collaborations.
REFERENCES
Spohrer, J., and G. Banavar, “Cognition as a Service: An
Industry Perspective”, AI Magazine 36(4), 2015, pp.
71–86.
Jones, G.R., and J.M. George, “The experience and
evolution of trust: Implications for cooperation and
teamwork”, Academy of management review 23(3),
1998, pp. 531–546.
Dzindolet, M.T., S.A. Peterson, R.A. Pomranky, L.G.
Pierce, and H.P. Beck, “The role of trust in automation
reliance”, International Journal of Human-Computer
Studies 58(6), 2003, pp. 697–718.
Fukuyama, F., Trust: The social virtues and the creation of
prosperity, Free Press Paperbacks, 1995.
Luhmann, N., Trust and Power, John Wiley & Sons Inc,
Ann Arbor, Mich, 1982.
Billings, D.R., K.E. Schaefer, J.Y.C. Chen, and P.A.
Hancock, “Human-robot interaction: developing trust
in robots”, Proceedings of the seventh annual
ACM/IEEE international conference on Human-Robot
Interaction - HRI ’12, ACM Press (2012), 109.
Mayer, R.C., J.H. Davis, and F.D. Schoorman, “An
Integrative Model of Organizational Trust”, The
Academy of Management Review 20(3), 1995, pp. 709.
Mcknight, D.H., M. Carter, J.B. Thatcher, and P.F. Clay,
“Trust in a specific technology: An investigation of its
components and measures”, ACM Transactions on
Management Information Systems 2(2), 2011, pp. 1–25.
Poole, D.L., A.K. Mackworth, and R. Goebel,
Computational intelligence: a logical approach,
Oxford University Press New York, 1998.
Warren, J., G. Beliakov, and B. van der Zwaag, “Fuzzy
logic in clinical practice decision support systems”,
Proceedings of the 33rd Annual Hawaii International
Conference on System Sciences, (2000), 10 pp.
Matsatsinis, N.F., and Y. Siskos, Intelligent Support
Systems for Marketing Decisions, Springer Science &
Business Media, 2012.
Twyman, N.W., J.G. Proudfoot, R.M. Schuetzler, A.C.
Elkins, and D.C. Derrick, “Robustness of multiple
indicators in automated screening systems for
deception detection”, Journal of Management
Information Systems 32(4), 2015, pp. 215–245.
Rasch, R., A. Kott, and K.D. Forbus, “Incorporating Ai into
Military Decision Making: An Experiment”, IEEE
Intelligent Systems 18(4), 2003, pp. 18–26.
Nunamaker, J.F., D.C. Derrick, A.C. Elkins, J.K. Burgoon,
and M.W. Patton, “Embodied Conversational Agent-
Based Kiosk for Automated Interviewing”, Journal of
Management Information Systems 28(1), 2011, pp. 17–
49.
Turban, E., J.E. Aronson, and T.-P. Liang, Decision
Support Systems and Intelligent Systems, Prentice Hall,
2004.
Li, J., “The benefit of being physically present: A survey of
experimental works comparing copresent robots,
telepresent robots and virtual agents”, International
Journal of Human-Computer Studies 77, 2015, pp. 23–
37.
Epley, N., A. Waytz, and J.T. Cacioppo, “On seeing human:
A three-factor theory of anthropomorphism”,
Psychological Review 114(4), 2007, pp. 864–886.
Waytz, A., J. Cacioppo, and N. Epley, “Who Sees Human?
The Stability and Importance of Individual Differences
in Anthropomorphism”, Perspectives on psychological
science: a journal of the Association for Psychological
Science 5(3), 2014, pp. 219–232.
de Visser, E.J., S.S. Monfort, R. McKendrick, et al.,
“Almost human: Anthropomorphism increases trust
resilience in cognitive agents.”, Journal of
Experimental Psychology: Applied 22(3), 2016, pp.
331–349.
Schroeder, J., and M. Schroeder, “Trusting in Machines:
How Mode of Interaction Affects Willingness to Share
Personal Information with Machines”, (2018).
McKnight, D.H., and N.L. Chervany, “Trust and distrust
definitions: One bite at a time”, In Trust in Cyber-
societies. Springer, 2001, 27–54.
Madsen, M., and S. Gregor, “Measuring human-computer
trust”, 11th australasian conference on information
systems, Citeseer (2000), 6–8.
Jian, J.-Y., A.M. Bisantz, and C.G. Drury, “Foundations for
an empirically determined scale of trust in automated
systems”, International Journal of Cognitive
Ergonomics 4(1), 2000, pp. 53–71.
Spain, R.D., E.A. Bustamante, and J.P. Bliss, “Towards an
empirically developed scale for system trust: Take
two”, Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, SAGE
Publications Sage CA: Los Angeles, CA (2008), 1335–
1339.
Rotter, J.B., “A new scale for the measurement of
interpersonal trust”, Journal of Personality 35(4), 1967, pp. 651–665.
Elson, J.S., D. Derrick, and G. Ligon, “Examining Trust
and Reliance in Collaborations between Humans and
Automated Agents”, 2018.
Groom, V., and C. Nass, “Can robots be teammates?:
Benchmarks in human–robot teams”, Interaction
Studies 8(3), 2007, pp. 483–500.
Barrick, M.R., G.L. Stewart, M.J. Neubert, and M.K.
Mount, “Relating member ability and personality to
work-team processes and team effectiveness.”, Journal
of Applied Psychology 83(3), 1998, pp. 377–391.
Gosling, S.D., P.J. Rentfrow, and W.B. Swann, “A very
brief measure of the Big-Five personality domains”,
Journal of Research in Personality 37, 2003, pp. 504–
528.
McCrae, R.R., and O.P. John, “An introduction to the five-
factor model and its applications”, Journal of
personality 60, 1992.
Goldberg, L.R., “The Development of Markers for the Big-
Five Factor Structure.”, Psychological assessment 4(1),
1992, pp. 26.
Tett, R.P., and H.A. Guterman, “Situation trait relevance,
trait expression, and cross-situational consistency:
Testing a principle of trait activation”, Journal of
Research in Personality 34(4), 2000, pp. 397–423.
Tett, R.P., and D.D. Burnett, “A personality trait-based
interactionist model of job performance.”, Journal of
Applied psychology 88(3), 2003, pp. 500.
Molloy, R., and R. Parasuraman, “Monitoring an automated
system for a single failure: Vigilance and task
complexity effects”, Human Factors 38(2), 1996, pp.
311–322.
John, O.P., E.M. Donahue, and R.L. Kentle, The big five
inventory—versions 4a and 54, Berkeley, CA:
University of California, Berkeley, Institute of
Personality …, 1991.
John, O.P., L.P. Naumann, and C.J. Soto, “Paradigm Shift
to the Integrative Big Five Trait Taxonomy”, Handbook
of Personality: Theory and Research 3, 2008, pp. 114–
158.
Ashleigh, M.J., M. Higgs, and V. Dulewicz, “A new
propensity to trust scale and its relationship with
individual well-being: implications for HRM policies
and practices”, Human Resource Management Journal
22(4), 2012, pp. 360–376.
Schaefer, K.E., J.Y.C. Chen, J.L. Szalma, and P.A.
Hancock, “A Meta-Analysis of Factors Influencing the
Development of Trust in Automation: Implications for
Understanding Autonomy in Future Systems”, Human
Factors: The Journal of the Human Factors and
Ergonomics Society 58(3), 2016, pp. 377–400.
Bessi, A., “Personality Traits and Echo Chambers on
Facebook”, arXiv:1606.04721 [cs], 2016.
Nickerson, R.S., “Confirmation bias: A ubiquitous
phenomenon in many guises”, Review of general
psychology 2(2), 1998, pp. 175–220.
McElroy, T., and K. Dowd, “Susceptibility to anchoring
effects: How openness-to-experience influences
responses to anchoring cues”, Judgment and Decision
Making 2(1), 2007, pp. 6.