potentially important, plausible, and thought-provoking scenarios in a concrete, easily understood way, as well as to test some of the challenges that emerge in a simplified user study (Dunne and Raby, 2013; Engelberg and Seffah, 2002).
Thus, in line with the “How Might We” design
method, rapid ideation sessions were first conducted
within the group, asking the question: How might a
robot help people by sending secret messages? Brain-
storming ideas were recorded without judgement,
then blended and grouped into short written narra-
tive scenarios. The aim was to capture a wide range
of ideas in a small number of potentially high-value,
plausible, uncertain, and different scenarios. Feasibility from the perspective of current technology was not used as a filter, given the speculative approach; i.e., our initial concern was not how robot capabilities could be implemented (such as the rich recognition capabilities that would be required) but what could be useful. This resulted in eight initial scenarios, which were then analyzed, yielding insight into some core themes: the kinds of problems for which it would be useful to design solutions, commonalities, venues, interactive roles, cues to detect, and actions a robot could take, as well as some unique challenges.
Furthermore, two example scenarios were selected for two kinds of robot: a socially assistive humanoid robot (SAR) and an autonomous vehicle (AV). The former is an indoor robot with a focus on social communication, especially for healthcare, whereas the latter is an outdoor robot with a focus on locomotion and transport; both offer exciting possibilities for improving the quality of life of the people with whom they interact. The example scenarios are presented below:
SAR. “Howdy!” called Alice, the cleaning robot at
the care center, as she entered Charlie’s room. Her
voice trailed off as she took in the odd scene in front
of her: Charlie appeared agitated, and she could see
bruises on his arms. The room was cold from an open
window, which had probably been opened hours ago,
and yesterday's drinks had not been cleared away; there was no sign that anything had been provided
for breakfast. Closing the window, Alice noticed a
spike of "worry" in her emotion module, directed toward Charlie, who she knew had a troubled relationship with Oliver, his main caregiver. The other day, Charlie had acted disruptively due to his late-stage dementia. In response, Oliver had expressed frustration and threatened punishment; with his history
of crime, substance abuse, unemployment, and men-
tal health problems, this might not be merely an idle
threat. But, there might be some explanation that Al-
ice didn’t know about, and she didn’t have permission
to contact authorities, since a false report could have
highly negative consequences. Sending a digital mes-
sage would also probably not be wise, since the mat-
ter was urgent, and Oliver and the rest of the group
had access to her logs. When she headed over to
the reception, there was Oliver talking to Bob. Alice
wanted to let Bob know as soon as possible without
alerting Oliver, so she surreptitiously waved to Bob behind Oliver's back to get his attention and flashed a message on her display asking him to discreetly check in on Charlie as soon as possible. Bob nodded almost imperceptibly, and Alice went back
to cleaning. With Bob’s help, Alice was sure that
Charlie would be okay.
AV. “Hey!” KITTEN, a large truck AV, inadvertently
exclaimed. “Are you watching the road?” Her driver,
Oscar, ignored KITTEN, speeding erratically down
the crowded street near the old center of the city with
its tourist area, market, station, and school, which
were not on his regular route. KITTEN was worried about Oscar, who had increasingly been showing signs of radicalization (meeting with extremists such as Mallory) and instability, ignoring various warnings related to medication non-adherence, depression, and sleep deprivation. But she wasn't completely sure if Oscar was currently dangerous or impaired, as his driving was always on the aggressive side; and, KITTEN didn't want to go to the police: if she were wrong, Oscar might lose his job. Or, even if she were right and the police didn't believe her, Oscar could get angry and try to bypass her security features, or find a different car altogether, and then there would be no way to help anymore. At the next intersection, KITTEN decided to use steganography to send a quick "orange" warning to nearby protective infrastructure, comprising a monitoring system and tire spikes that could be raised to prevent vehicles from crashing into crowds of pedestrians, while planning to execute an emergency brake and call for help if absolutely necessary.
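To make the covert channel in the AV scenario more concrete, the sketch below illustrates one simple way a short, length-prefixed warning such as "orange" could be hidden in the least significant bits (LSB) of an image frame that a vehicle shares with roadside infrastructure. This is only a minimal Python sketch under our assumptions: the LSB scheme, the 2-byte length prefix, and the names embed_warning and extract_warning are illustrative, not an implementation proposed in this paper.

import numpy as np

def embed_warning(frame: np.ndarray, message: str) -> np.ndarray:
    # Hide a UTF-8 message, prefixed by a 2-byte big-endian length,
    # in the least significant bits of a uint8 image frame.
    payload = message.encode("utf-8")
    data = len(payload).to_bytes(2, "big") + payload
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = frame.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("frame too small for message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_warning(frame: np.ndarray) -> str:
    # Read the 2-byte length prefix, then the payload, from the LSBs.
    flat = frame.ravel()
    length = int.from_bytes(np.packbits(flat[:16] & 1).tobytes(), "big")
    bits = flat[16:16 + 8 * length] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

# Example: hide an "orange" alert in a synthetic 64x64 grayscale frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
assert extract_warning(embed_warning(frame, "orange")) == "orange"

In practice, such a channel would also need to withstand compression and noise, and to remain difficult for an adversary like Oscar to detect; the sketch only conveys the basic embed-and-extract idea.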
The scenarios suggested that RS might be useful
when two conditions hold:
• There is a High Probability of Danger. If the robot is not completely sure about the threat, or has not been given the right to assess such a threat because the consequences of a mistake could be extremely harmful, the robot might require a second opinion, possibly through escalation to a human-in-the-loop. In particular, this could occur when there is a possibility of an accident or crime: traffic accidents are globally the leading killer of people aged 5-29 years, with millions killed and injured annually (www.who.int/publications/i/item/9789241565684), and crimes are estimated to cost