lose their lives, but on the complete avoidance of the accident. Indeed, perhaps the most serious objection to these dilemmas is that they assume one party will survive and another will be killed, based on criteria which ignore the fact that all people are equal. In the simplest case a comparison is made between groups of people of different sizes, but many scenarios suggest making decisions based on the age, gender or social class of the people involved. Moreover, if the decision-making of autonomous vehicles required personal data to be taken into account, an additional problem of privacy and personal data protection would arise, as a vehicle would require access to all such personal data (Holstein and Dodig-Crnkovic, 2018).
Even if the driverless dilemma could be solved, another factor would render it ineffective: there is not yet an established infrastructure that allows autonomous vehicles to function properly. In a smart city an autonomous vehicle would be able to obtain detailed information about its environment and choose the course of action that maximizes the benefit and/or minimizes the damage; until all cities become smart cities, however, autonomous vehicles in traffic will have to interact with human drivers. In the current mixed environment of vehicles (smart and conventional) and locations (with and without smart infrastructure), the decision-making of the autonomous vehicle cannot be well-founded, because the available data are insufficient. Therefore, the inequality problem has even more aspects than it would if smart cities were already established (Holstein and Dodig-Crnkovic, 2018).
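To make the role of missing data concrete, the following is a minimal illustrative sketch, not drawn from the cited works, of the damage-minimizing choice described above; the names (Maneuver, expected_harm, choose_maneuver) and the way unknown estimates are penalized are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Maneuver:
    name: str
    # Estimated harm to occupants and to other road users; None means the
    # vehicle lacks the data to estimate it (e.g. no smart-city infrastructure).
    harm_to_passengers: Optional[float]
    harm_to_others: Optional[float]


def expected_harm(m: Maneuver, unknown_penalty: float = 1.0) -> float:
    """Total estimated harm, treating missing estimates pessimistically.

    The unknown_penalty is an assumed placeholder: once data are missing,
    the estimate is no longer well-founded, which is the point made above.
    """
    parts = [m.harm_to_passengers, m.harm_to_others]
    return sum(p if p is not None else unknown_penalty for p in parts)


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # "Minimize the damage": pick the option with the lowest estimated harm.
    return min(options, key=expected_harm)


if __name__ == "__main__":
    options = [
        Maneuver("brake in lane", harm_to_passengers=0.2, harm_to_others=0.4),
        # Swerving in a non-smart environment: harm to others is unknown,
        # so the choice depends heavily on the assumed penalty.
        Maneuver("swerve", harm_to_passengers=0.1, harm_to_others=None),
    ]
    print(choose_maneuver(options).name)
```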
In any case, these thought experiments are not really intended to examine every aspect of a road accident, but to focus only on the ethical aspects, in order to investigate which extreme behaviours of a vehicle would be accepted by the general public. This goal is best achieved if the dilemmas are formulated simply, even if that makes them less realistic. It should be borne in mind that the majority of future buyers of autonomous vehicles are not experts in artificial intelligence or moral philosophy. It is therefore important to find a means of communication between scientists and the general public, which makes the simplicity of these thought experiments a positive element. In addition, the dilemmas manage to draw the public's attention to the ethics of autonomous vehicles, which is desirable, since progress in a field can only take place if a corresponding interest exists (De Freitas et al., 2020).
2.2  Responses to the “Driverless Dilemma”
The ethics of autonomous vehicles has attracted the attention of many researchers, who seek to define how such a vehicle should be designed. At the theoretical level, this subject has been approached by, among others, Shariff et al. (2017) and Bissell et al. (2018).
The study by Liu et al. (2019) shows that, although the consequences of crashes involving an autonomous vehicle and a conventional vehicle were identical, the crash involving the autonomous vehicle was perceived as more severe, regardless of whether it was caused by the autonomous vehicle or by others, and regardless of whether it resulted in an injury or a fatality. The research by De Freitas and Cikara (2020) revealed more negative reactions towards the manufacturer of an autonomous vehicle when the vehicle caused harm deliberately.
According to the study by Gao et al. (2020), most participants wanted to minimize the total number of people who would be injured in a road accident. It was also concluded that most drivers consider not only their own safety but also the safety of pedestrians, as they chose to hit an obstacle rather than pedestrians. Choosing a course with obstacles in order to protect a pedestrian can also be regarded as a way of minimizing the overall damage caused. Bonnefon et al. (2016) have likewise noted that participants strongly agreed it would be more moral for autonomous vehicles to sacrifice their own passengers when this sacrifice would minimize the number of casualties on the road. However, the same participants showed an inclination to ride in autonomous vehicles that would protect them at all costs. According to Liu and Liu (2021), participants perceived more benefits from selfish autonomous vehicles, which protect the passenger rather than the pedestrian, showing a higher intention to use them and a greater willingness to pay extra money for them.
The results of the research by Tripat (2020) showed that, due to the shift in accountability, autonomous vehicles also seem to have shifted people's moral principles towards self-interest. In the case of an autonomous vehicle, the human driver has limited control over the vehicle's actions, so responsibility for any harmful consequences can be attributed to the autonomous vehicle. As a result, it is possible for passengers to ensure their own protection while exempting themselves from the moral cost of causing harm to a pedestrian. Therefore, it is expected that most people would be