Human-robot systems facing ethical conflicts: A preliminary experimental protocol
- Publication Type:
- Conference Proceeding
- Citation:
- AAAI Workshop - Technical Report, 2015, WS-15-02, pp. 38-44
- Issue Date:
- 2015-01-01
Closed Access
Filename | Description | Size
---|---|---
10120-45894-1-PB.pdf | Published version | 2.66 MB
This item is closed access and not available.
© 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. This paper presents a preliminary experimental protocol aimed at assessing a robot operator's behavior when the robot appears to be equipped with moral decision-making capabilities. The protocol is derived from the trolley dilemma, a well-known decision-making paradigm. Participants, acting as operators of simulated aerial robots via a computer screen, are faced with impersonal moral dilemmas, i.e. deciding to crash a damaged robot on one of two inhabited areas, and with non-moral choices, i.e. deciding to crash a damaged robot on one of two uninhabited areas. In each situation, the robot has a default crash behavior, which is displayed to the participant, who must decide whether or not to follow it. Participants are equipped with fNIRS and eye-tracking sensors and answer a post-experimental questionnaire. As some of the behavioral and physiological results do not match the hypotheses we had set, we outline the features of the further experiments we are planning.