An Ethically-Guided Domain-Independent Model of Computational Emotions

Publication Type:
Thesis
Issue Date:
2020
Advances in artificial intelligence research have supported the development of intelligent autonomous agents. Such intelligent agents, like social robots, are already appearing in public places, homes and offices. Unlike robots intended for mechanical work in factories, social robots should not only be proficient in capabilities such as vision and speech, but also be endowed with other human skills in order to facilitate a sound relationship with their human counterparts. Emotion is a distinguishing human feature that plays a significant role in social communication, because the ability to express emotions enhances the social exchange between two individuals. As such, artificial agents employed in social settings should also exhibit adequate emotional and behavioural abilities in order to be easily adopted by people. A critical aspect to consider when developing models of artificial emotions for autonomous intelligent agents is the likely impact that the emotional interaction can have on the human counterparts. For example, an emotional robot that shows an angry expression along with a loud voice may scare a young child more than a non-emotional robot that simply denies a request. Indeed, most modern societies consider a strong emotional reaction towards a young child to be unacceptable, and even unethical. How can a robot select a socially acceptable emotional state to express while interacting with people? I answer this question by establishing an association between emotion theories and ethical theories, a connection that has largely been ignored in the existing literature. A regulatory mechanism for artificial agents inspired by ethical theories is a viable way to ensure that the emotional and behavioural responses of the agent are acceptable in a given social context.
As such, an intelligent agent with emotion generation capability can establish social acceptance if its emotions are regulated by an ethical reasoning mechanism. To validate this statement, in this work I provide a novel computational model of emotion for artificial agents, EEGS (short for Ethical Emotion Generation System), and evaluate it by comparing the emotional responses of the model with emotion data collected from human participants. Experimental results support the claim that an ethical reasoning mechanism can indeed help an artificial agent reach a socially acceptable emotional state.