Human-Autonomy Teaming with a Supportive Situation Awareness Model

Coordination abilities are required for artificial agents with a high level of autonomy (also called autonomy agents) to perform collaborative work with humans in human-autonomy teaming (HAT). This study focuses on developing an autonomy agent's coordination abilities for a HAT context in which team members have superior-subordinate relations, using collaborative driving as a case study. The key challenge in such a HAT context is that the agent, as the subordinate, lacks the cognitive ability to coordinate with its superior. To meet this challenge, this study addresses the following critical issues: (i) how to model the autonomy agent's situation awareness (SA), (ii) how to enable the agent to self-explain its behaviours, and (iii) how to enable the agent to effectively help its superior maintain their SA. This study makes four novel proposals: (i) a situation awareness modelling framework for teaming situations, (ii) a new self-explanation method based on the autonomy agent's artificial situation awareness states (ASAS), (iii) a transfer learning approach with a directive offline augmentation technique that enables the autonomy agent to monitor its human superior's distraction using on-board camera-captured data, and (iv) a time-constraint-driven transparency model of the agent's SA in collaborative driving. These proposals were evaluated using two typical collaborative-driving scenarios, namely passing traffic lights and overtaking vehicles.
The evaluation results show that (i) the autonomy agent's situation awareness modelling framework for teaming situations can guide the modelling and identification of situation awareness requirements, (ii) the self-explanation method outperforms the existing process-based method in terms of reducing the search space when generating explanations, (iii) the new transfer learning approach can effectively cope with the constraints imposed by geometrical relations when classifying human driver distraction, and (iv) the time-constraint-driven transparency model of the agent's SA can control the visibility of the agent's behaviours and their explanations. These proposals are significant for enhancing coordination performance in HAT, calibrating the human member's trust in their autonomy agent partner, and avoiding human driver distraction.