Yael Edan, Industrial Engineering and Management, Ben-Gurion University of the Negev
Suna Bensch, Computing Science, Umeå University
Thomas Hellström, Computing Science, Umeå University
This work aims to develop novel techniques for understandable robots through a framework that combines learning and natural language processing. We intend to develop a method that enables a robot to learn appropriate utterances during interaction in a collaborative task, thereby helping the human build a mental model of the robot. Furthermore, the robot will explain its actions to the user. We premise that the explanations will be grounded in three questions: what needs to be explained, when it should be explained, and why the robot is taking a particular action. Around these three questions, we will develop a model for explanations that addresses clarity, the pattern in which explanations are communicated to the user, and the justification for a particular choice. The three questions will also provide a basis for defining different levels of understanding (corresponding to levels of automation). In addition, the study will combine verbal and non-verbal communication, increasing the modalities available to the communicating agents.
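To make the explanation model concrete, the following is a minimal Python sketch, assuming the three questions map onto fields of a simple data structure and that understanding is discretized into a few levels; all names (Explanation, UnderstandingLevel) and the granularity of the levels are illustrative assumptions, not the proposal's final design.

from dataclasses import dataclass
from enum import IntEnum


class UnderstandingLevel(IntEnum):
    """Assumed discrete levels of understanding, loosely analogous to
    levels of automation; the actual granularity is an open design choice."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2


@dataclass
class Explanation:
    """Illustrative container tying the three questions to the proposed
    explanation attributes (field names are hypothetical)."""
    what: str                   # content to be explained -> clarity
    when: str                   # timing/pattern of communication to the user
    why: str                    # justification for the robot's chosen action
    level: UnderstandingLevel   # targeted level of understanding


example = Explanation(
    what="I am picking up the red cup.",
    when="before acting",
    why="it is the object you pointed at",
    level=UnderstandingLevel.LOW,
)
print(example)

The point of the structure is only that each generated explanation carries answers to all three questions plus a target level, so the learning component described below can vary them independently.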
In human-robot collaboration, a lack of prior information about the robot's actions negatively affects user perception. By helping the human build an accurate mental model of the robot, we will increase the understandability of the robot's actions, leading to improved human-robot collaboration.
Specifically, we propose a framework in which the robot's model of the human's mental model, maintained in the robot's state of mind (Mr), acts as the state. A reward function is derived from the human's response. The robot generates an explanation of its task according to the current state, i.e., its estimate of the human's mental model. If the user's response reveals a mismatch with the current estimate, the framework (implemented in the robotic system) transitions to the next state and generates another explanation at a different level of complexity. In this way, the framework incorporates a reinforcement learning algorithm to generate the optimal natural language utterance for better understanding in human-robot collaboration.
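As a concrete illustration, below is a minimal sketch of such a learning loop, assuming tabular Q-learning, a small discrete set of mental-model states, and discrete explanation complexities; the function simulated_human_response is a hypothetical stand-in for the reward signal that would, in practice, be derived from the actual user's response.

import random
from collections import defaultdict

# Assumed discretizations, for illustration only: states are levels of the
# human's mental model (Mr); actions are explanation complexity levels.
STATES = range(3)   # 0 = poor, 1 = partial, 2 = good understanding
ACTIONS = range(3)  # 0 = brief, 1 = moderate, 2 = detailed explanation

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
q_table = defaultdict(float)  # (state, action) -> estimated value


def simulated_human_response(state: int, action: int) -> tuple[int, int]:
    """Hypothetical stand-in for the human's reaction: returns a reward and
    the next mental-model state. In the proposed framework, this signal
    would come from the user's actual (verbal/non-verbal) response."""
    # Toy assumption: complexity matched to the current level of
    # misunderstanding improves the mental model.
    if action == 2 - state:
        return 1, min(state + 1, 2)  # understanding improves
    return -1, state                 # mismatch: state unchanged


def choose_action(state: int) -> int:
    """Epsilon-greedy selection over explanation complexities."""
    if random.random() < EPSILON:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: q_table[(state, a)])


for episode in range(500):
    state = 0  # each interaction starts with a poor mental model
    for _ in range(10):  # at most 10 explanation exchanges per episode
        action = choose_action(state)
        reward, next_state = simulated_human_response(state, action)
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)]
        )
        state = next_state
        if state == 2:  # mental model is accurate; stop explaining
            break

# Inspect the learned policy: preferred explanation complexity per state.
for s in STATES:
    best = max(ACTIONS, key=lambda a: q_table[(s, a)])
    print(f"state {s}: explanation complexity {best}")

In the actual system, the state would be inferred from the user's multimodal responses rather than returned by a simulator, and the action set would range over richer natural language utterances than three complexity levels.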
Evaluation will be conducted through a series of user studies performed on different robotic platforms and tasks. The key performance indicators will include satisfaction, trust, curiosity, fluency, and the 'goodness' of the explanation, among others, assessed through both objective and subjective measures. We also aim to define and evaluate the quality of understanding.
The proposed work paves the way toward robot understandability for non-professional users and bystanders by combining learning and natural language processing. This is expected to increase trust in, and acceptance of, robots that collaborate with humans, which is especially important for these populations.