Transparency Mechanisms in HRI: Improving an observer's understanding of social robots

Abstract: During an interaction between a robot and its user, a robot may sometimes do things that the user finds unintuitive. This often happens because the user does not understand the robot's intent, state, or policy well enough. In such cases, users would benefit if the robot had the ability to reveal this hidden information, a property called transparency. Transparency is also desirable because it helps robots comply with ethical guidelines, makes interactions more robust, and increases users' trust in the robot. Here, we investigate how robots can be made transparent, and our first step is a literature review of the area. Based on the review, we suggest using the robot's available modalities and the information content to be conveyed as features for identifying suitable technical approaches (frameworks) for transparency, which we identify and categorize. In addition, we use these features to break transparency down into more manageable pieces, which we call types of transparency, and we find that situatedness, i.e., whether the interaction takes place in a physical or virtual space, changes the effect of the robot's communications. We then narrow our attention to legibility, a type of transparency that uses movement to communicate the robot's intent. We investigate when to use which legibility framework and propose a novel approach to benchmark them. Leveraging these findings, we then propose our own machine-learning-based legibility framework, which is general enough to imitate several existing legibility frameworks and which can learn a user's expectations from data.
