Material perception and action: The role of material properties in object handling
Abstract: This dissertation concerns the visual perception of material properties and their role in preparing for object handling. Before an object is touched or picked up, we usually estimate its size and shape from visual features to plan the grip size of our hand. Once we have touched the object, the grip size is adjusted according to the haptic feedback, and the object is handled safely. Similarly, we anticipate the grip force required to handle the object without slippage, based on its visual features and prior experience with similar objects. Previous studies on object handling have mostly examined object characteristics typical for object recognition, e.g., size, shape, and weight, but in recent years there has been a growing interest in characteristics more typical of the material an object is made from. Accordingly, in a series of studies we investigated the role of perceived material properties in decision-making and object handling, presenting both digitally rendered materials and real objects made of different materials to human subjects and a humanoid robot. Paper I is a reach-to-grasp study in which human subjects were examined using motion capture technology. Participants grasped and lifted paper cups that varied in appearance (i.e., matte vs. glossy) and weight. We were interested in both the temporal and spatial components of prehension, to examine the role of material properties in grip preparation and how visual features contribute to inferred hardness before haptic feedback becomes available. We found that the temporal and spatial components were not exclusively governed by the expected weight of the paper cups; glossiness and expected hardness played significant roles as well. Paper II, a follow-up on Paper I, investigated the grip force component of prehension using the same experimental stimuli as Paper I.
In a similar experimental setup, we used force sensors to examine the early grip force magnitudes applied by human subjects when grasping and lifting the same paper cups as in Paper I. We found that early grip force scaling was guided not only by object weight but also by the visual characteristics of the material (i.e., matte vs. glossy). Moreover, the results suggest that grip force scaling during the initial object lifts is guided by expected hardness, which is to some extent based on visual material properties. Paper III is a visual judgment task in which psychophysical measurements were used to examine how the material properties roughness and glossiness influence perceived bounce height and, consequently, perceived hardness. In a paired-comparison task, human subjects observed a ball bouncing on various surface planes and judged its bounce height. We investigated which combination of surface properties, i.e., roughness and glossiness, makes a surface plane appear bounceable. The results demonstrate that rough surface planes are believed to afford higher bounce heights than smooth ones. Interestingly, adding glossy properties to both rough and smooth surface planes reduced the judged difference, as if glossy surface planes are believed to afford higher bounce heights irrespective of how smooth or rough the underlying surface is. This suggests that perceived bounce height reflects not only the physical elements of the bounce but also the visual material properties of the surfaces the ball bounces on. In Paper IV we investigated the development of material knowledge using a robotic system: a humanoid robot explored real objects made of different materials, using both camera and haptic systems.
The objects varied in visual appearance (e.g., texture, color, shape, size), weight, and hardness, and in two experiments the robot picked up and placed the experimental objects several times using its arm. We used the haptic signals from the servos controlling the robot's arm and shoulder to obtain measurements of object weight and hardness, and the camera system to collect data on the objects' visual features. After the robot had repeatedly explored the objects, an associative learning model was trained on these data to demonstrate how the robotic system could produce a multi-modal mapping between the visual and haptic features of the objects. In sum, this thesis shows that visual material properties, and prior knowledge of how materials look and behave, play a significant role in action planning.
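The associative mapping described for Paper IV can be illustrated with a minimal sketch, not the thesis implementation: a memory that stores (visual, haptic) pairs gathered during exploration and, for a new visual observation, retrieves the haptic record of the most similar stored example. The feature names and values below (glossiness, texture roughness, size) are hypothetical placeholders for whatever visual descriptors the camera system would provide.

```python
import math

class AssociativeMemory:
    """Stores (visual, haptic) training pairs and predicts haptic
    properties for a new visual observation via nearest-neighbour lookup."""

    def __init__(self):
        self.pairs = []  # list of (visual_vector, haptic_record)

    def train(self, visual, haptic):
        # One exploration episode: visual features paired with the
        # haptic measurements obtained while lifting the object.
        self.pairs.append((visual, haptic))

    def predict(self, visual):
        # Return the haptic record of the closest stored visual vector
        # (Euclidean distance in visual feature space).
        best = min(self.pairs, key=lambda p: math.dist(p[0], visual))
        return best[1]

# Hypothetical training data: visual = (glossiness, roughness, size)
memory = AssociativeMemory()
memory.train((0.9, 0.1, 0.5), {"weight_g": 120, "hardness": "hard"})
memory.train((0.2, 0.8, 0.5), {"weight_g": 40, "hardness": "soft"})

# A novel glossy, smooth object maps to the "hard" haptic record.
prediction = memory.predict((0.85, 0.15, 0.5))
print(prediction["hardness"])
```

A lookup table like this only memorizes; a learned regression or classifier over the same pairs would generalize more gracefully, but the structure of the mapping — visual features in, expected haptic properties out — is the same.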