The goal of this workshop is to explore the possibility of robots developing models inspired by the mechanisms of human body representations. In this way, robots can on the one hand become new modeling tools for the empirical sciences, expanding the domain of computational modeling by anchoring it to the physical environment and a physical body. Complete sensorimotor loops can then be instantiated, so that not only algorithms but whole behaviors can be validated. On the other hand, robot controllers endowed with the multimodal whole-body awareness and plasticity typical of humans should give rise to autonomy, robustness, and resilience unprecedented in robotics.
In order to achieve a variety of goals, humans and animals seamlessly command their highly complex bodies in space while concurrently integrating multimodal sensory information. To support these capabilities, some knowledge or representation of the body and the surrounding space seems necessary. In this regard, a number of concepts such as body schema and body image have been proposed. However, a growing number of studies from psychology and the neurosciences suggest that body schema and body image are in fact mere umbrella terms, encompassing a multitude of different body representations that may be partially overlapping, partially dissociable, and partially organized in a hierarchy. Yet empirical knowledge regarding the workings of these representations of our bodies in the brain is fragmented and sometimes contradictory, and a coherent understanding of the key mechanisms is missing. Furthermore, computational models are scarce and at most address isolated subsystems. Humanoid robots possess morphologies, that is, physical characteristics as well as sensory and motor apparatus, that are in some respects akin to human bodies. However, their body representations, so-called plant models, have largely opposite properties to those that we suspect in humans: robot models are typically centralized, fixed, explicit, and amodal.
This workshop will capitalize on the ICDL-Epirob focus at the intersection of scientific and engineering disciplines concerned with development, while providing a more specific theme: body representations and their development. The workshop will be divided into two main sessions. In the first, the invited speakers (already confirmed) will provide their perspectives on the topic and set the stage for a discussion. The speakers were carefully chosen to cover all the disciplines at the core of the topic: developmental psychology (Jeffrey Lockman, Kevin O'Regan), computational neuroscience (Stuart Wilson), and cognitive developmental robotics (Minoru Asada). All of the speakers are specifically concerned with body representations in their current research. The second block will consist of short presentations / flash talks and a poster session. This will be open to the community at large: we will advertise the workshop in appropriate channels and solicit 1-page abstracts, which can report on recent or ongoing work, or even future work and ideas. These will be reviewed by the organizers (we will organize the review process such that each submission is reviewed by at least one expert from the empirical sciences and one from a synthetic discipline). The abstracts will be made accessible on the workshop website. Depending on the number and quality of submissions, we will consider organizing a follow-up special issue of a journal such as IEEE Transactions on Autonomous Mental Development or Adaptive Behavior.
The target audience overlaps with that of ICDL-Epirob itself. We want to primarily attract researchers from developmental psychology, computational and cognitive neuroscience, machine learning, and cognitive and developmental robotics – united by the theme of body representations and their development.
Ph.D., Professor at Osaka University, Dept. of Adaptive Machine Systems.
He is one of the founders of RoboCup (president from 2002 to 2008), and his team won the championship in the humanoid league (adult size) and the best humanoid award (Louis Vuitton Cup) in 2013. His group has been advocating "Cognitive Developmental Robotics", which aims at understanding the process of human cognitive development through synthetic and constructive approaches such as computer simulations and real robot experiments.
Ph.D., former director of the Laboratoire Psychologie de la Perception, CNRS, Université Paris Descartes
After early work on eye movements in reading, he was led to question established notions of the nature of visual perception, and to discover, with collaborators, the phenomenon of "change blindness". In 2011 he published a book with Oxford University Press: "Why red doesn't sound like a bell: Understanding the feel of consciousness". In 2013 he obtained a five-year ERC Advanced Grant to explore his "sensorimotor" approach to consciousness in relation to sensory substitution, pain, color, space perception, developmental psychology and robotics.
Ph.D., Professor of Psychology at Tulane University, New Orleans, LA, USA
His work centers on perception-action and cognitive development in young children, with a focus on the development of tool use. His research has been supported by the National Science Foundation and currently by the National Institutes of Health, and he has served on grant review panels for both of these agencies. He is the former Chair of the Tulane Psychology Department and the immediate past Editor of the journal Child Development. He is presently Co-Program Chair of the 2015 Biennial Meeting of the Society for Research in Child Development.
Ph.D., Lecturer at University of Sheffield, UK
The aim of his research is to construct theoretical, computational, experimental, and robotic models of the development of brain function, and specifically to develop models of how the brain learns to represent physical spaces, such as that occupied by the body.
October 13, 2014
3:30 pm Poster Teaser Presentations
4:00 pm Poster Session + Coffee
5:45 pm Discussion + Conclusions
6:00 pm Closing
Deducing abstract concepts without knowing what you’re looking for - the example of space.
Kevin O’Regan and Alexander Terekhov
Space pervades our thinking. When we study child development, or when we construct robots to live in our world, we implicitly assume that space exists, and we presuppose that cognitive processing is geared to take account of that fact. But could this be cheating? Somehow space must have been discovered over years of evolution, or else it must be rediscovered by each individual’s brain during maturation. How could this come about without knowing a priori that space exists? I shall present work that suggests that space may be a way of economically accounting for unknown and unlabeled sensory input and output. Space can be discovered without first knowing its existence, if brains are provided with an algorithm that attempts to abstract simple laws accounting for the effect of their actions. I shall illustrate the idea using a simple artificial agent and show how the agent will come to behave as though it possesses spatial notions like path integration and shape constancy. I shall try to link the work to the development of spatial abilities in infants and animals.
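The core idea, that properties of physical space can be recovered from unlabeled sensorimotor data, can be illustrated with a toy sketch. This is not the authors' actual model: the landmark-distance sensors, perturbation scale, and landmark count below are hypothetical illustration choices. An agent that examines how its raw sensory vector changes under small random motor perturbations finds that the changes locally span only two dimensions, the dimension of the physical space it moves in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy agent: a position in a 2-D world, sensed only through distances to
# 10 fixed landmarks, i.e. a 10-D vector of unlabeled sensory values.
landmarks = rng.uniform(-1, 1, size=(10, 2))

def sense(pos):
    """Unlabeled sensory vector: distances to all landmarks."""
    return np.linalg.norm(landmarks - pos, axis=1)

base = np.array([0.1, -0.2])
s0 = sense(base)

# Sensory changes caused by small random motor perturbations.
deltas = np.array([sense(base + 1e-3 * rng.standard_normal(2)) - s0
                   for _ in range(200)])

# Singular values of the local sensory changes: only two are
# significant, revealing the 2-D structure of space without any
# labeling of sensors or motors.
sv = np.linalg.svd(deltas, compute_uv=False)
print(sv[:4] / sv[0])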
What, if anything, are cortical maps for?
Self-organising maps can recreate many of the essential features of the known functional organisation of primary cortical areas in the mammalian brain. According to such models, cortical maps represent the spatial-temporal structure of sensory and/or motor input patterns registered during the early development of an animal, and this structure is determined by interactions between the neural control architecture, the body morphology, and the environmental context in which the animal develops. I will try to answer the often overlooked question ‘what, if anything, are cortical maps for?’, in the context of my current work, which aims to develop motor cortex maps by simulating interactions between the brain, body, and environment, in a behaviourally relevant context for the developing animal (thermoregulatory huddling in rodents).
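The self-organising maps mentioned above can be illustrated with a minimal 1-D Kohonen sketch (unit count, learning rate, and neighbourhood schedule are arbitrary illustration values, not parameters from the speaker's models). Units that are neighbours in the map come to prefer neighbouring stimuli, recreating in miniature the topographic organisation of primary cortical areas:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal 1-D Kohonen self-organising map: 20 units learn a topographic
# representation of scalar inputs drawn uniformly from [0, 1].
n_units = 20
weights = rng.uniform(0, 1, n_units)

for t in range(5000):
    x = rng.uniform(0, 1)
    winner = np.argmin(np.abs(weights - x))      # best-matching unit
    # Gaussian neighbourhood around the winner, shrinking over training.
    sigma = 3.0 * np.exp(-t / 2000)
    h = np.exp(-((np.arange(n_units) - winner) ** 2) / (2 * sigma ** 2))
    weights += 0.1 * h * (x - weights)           # pull neighbours toward x

# After training, unit preferences are (close to) monotonic in map
# position: a topographic map of the input space.
print(np.round(weights, 2))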
Body Mapping in Infancy
Jeffrey Lockman and Jackleen Leed
Knowledge about one’s own body can take several forms, from being able to recognize the self in a mirror to understanding how different parts of the body are arranged and typically configured relative to one another. These forms of body knowledge are more visual or conceptual in nature. It is known, for instance, that by 18 months of age, young children begin to recognize themselves in mirrors. Infants, however, may possess earlier forms of body knowledge that are more functionally based, enabling them to localize targets on their bodies with their hands regardless of where the target is positioned on the body and regardless of where the hands are positioned in space. This type of functional map of the body has critical adaptive value: It aids in protection, removal of foreign stimuli from the body, grooming, problem solving and many other basic life skills. Yet little is known about when or how this type of body knowledge develops in humans. Additionally, such developmental information might inform the design of artificial agents where knowledge of body layout is crucial for effective functioning in an environment.
To address these issues, we have developed a new body localization task with human infants. The task assesses what types of functional maps infants may have of their bodies. In this task, a small vibrating buzzer is placed on different parts of the head (forehead, ears, mouth) and arms (shoulder, elbow, hands). We examine whether infants from 7 to 24 months of age (N=60) are able to localize and remove the buzzer with one of their hands. Depending on the buzzer’s location on the body, we also examine whether infants use the ipsilateral or contralateral hand to do so. Our results indicate that a functional map of the body is not immediately present in infants, but develops gradually from the second half of the first year throughout the second year. Infants become able to localize targets at the mouth earlier than other parts of the face. Likewise, they are better able to localize targets on the hand than the upper part of the arm (e.g., shoulder). Infants also show remarkable accuracy in selecting an appropriate arm to localize the target: they more often use the ipsilateral arm for targets that can be reached with either arm (e.g., targets on the face), but almost always use the contralateral arm for targets that can only be reached with that arm (e.g., a target on the right forearm can only be reached with the left hand). More generally, we consider these findings in relation to what they can tell us about the experiential mechanism through which a functional map of the body can be built and how this knowledge can inform the design of artificial agents that possess learning mechanisms to promote the development of a functional map of the body.
Body Representation for Ecological Self
Body representation is one of the most important components in establishing the concept of self; its developmental process shows how a variety of body representations are acquired and shaped into a higher concept of the self. In my talk, I focus on the first stage of self conceptualization, that is, the ecological self. Beginning with a brief introduction of my project entitled "Constructive Developmental Science Based on Understanding the Process from Neuro-Dynamics to Social Interaction", I present classical work on self/nonself/others discrimination in the context of RoboCup, and then two other studies on body representation: body representations in tool use, and multimodal body representation. Finally, fetal simulation is introduced as a more biological form of body representation. Future issues are discussed at the end of my talk.
The poster stands will have a fixed dimension of 80 cm * 150 cm [width * height], portrait orientation. Note that a standard A0 poster will not fit (it is slightly wider); A1 will fit without problems.
Spatial impairments in blind children and adults [PDF]
Istituto Italiano di Tecnologia, Genova, Italy
Gain-Fields Neural Networks for Body Image and Embodied Simulation in Robots [PDF]
Université de Cergy-Pontoise, France
Latent Goal Analysis: Learning goals and body schema from generic rewards [PDF]
Osaka University, Japan
Convergence Divergence Zone Framework: extracting sensory-motor contingencies [PDF]
Universitat Pompeu Fabra, Barcelona, Spain
Learning Sensorimotor Maps with Dynamic Neural Fields [PDF]
Ruhr-Universität Bochum, Germany
Sensory‐motor map fragments are the substrate of body models? [PDF]
Aberystwyth University, Wales
Inria/ENSTA-Paristech, Bordeaux, France
Curiosity driven motor babbling for Body Map acquisition [PDF]
Istituto Italiano di Tecnologia, Genova, Italy
Body-Representations for Sensorimotor Coordination and Tool-Use [PDF]
Humboldt-Universität zu Berlin, Germany
Eye-hand online adaptation during reaching tasks in a humanoid robot [PDF]
Instituto Superior Técnico, Lisbon, Portugal
Karlsruhe Institute of Technology, Germany
Questions to the speakers
· What is a sensible “starting point” for the development of body representations, both in humans and in robots? Can we start “from scratch”? How much is innate in humans? What shall we “design into” our robots? [Storck, Rudolph and Sandamirskaya]
· What are the mechanisms for online adaptation of body schema (on different time scales)? [Schillaci and Hafner]
· Do we have conscious access to body schema? [Schillaci and Hafner]
· Can body schemas be transferred, e.g. for imitation? [Schillaci and Hafner]
· How can we scale up the complexity of behaviours, from low-level sensorimotor schemes to higher cognitive processes? [Schillaci and Hafner]
· How far can the tools and frameworks of embodied cognitive robotics go in shedding light on human cognition? [Schillaci and Hafner]
· Does the extraction of a body schema require embodiment, so that it could emerge only in the self, or could we also extract and maintain the body schemas of others? [Lallée]
· Some studies suggest that a "common body schema is used to represent both one’s own body, and the bodies of other individuals." What do you think about this? They also state that "the body schema could be a basis for social cognition." [Vicente, Ferreira, Jamone and Bernardino]
Proposed Body Schema definitions
The neural representation of the body [Head & Holmes, 1911]
Implicit knowledge structure that encodes the body’s form, the constraints on how the body’s parts can be configured, and the consequences of this configuration on touch, vision, and movement [Graziano & Botvinick, 2002]
In my opinion, a body schema is a neuronal structure, which is an integral part of the sensorimotor loop and which couples the perceptual sensations to the motor intentions, and back. In particular, the body schema includes affordances (which actions are appropriate or possible towards a perceived object), that is, mappings from sensations to actions, as well as sensory-motor anticipations (which consequences the actions may have), that is, mappings from actions to sensations. [Storck, Rudolph and Sandamirskaya]
Body schema is a sensorimotor mapping learned by an agent through its experience of the world. This schema is an adaptive model that copes with changes in the agent's body. It is necessary for building up the cognitive capabilities of an agent. [Schillaci and Hafner]
My definition of a body schema would be that it is simply the statistical properties of the sensory-motor interactions within the acting body of an animal. (This makes it a very personal property.) [Lallée]
Body schema is an internal model that humans and animals have of themselves in the world. Their limbs' poses are estimated according to this model. Moreover, the body schema is updated over the life span and, more importantly, during movements (using our sensory capabilities, for instance vision and touch). [Vicente, Ferreira, Jamone and Bernardino]
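Several of the definitions above cast the body schema as an adaptive sensorimotor mapping that anticipates the sensory consequences of actions and copes with changes in the body. A minimal toy sketch of that idea, assuming a planar 2-link arm and online least-mean-squares learning (link lengths, learning rate, and step counts are illustration values, not any contributor's actual model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "body schema" as an adaptive forward model: predict the hand
# position of a planar 2-link arm from joint angles, and re-adapt when
# the body changes (a tool lengthens the forearm).
L1, L2 = 0.3, 0.25                      # link lengths (illustrative)

def hand(q, forearm):
    """True hand position for joint angles q and forearm length."""
    return np.array([L1 * np.cos(q[0]) + forearm * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + forearm * np.sin(q[0] + q[1])])

def features(q):
    """The kinematics is linear in these features, so LMS can learn it."""
    return np.array([np.cos(q[0]), np.sin(q[0]),
                     np.cos(q[0] + q[1]), np.sin(q[0] + q[1])])

W = np.zeros((2, 4))                    # the adaptable schema parameters

def learn(forearm, steps, lr=0.5):
    """Online LMS: compare predicted and sensed hand position, update."""
    global W
    for _ in range(steps):
        q = rng.uniform(-np.pi, np.pi, 2)     # random motor babbling
        err = hand(q, forearm) - W @ features(q)
        W += lr * np.outer(err, features(q))

learn(L2, 3000)                         # learn the bare-arm schema
q_test = np.array([0.4, -0.7])
e_arm = float(np.abs(hand(q_test, L2) - W @ features(q_test)).max())

learn(L2 + 0.2, 3000)                   # with tool: the schema re-adapts
e_tool = float(np.abs(hand(q_test, L2 + 0.2) - W @ features(q_test)).max())
print(e_arm, e_tool)                    # both prediction errors are small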
Registration is available on the ICDL-EPIROB 2014 website. It is possible to either register for the whole conference or to apply for individual workshops.
Please note that participation in the workshop is not limited to contributors: any registered attendee is more than welcome!
Address: Piazza Giacomo Matteotti, 9, 16123 Genova, Italy
Google Maps: https://goo.gl/maps/gQPn0
Ph.D. student, iCub Facility, Istituto Italiano di Tecnologia, Genova, IT
He is the designer and developer of this website.