VVV18
International Winter School on Humanoid Robot Programming

Data-Driven Robotics and Reinforcement Learning at DeepMind

Francesco Nori, Google DeepMind


Abstract

An overview of the robotics research conducted at DeepMind.

Biography

Francesco was born in Padova in 1976. He received his D.Eng. degree (highest honors) from the University of Padova (Italy) in 2002. During 2002 he was a visiting student at the UCLA Vision Lab under the supervision of Prof. Stefano Soatto, University of California, Los Angeles; during this collaboration he began research in computational vision and human motion tracking. In 2003 Francesco Nori started his Ph.D. under the supervision of Prof. Ruggero Frezza at the University of Padova, Italy, where the main topic of his research was modular control, with special attention to biologically inspired control structures. Francesco Nori received his Ph.D. in Control and Dynamical Systems from the University of Padova (Italy) in 2005. In 2006 he moved to the University of Genova and started his postdoc at the Laboratory for Integrated Advanced Robotics (LiraLab), beginning a fruitful collaboration with Prof. Giorgio Metta and Prof. Giulio Sandini. In 2007 Francesco Nori moved to the Italian Institute of Technology, where in 2015 he was appointed Tenure Track Researcher of the Dynamic Interaction Control research line. His research interests are currently focused on whole-body motion control exploiting multiple (possibly compliant) contacts. With Giorgio Metta and Lorenzo Natale he is one of the key researchers involved in the iCub development, with specific focus on control and whole-body force regulation exploiting tactile information. Francesco is currently coordinating the H2020-EU project An.Dy (id. 731540); in the past he was involved in two FP7-EU projects: CoDyCo as coordinator and Koroibot as principal investigator. In 2017 Francesco joined DeepMind, where he collaborates with Martin Riedmiller, Jonas Buchli and Dan Belov. His current interests seamlessly span robotics and reinforcement learning, with applications in both manipulation and locomotion.