Time: 7:00–8:30 p.m., December 14, 2022
Format: hybrid (online and in person)
Venue: Exhibition Area, Med-X Interdisciplinary Research Platform Laboratory, Room 2-2141, Innovation Harbour
Tencent Meeting ID: 781-860-118
Speaker and talk introduction:
Shan Luo is a Senior Lecturer (Associate Professor) at King's College London. He received his PhD in Robotics from King's College London in 2016, was a visiting scholar at the MIT Computer Science and Artificial Intelligence Laboratory, and held postdoctoral positions at the University of Leeds and Harvard University. From 2018 to 2021 he served as Director of smARTLab in the Department of Computer Science at the University of Liverpool. His research centres on robot visuo-tactile perception, with more than 50 papers published in leading robotics and AI journals and conferences, including Autonomous Robots, ICRA, IROS, ICML, and AAMAS. As PI he has secured over £1.3 million in funding from sources including EPSRC, Innovate UK, the Royal Society, and Unilever. His honours include the EPSRC New Investigator Award, the University of Liverpool Teaching Excellence Award, and the BCS new staff award. He serves as a guest associate editor for Robotics and Automation Magazine and Frontiers in Robotics and AI.
Title: Robot Visuo-Tactile Multimodal and Crossmodal Learning
In our daily life, vision and touch are both indispensable senses for interacting with the surrounding environment, and the two often assist each other in tasks such as grasping objects. Similarly, visual and tactile perception can complement each other as primary channels for robots to understand their surroundings. Inspired by this, we developed a new tactile sensor named GelTip, which is shaped like a human finger. It uses a camera at its base to capture the deformation of the sensor's elastomer surface as it contacts an object, and extracts information about the object from the resulting tactile image. We have also designed a series of learning algorithms for extracting information from both visual and tactile images. By integrating vision and touch, both multimodally and crossmodally, these methods effectively improve a robot's object perception, enhancing its ability to recognise and localise objects. Recently, we have also developed a simulation model of high-resolution tactile sensors that can be used in widely adopted ROS simulation environments and enables Sim2Real learning with tactile sensing. Our research can improve the dexterous manipulation of robots, with applications in warehouse robotics and the automated assessment of new products in advanced manufacturing.
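To give a flavour of how information can be extracted from a camera-based tactile image as described above, the sketch below locates a contact region by differencing against a no-contact reference frame and taking the centroid of the changed pixels. This is a minimal illustrative example using NumPy on synthetic data, not the speaker's actual GelTip processing pipeline; the function name and threshold are assumptions made for illustration.

```python
import numpy as np

def contact_centroid(reference, contact, threshold=0.1):
    """Locate the centre of a contact region in a camera-based tactile image.

    Subtracts a no-contact reference frame, thresholds the absolute
    difference, and returns the (row, col) centroid of the changed pixels,
    or None if no contact is detected. Purely illustrative, not GelTip's
    actual algorithm.
    """
    diff = np.abs(contact.astype(float) - reference.astype(float))
    mask = diff > threshold * diff.max()  # all False when diff is all zero
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic example: a flat reference frame and a circular "press" at (20, 40).
reference = np.zeros((64, 64))
contact = reference.copy()
yy, xx = np.mgrid[0:64, 0:64]
contact[(yy - 20) ** 2 + (xx - 40) ** 2 < 25] = 1.0

print(contact_centroid(reference, contact))  # centroid near (20.0, 40.0)
```

In a real sensor of this kind, the raw camera frame would first be rectified for the finger's curved geometry, and richer cues (marker displacement, shading) would typically feed a learned model rather than a simple threshold.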