Past Events

Robotics Colloquium: Speaker Tucker Hermans

Title: Out of Sight, Still in Mind: Contending with Hidden Objects in Multi-Object Manipulation

Abstract: Our daily lives are filled with crowded and cluttered environments. Whether getting a bowl out of the cabinet, food out of a refrigerator, or a book off a shelf, we are surrounded by groups and collections of objects when acting in the built world. For robots to act as caregivers and assistants in human spaces, they must contend with more than one object at a time.

In this talk, I will present our recent efforts in the manipulation of multiple objects as groups. I will start with a brief description of what we’ve learned in creating successful learning-based tools for the manipulation of isolated, unknown objects. I will then discuss how we’ve extended these approaches to plan interactions with object collections, where multiple objects move at once. Key to these approaches is the use of logical representations to specify and communicate robot tasks. I will then discuss further extensions to our core multi-object manipulation framework, including receiving natural language commands and incorporating memory models to handle long-term object occlusion.

Bio: Tucker Hermans is an associate professor in the School of Computing at the University of Utah and a senior research scientist at NVIDIA. Hermans is a founding member of the University of Utah Robotics Center. Professor Hermans is a 2021 Sloan Fellow and a recipient of the NSF CAREER Award and the 3M Non-Tenured Faculty Award. His research with his students has been nominated for and has won multiple conference paper awards, including the Best Systems Paper Award at CoRL 2019.

Previously, Professor Hermans was a postdoc at TU Darmstadt working with Jan Peters. He was at Georgia Tech from 2009 to 2014 in the School of Interactive Computing where he earned his Ph.D. in Robotics and his M.Sc. in Computer Science under the supervision of Aaron Bobick and Jim Rehg. He earned his A.B. in German and Computer Science from Bowdoin College in 2009.


Robotics Colloquium: Guest Ju Sun

Title: Robustness in deep learning: where are we?

Abstract: Deep learning (DL) models are not robust: adversarially constructed and even irrelevant natural perturbations can break them abruptly. Despite intensive research in the past few years, there are, surprisingly, still no tools for reliable robustness evaluation in the first place. I’ll describe our recent efforts toward building such a reliable evaluation package. This new computational capacity raises more concerns than hopes: we find that adversarial training, the predominant framework for achieving robustness, is fundamentally flawed. On the other hand, before we can obtain robust DL models, or trustworthy DL models in general, we must safeguard our models against making severe mistakes so that imperfect DL models can be deployed. A promising approach is to allow DL models to refrain from making predictions on uncertain samples. I’ll describe our recent lightweight, universal selective classification method, which performs excellently and is more interpretable.
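As a generic illustration of the selective-classification idea mentioned in the abstract, the sketch below abstains whenever the top softmax probability falls below a confidence threshold. This is a textbook baseline with made-up names and threshold, not the specific lightweight method from the talk:

```python
import numpy as np

def selective_predict(logits, threshold=0.9):
    """Abstain (return -1) when the top softmax probability is below
    `threshold`; otherwise return the argmax class. A textbook baseline,
    not the method described in the talk."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    preds[conf < threshold] = -1                     # -1 marks "abstain"
    return preds, conf

logits = np.array([[5.0, 0.1, 0.2],    # confident: predict class 0
                   [1.0, 0.9, 1.1]])   # nearly uniform: abstain
preds, conf = selective_predict(logits)
```

Tuning the threshold trades coverage (how often the model answers) against selective accuracy (how often its answers are right).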

Bio: Ju Sun is an assistant professor in the Department of Computer Science & Engineering at the University of Minnesota, Twin Cities. His research interests span computer vision, machine learning, numerical optimization, data science, computational imaging, and healthcare. His recent efforts focus on the foundations and computation of deep learning and on applying deep learning to tackle challenging science, engineering, and medical problems. Before this, he was a postdoctoral scholar at Stanford University (2016-2019), obtained his Ph.D. from the Department of Electrical Engineering at Columbia University in 2016, and earned a B.Eng. in Computer Engineering (with a minor in Mathematics) from the National University of Singapore in 2008. He won the best student paper award at SPARS'15, received an honorable mention for his doctoral thesis in the 2017 New World Mathematics Awards (NWMA), and was selected for the 2021 AAAI New Faculty Highlights program.

Robotics Colloquium: Speaker Naveen Kuppuswamy

Talk title: Robust contact-rich manipulation using tactile diffusion policies

Abstract: Achieving robust manipulation in unstructured real-world environments like homes is a hard open challenge. While a diverse array of manipulation skills may be required, contact-rich and forceful manipulation tasks have proven particularly difficult to execute. In this context, camera-based tactile sensors have shown great promise in enhancing a robot’s ability to perceive touch; however, finding tractable high-fidelity analytical or data-driven models has proven challenging. In this talk, I will detail how a recent breakthrough in visuomotor policy learning using generative AI techniques, the diffusion policy, might be leveraged to overcome these challenges. Our approach directly incorporates haptic feedback into a diffusion policy by simply conditioning on tactile signals as an additional input. This tactile-diffusion policy can be trained on arbitrary tasks using human/expert demonstrations and directly incorporates raw images from both traditional vision sensors and camera-based tactile sensor fingers, such as the TRI Soft-bubble Punyo sensor. We use this framework to realize a wide array of challenging real-world kitchen manipulation tasks on a Franka Panda robot; highlights include constrained manipulation of visually diverse and challenging objects (wine glasses, dishes, bottle caps), handling of deformable objects (dough, paper), and forceful tool manipulation (spatula). Our results indicate that tactile-diffusion policies significantly outperform vision-only diffusion policies in both robustness and generalization. I will conclude with a discussion of the implications of this approach for building a more general-purpose foundation for robot manipulation: the TRI large behavior model effort.
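The core conditioning idea in the abstract, treating tactile signals as just one more observation stream fed to the policy, can be sketched as below. The dimensions and the linear stand-in encoders are invented for illustration; this is not TRI's actual diffusion-policy architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(obs, W):
    """Toy linear-plus-tanh encoder standing in for a learned vision or
    tactile backbone."""
    return np.tanh(obs @ W)

# Hypothetical dimensions: 8-D image feature, 4-D tactile feature,
# each mapped to a 16-D embedding.
W_img = rng.normal(size=(8, 16))
W_tac = rng.normal(size=(4, 16))

def conditioning(img_feat, tactile_feat):
    """The idea from the abstract: tactile signals are simply concatenated
    into the conditioning vector the diffusion denoiser receives, alongside
    the vision embedding, rather than being modeled separately."""
    return np.concatenate([encode(img_feat, W_img),
                           encode(tactile_feat, W_tac)])

cond = conditioning(rng.normal(size=8), rng.normal(size=4))  # 32-D condition
```

In a real diffusion policy this vector would condition every denoising step of the action trajectory; the point of the sketch is only that adding touch requires no change to the policy architecture beyond widening the conditioning input.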

Brief Bio:

Naveen Kuppuswamy is a Senior Research Scientist and Tactile Perception and Control Lead at the Toyota Research Institute, Cambridge, MA, USA. He holds a Bachelor of Engineering from Anna University, Chennai, India, an MS in Electrical Engineering from the Korea Advanced Institute for Science and Technology (KAIST), Daejeon, South Korea, and a Ph.D. in Artificial Intelligence from the University of Zurich, Switzerland. Naveen has several years of academic and industry robotics research experience and has authored several publications in leading peer-reviewed journals and conferences on themes of manipulation, tactile sensing, and robot learning & control. His research has been recognized through multiple publications and grant awards. He is also keenly interested in the STEM education of under-represented communities worldwide. Naveen is deeply passionate about using robots to assist and improve the quality of life of those in need.

Robotics Colloquium: Guest Speaker: Jeannette Bohg

Title: Enabling Cross-Embodiment Learning

Abstract: In this talk, I will investigate the problem of learning manipulation skills across a diverse set of robotic embodiments. Conventionally, manipulation skills are learned separately for every task, environment and robot. However, in domains like Computer Vision and Natural Language Processing we have seen that one of the main contributing factors to generalisable models is large amounts of diverse data. If one robot could learn a new task even from data recorded with a different robot, then we could already scale up the training data for each robot embodiment to a much larger degree. In this talk, I will present a new, large-scale dataset that was put together across multiple industry and academic research labs to make it possible to explore cross-embodiment learning in the context of robotic manipulation, alongside experimental results that provide an example of effective cross-robot policies. Given this dataset, I will also present multiple alternative ways to learn cross-embodiment policies. These example approaches include (1) UniGrasp, a model that can synthesise grasps with new hands; (2) VICES, a systematic study of different action spaces for policy learning; and (3) XIRL, an approach to automatically discover and learn vision-based reward functions from cross-embodiment demonstration videos.

Bio: Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, she was a PhD student in the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal so that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.

Developing Enabling Robotic Systems for High-throughput Plant Phenotyping by Dr. Tang

Join us for a presentation with Dr. Lie Tang from Iowa State University.
Dr. Tang will be presenting: Developing Enabling Robotic Systems for High-throughput Plant Phenotyping
Date: October 6th
Time: 10:00 AM - 11:00 AM
Location: BAE 106 or via Zoom (Meeting ID: 967 0425 4507)
 
Bio: Dr. Tang's research has been concerned with agricultural automation, optimization, machine intelligence, and robotics. He has many years of international research experience in Europe and the US. At KU Leuven (Belgium), he developed an advanced real-time machine vision system for automated behavior monitoring of group-housed pigs. During his PhD study he developed a sensing and control system for variable-rate and selective weed control. He also developed an automated sensing system for corn plant spacing measurement for Deere & Company. While on faculty in both Denmark and the Netherlands, he worked on agricultural robotics and intelligent systems. Dr. Tang is currently continuing his research in developing advanced sensing, optimization, and robotic technologies for agricultural production systems in the 21st century.

Faculty Meet and Greet with the MSR Program Students

Please join us at the MnRI Fall semester Meet and Greet with the MSR students. 

Light refreshments will be served. 

Robotics Colloquium: Guest Speaker: Ce Yang

Title: Drone and Ground-Based Remote Sensing for Precision Agriculture and Phenotyping

Abstract: Minnesota is a leading producer of several staple crops, including corn, soybean, and wheat. AI and remote sensing applied to agricultural fields enhance variable-rate technology, which addresses spatial variation within fields and improves crop production by controlling diseases and managing stresses more efficiently. Deep-learning disease/stress detection models built on remote sensing data can also provide agricultural researchers with high-throughput solutions in field studies, e.g., scoring for diseases, pests, and stresses. This talk focuses on drone and ground-based remote sensing and AI modeling in Ce Yang's group for precision agriculture and high-throughput phenotyping.

Bio: Ce Yang is an associate professor working on remote sensing for precision agriculture and high-throughput phenotyping. She leads the Agricultural Robotics Lab at the University of Minnesota to work on nutrient management, yield prediction, and disease detection of staple food crops. The Ag Robotics Lab’s mission is to apply advanced ideas of robotics, remote sensing, data mining, and information technology to precision agriculture. Their core techniques include multispectral/hyperspectral imaging, spectroscopy, machine learning, geographic information systems (GIS), digital mapping, biochemical sensing, etc. The tools available for carrying out her research are unmanned aerial vehicles, unmanned ground vehicles, video cameras, multispectral cameras, hyperspectral cameras, DGPS, and various electrical, optical, and chemical sensors. Ce Yang obtained her Ph.D. in Agricultural Engineering and MS in Computer Science and Engineering at the University of Florida.

Practice Makes Perfect? Coaching by Observation and Simulation for Robots in Austere Environments

Bio: Dr. Voyles, the Daniel C. Lewis Professor of the Polytechnic, received a B.S. in Electrical Engineering from Purdue University in 1983, an M.S. in Mechanical Engineering from Stanford University in 1989, and a Ph.D. in Robotics from the School of Computer Science at Carnegie Mellon University in 1997. He was an Assistant Professor and then a tenured Associate Professor at the University of Minnesota from 1997 to 2007, a tenured Associate Professor of Electrical and Computer Engineering at the University of Denver from 2006 to 2013, an NSF Program Director in CISE from 2010 to 2013, and Assistant Director of Robotics and Cyber-Physical Systems at the White House Office of Science and Technology Policy from 2014 to 2015; since 2013, he has been a Professor of Engineering Technology at Purdue University. He runs the Collaborative Robotics Lab, is the Director of the Purdue Robotics Accelerator, and was the Site Director of the NSF Center for Robotics and Sensors for Human Well-Being (RoSe-HUB). He is an IEEE Fellow.

Dr. Voyles' research interests are in the areas of miniature, constrained robots, mobile manipulation, Form + Function 4D Printing, learning from observation, robot-to-robot skill transfer for medical robotics, precision animal agriculture, and haptic sensors and actuators.

MnRI Master in Robotics Fall 2023 Admitted Students Welcoming Event

An in-person welcoming event where you will tour the building, meet current students and faculty, and complete administrative items for the program.

How the Program Can Help with Professional Placement

Join us on June 9, 2023, from 2:30 PM to 3:30 PM for an in-person event where the MSR program director talks about job placement over cookies and coffee.