Past Events
Robotics Colloquium: Speaker Chad Jenkins
Friday, Dec. 1, 2023, 2:30 p.m. through Friday, Dec. 1, 2023, 4 p.m.
In-Person: Drone Lab: 164 Shepherd
Topic: Defining the Discipline of Robotics for Excellence and Equity through Bipedal Mobile Manipulation
Abstract: Start with a simple question: What is the best major for a student to become a roboticist? In general, an undergraduate major defines the intellectual organization for its academic discipline to produce “people and ideas.” In my role leading the Robotics Undergraduate Program at Michigan, we tackled this question through the curricular challenge of how to both: 1) educate people to put ideas of the robotics discipline into practice and 2) endow them with the intellectual lens for creating new ideas that extend the frontiers of the robotics discipline -- including research into mobility and manipulation in the real world.
As part of our larger Robotics Pathways model, the Robotics Major at the University of Michigan was successfully launched in the 2022-23 academic year as an innovative step forward to better serve students, our communities, and our society. Building on our guiding principle of "Robotics with Respect," the Michigan Robotics Major was designed to define robotics as a true academic discipline with both equity and excellence as our highest priorities. The Michigan Robotics Major has embraced an adaptable curriculum that is accessible through a diversity of student pathways and enables successful and sustained career-long participation in robotics, AI, and automation professions.
In this talk, I will present our design, launch, and innovations for the Michigan Robotics Major for undergraduates and our research progress toward humanoid mobile manipulation systems. A number of curricular innovations will be presented, such as: bringing mathematics to life through computational linear algebra (before calculus!), elevating core robotics concepts into compelling sophomore- and junior-level courses, creating our affordable and accessible MBot platform capable of fully autonomous navigation, and Distributed Teaching Collaboratives with Minority Serving Institutions. I will also present our work on perception and planning with the Agility Robotics Digit robot toward realizing the long-standing vision of taskable autonomous humanoid robots capable of mobile manipulation tasks in common human environments.
Bio: Chad Jenkins is a Professor of Robotics and a Professor of Electrical Engineering and Computer Science at the University of Michigan. Prof. Jenkins is the inaugural Program Chair of the Robotics Major Degree Program launched in 2022 for undergraduates at the University of Michigan. Prof. Jenkins is currently serving as Editor-in-Chief for the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science (AAAS) and the Association for the Advancement of Artificial Intelligence (AAAI).
Professor of Robotics; Professor of EECS (courtesy) at the University of Michigan
Host: Karthik Desingh
MnRI Showcase: Guest Speaker Henrik Christensen
Friday, Nov. 17, 2023, 8 a.m. through Friday, Nov. 17, 2023, 5:30 p.m.
Morning Session: Drone Lab - 164 Shepherd Labs:
08:00 Breakfast, Check-in, Poster Setup
08:30 Welcome - Nikolaos Papanikolopoulos
09:00 Student Poster Session
11:30 Demos & Lab Tours
12:30 Lunch Break
Afternoon Session: 4th floor of Walter Library:
13:30 MnRI Faculty Talk Session 1
14:30 Plenary Talk
15:30 MnRI Faculty Talk Session 2
16:30 MnRI Social
About the speaker:
Henrik I. Christensen is the Qualcomm Chancellor's Chair of Robot Systems and the director of the Contextual Robotics Institute at UC San Diego, as well as a Distinguished Professor in the Department of Computer Science and Engineering. Dr. Christensen was initially trained in Mechanical Engineering and subsequently worked with MAN/BW Diesel. He earned M.Sc. and Ph.D. degrees in Electrical Engineering from Aalborg University in 1987 and 1990, respectively. Since graduating, Dr. Christensen has participated in many international research projects across four continents. He held positions at Aalborg University, Oak Ridge National Laboratory, the Royal Institute of Technology, and Georgia Tech before joining UC San Diego. Dr. Christensen does research on robotics, with a particular emphasis on a systems perspective: solutions must have a strong theoretical basis and a corresponding well-defined implementation, and they must be evaluated in realistic settings. There is a strong emphasis on "real systems for real applications!"
This research has involved collaborations with ABB, Electrolux, Daimler-Chrysler, KUKA, iRobot, Apple, Partek Forest, Volvo, SAIC, Boeing, GM, PSA Peugeot, BMW, Yujin, and Qualcomm, among others.
Dr. Christensen has published more than 400 contributions across robotics, vision, and artificial intelligence. He served as the Founding Chairman of EURON (1999-2006) and research coordinator for ECVision (2000-2004). He has led and participated in many EU projects, such as VAP, CoSy, CogVis, SMART, CAMERA, ECVision, EURON, Cogniron, and Neurobotics. He served as the PI for the CCC initiative on US Robotics, and he is a Co-PI on ARL DCIST, TILOS, the Robotics-VO, and several projects with industry. He was awarded the Joseph Engelberger Award in 2011 and was also named a Boeing Supplier of the Year in 2011. He is a fellow of AAAS (2013) and IEEE (2015), and he was awarded an honorary doctorate in engineering (Dr. Techn. h.c.) from Aalborg University in 2014. Dr. Christensen has served, or currently serves, on the editorial boards of many of the most prestigious journals in the field, including the International Journal of Robotics Research (IJRR), Autonomous Robots, Robotics and Autonomous Systems (RAS), IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), and Image & Vision Computing. In addition, he serves on the editorial board of the MIT Press series on Intelligent Robotics and Autonomous Agents. He was the founding co-editor-in-chief of Foundations and Trends in Robotics.
This research showcase aims to make up for the missed in-person networking due to the pandemic. Students, postdocs, industry researchers, and faculty are all encouraged to participate. The event is free.
The symposium will be an all-day affair featuring a plenary talk, faculty talks, student posters, and social events.
Please consider presenting your past and current research in the showcase.
If you are a faculty member interested in showcasing your research with a short (~10-minute) pitch talk, please fill out this form.
If you are a student or a postdoc interested in showcasing your research with a poster, please fill out this form.
The deadline for indicating your interest is October 31, 2023. We plan to archive the recordings of the presentations (talks and posters) after the event. Unpublished work is also welcome, as we can selectively avoid archiving it.
Visit MnRI Research Showcase 2023 for more details.
Robotics Colloquium: Guest Speaker Ognjen Ilic
Friday, Nov. 10, 2023, 2:30 p.m. through Friday, Nov. 10, 2023, 3:30 p.m.
In-person: Drone Lab: 164 Shepherd Lab
Title: Metamaterials in Motion: Manipulating the Energy and the Momentum of Waves at the Subwavelength Scale
Abstract: The transport of waves, such as light and sound, can be radically transformed when waves interact with metamaterial structures with engineered subwavelength features. My group aims to understand and develop electromagnetic and acoustic metamaterials that can control wave-matter interactions in ways that are impossible with conventional materials. In the first part of my talk, I will present our work on acousto-mechanical metamaterials that can steer ultrasonic waves for contactless and programmable actuation. This versatile concept enables new actuation functions, including autonomous path following and contactless tractor beaming, that are made possible by anomalous scattering and are beyond the limits of traditional wave-matter interactions. In the second part, I will discuss how the same ideas carry over naturally to optical systems. Light is a powerful tool for manipulating matter without contact, with concepts such as optical traps and tweezers widely used in fields ranging from biology and bioengineering to microfluidics and quantum sensing, but typically limited to small objects and short distances. In contrast, our approach of designing nanoscale elements to control the momentum of light could open new frontiers in optomechanics, such as macroscale optical levitation and long-range optical actuation. These concepts of nanoscale light-matter interactions could lead to ultralightweight and multi-functional structures and coatings with unique new terrestrial and space applications.
Bio: Ognjen Ilic is a Benjamin Mayhugh Assistant Professor of Mechanical Engineering at the University of Minnesota, Twin Cities. He completed his Ph.D. in physics at MIT and was a postdoctoral scholar in applied physics and materials science at Caltech. His research themes encompass light-matter and wave-matter interactions in nanoscale and metamaterial structures. He received the Air Force Office of Scientific Research (AFOSR) Young Investigator Award, the 3M Non-Tenured Faculty Award, the Bulletin Prize of the Materials Research Society, and a University of Minnesota McKnight Land-Grant Professorship. He holds graduate faculty appointments in the Department of Electrical and Computer Engineering and the School of Physics and Astronomy at the University of Minnesota.
Robotics Colloquium: Speaker Ryan Caverly
Friday, Nov. 3, 2023, 2:30 p.m. through Friday, Nov. 3, 2023, 4 p.m.
In-person: Drone Lab: 164 Shepherd Lab
Title: Modeling, Pose Estimation, and Control of Cable-Driven Robots
Abstract:
Cable-driven robots are a relatively new class of robotic manipulators with intriguing features, including a large workspace and a high payload-to-weight ratio, which have the potential to enable exciting new robotic applications. While these features are promising, high-acceleration maneuvers that take advantage of these properties are challenging and can even destabilize the system if the end-effector pose is not known accurately and the feedback controller is not robust to large amounts of model uncertainty. The first part of this talk will focus on dynamic modeling, pose estimation, and robust control methods developed by the Aerospace, Robotics, Dynamics, and Control (ARDC) Lab to help enable cable-driven robotic applications. The second part of the talk will introduce the Cable-Actuated Bio-inspired Lightweight Elastic Solar Sail (CABLESSail) concept being developed by the ARDC Lab for space exploration.
Bio:
Ryan Caverly is an Assistant Professor in the Department of Aerospace Engineering and Mechanics at the University of Minnesota. He received his B.Eng. degree in Honours Mechanical Engineering from McGill University, and his M.Sc. and Ph.D. degrees in Aerospace Engineering from the University of Michigan, Ann Arbor. From 2017 to 2018 he worked as an intern and then a consultant for Mitsubishi Electric Research Laboratories in Cambridge, MA. Dr. Caverly is the recipient of a Department of Defense (DoD) Defense Established Program to Stimulate Competitive Research (DEPSCoR) Award and a NASA Early Career Faculty award. His research interests include dynamic modeling and control systems, with a focus on robotic, mechanical, and aerospace applications, as well as robust and optimal control techniques.
Assistant Professor, Department of Aerospace Engineering and Mechanics
Robotics Colloquium: Speaker Tucker Hermans
Friday, Oct. 27, 2023, 2:30 p.m. through Friday, Oct. 27, 2023, 4 p.m.
In-person: Drone Lab: 164 Shepherd Lab
Title: Out of Sight, Still in Mind: Contending with Hidden Objects in Multi-Object Manipulation
Abstract: Our daily lives are filled with crowded and cluttered environments. Whether getting a bowl out of the cabinet, food out of a refrigerator, or a book off a shelf, we are surrounded by groups and collections of objects when acting in the built world. For robots to act as caregivers and assistants in human spaces, they must contend with more than one object at a time.
In this talk, I will present our recent efforts in the manipulation of multiple objects as groups. I will start with a brief description of what we’ve learned in creating successful learning-based tools for the manipulation of isolated unknown objects. I will then discuss how we’ve extended these approaches to plan interactions with object collections, where multiple objects move at once. Key to these approaches is the use of logical representations to specify and communicate robot tasks. I will then discuss further extensions to our core multi-object manipulation framework, including receiving natural language commands and incorporating memory models to handle long-term object occlusion.
Bio: Tucker Hermans is an associate professor in the School of Computing at the University of Utah and a senior research scientist at NVIDIA. Hermans is a founding member of the University of Utah Robotics Center. Professor Hermans is a 2021 Sloan Fellow and recipient of the NSF CAREER award and the 3M Non-Tenured Faculty Award. His research with his students has been nominated for and won multiple conference paper awards including winning the Best Systems Paper at CoRL 2019.
Previously, Professor Hermans was a postdoc at TU Darmstadt working with Jan Peters. He was at Georgia Tech from 2009 to 2014 in the School of Interactive Computing where he earned his Ph.D. in Robotics and his M.Sc. in Computer Science under the supervision of Aaron Bobick and Jim Rehg. He earned his A.B. in German and Computer Science from Bowdoin College in 2009.
Robotics Colloquium: Guest Ju Sun
Friday, Oct. 20, 2023, 2:30 p.m. through Friday, Oct. 20, 2023, 3:30 p.m.
In-person: Drone Lab: 164 Shepherd Lab
Title: Robustness in Deep Learning: Where Are We?
Abstract: Deep learning (DL) models are not robust: adversarially constructed and irrelevant natural perturbations can break them abruptly. Despite intensive research in the past few years, surprisingly, we still lack tools for reliable robustness evaluation in the first place. I’ll describe our recent efforts toward building such a reliable evaluation package. This new computational capacity leads to more concerns than hopes: we find that adversarial training, a predominant framework for achieving robustness, is fundamentally flawed. On the other hand, before we can obtain robust DL models, or trustworthy DL models in general, we must safeguard our models against making severe mistakes so that imperfect DL models remain deployable. A promising approach is to allow DL models to refrain from making predictions on uncertain samples. I’ll describe our recent lightweight, universal selective classification method that performs excellently and is more interpretable.
Bio: Ju Sun is an assistant professor in the Department of Computer Science & Engineering at the University of Minnesota, Twin Cities. His research interests span computer vision, machine learning, numerical optimization, data science, computational imaging, and healthcare. His recent efforts are focused on the foundations and computation of deep learning and on applying deep learning to tackle challenging science, engineering, and medical problems. Before this, he was a postdoctoral scholar at Stanford University (2016-2019); he obtained his Ph.D. in Electrical Engineering from Columbia University in 2016 and his B.Eng. in Computer Engineering (with a minor in Mathematics) from the National University of Singapore in 2008. He won the best student paper award at SPARS'15, received an honorable mention for his doctoral thesis in the 2017 New World Mathematics Awards (NWMA), and was selected for the AAAI New Faculty Highlights program in 2021.
Robotics Colloquium: Speaker Naveen Kuppuswamy
Friday, Oct. 13, 2023, 2:30 p.m. through Friday, Oct. 13, 2023, 4 p.m.
In-person: Drone Lab: 164 Shepherd Lab
Talk title: Robust contact-rich manipulation using tactile diffusion policies
Abstract: Achieving robust manipulation in unstructured real-world environments like homes is a hard open challenge. While a diverse array of manipulation skills may be required, contact-rich/forceful manipulation tasks have proven particularly difficult to execute. In this context, camera-based tactile sensors have shown great promise in enhancing a robot’s ability to perceive touch; however, finding tractable, high-fidelity analytical or data-driven models has proven challenging. In this talk, I will detail how a recent breakthrough in visuomotor policy learning using generative AI techniques, the diffusion policy, might be leveraged to overcome these challenges. Our approach directly incorporates haptic feedback into a diffusion policy by simply conditioning on tactile signals as an additional input. This tactile-diffusion policy can be trained on arbitrary tasks from human/expert demonstrations and directly incorporates raw images from both traditional vision sensors and camera-based tactile sensor fingers, such as the TRI Soft-bubble Punyo sensor. We use this framework to realize a wide array of challenging real-world kitchen manipulation tasks with a Franka Panda robot; highlights include constrained manipulation of visually diverse and challenging objects (wine glasses, dishes, bottle caps), handling deformable objects (dough, paper), and forceful tool manipulation (spatula). Our results indicate that tactile-diffusion policies outperform vision-only diffusion policies in both robustness and generalization by significant margins. I will conclude with a discussion of the implications of this approach for building a more general-purpose foundation for robot manipulation: the TRI large behavior model effort.
Brief Bio:
Naveen Kuppuswamy is a Senior Research Scientist and Tactile Perception and Control Lead at the Toyota Research Institute, Cambridge, MA, USA. He holds a Bachelor of Engineering from Anna University, Chennai, India, an MS in Electrical Engineering from the Korea Advanced Institute for Science and Technology (KAIST), Daejeon, South Korea, and a Ph.D. in Artificial Intelligence from the University of Zurich, Switzerland. Naveen has several years of academic and industry robotics research experience and has authored several publications in leading peer-reviewed journals and conferences on themes of manipulation, tactile sensing, and robot learning & control. His research has been recognized through multiple publications and grant awards. He is also keenly interested in the STEM education of under-represented communities worldwide. Naveen is deeply passionate about using robots to assist and improve the quality of life of those in need.
Senior Research Scientist; Tactile Perception and Control Lead, Toyota Research Institute
Robotics Colloquium: Guest Speaker Jeannette Bohg
Friday, Oct. 6, 2023, 2:30 p.m. through Friday, Oct. 6, 2023, 4 p.m.
In-person: Drone Lab: 164 Shepherd Lab
Title: Enabling Cross-Embodiment Learning
Abstract: In this talk, I will investigate the problem of learning manipulation skills across a diverse set of robotic embodiments. Conventionally, manipulation skills are learned separately for every task, environment, and robot. However, in domains like Computer Vision and Natural Language Processing, we have seen that one of the main contributing factors to generalisable models is large amounts of diverse data. If one robot could learn a new task even from data recorded with a different robot, then we could scale up the training data available to each robot embodiment to a much larger degree. In this talk, I will present a new large-scale dataset that was put together across multiple industry and academic research labs to make it possible to explore cross-embodiment learning in the context of robotic manipulation, alongside experimental results that provide an example of effective cross-robot policies. Given this dataset, I will also present multiple alternative ways to learn cross-embodiment policies. These example approaches include (1) UniGrasp, a model for synthesising grasps with new hands; (2) VICES, a systematic study of different action spaces for policy learning; and (3) XIRL, an approach to automatically discover and learn vision-based reward functions from cross-embodiment demonstration videos.
Bio: Assistant Professor of Computer Science at Stanford University
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods towards multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University in Dresden where she received her Master in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time and multi-modal such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several Early Career and Best Paper awards, most notably the 2019 IEEE Robotics and Automation Society Early Career Award and the 2020 Robotics: Science and Systems Early Career Award.
Developing Enabling Robotic Systems for High-throughput Plant Phenotyping By Dr. Tang
Friday, Oct. 6, 2023, 10 a.m. through Friday, Oct. 6, 2023, 11 a.m.
Location: BAE 106 or via Zoom (Meeting ID: 967 0425 4507)
Faculty Meet and Greet with the MSR Program Students
Friday, Sept. 22, 2023, 3:30 p.m. through Friday, Sept. 22, 2023, 4:30 p.m.
In-person: Drone Lab: 164 Shepherd Lab
Please join us at the MnRI Fall semester Meet and Greet with the MSR students.
Light refreshments will be served.