Past Events

Robotics Colloquium: Guest Speaker Gregory Dudek

Topic:  My robot photographer knows what I want to see!

Abstract:
In this talk, I will discuss a series of projects focused on developing autonomous robots to collect photographic data in various settings, from scientific surveys to social gatherings. Our initial efforts involved hand-crafted algorithms supported by basic computational complexity analysis. We then progressed to using topic models to guide an autonomous swimming robot with six degrees of freedom (6DOF). We have also explored predicting human trajectories to better capture subjects photographically. In all these cases, we deployed robots in the field to support our methods, typically using navigation policies based on a combination of model-based and model-free reinforcement learning. In recent years, our research has advanced to employing large language models to determine appropriate scenarios for indoor photography. I will provide a brief overview of these methods and some of the trade-offs associated with deploying field robots that must anticipate human needs.

 

Bio:
 
Gregory Dudek is a Professor with the School of Computer Science, a member of the McGill Research Centre for Intelligent Machines (CIM), and an Associate member of the Dept. of Electrical Engineering at McGill University. In September 2008 he became the Director of the McGill School of Computer Science. Since 2012 he has been the Scientific Director of the NSERC Canadian Field Robotics Network (NCFRN; http://ncfrn.mcgill.ca). He is the former Director of McGill's Research Center for Intelligent Machines, a nearly 40-year-old inter-faculty research facility. In 2002 he was named a William Dawson Scholar. In 2008 he was made a James McGill Chair, and subsequently Distinguished James McGill Professor. In 2017 he was awarded an IEEE Gold Medal. He directs the McGill Mobile Robotics Laboratory. He has been on the organizing and/or program committees of Robotics: Science and Systems (RSS), the IEEE International Conference on Robotics and Automation (ICRA), the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), the International Joint Conference on Artificial Intelligence (IJCAI), Computer and Robot Vision, the IEEE International Conference on Mechatronics, and the International Conference on Hands-on Intelligent Mechatronics and Automation, among other bodies. He is president of CIPPRS, the Canadian Information Processing and Pattern Recognition Society, an ICPR national affiliate. He has authored and co-authored over 350 research publications on subjects including visual object description, recognition, RF localization, robotic navigation and mapping, distributed system design, 5G telecommunications, and biological perception. This includes a book entitled "Computational Principles of Mobile Robotics," co-authored with Michael Jenkin and published by Cambridge University Press. He has chaired and been otherwise involved in numerous national and international conferences and professional activities concerned with robotics, machine sensing, and computer vision.
His research interests include perception for mobile robotics, navigation and position estimation, environment and shape modelling, computational vision and collaborative filtering.


Host: Junaed Sattar


Robotics Colloquium: Guest Speaker Trevor Stephens

Title: Advancing the Autonomy of Flight through Resilient Navigation Solutions

Abstract: Autonomous flight for a wide variety of platforms is a technically challenging task. Successful flight autonomy requires executing across multiple technical areas between takeoff and landing. One research area of particular interest for our team is resilient navigation. Specifically, we focus on alternative navigation: navigating without the aid of GNSS signals. GNSS may be unavailable for multiple reasons, such as jamming, spoofing, or operation in urban canyons. Without GNSS updates, an inertial navigation filter is susceptible to drift. Our research focuses on alternative aiding sources that help bound inertial drift and provide absolute position updates to our navigation filters. These include vision-aided navigation, magnetic anomaly-aided navigation, celestial-aided navigation, and more. This talk will provide an overview of these navigation modalities and will include real flight test results from multiple Honeywell-operated aircraft. The talk also provides an example of how research is conducted in an industry setting.

Bio: Trevor Stephens is a Sr. Engineering Manager at Honeywell Aerospace Technologies. He leads the Navigation, Controls, & Surveillance group in Advanced & Applied Technology. Since joining Honeywell, he has worked on multiple navigation technologies across a wide variety of aerospace platforms. His expertise is in alternative navigation, which aims to provide a navigation solution in the absence of GNSS signals. Trevor has a background in robotics research; he holds a BS in mechanical engineering from Brigham Young University and MS and PhD degrees in mechanical engineering from the University of Minnesota. He conducted his PhD research in the Medical Devices and Robotics Lab under Dr. Timothy Kowalewski. He enjoys a wide variety of hobbies outside of work, including basketball, drumming, and playing with his four kids.


Robotics Colloquium: Guest Speaker Kostas Bekris

Title: Towards Closing the Perception-Planning and Sim2Real Gaps in Robotics

Abstract: Robotics solutions are now deployed more widely across applications such as logistics, service, and field robotics. Critical gaps remain, however, that limit the adaptability, robustness, and safety of robots at: (a) the interface of domains, such as perception, planning/control, and learning, which must be viewed in a holistic manner, and (b) the sim2real gap, i.e., the deviation between internal models and the real world.
This talk will first describe efforts in tighter integration of perception and planning for vision-driven robot manipulation. We have developed high-fidelity open-vocabulary 3D segmentation as well as high-frequency tracking of rigid bodies' 6D poses, without needing CAD models or human annotations, by leveraging progress in deep learning, large vision models (LVMs), and pose graph optimization. These solutions, together with appropriate shared representations, tighter closed-loop operation and, critically, compliant end-effectors, are unblocking the deployment of full-stack robot manipulation systems. This talk will provide examples of using such tools for robotic packing, assembly under tight tolerances, and constrained placement given a single demonstration that generalizes across an object category.
The second part of the talk is motivated by tensegrity robots, which combine rigid and soft elements to achieve safety and adaptability. These same properties, however, complicate modeling and control given the robots' high dimensionality and complex dynamics. This sim2real gap of analytical models led us to reinforcement learning (RL) for controlling tensegrity robots. RL is challenging in this domain, however, due to its data requirements. Training RL in simulation is promising but is again blocked by the sim2real gap. For this reason, we are developing a differentiable physics engine for tensegrity robots based on first principles. It is trained with a few example ground-truth trajectories from the real robot, and it then provides accurate-enough simulations to train a controller that is directly transferable back to the real system. We report our success in such a real2sim2real transfer for a 3-bar tensegrity robot.

 

Bio: Kostas Bekris is a Professor of Computer Science at Rutgers University in New Jersey and an Amazon Scholar with Amazon Robotics since 2019. He works in algorithmic robotics: his group develops algorithms for robot planning, learning, and perception, especially in the context of robot manipulation, robots with significant dynamics, and taking advantage of novel soft, adaptive mechanisms. Applications include logistics and manufacturing as well as field robotics. He serves as an Editor of the International Journal of Robotics Research (IJRR) and has served as Program Chair for the Robotics: Science and Systems (RSS) and Workshop on the Algorithmic Foundations of Robotics (WAFR) conferences. His research has been supported by NSF, DHS, DOD, and NASA, including a NASA Early Career Faculty award.


Robotics Colloquium: Guest Speaker Brian Glass

Automation and AI for Planetary Drilling 

If humans are to look for extant or ancient life, or access subsurface resources, on other Solar System bodies, drilling will be the likely route to any biosignatures or resources. Beyond the Moon, lightspeed delays to Earth preclude teleoperation, so robotic drilling and sampling on Mars and beyond require automation (or else nearby humans). For the past two decades, our lab at NASA Ames has studied drilling autonomy. In this talk, we will start with how humans perform these tasks in terrestrial exploration and look for ways to replicate or mimic those perceptive and decision-making processes. We will follow a narrative from 2004 to the present on how our fielded approach has evolved in lab tests and at remote analog drilling sites, ending with our current software: with stops along the way in Antarctica, the Icebreaker Mars software requirements, and the ARADS drilling life-detection rover tested in the Atacama Desert (adding sample handling robotics and planetary protection/contamination issues), up to tests at desert and Arctic analog sites in 2023.

 
Bio:

Dr. Glass is a research group lead in the Exploration Systems Directorate at NASA Ames Research Center. He received an ScB from MIT (Aero and Astro) in 1982, a PhD (robotics) from Georgia Tech in 1987, and later an MS from Stanford in 1992 (Geophysics). In addition to serving as a PI of several recent research projects in robotic drilling, sample acquisition, and handling, his background in space systems programs includes Space Station networks (1990s), AI/robotics space technology development (2000s), and sampling payloads integration in three “Icebreaker” Discovery mission proposals (2010, 2015, 2019). Dr. Glass is also a pilot who led the inter-agency FAA-NASA Surface Movement Advisor air traffic ground system's rapid 2-year development and deployment (1994-96). He has research interests in robotic sample acquisition and processing, impact crater geophysics, and structural modeling, and he chaired NASA’s Space Sample Acquisition Workshop in 2013. For the past two decades, Dr. Glass has been involved in the development and testing of NASA planetary surface engineering prototypes at remote field analog sites, including the Canadian Arctic, Antarctic Dry Valleys, Rio Tinto (Spain), Mauna Kea, the Atacama Desert (Chile) and the US Southwest. He has a share of nine NASA Group Achievement Awards and two US Patents and received a NASA Exceptional Technology Achievement Medal (for robotic drilling) in 2019.

Robotics Colloquium: Guest Speaker Victoria Interrante

Virtual Reality Nature Immersion for Stress Reduction and Enhanced Wellbeing 

The potential health benefits of contemplative immersion in the natural world have been cited in the scientific literature for well over a century.  Several theories have been advanced to explain the basis for the observed beneficial outcomes, including Stephen & Rachel Kaplan's Attention Restoration Theory (ART), which posits that immersion in nature supports the restoration of depleted attentional capacity by evoking effortless 'soft fascination' while at the same time providing affordances for quiet reflection, and Edward Wilson's Biophilia hypothesis, which advances the theory that human beings have an innate affinity for living things, including natural environments that support life in an evolutionary sense.  The vast literature on beneficial outcomes from exposure to nature includes a wide range of findings, from reduced recovery time among surgical patients whose hospital rooms offer a view of nature, to increased cognitive abilities and lower physiological measures of stress after brief periods of walking in nature, to reduced risk of psychiatric disorders among people who had continuous access to green space as children.  Given all of these benefits, there is increasing interest across multiple research communities in more clearly understanding the conditions under which, and mechanisms through which, time spent in nature translates to increased well-being.  In this talk, I will outline a nascent research agenda aimed at exploring the potential of using immersive virtual reality technology to extend the benefits of nature immersion to populations who, due to either temporary or chronic circumstances, cannot access real natural spaces, and to help elucidate the optimal design features of a virtual nature immersion experience to best support restorative outcomes.
 
Bio:
Victoria Interrante is a Professor in the Department of Computer Science at the University of Minnesota, where her research focuses on improving the human experience in immersive virtual reality environments.  She also directs the university-wide Center for Cognitive Sciences and its accompanying interdisciplinary graduate program. In addition to her work with virtual nature, she and her students are pursuing projects on improving spatial understanding and reducing cybersickness in immersive virtual environments, and on enhancing the outcomes of potential bias mitigation interventions through the use of VR technology.  She is a recipient of the 2020 IEEE VGTC Virtual Reality Career Award for her lifetime contributions to visualization and visual perception for augmented and virtual reality, and in 2022 was inducted into the inaugural class of the IEEE VGTC Virtual Reality Academy, which, among other things, recognizes individuals for their "cumulative and momentous contributions to research and/or development; broader influence on the field, the community, and on the work of others; and significant service and/or active participation in the community."

Robotics Colloquium: Speaker Chad Jenkins

Topic: Defining the Discipline of Robotics for Excellence and Equity through Bipedal Mobile Manipulation

Abstract: Start with a simple question: What is the best major for a student to become a roboticist?  In general, an undergraduate major defines the intellectual organization for its academic discipline to produce “people and ideas.”  In my role leading the Robotics Undergraduate Program at Michigan, we tackled this question through the curricular challenge of how to both: 1) educate people to put ideas of the robotics discipline into practice and 2) endow them with the intellectual lens for creating new ideas that extend the frontiers of the robotics discipline -- including research into mobility and manipulation in the real world.

As part of our larger Robotics Pathways model, the Robotics Major at the University of Michigan was successfully launched in the 2022-23 academic year as an innovative step forward to better serve students, our communities, and our society. Building on our guiding principle of "Robotics with Respect," the Michigan Robotics Major was designed to define robotics as a true academic discipline with both equity and excellence as our highest priorities. The Michigan Robotics Major has embraced an adaptable curriculum that is accessible through a diversity of student pathways and enables successful and sustained career-long participation in robotics, AI, and automation professions.

In this talk, I will present our design, launch, and innovations for the Michigan Robotics Major for undergraduates and our research progress toward humanoid mobile manipulation systems. A number of curricular innovations will be presented, such as: bringing mathematics to life through computational linear algebra (before calculus!), elevating core robotics concepts into compelling sophomore- and junior-level courses, creating our affordable and accessible MBot platform capable of fully autonomous navigation, and Distributed Teaching Collaboratives with Minority Serving Institutions. I will also present our work on perception and planning with the Agility Robotics Digit robot toward realizing the long-standing vision of taskable autonomous humanoid robots capable of mobile manipulation tasks in common human environments.

Bio: Chad Jenkins is a Professor of Robotics and a Professor of Electrical Engineering and Computer Science at the University of Michigan.  Prof. Jenkins is the inaugural Program Chair of the Robotics Major Degree Program launched in 2022 for undergraduates at the University of Michigan. Prof. Jenkins is currently serving as Editor-in-Chief for the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science (AAAS) and the Association for the Advancement of Artificial Intelligence (AAAI). 

Professor of Robotics; Professor of EECS (courtesy) at the University of Michigan

Host: Karthik Desingh

 

MnRI Showcase: Guest Speaker Henrik Christensen

Plenary Speaker: Henrik Christensen - UC San Diego
 

About the speaker: 

Henrik I. Christensen is the Qualcomm Chancellor's Chair of Robot Systems and the director of the Contextual Robotics Institute at UC San Diego, and also a Distinguished Professor of Computer Science in the Department of Computer Science and Engineering. Dr. Christensen was initially trained in Mechanical Engineering and subsequently worked with MAN/BW Diesel. He earned M.Sc. and Ph.D. EE degrees from Aalborg University in 1987 and 1990, respectively. Since graduating, Dr. Christensen has participated in many international research projects across four continents. He held positions at Aalborg University, Oak Ridge National Laboratory, the Royal Institute of Technology, and Georgia Tech before joining UC San Diego. Dr. Christensen does research on robotics, with a particular emphasis on a systems perspective to the problem. Solutions must have a strong theoretical basis and a corresponding well-defined implementation, and they must be evaluated in realistic settings. There is a strong emphasis on "real systems for real applications!"

The research has involved collaborations with ABB, Electrolux, Daimler-Chrysler, KUKA, iRobot, Apple, Partek Forest, Volvo, SAIC, Boeing, GM, PSA Peugeot, BMW, Yujin, Qualcomm, and others.

Dr. Christensen has published more than 400 contributions across robotics, vision, and artificial intelligence. He served as the Founding Chairman of EURON (1999-2006) and research coordinator for ECVision (2000-2004). He has led and participated in many EU projects, such as VAP, CoSy, CogVis, SMART, CAMERA, EcVision, EURON, Cogniron, and Neurobotics. He served as the PI for the CCC initiative on US Robotics. He is a Co-PI on ARL DCIST RCA, TILOS, the Robotics-VO, and several projects with industry. He was awarded the Joseph Engelberger Award in 2011 and was also named a Boeing Supplier of the Year 2011. He is a fellow of AAAS (2013) and IEEE (2015). He was awarded an honorary doctorate in engineering (Dr. Techn. h.c.) from Aalborg University in 2014. Dr. Christensen has served or serves on the editorial boards of many of the most prestigious journals in the field, including the International Journal of Robotics Research (IJRR), Autonomous Robots, Robotics and Autonomous Systems (RAS), IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), and Image & Vision Computing. In addition, he serves on the editorial board of the MIT Press series on Intelligent Robotics and Autonomous Agents. He was the founding co-editor-in-chief of Foundations and Trends in Robotics.

Call for Presentations

This research showcase aims to make up for the missed in-person networking due to the pandemic. Students, postdocs, industry researchers, and faculty are all encouraged to participate. The event is free.

The symposium will be an all-day affair featuring a plenary talk, faculty talks, student posters, and social events. 

Please consider presenting your past and current research in the showcase. 

The deadline for indicating your interest is October 31, 2023. We plan to archive recordings of the presentations (talks and posters) after the event. We also welcome unpublished work, as we can selectively avoid archiving it.

Visit MnRI Research Showcase 2023 for more details. 

Robotics Colloquium: Guest Speaker Ognjen Ilic

Title: Metamaterials in Motion: Manipulating the Energy and the Momentum of Waves at the Subwavelength Scale

Abstract: The transport of waves, such as light and sound, can be radically transformed when waves interact with metamaterial structures with engineered subwavelength features. My group aims to understand and develop electromagnetic and acoustic metamaterials that can control wave-matter interactions in ways that are impossible with conventional materials. In the first part of my talk, I will present our work on acousto-mechanical metamaterials that can steer ultrasonic waves for contactless and programmable actuation. This versatile concept enables new actuation functions, including autonomous path following and contactless tractor beaming, that are made possible by anomalous scattering and are beyond the limits of traditional wave-matter interactions. In the second part, I will discuss how the same ideas carry over naturally to optical systems. Light is a powerful tool for manipulating matter without contact, with concepts such as optical traps and tweezers widely used in fields from biology and bioengineering to microfluidics and quantum sensing, but typically limited to small objects and short distances. In contrast, our approach to designing nanoscale elements that control the momentum of light could open new frontiers in optomechanics, such as macroscale optical levitation and long-range optical actuation. These concepts of nanoscale light-matter interactions could lead to ultralightweight and multi-functional structures and coatings with unique new terrestrial and space applications.

Bio: Ognjen Ilic is a Benjamin Mayhugh Assistant Professor of Mechanical Engineering at the University of Minnesota, Twin Cities. He completed his Ph.D. in physics at MIT and was a postdoctoral scholar in applied physics and materials science at Caltech. His research themes encompass light-matter and wave-matter interactions in nanoscale and metamaterial structures. He received the Air Force Office of Scientific Research (AFOSR) Young Investigator Award, the 3M Non-Tenured Faculty Award, the Bulletin Prize of the Materials Research Society, and a University of Minnesota McKnight Land-Grant Professorship. He holds graduate faculty appointments in the Department of Electrical and Computer Engineering and the School of Physics and Astronomy at the University of Minnesota.

Robotics Colloquium: Speaker Ryan Caverly

Title: Modeling, Pose Estimation, and Control of Cable-Driven Robots

Abstract:
Cable-driven robots are a relatively new class of robotic manipulators with intriguing features, including a large workspace and a high payload-to-weight ratio, which have the potential to enable exciting new robotic applications.  While these features are promising, high-acceleration maneuvers that take advantage of them are challenging and can even destabilize the system if the end-effector pose is not known accurately and the feedback controller is not robust to large amounts of model uncertainty.  The first part of this talk will focus on dynamic modeling, pose estimation, and robust control methods developed by the Aerospace, Robotics, Dynamics, and Control (ARDC) Lab to help enable cable-driven robotic applications. The second part of the talk will introduce the Cable-Actuated Bio-inspired Lightweight Elastic Solar Sail (CABLESSail) concept being developed by the ARDC Lab for space exploration.

Bio:
Ryan Caverly is an Assistant Professor in the Department of Aerospace Engineering and Mechanics at the University of Minnesota.  He received his B.Eng. degree in Honours Mechanical Engineering from McGill University, and his M.Sc. and Ph.D. degrees in Aerospace Engineering from the University of Michigan, Ann Arbor.  From 2017 to 2018 he worked as an intern and then a consultant for Mitsubishi Electric Research Laboratories in Cambridge, MA.  Dr. Caverly is the recipient of a Department of Defense (DoD) Defense Established Program to Stimulate Competitive Research (DEPSCoR) Award and a NASA Early Career Faculty award.  His research interests include dynamic modeling and control systems, with a focus on robotic, mechanical, and aerospace applications, as well as robust and optimal control techniques.

Assistant Professor, Department of Aerospace Engineering and Mechanics

 

Robotics Colloquium: Speaker Tucker Hermans

Title: Out of Sight, Still in Mind: Contending with Hidden Objects in Multi-Object Manipulation

Abstract: Our daily lives are filled with crowded and cluttered environments. Whether getting a bowl out of the cabinet, food out of a refrigerator, or a book off a shelf, we are surrounded by groups and collections of objects when acting in the built world. For robots to act as caregivers and assistants in human spaces, they must contend with more than one object at a time.

In this talk, I will present our recent efforts in the manipulation of multiple objects as groups. I will start with a brief description of what we’ve learned in creating successful learning-based tools for the manipulation of isolated unknown objects. I will then discuss how we’ve extended these approaches to plan interactions with object collections, where multiple objects move at once. Key to these approaches is the use of logical representations to represent and communicate robot tasks. I will then discuss further extensions to our core multi-object manipulation framework including receiving natural language commands and incorporating memory models to handle long-term object occlusion.

Bio: Tucker Hermans is an associate professor in the School of Computing at the University of Utah and a senior research scientist at NVIDIA. Hermans is a founding member of the University of Utah Robotics Center. Professor Hermans is a 2021 Sloan Fellow and recipient of the NSF CAREER award and the 3M Non-Tenured Faculty Award. His research with his students has been nominated for and won multiple conference paper awards including winning the Best Systems Paper at CoRL 2019.

Previously, Professor Hermans was a postdoc at TU Darmstadt working with Jan Peters. He was at Georgia Tech from 2009 to 2014 in the School of Interactive Computing where he earned his Ph.D. in Robotics and his M.Sc. in Computer Science under the supervision of Aaron Bobick and Jim Rehg. He earned his A.B. in German and Computer Science from Bowdoin College in 2009.
