Past Events

Robotics 8970 Colloquium: Professor Michael McAlpine

3D Printing Functional Materials and Devices

The ability to three-dimensionally interweave biological and functional materials could enable the creation of devices possessing personalized geometries and functionalities. Indeed, interfacing active devices with biology in 3D could impact a variety of fields, including biomedical devices, regenerative medicine, bioelectronics, smart prosthetics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature-sensitive. This renders most biological platforms incompatible with the fabrication and material processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid, and brittle. A number of strategies have been developed to overcome these dichotomies.

Our approach is to utilize extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers freeform, autonomous fabrication. This approach addresses the challenges presented above by (1) using 3D printing and imaging for personalized device architectures; (2) employing ‘nano-inks’ as an enabling route for introducing a diverse palette of functionalities; and (3) combining 3D printing of biological and functional inks on a common platform to enable the interweaving of these two worlds, from biological to electronic. 3D printing is a multiscale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, functional materials, and ‘living’ inks may enable next-generation 3D printed devices.

About Dr. Michael McAlpine

Michael C. McAlpine is the Kuhrmeyer Family Chair Professor of Mechanical Engineering at the University of Minnesota. He received a B.S. in Chemistry with honors from Brown University (2000), and a Ph.D. in Chemistry from Harvard University (2006).

His current research is focused on 3D printing functional materials and devices for biomedical applications, with recent breakthroughs in 3D printed deformable sensors and 3D printed bionic eyes (one of National Geographic’s 12 Innovations that will Revolutionize the Future of Medicine). He has received several awards for this work, including the Presidential Early Career Award for Scientists and Engineers (PECASE), and the National Institutes of Health Director’s New Innovator Award.

Natural Language Processing Seminar Series: Katie Stasaski

Katie Stasaski is a Ph.D. student at the University of California, Berkeley.

The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration.

Contact Dongyeop Kang (dongyeop@umn.edu) with any questions or inquiries.

Robotics 8970 Colloquium: Stephen Guy

Simulating Human Motions for Social AI

About Stephen Guy
Stephen J. Guy is an associate professor in the Department of Computer Science and Engineering at the University of Minnesota. His research focuses on the development of artificial intelligence for use in autonomous robotics (e.g., collision avoidance and path planning under uncertainty) and computer simulations of human movement and behavior (e.g., crowd simulation and virtual characters). Stephen's work has had a wide influence in the games, VR, and real-time graphics industries: his work on motion planning has been licensed by Relic Entertainment, EA, and other digital entertainment companies, and he has been a speaker at the AI Summit at GDC, the leading conference in the games development industry. He is the recipient of several awards, including the Charles E. Bowers Faculty Teaching Award and multiple best paper awards for his research in simulation and planning. Stephen's academic work has appeared in top venues for robotics, AI, and computer graphics, including SIGGRAPH, IJRR, IEEE Transactions on Robotics, AAMAS, AAAI, and IJCAI. His work on simulating virtual humans has been widely covered in popular media, including newspapers, magazines, documentaries, and late-night TV.

Prior to joining Minnesota, he received his Ph.D. in Computer Science in 2012 from the University of North Carolina at Chapel Hill, with support from fellowships from Google, Intel, and the UNCF, and his B.S. in Computer Engineering with honors from the University of Virginia in 2006.

Robotics 8970 Colloquium: Dongyeop Kang

Human and Data in the loop of NLP Pipeline

NLP systems are typically built with a standard machine learning pipeline of annotation, learning, and evaluation, and each stage introduces problems: datasets collected from crowd workers often contain annotation artifacts or repeating patterns, and once systems are deployed they are difficult for real users to control, interpret, or interact with. To address these problems, I will discuss recent work from the Minnesota NLP group on human-centric and data-centric approaches. On the human-centric side, we collect human perceptions of linguistic styles and train models to mimic how humans perceive style, and we develop interactive NLP systems that help scholars read and write academic papers more effectively. On the data-centric side, we model the informativeness of data based on training dynamics and use it to identify important new data points for augmentation and annotation. We believe that greater involvement of humans and attention to data dynamics can transform the traditional ML-driven NLP pipeline into one that is more robust, interactive, and data-efficient.
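As a rough illustration of the data-centric idea above, the minimal sketch below ranks training examples by simple training dynamics (mean confidence and variability of the gold-label probability across epochs) and picks the most ambiguous ones as candidates for further annotation or augmentation. This is not the Minnesota NLP group's actual method; the function name and the scoring heuristic are assumptions made purely for illustration.

# Illustrative sketch only: rank training examples by simple training
# dynamics and pick the most ambiguous ones for extra annotation.
# All names and the scoring heuristic are hypothetical.
import numpy as np

def select_informative(gold_probs: np.ndarray, k: int) -> np.ndarray:
    """gold_probs: array of shape (num_epochs, num_examples) holding the
    probability the model assigned to the correct label at each epoch.
    Returns indices of the k most 'ambiguous' examples."""
    confidence = gold_probs.mean(axis=0)    # average confidence per example
    variability = gold_probs.std(axis=0)    # how much confidence fluctuates
    # Favor examples whose confidence fluctuates and hovers near 0.5.
    score = variability - np.abs(confidence - 0.5)
    return np.argsort(-score)[:k]

# Example: gold-label probabilities for 6 examples over 5 training epochs.
rng = np.random.default_rng(0)
probs = rng.uniform(size=(5, 6))
print(select_informative(probs, k=2))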

About Dongyeop Kang
Dongyeop Kang is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities. He leads the Minnesota Natural Language Processing (NLP) group, which aims to develop human-centered language technologies. His group's research lies at the intersection of computational linguistics, machine learning, and human-computer interaction.

He completed a postdoc at the University of California, Berkeley, and obtained a Ph.D. from the Language Technologies Institute of the School of Computer Science at Carnegie Mellon University. During his Ph.D. study, he interned at Facebook AI Research, the Allen Institute for AI (AI2), and Microsoft Research. He has been awarded the AI2 Fellowship, the CMU Presidential Fellowship, and the ILJU Ph.D. Fellowship.

Robotics 8970 Colloquium: Parikshit Maini

Mobile Robots in Agriculture

The presentation will start with an overview of recent projects and research in agricultural robotics in the Robotic Sensor Networks Lab. Dr. Maini will then talk about the lab's recent work on weed removal in organic dairy pastures using autonomous robots. The carbon footprint of using diesel-powered farm vehicles for weed removal and other agricultural tasks is a cause of concern, especially for organic farms that do not use chemicals. Combined with the fact that one-third of all land in the mainland US is used for cattle grazing, this problem holds considerable significance. The lab has designed an autonomous battery-powered mobile robot, called Cowbot, for weed control in the rough and challenging environment of cow pastures. Cow pastures are usually open fields, and weed populations vary widely with geographic location and time of year. He will then present work on two research questions: budget-aware weed detection using aerial imagery, and online trajectory planning that lets the Cowbot use weed detection information efficiently.

Traditionally, detection and planning have been addressed as separate problems that do not account for the operating range of mobile robots. This separation leads to mobile robots either completing only part of an operation or needing to refuel and resume. He will present work on weed detection from aerial imagery that accounts for the available planning budget of the autonomous mower. The second problem addresses online trajectory planning for the Cowbot, which has a limited onboard sensing field of view and a finite turning radius. Given an onboard weed detection module, efficiently using detection information in real time to plan robot trajectories is challenging: because weed density on pastures is unknown and variable, simple coverage paths can waste substantial resources. He will present reactive planning algorithms that compute efficient robot trajectories using detection information from onboard sensing systems. The lab has deployed these algorithms on the Cowbot and evaluated them in large-scale experiments on cow pastures. He will then show videos of the Cowbot in action and discuss future directions the lab is pursuing in this space.
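To make the idea of reactive, detection-driven planning concrete, here is a minimal sketch of a greedy reactive step: steer toward the nearest weed visible in a limited field of view, with the steering change bounded per step as a crude stand-in for a finite turning radius. This is an illustrative assumption, not the Cowbot's actual planner; all names and parameters are hypothetical.

# Minimal illustrative sketch (not the Cowbot's actual planner): a greedy
# reactive step that steers toward the nearest weed currently visible in a
# limited field of view, with the per-step steering change bounded as a
# crude proxy for a finite turning radius.
import math

def reactive_step(pose, weeds, fov_deg=90.0, max_turn_deg=15.0, step=0.5):
    """pose = (x, y, heading_rad); weeds = list of (x, y) detections.
    Returns the next pose after steering toward the nearest visible weed,
    or driving straight ahead if nothing is in view."""
    x, y, th = pose
    visible = []
    for wx, wy in weeds:
        bearing = math.atan2(wy - y, wx - x) - th
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
        if abs(math.degrees(bearing)) <= fov_deg / 2:
            visible.append((math.hypot(wx - x, wy - y), bearing))
    if visible:
        _, bearing = min(visible)                        # nearest visible weed
        turn = max(-max_turn_deg, min(max_turn_deg, math.degrees(bearing)))
        th += math.radians(turn)                         # bounded steering change
    return (x + step * math.cos(th), y + step * math.sin(th), th)

# Toy usage: drive a few steps toward two detected weeds.
pose = (0.0, 0.0, 0.0)
weeds = [(3.0, 1.0), (5.0, -2.0)]
for _ in range(3):
    pose = reactive_step(pose, weeds)
    print(pose)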

About Parikshit Maini

Parikshit Maini is a Post-Doctoral Associate in the Department of Computer Science and Engineering at the University of Minnesota and a member of the Robotic Sensor Networks Lab headed by Prof. Volkan Isler. He works in the area of field robotics and applied AI, with a focus on environmental and agricultural applications of mobile robot systems. He leads the planning and navigation team on the "Cowbot - autonomous weed mower" project, which has been covered in multiple news media stories (PBS, Star Tribune, Rural Media Group) and was recently showcased in live demos at Minnesota FarmFest 2021. He also works on cooperative planning for heterogeneous multi-robot systems and has developed planning algorithms for large-scale area coverage, persistent monitoring, and visibility-based monitoring on terrains using cooperative aerial and ground robotic sensor nodes.

He holds a Ph.D. in Computer Science and Engineering from the Indraprastha Institute of Information Technology-Delhi (IIIT-Delhi), India, an M.Tech. in Computer Science and Engineering from IIIT-Delhi, and a B.Tech. in Information Technology from Guru Gobind Singh Indraprastha University, Delhi, India.

Natural Language Processing Seminar Series: Dheeraj Rajagopal

Dheeraj Rajagopal is a Ph.D. student at Carnegie Mellon University.

The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration.

Contact Dongyeop Kang (dongyeop@umn.edu) with any questions or inquiries.

Inside the Minnesota Robotics Institute: At the Heart of a Growing Robotics Industry

The Minnesota Robotics Institute (MnRI) is an outcome of the University of Minnesota’s Discovery, Research, and InnoVation Economy (MnDRIVE) initiative, which brings together interdisciplinary researchers to solve grand challenges and strengthen Minnesota’s position as a worldwide leader in robotics research and education. Join MnRI Director Nikos Papanikolopoulos, faculty members Hyun Soo Park and Maria Gini, and Graduate Program Advisor Travis Henderson to learn about the Institute’s mission, hear about the new M.S. in Robotics program, and take a virtual tour of the MnRI’s world-class facilities. The presentation will also highlight recent research projects, including 3D reconstruction of dynamic human geometry and allocating tasks depending on the number of robots available.

Robotics 8970 Colloquium: Dr. Andrew Hansen (MADE)

Development and Translation of Products for Veterans – Made by MADE

 This presentation will provide an overview of the Minneapolis Adaptive Design & Engineering (MADE) Program’s history and development of products for Veterans. The MADE Program specializes in the development of rehabilitation technologies such as lower-limb prostheses, wheelchairs, exercise equipment, and skin screening systems. We utilize a stage-gate model for product development and work on projects that aim to improve the participation of Veterans in important life activities regardless of their physical abilities. MADE is also a site for the Technology Transfer Assistance Program, which serves to prototype clinician-driven ideas throughout the country.

About Andrew Hansen

Hansen received a bachelor’s degree in biomedical engineering from the University of Iowa in 1995, followed by master’s and Ph.D. degrees in biomedical engineering from Northwestern University in 1998 and 2002, respectively. In 2010, Dr. Hansen and Dr. Gary Goldish founded the MADE Program, which has grown to more than 25 multidisciplinary personnel as of 2021. Dr. Hansen directs the MADE Program at the Minneapolis VA as a Research Biomedical Engineer and is also a Professor of Rehabilitation Science and Biomedical Engineering at the University of Minnesota.

Robotics 8970 Colloquium: Junaed Sattar (Fall 2021)

From ideas to implementations: challenges of robot deployment in the field

Field robotics is all about deploying robotic systems in natural, and often hostile, conditions to evaluate their performance in realistic settings. For our Interactive Robotics and Vision Lab, that means deploying autonomous underwater robots in open-water environments such as open seas and lakes. This talk will offer some insights into the journey from the drawing board to the dive board, focusing on the process of conceiving algorithms for underwater robotics, specifically for visual perception, learning, human-robot interaction, and navigation, and then field testing the entire system.

About Junaed Sattar
Junaed Sattar is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, a MnDRIVE (Minnesota Discovery, Research, and InnoVation Economy) faculty member, and a member of the Minnesota Robotics Institute. He is the founding director of the Interactive Robotics and Vision Lab, where he and his students investigate problems in field robotics, robot vision, human-robot communication, assisted driving, and applied (deep) machine learning, and develop rugged robotic systems. His graduate degrees are from McGill University in Canada, and he holds a B.S. in Engineering from the Bangladesh University of Engineering and Technology. Before coming to the University of Minnesota, he worked as a post-doctoral fellow at the University of British Columbia, where his research focused on human-robot dialog and assistive wheelchair robots, and as an assistant professor at Clarkson University in New York. Find him at junaedsattar.info, the IRV Lab at irvlab.cs.umn.edu, @irvlab on Twitter, and the lab's YouTube page at https://www.youtube.com/channel/UCbzteddfNPrARE7i1C82NdQ.

Natural Language Processing Seminar Series: Shirley Anugrah Hayati

Shirley Anugrah Hayati is a Ph.D. student at the Georgia Institute of Technology.

The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration.

Contact Dongyeop Kang (dongyeop@umn.edu) with any questions or inquiries.