Events

Upcoming Events

Robotics 8970 Colloquium: Stephen Guy

About Stephen Guy
Stephen J. Guy is an associate professor in the Department of Computer Science and Engineering at the University of Minnesota. His research focuses on the development of artificial intelligence for use in autonomous robotics (e.g., collision avoidance and path planning under uncertainty) and computer simulations of human movement and behavior (e.g., crowd simulation and virtual characters). Stephen's work has had wide influence in the games, VR, and real-time graphics industries: his work on motion planning has been licensed by Relic Entertainment, EA, and other digital entertainment companies, and he has been a speaker at the AI Summit at GDC, the leading conference in the games development industry. He is the recipient of several awards, including the Charles E. Bowers Faculty Teaching Award and multiple best paper awards for his research in simulation and planning. Stephen's academic work has appeared in top venues for robotics, AI, and computer graphics, including SIGGRAPH, IJRR, IEEE Transactions on Robotics, AAMAS, AAAI, and IJCAI. His work on simulating virtual humans has been widely covered in popular media, including newspapers, magazines, documentaries, and late-night TV. Prior to joining Minnesota, he received his Ph.D. in Computer Science in 2012 from the University of North Carolina at Chapel Hill, with support from fellowships from Google, Intel, and the UNCF, and his B.S. in Computer Engineering with honors from the University of Virginia in 2006.

Natural Language Processing Seminar Series: Katie Stasaski

Katie Stasaski is a Ph.D. student at University of California, Berkeley.

The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration.

Contact Dongyeop Kang (dongyeop@umn.edu) for any questions or inquiries.

Robotics 8970 Colloquium: Professor Michael McAlpine

3D Printing Functional Materials and Devices

The ability to three-dimensionally interweave biological and functional materials could enable the creation of devices possessing personalized geometries and functionalities. Indeed, interfacing active devices with biology in 3D could impact a variety of fields, including biomedical devices, regenerative biomedicines, bioelectronics, smart prosthetics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature-sensitive. This renders most biological platforms incompatible with the fabrication and material processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid, and brittle. A number of strategies have been developed to overcome these dichotomies.

Our approach is to utilize extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers freeform, autonomous fabrication. This approach addresses the challenges presented above by (1) using 3D printing and imaging for personalized device architectures; (2) employing ‘nano-inks’ as an enabling route for introducing a diverse palette of functionalities; and (3) combining 3D printing of biological and functional inks on a common platform to enable the interweaving of these two worlds, from biological to electronic. 3D printing is a multiscale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, functional materials, and ‘living’ inks may enable next-generation 3D printed devices.

About Dr. Michael McAlpine

Michael McAlpine

Michael C. McAlpine is the Kuhrmeyer Family Chair Professor of Mechanical Engineering at the University of Minnesota. He received a B.S. in Chemistry with honors from Brown University (2000), and a Ph.D. in Chemistry from Harvard University (2006).

His current research is focused on 3D printing functional materials and devices for biomedical applications, with recent breakthroughs in 3D printed deformable sensors and 3D printed bionic eyes (one of National Geographic’s 12 Innovations that will Revolutionize the Future of Medicine). He has received several awards for this work, including the Presidential Early Career Award for Scientists and Engineers (PECASE), and the National Institutes of Health Director’s New Innovator Award.


UMN Machine Learning Seminar

The UMN Machine Learning Seminar Series brings together faculty, students, and local industrial partners who are interested in the theoretical, computational, and applied aspects of machine learning to pose problems, exchange ideas, and foster collaborations. The talks are every Thursday from 12 p.m. to 1 p.m. during the Fall 2021 semester.

This week's speaker is Salman Avestimehr (University of Southern California).

MnRI Seminar: Morgan Turner

Dr. Morgan Turner is the recipient of the CRA/CCC/NSF Computing Innovation Fellowship. Morgan's Ph.D. is in evolutionary biology, and she studies the multi-dimensional kinematics of animal locomotion (e.g., alligators, dinosaurs), specifically how the feet interact with a variety of ground surfaces during walking. Morgan is also a visual artist, and her research involves some serious data visualization challenges.

More information will be available at a future date.

Past Events

Robotics 8970 Colloquium: Dongyeop Kang

Human and Data in the loop of NLP Pipeline

NLP systems trained with the standard machine learning pipeline of annotation, learning, and evaluation suffer from a range of problems: datasets collected from crowd workers often contain annotation artifacts or repeating patterns, and once the systems are deployed, they are not well controlled by, interpretable to, or able to interact with real users. To address these problems, I will discuss recent work from the Minnesota NLP group on human-centric and data-centric approaches. On the human-centric side, we collect humans' perceptions of linguistic styles and then train models to mimic how humans perceive style; we also develop interactive NLP systems that help scholars better read and write academic papers. On the data-centric side, we model data informativeness from various training dynamics and use it to identify important new data points for data augmentation and annotation. We believe that greater involvement of humans and attention to data dynamics will make the traditional ML-driven NLP pipeline more robust, interactive, and information-effective.

Dongyeop Kang headshot

About Dongyeop Kang
Dongyeop Kang is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities. He leads the Minnesota Natural Language Processing (NLP) group, which aims to develop human-centered language technologies. His group's research lies at the intersection of computational linguistics, machine learning, and human-computer interaction.

He completed a postdoc at the University of California, Berkeley, and obtained a Ph.D. in the Language Technologies Institute of the School of Computer Science at Carnegie Mellon University. During his Ph.D. study, he interned at Facebook AI research, Allen Institute for AI (AI2), and Microsoft Research. He has been awarded the AI2 fellowship, CMU Presidential fellowship, and ILJU Ph.D. fellowship.

Robotics 8970 Colloquium: Parikshit Maini

Mobile Robots in Agriculture

The presentation will start with an overview of recent projects and research in agricultural robotics in the Robotic Sensor Networks Lab. Dr. Maini will then discuss the lab's recent work on weed removal in organic dairy pastures using autonomous robots. The carbon footprint of diesel-run farm vehicles used for weed removal and other agricultural tasks is a cause for concern, especially on organic farms that do not use chemicals. Given that one third of all land in the mainland US is used for cattle grazing, this problem holds considerable significance. The lab has designed an autonomous, battery-powered mobile robot, called the Cowbot, for weed control in the rough and challenging environment of cow pastures. Cow pastures are usually open fields, and weed populations vary widely with geographic location and time of year. He will then present work on two research questions: budget-aware weed detection using aerial imagery, and online trajectory planning that lets the Cowbot efficiently use weed detection information.

Traditionally, detection and planning have been addressed as separate problems that do not account for the operating range of mobile robots. This separation leads to mobile robots either completing only part of an operation or needing to refuel and resume. He will present work on weed detection from aerial imagery that accounts for the available planning budget of the autonomous mower. The second problem addresses online trajectory planning for the Cowbot given the limited field of view of its onboard sensors and its finite turning radius. With an onboard weed detection module, efficiently using detection information in real time to plan robot trajectories is challenging: because weed density on pastures is unknown and variable, coverage paths can waste substantial resources. He will present reactive planning algorithms that compute efficient robot trajectories using detection information from onboard sensing systems. The lab has deployed these algorithms on the Cowbot and evaluated them in large-scale experiments on cow pastures. He will then show videos of the Cowbot in action and discuss future directions being pursued in this space.

About Parikshit Maini

Parikshit Maini is a Post-Doctoral Associate in the Department of Computer Science and Engineering at the University of Minnesota and a member of the Robotic Sensor Networks Lab headed by Prof. Volkan Isler. He works in the area of field robotics and applied AI with a focus on environmental and agricultural applications for mobile robot systems. He is leading the planning and navigation team on the "Cowbot - autonomous weed mower" project, which has been covered in multiple news media stories (PBS, Star Tribune, Rural Media Group) and was recently showcased in live demos at the Minnesota FarmFest 2021. He also works on cooperative planning for heterogeneous multi-robot systems. He has developed planning algorithms for large-scale area coverage, persistent monitoring, and visibility-based monitoring on terrains using cooperative aerial and ground robotic sensor nodes.

He holds a Ph.D. in Computer Science and Engineering from Indraprastha Institute of Information Technology-Delhi, India. He also holds an M.Tech. in Computer Science and Engineering from IIIT-Delhi and a B.Tech. in Information Technology from Guru Gobind Singh Indraprastha University, Delhi, India.

Natural Language Processing Seminar Series: Dheeraj Rajagopal

Dheeraj Rajagopal is a Ph.D. student at Carnegie Mellon University.

The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration.

Contact Dongyeop Kang (dongyeop@umn.edu) for any questions or inquiries.

Inside the Minnesota Robotics Institute: At the Heart of a Growing Robotics Industry

The Minnesota Robotics Institute (MnRI) is an outcome of the University of Minnesota's Discovery, Research, and InnoVation Economy (MnDRIVE) initiative, which brings together interdisciplinary researchers to solve grand challenges and strengthen Minnesota's position as a worldwide leader in robotics research and education. Join MnRI Director Nikos Papanikolopoulos, faculty members Hyun Soo Park and Maria Gini, and Graduate Program Advisor Travis Henderson to learn about the Institute's mission, hear about the new M.S. in Robotics program, and take a virtual tour inside the MnRI's world-class facilities. The presentation will also highlight recent research projects, including 3D reconstruction of dynamic human geometry and how to allocate tasks depending on the number of robots available.

Robotics 8970 Colloquium: Dr. Andrew Hansen (MADE)

Development and Translation of Products for Veterans – Made by MADE

This presentation will provide an overview of the Minneapolis Adaptive Design & Engineering (MADE) Program's history and development of products for Veterans. The MADE Program specializes in the development of rehabilitation technologies such as lower-limb prostheses, wheelchairs, exercise equipment, and skin screening systems. We utilize a stage-gate model for product development and work on projects that aim to improve the participation of Veterans in important life activities regardless of their physical abilities. MADE is also a site for the Technology Transfer Assistance Program, which serves to prototype clinician-driven ideas throughout the country.

Andrew Hansen

About Andrew Hansen

Hansen received a bachelor's degree in biomedical engineering from the University of Iowa in 1995, followed by master's and Ph.D. degrees in biomedical engineering from Northwestern University in 1998 and 2002, respectively. In 2010, Dr. Hansen and Dr. Gary Goldish founded the MADE Program, which had grown to over 25 multidisciplinary personnel by 2021. Dr. Hansen directs the MADE Program at the Minneapolis VA as a Research Biomedical Engineer and is also a Professor of Rehabilitation Science and Biomedical Engineering at the University of Minnesota.

Robotics 8970 Colloquium: Junaed Sattar (Fall 2021)


From ideas to implementations: challenges of robot deployment in the field

Field robotics is all about deploying robotic systems in natural, and often hostile, conditions to evaluate their performance in realistic settings. In the case of our Interactive Robotics and Vision Lab, it involves deploying autonomous underwater robots in open-water environments -- open seas and lakes. This talk will try to give some insights into the journey from the drawing board to the dive board, with a focus on highlighting the process of conceiving algorithms for underwater robotics, specifically for visual perception, learning, human-robot interaction, and navigation, to field testing the entire system.

About Junaed Sattar
Junaed Sattar is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, a MnDRIVE (Minnesota Discovery, Research, and InnoVation Economy) faculty member, and a member of the Minnesota Robotics Institute. He is the founding director of the Interactive Robotics and Vision Lab, where he and his students investigate problems in field robotics, robot vision, human-robot communication, assisted driving, and applied (deep) machine learning, and develop rugged robotic systems. His graduate degrees are from McGill University in Canada, and he holds a B.S. in Engineering from the Bangladesh University of Engineering and Technology. Before coming to Minnesota, he worked as a post-doctoral fellow at the University of British Columbia, where his research focused on human-robot dialog and assistive wheelchair robots, and at Clarkson University in New York as an assistant professor. Find him at junaedsattar.info, the IRV Lab at irvlab.cs.umn.edu, @irvlab on Twitter, and their YouTube page at https://www.youtube.com/channel/UCbzteddfNPrARE7i1C82NdQ.


Natural Language Processing Seminar Series: Shirley Anugrah Hayati

Shirley Anugrah Hayati is a Ph.D. student at the Georgia Institute of Technology.

The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration.

Contact Dongyeop Kang (dongyeop@umn.edu) for any questions or inquiries.

Robotics 8970 Colloquium: Catherine Zhao (Fall 2021)

Attention in Vision-based AI Systems

Imagine that you are at a bus stop in a new city. You take a few glimpses around, parse and summarize the information you gather, and decide on your next steps. Although intuitive, this reflects a highly sophisticated ability to select and parse information. Her research develops and utilizes machine attention along these lines for AI systems. In this talk, Professor Catherine Zhao will discuss the challenges and share recent innovations in data, models, and applications from her research.

Zhao will first talk about attention prediction, the ability of machines to find the most relevant information. She will elaborate on her group's computational models and experimental methods for attention prediction and explain how they have advanced the state of the art. She will then discuss new approaches that leverage attention in computer vision and language tasks, leading to better interpretability and task performance. She will also present preliminary data suggesting that this approach can help reveal and improve the black-box decision-making process of learning-based AI systems. Finally, Zhao will discuss applications of her group's models and data in healthcare, giving two examples in which the work has led to the discovery of neurobehavioral signatures in autism patients and to cutting-edge brain-machine interface technology that restores lost motor function in upper-limb amputees.

About Catherine Qi Zhao

Catherine Qi Zhao

Catherine Qi Zhao is an associate professor in the Department of Computer Science and Engineering at the University of Minnesota. Dr. Zhao's research interests are in computer vision and machine learning, and their applications in healthcare. Her current research on machine attention is supported by NSF and NIH. Dr. Zhao has published more than 100 papers in peer-reviewed conferences and journals. She is an associate editor at the IEEE Transactions on Neural Networks and Learning Systems and the IEEE Transactions on Multimedia, a program chair at WACV 2022, and an area chair at CVPR and other computer vision and AI venues.

Robotics 8970 Colloquium: Andrew Lamperski (Fall 2021)

Non-convex learning, system identification, and stabilization for model-free reinforcement learning

In this talk, Dr. Lamperski will first examine the convergence of Langevin algorithms for machine learning and system identification problems with constraints. 

Much of machine learning fits model parameters to data via optimization, typically some variation of stochastic gradient descent. However, in many cases, such as neural network regression, the loss functions are non-convex, and stochastic gradient descent can get stuck in local minima, if it converges at all. Langevin methods augment standard gradient-based methods with additive noise. For unconstrained problems, it is well understood how the additive noise helps the algorithm escape undesirable minima. However, many neural network regression and probabilistic estimation problems require constraints. We describe a Langevin method for problems with non-convex losses and convex constraints, and show how it provably escapes local minima and converges to the global optimum, albeit slowly in the non-convex case. Then, we will show how the method can be applied to problems with correlated data, such as those arising in the identification of parameters of dynamical systems.
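The core idea described above, gradient steps perturbed by additive Gaussian noise, with a projection step to enforce a convex constraint, can be sketched in a few lines. This is an illustrative toy example, not code from the talk: the loss function, step size, inverse temperature, and constraint interval are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Gradient of a toy non-convex loss f(x) = (x**2 - 1)**2 + 0.5*x,
    # which has a shallow local minimum near x = 1 and a deeper
    # (global) minimum near x = -1.
    return 4.0 * x * (x**2 - 1.0) + 0.5

def projected_langevin(x0, eta=1e-3, beta=5.0, steps=20000, lo=-2.0, hi=2.0):
    """Projected Langevin dynamics: a gradient step plus Gaussian noise
    scaled by sqrt(2*eta/beta), followed by projection onto the convex
    constraint set [lo, hi]."""
    x = x0
    for _ in range(steps):
        noise = np.sqrt(2.0 * eta / beta) * rng.standard_normal()
        x = x - eta * grad(x) + noise      # noisy gradient step
        x = min(max(x, lo), hi)            # projection onto [lo, hi]
    return x

# Start near the shallow local minimum; the noise gives the iterate a
# chance to cross the barrier toward the lower basin near x = -1.
x_final = projected_langevin(x0=1.0)
```

Here `beta` plays the role of an inverse temperature: larger values shrink the noise and make the iterates behave like plain projected gradient descent, while smaller values make barrier crossings more likely at the cost of a noisier final iterate.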

Secondly, Dr. Lamperski will describe the problem of model-free learning of stabilizing controllers for linear systems. In recent years, there has been a strong push to understand the theoretical properties of reinforcement learning on simple benchmark optimal control problems. The simplest optimal control problem with continuous state and action spaces is the linear quadratic regulator. All previous model-free approaches to this problem required knowledge of a stabilizing controller, yet computing this stabilizing controller is typically the most important part of the design process. We will describe an algorithm based on Q-learning that can find a stabilizing controller and then optimize it. It can be applied online to a single trajectory or offline to a fixed dataset.

About Dr. Andrew Lamperski

Andrew Lamperski headshot

Dr. Andrew Lamperski received his B.S. in Biomedical Engineering and Mathematics in 2004 from Johns Hopkins University and his Ph.D. in Control and Dynamical Systems in 2011 from the California Institute of Technology.

He held postdoctoral positions in control and dynamical systems at the California Institute of Technology from 2011 to 2012 and in mechanical engineering at Johns Hopkins University in 2012. From 2012 to 2014, Lamperski did postdoctoral work in the Department of Engineering at the University of Cambridge on a scholarship from the Whitaker International Program. In 2014, he joined the Department of Electrical and Computer Engineering at the University of Minnesota as an Assistant Professor.

His research interests include optimal control and machine learning, with applications to neuroscience and robotics.


Natural Language Processing Seminar Series: Philippe Laban

Philippe Laban is a research scientist at Salesforce Research.

The Minnesota Natural Language Processing (NLP) Seminar is a venue for faculty, postdocs, students, and anyone else interested in theoretical, computational, and human-centric aspects of natural language processing to exchange ideas and foster collaboration.

Contact Dongyeop Kang (dongyeop@umn.edu) for any questions or inquiries.