Past Events

MnRI Seminar: Dr. Yezhou Yang

Visual Recognition beyond Appearances, and its Robotic Applications

The goal of computer vision, as articulated by Marr, is to develop algorithms that answer the what, where, and when from visual appearance. The speaker, among others, recognizes the importance of studying the underlying entities and relations beyond visual appearance, following an Active Perception paradigm.

This talk will present Dr. Yang's efforts over the last decade, ranging from reasoning beyond appearance for visual question answering, image understanding, and video captioning, through temporal and self-supervised knowledge distillation with incremental knowledge transfer, to their roles in a robotic visual learning framework demonstrated on a Robotic Indoor Object Search task. The talk will also feature the Active Perception Group (APG)'s ongoing projects (NSF RI, NRI and CPS, DARPA KAIROS, and Arizona IAM), which address emerging national challenges in autonomous driving, AI security, and healthcare. APG is based at the ASU School of Computing, Informatics, and Decision Systems Engineering (CIDSE).

About Yezhou Yang
Yezhou Yang is an Assistant Professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University (ASU), where he directs the ASU Active Perception Group. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and performing high-level reasoning over those primitives for intelligent robots.

Before joining ASU, Dr. Yang was a Postdoctoral Research Associate at the Computer Vision Lab and the Perception and Robotics Lab at the University of Maryland Institute for Advanced Computer Studies. He is a recipient of the 2011 Qualcomm Innovation Fellowship, the 2018 NSF CAREER Award, and the 2019 Amazon AWS Machine Learning Research Award. He received his Ph.D. from the University of Maryland, College Park, and his B.E. from Zhejiang University, China.

MnRI Seminar: Dr. Joshua Stopek

Histotripsy, a non-invasive robotic cancer therapy

Histotripsy uses very high-amplitude, short (microsecond) pulses of focused ultrasound to induce and control acoustic cavitation in the form of a histotripsy “bubble cloud”. The negative pressure, which can exceed -25 MPa in the focal zone, allows the rapid formation and collapse of nano- and microbubbles within the cloud, derived from endogenous gases naturally present in the targeted tissue. The bubbles form and collapse in microseconds, creating mechanical forces strong enough to destroy tissue at the cellular and sub-cellular levels without the need for ionizing or thermal energy. An image-guided robotic platform is used to deliver the therapy.

HistoSonics is currently developing the broad platform capability to non-invasively treat tumors across the body, with an initial focus on significant unmet needs in abdominal and liver tumors. The company is also researching and developing immuno-oncology-enabling therapies, which may further benefit from the tissue lysate and effects created by histotripsy treatments, potentially turning cold tumors hot and eliciting systemic responses to local therapy.

About Dr. Joshua Stopek
Dr. Stopek has more than 20 years of R&D leadership experience, including a strong background in bringing image-guided technologies, new therapy platforms, and combination devices to market. He is currently Vice President of R&D at HistoSonics. He formerly led R&D in various business areas at Medtronic, Covidien, and US Surgical. Prior to that, he was the co-founder and VP of a startup medical device company, VMSI, working on new minimally invasive and tissue regeneration therapies. Dr. Stopek has more than 200 issued and pending patents. He received his BS, MS, and PhD in Materials Science and Engineering from the University of Florida, where he also completed a fellowship in Neurosurgery and Neuroscience.

MnRI Seminar: Shweta Gupta

Next Frontier in AI

Today, a range of technologies has become core to our ecosystem: the internet, wireless connectivity, the Internet of Things (IoT), artificial intelligence (AI), and a proliferation of connected devices. These technologies are addressing some of the biggest use cases, such as mobility, e-commerce, marketing, driverless cars, and cybersecurity. But now that all of these technologies can be combined, a trend known as "technical convergence", what can we do next?

Products are evolving from physical goods (cars, mobile phones, electronics) to virtual products (Google, YouTube, Netflix). What is the next generation of products we can build using AI? This talk will offer glimpses of future products that kindle the imagination of what is possible with AI. For example, can we build ecosystem-intelligence-centric products?

Finally, we will discuss the real use cases the world is waiting for AI to solve, because technology is only as good as its use cases.

About Shweta Gupta
Shweta is a co-founder, CTO, and Chief Data Scientist at ImagoAI, a food-tech startup providing AI-based food-quality analysis. Her startup has won international awards such as TechCrunch Startup Battlefield (Berlin) and is associated with world-renowned organizations including Techstars, Microsoft, Google, and NVIDIA. Shweta's work has resulted in several peer-reviewed scientific articles published in some of the most prestigious, high-impact journals in the AI field. She has won multiple Best Data Scientist awards during her career. With more than a decade of experience in AI, she is passionate about building AI products that can create a massive impact on the world.

MnRI Seminar: Javair Gillett

Augmented Reality Training Improves Sports Performance

The ecological dynamics of sport-specific skill acquisition integrate the brain, body, and environment. Nonlinear behavioral modifications occur as information is taken in, processed, and perceived. Linking neurocognitive training with autonomic processes and muscular behavior would be valuable to an athlete looking to effectively transfer motor-skill learning to game performance. Augmented Reality (AR) technology using “smart glasses” offers users more freedom to interact with their ecosystem in real time.

These devices, used as a neurocognitive training tool, immerse the user in a more realistic, dynamic environment. Investigations in this realm are scarce. A well-developed, evidence-based tool may improve spatial perception by allowing the user to detect and modify actions in ways that transfer to real game scenarios. This research will introduce a new AR training device and examine its usefulness in a sport-specific training environment.

About Javair Gillett
Gillett is in his first season with the Timberwolves as Vice President of Sports Science and Performance after spending six seasons as the Director of Athletic Performance for the Houston Rockets. He comes to the Timberwolves with more than 20 years of experience in the industry. Prior to joining the Rockets, Javair spent 14 years with the Detroit Tigers in similar roles. He is certified as a Registered Strength and Conditioning Coach (RSCC*E) by the National Strength and Conditioning Association. In 2017, Javair was voted Strength Coach of the Year by his peers in the National Basketball Strength and Conditioning Association.

Javair is currently pursuing his PhD in Health Sciences at Rush University (Chicago). He holds an M.S. in Human Movement from A.T. Still University and completed his bachelor's degree at DePauw University, majoring in Health and Human Performance with an emphasis in Exercise Science. Prior to joining the Tigers, Javair gained experience working with the Orlando Magic during the 2002-03 NBA season, Indiana University (2001), and The Pennsylvania State University (2000). He also lettered four seasons with DePauw University's baseball team, earning All-Conference honors in two of those years and All-American Honorable Mention in his final season.

Javair resides in Minneapolis with his wife Erin and daughter Anabella. In his free time, he dedicates himself to sharing his knowledge with youth athletes, parents, and coaches; developing educational tools; and raising awareness of how to achieve fitness goals and live a long, healthy life. Javair has spoken at numerous educational events and has published research articles and other educational content for a variety of outlets.

MnRI Seminar: Aaron Lorenz and Suma Sreekanta

Developing Phenotyping Solutions Towards Optimizing Soybean Shoot Architecture

Attaining a 50% increase in the yield of major crops by 2050 to feed the expected population ranks among our greatest societal challenges. It has been suggested that one promising way to break through this dilemma is to revolutionize the efficiency of crop light-harvesting systems. Soybean, the second most widely grown crop in the U.S., is well known to have a sub-optimal canopy structure, which limits its light interception efficiency. Despite this, very little is known about the soybean shoot architecture properties that create an optimal canopy structure for light interception.

Soybean plants are made up of repeating units of branches and leaves. The dimensions and orientation of these units change in time and space in response to the environment, giving rise to remarkable phenotypic diversity. Estimating how the overall architecture influences light penetration and scatter within the soybean canopy requires accurately imaging and analyzing structural data on many thousands of plants over a short period of time. Soybean phenotyping at this scale requires novel high-throughput, robotics-based imaging techniques. It also makes a compelling argument for a computer vision- and machine learning-based approach, a massive step up from the ineffective manual process currently in use.

About Aaron Lorenz
Aaron Lorenz is an Associate Professor of Soybean Breeding and Genetics in the Department of Agronomy and Plant Genetics at the University of Minnesota. The University of Minnesota Soybean Breeding Program develops specialty, food-type, and general-use soybean varieties adapted to the Upper Midwest. Dr. Lorenz’s research focuses on the optimization and application of genomics and phenomics to an applied cultivar development program. Additional areas of research include the mapping of genes underlying complex traits relevant to soybean production and the development of soybean varieties adapted to new cropping systems.

About Suma Sreekanta
Suma Sreekanta is a plant biologist and geneticist by training and a technology junkie by enthusiasm. She received her BS in Botany from Miami University, studying flooding stress tolerance in soybeans, and her PhD from the University of Minnesota, studying plant immune signaling in the model system Arabidopsis thaliana. As a post-doctoral researcher, she studies soybean canopy architecture and its influence on canopy coverage and light interception. She is actively engaged in collaborative projects that bring together expertise in agriculture, engineering, computer science, and machine learning to advance a more data-driven agenda for field phenotyping in crops. She is keen to work with others who are just as enthusiastic about pushing the boundaries of field phenotyping problems considered difficult, if not impossible.

Robotics 8970 Colloquium: Ognjen Ilic (MnRI Seminar Rewind)

Opto-mechanics: A vision of long-range manipulation enabled by subwavelength metamaterials and metasurfaces

In this MnRI Seminar Rewind, Dr. Ilic discusses his team's approach to engineering artificial materials with subwavelength structure (metamaterials and metasurfaces) that exhibit self-stabilizing mechanical behavior.

About Dr. Ognjen Ilic
Ognjen Ilic is a Benjamin Mayhugh Assistant Professor of Mechanical Engineering at the University of Minnesota, Twin Cities. He completed his Ph.D. in physics at MIT and was a postdoctoral scholar in applied physics and materials science at Caltech. His research themes encompass wave-matter interactions in nanoscale structures and low-dimensional materials. His recent awards include the 3M Non-Tenured Faculty Award and the Bulletin Prize of the Materials Research Society. More details can be found at z.umn.edu/ilic.

Robotics 8970 Colloquium: Jiarong Hong

Field imaging of flow and particle transport: how can robots help?

This talk presents an exciting opportunity to integrate innovative flow and particle sensors with robots, an approach that could revolutionize field measurement and open up a broad range of applications.

AEM Seminar: How to Make Your Ocean Smarter

Our oceans drive worldwide weather-climate systems; our rivers serve as nutrient conduits; and our marine ecosystems house the largest repository of biodiversity and mineral resources on the planet. Humans have relied on rivers, lakes, and oceans for transportation, energy generation, farming, and recreation throughout our history. And today, robots are critical tools in our stewardship of these resources. However, there are significant autonomy challenges when working in dynamic and uncertain environments like oceans and rivers. Robot dynamics are tightly coupled to those of the environment, while communication and localization are limited.

Control under these conditions can be exacting, but environmental dynamics may be harnessed to plan energy-efficient paths and to maintain network connectivity. Networked robot teams can collect data to construct high-fidelity models of the environmental dynamics, which can be integrated into robot control and planning. Those same models can, in turn, guide robot control and sampling strategies to increase their predictive power. In this talk, I will present our vision of a smart ocean observational framework to improve forecasting of weather-climate systems, mitigation of contaminant dispersion, and coordination of maritime search-and-rescue and humanitarian efforts.
 
About Dr. M. Ani Hsieh
M. Ani Hsieh is a Research Associate Professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. She is also the Deputy Director of the General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory. Her research interests lie at the intersection of robotics, multi-agent systems, and dynamical systems theory. Hsieh and her team design algorithms for estimation, control, and planning for multi-agent robotic systems with applications in environmental monitoring, estimation and prediction of complex dynamics, and design of collective behaviors. She received her B.S. in Engineering and B.A. in Economics from Swarthmore College and her PhD in Mechanical Engineering from the University of Pennsylvania. Prior to Penn, she was an Associate Professor in the Department of Mechanical Engineering and Mechanics at Drexel University. Hsieh is the recipient of a 2012 Office of Naval Research (ONR) Young Investigator Award and a 2013 National Science Foundation (NSF) CAREER Award.

Robotics 8970 Colloquium: Julianna Abel

Design and Manufacture of Multifunctional Yarns and Textiles

This talk highlights recent advancements in the design and manufacture of yarns and textiles fabricated from shape memory alloys.

About Dr. Abel
Dr. Julianna Abel is a Benjamin Mayhugh Assistant Professor in the Department of Mechanical Engineering at the University of Minnesota. Dr. Abel earned her Ph.D. and M.S. in Mechanical Engineering from the University of Michigan and her B.S. from the University of Cincinnati. She is an NSF CAREER Award recipient, a Toyota Programmable Systems Innovation Fellow, and a Glenn Research Center Faculty Fellow, and she recently earned the 2020 ASME Ephrahim Garcia Best Paper Award. Her research combines innovative design processes and advanced manufacturing techniques with material and structural modeling to lay the scientific foundation necessary for the design of multifunctional yarns and textiles.

Robotics 8970 Colloquium: Suhasa Kodandaramaiah

Check back here at a later date for more details.