Building a brighter future with robotics

Sitting on the floor, a toddler listens to her mechanical companion, who also sits.

“Clap your hands. Can you clap your hands?” her companion says.

The girl claps enthusiastically. She then stands up and dances vigorously to pop music with her companion. When the music ends, she reaches down, pats the motionless companion on the head, and says to the young woman who’s been watching: “I like your robot.”

The scene was part of a baseline study of how young, typically developing children interact with robots like this toddler’s companion. The young woman, Marie Manner, was a graduate student working with Maria Gini, a professor in the University of Minnesota Department of Computer Science & Engineering. Manner was investigating whether humanoid robots could help lower the age at which autism can first be detected, so that treatment may begin earlier. The idea is that children who will and will not later be diagnosed with autism may interact with a robot—a standard, bias-free presence—in subtly different ways.

A specialist in artificial intelligence (AI), Gini is a mainstay of MnRI, the U of M’s Minnesota Robotics Institute, a unit of the College of Science and Engineering (CSE). She and other MnRI faculty are designing and building robots to perform in ways that mimic the human ability to collect information, process it, and act based on it—in other words, to learn from experience.

Humans do this without giving it a thought. Two people asked to move the same heavy object from point A to point B will naturally try to lift or push it together. But robots must be taught to make connections like this, which means their human designers must know not only about circuits and digital messaging, but about how their own brains work.

Challenges like this excite MnRI researchers, from undergrads up through seasoned professors of computer science & engineering like Gini, Volkan Isler, and MnRI Director Nikolaos Papanikolopoulos. In fact, Papanikolopoulos says the dedication of students and former students makes his job a joy.

“Seeing them lead the pack in industry, seeing them create hundreds of jobs in Minnesota—I never imagined, as a young student in Greece, I’d be part of such a thing,” he says.

Mastery at a young age

From its birth in 2019, MnRI has been luring top students from the U of M and around the world and bringing them together with faculty in Shepherd Laboratory on the Twin Cities campus. Among its distinctions, MnRI offers a rare, three-semester M.S. in Robotics program.

“A master’s degree in robotics allows you to explore many possibilities, as some might be interested in programming while others are more interested in hardware design,” says M.S. student Jun-Jee Chao. “The Robotics Institute provides lots of resources for you to discover your interest.”

The U of M is one of the top universities for people who want to pursue a degree in robotics. I would say the laboratory resource is the most valuable, as you can dive deeper into an interesting topic with all these experts.—Jun-Jee Chao, M.S. student

Adds fellow M.S. student Kai Wang: “I found my interest in computer vision and robotics in my junior year [at the U of M]. This degree offered an opportunity to take more professional courses and to do hands-on research in robotics.

“The U has a really strong robotics department and a powerful Gemini-Huntley Robotics Research Laboratory. The most valuable part [for me] is definitely the research experience in the Robotic Sensor Networks Laboratory—it gives me a real picture of today's field robots.”

Send in the Scouts

Some of the earliest robots built at the U of M came out of Papanikolopoulos' and Gini's labs. Called Scouts, these autonomous robots resembled soda cans with wheels at either end and could both roll and jump. They were designed to enter and relay information from dangerous situations, such as what soldiers and police may encounter, even in total darkness. They have been deployed in dozens of countries, and today their descendants are learning to scale previously insurmountable obstacles. Graduate student Dario Canelon-Suarez is researching the next generation of these robots.

Ruben D’Sa, a former graduate student in Papanikolopoulos’ lab, designed an unmanned aerial vehicle (UAV) that takes off vertically like a typical multirotor drone and then, in midair, unfolds flaps to transform into a fixed-wing aircraft. This dual nature combines the efficiency and range of a fixed-wing aircraft with the maneuverability and hovering capability of a multirotor platform, which can be critical in pickup and delivery missions.

Robots in the heartland

Volkan Isler has long worked on sensing, including a system he developed to track invasive fish. Now he’s designing robots that can manipulate their environments. One, the “cowbot,” is trained to navigate pastures after cows have grazed them and mow the leftover weeds—like a rural Roomba. Why use a robot? Because pastures make for a jarringly rough ride.

Leading the project are two members of Isler’s Robotic Sensor Networks Lab: postdoc Parikshit Maini and Ph.D. student Minghan Wei. The team modified a lawn mower and is collaborating with the U of M’s West Central Research and Outreach Center to make the machine solar-powered and self-sufficient.

“We just finished one big field test. We’re getting good performance,” says Isler. “It now follows a given trajectory. The next stage is, we’re going to make it detect weeds and avoid obstacles.”
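The article doesn’t say which controller the cowbot uses, but a standard way for a ground vehicle to follow a given trajectory is pure pursuit: steer toward a point a fixed lookahead distance ahead on the planned path. The sketch below is illustrative only; the function names, parameters, and example path are invented for the example.

```python
import math

def pure_pursuit_step(x, y, heading, path, lookahead=2.0, speed=1.0):
    """One control step of pure-pursuit path following.

    x, y, heading -- current robot pose (meters, radians)
    path          -- list of (x, y) waypoints for the mowing pass
    Returns (forward_speed, angular_velocity) for a differential-drive base.
    """
    # Pick the first waypoint at least `lookahead` meters away;
    # fall back to the final waypoint near the end of the path.
    target = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break

    # Bearing from the robot's heading to the target point.
    alpha = math.atan2(target[1] - y, target[0] - x) - heading

    # Pure-pursuit curvature; angular velocity = speed * curvature.
    curvature = 2.0 * math.sin(alpha) / lookahead
    return speed, speed * curvature

# Example: follow a straight 10-meter line from the origin.
cmd = pure_pursuit_step(0.0, 0.0, 0.0, [(i, 0.0) for i in range(11)])
```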

Isler’s group has also designed a flying robot that can monitor orchards, and has a project on robotic fruit picking.

“We can count apples and measure their size across an entire orchard,” Isler says. “There’s now a U of M startup [Farm Vision Technologies] commercializing this technology.”

Isler and David Mulla, director of the U of M Precision Agriculture Center in the College of Food, Agricultural and Natural Resource Sciences, hold a patent on a system that combines ground and aerial robots to monitor farm fields and apply water or nutrients only and exactly where needed. The practice promises to improve yields while curbing excessive water use and the runoff of nutrients into waterways.

Protecting lakes, oceans, and streams

In Junaed Sattar’s lab, swimming robots are learning to outperform human divers. Someday, one could, for example, walk to a lake, dive and take samples of mud or organisms, then surface and walk back to the lab, he says.

An assistant professor of computer science & engineering, Sattar works with autonomous underwater vehicles (AUVs) equipped with sensors to help them make intelligent decisions. They have profound potential in hazardous situations, such as searching shipwrecks or clearing lakes of invasive species. His team can, for example, train robots to identify and locate invasive weeds like Eurasian watermilfoil, which changes water chemistry and affects wildlife important to the Minnesota economy.
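The article doesn’t describe the team’s training pipeline, but a common way to teach a robot to recognize a plant like watermilfoil is to fine-tune a pretrained image classifier on labeled camera crops. A minimal sketch, assuming a hypothetical underwater_crops/ folder with milfoil and other subfolders; all paths and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects folders like underwater_crops/milfoil and underwater_crops/other.
train_set = datasets.ImageFolder("underwater_crops", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone; replace the head for 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head is updated; the backbone stays pretrained.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```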

It takes the DNR (state Department of Natural Resources) three weeks to clear a lake, then multiply that by 10,000 lakes. We’d like robots to do this dangerous task by themselves.—Junaed Sattar, assistant professor

Sensors aboard his AUVs can identify objects like rocks, fish, plants, and shipwrecks. The AUVs could learn to retrieve objects from wrecks, and his team has developed an algorithm that helps the robots see better in spite of artifacts such as bubbles, the bane of many an AUV.
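The article doesn’t detail that vision algorithm, but one simple illustration of the idea is to treat bright specular blobs, which is how bubbles often appear on camera, as a mask and inpaint over them. A toy OpenCV sketch, not Sattar’s published method:

```python
import cv2
import numpy as np

def suppress_bubbles(frame):
    """Mask bright specular blobs (a common look for bubbles) and
    inpaint over them so downstream vision sees a cleaner image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Threshold chosen for illustration; real bubbles need tuning.
    _, mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)
    return cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```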

A team of robots could, he says, scour a lake bottom, gather images and sensor data, then deliver the results to experts. Or monitor the health of coral reefs.

As Sattar’s team works, the shadow of Malaysia Airlines Flight 370, lost in the Indian Ocean in March 2014, is never far away.

“If they find the wreckage, people will want black boxes,” Sattar says. “That’s one of our biggest motivations.”

The underwater domain poses unique challenges. GPS, Wi-Fi, phones, and other devices that rely on radio waves don’t work underwater, so Sattar’s team has only cameras and acoustic (sonar) pings to work with.

In many human-robotic interactions, humans learn to speak "Robot." My goal is to make robots learn "Human." This won’t be solved by computer scientists or electrical engineers alone. We need people from psychology, math, life sciences, cognitive sciences, linguistics, and so on so we can look at the elephant from all angles.—Junaed Sattar, assistant professor

Sattar has personally witnessed “a lot of kids who want to do underwater human-robot interaction.” Today, however, his group is one of only a few worldwide working on the problem, he says.

His team, including students—grad, undergrad, and even high school—built the LoCO AUV in-house for only $4,000. Underwater robots usually cost six figures, he says, but “we made LoCO available open source.”

LoCO has performed well in pool tests and field trials in both the Caribbean Sea off Barbados and Minnesota’s Lake Minnetonka.

The challenge of ordinary conversation

Can robots learn human-level skills in understanding and producing speech? Maria Gini has set her sights squarely on answering that central question.

She has produced a prototype “chatbot” for radio stations. It will answer common listener questions, like “What were those last two songs?” Eventually, the chatbots will have voices and personalities to fit each station’s style.
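As a toy illustration, such a chatbot could start with simple keyword intents matched against listener questions. The intents and playlist below are invented for the example and are not Gini’s prototype:

```python
# RECENT_SONGS would come from the station's playout system in practice.
RECENT_SONGS = ["Song A by Artist A", "Song B by Artist B"]  # placeholders

INTENTS = {
    "last two songs": lambda: "The last two songs were "
                              + " and ".join(RECENT_SONGS) + ".",
    "what station": lambda: "You're listening to our station's stream.",
}

def answer(question: str) -> str:
    """Return the first intent whose key phrase appears in the question."""
    q = question.lower()
    for phrase, respond in INTENTS.items():
        if phrase in q:
            return respond()
    return "Sorry, I don't know that one yet."

print(answer("What were those last two songs?"))
```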

Another project addresses the problem of getting robots to work together by, for example, pushing the aforementioned heavy object.

“One question is, Do they need language or some kind of signaling—perhaps through gestures—or do they learn in a random way?” Gini says. “That project is in the early stages.”

And then there’s the challenge of creating robots that can realistically converse with people, especially those who need help. This multilayered work brings in colleagues from the Colleges of Design (notably Professor Lucy Dunne, a specialist in wearable technology), Liberal Arts (Psychology), and Pharmacy, as well as CSE.

“We want to see if there’s a correlation between what people say and what amount of stress they’re experiencing,” Gini explains. “Can we get, for instance, a more sophisticated watch that can maybe say, ‘Whoa, looks as though you’re stressed’?”

She notes how compression vests are used to calm autistic children and envisions one that can tell from physiological data that something’s wrong and then say, “I’ll give you a hug” or simply warm the body. Gini is also near the end of a two-year project to design a robot that can, for example, remind people of tasks or get them to talk about their lives and store that information.

It goes way beyond what Alexa can do.—Maria Gini, professor

As Gini envisions it, “I’m trying to have a real conversation. The program will figure out what I’m saying. Am I asking a question or making a statement? What am I talking about?”

She’s convinced that organization is key. “People learn how to build sentences from examples,” Gini says. “We have memory structures. Will AI be able to construct them?”

This is an interesting time for AI, says Gini, thanks to today’s immense computing power and the questions it raises.

“With more computational power, will computers be able to learn everything?” she muses. “Or is there something unique about the human brain?”

Edited from an article originally published on the University of Minnesota website.
