2023 REU Student Projects

This summer, the University of Minnesota offered a 10-week Research Experiences for Undergraduates (REU) program focused on human-centered computing for social good. Participants were immersed in a collaborative community of practice in the Department of Computer Science and Engineering and mentored by faculty researchers in the areas of virtual reality, social and embodied computing, and human-robot interaction. To frame the societal relevance of their research, students worked on projects that address at least one United Nations Sustainable Development Goal. Lab activities were supplemented with weekly research training seminars, invited talks, and professional development workshops.

The REU students capped off the program by presenting their work at the Summer Undergraduate Research Expo (SURE). The event featured eight posters from REU students and two posters from independent undergraduate researchers at the U of M.

Learn more about SURE by visiting their website. 

REU Students - Human-Centered Computing

Jaiden Barthel

Home Institution: University of Minnesota (transferring in the fall)
Advisor: Evan Suma Rosenberg
Title: Affirming Reality: Using Virtual Reality Self-Avatars to Mitigate Gender Dysphoria
Abstract: This research explores the challenges faced by individuals identifying under the trans umbrella, particularly the distress caused by gender dysphoria. It focuses on the potential benefits of virtual reality (VR) self-avatars as a positive space for gender identity exploration and support for transgender individuals. The study aims to investigate the effectiveness of VR self-avatars in mitigating gender dysphoria symptoms and providing better support for affected individuals. By narrowing our focus to self-avatars, we seek to understand their impact on gender dysphoria distress. The study will involve analyzing users' responses within three separate avatar scenarios to evaluate their distress levels. Our findings may shed light on the potential of VR self-avatars as a powerful tool for supporting individuals dealing with gender dysphoria. Ultimately, this research strives to contribute to the development of effective interventions and strategies to enhance the well-being of those navigating their gender identity within the trans community.

Megdalia Bromhal

Home Institution: University of North Carolina Wilmington
Advisor: Junaed Sattar
Title: Pointing Isn't that Simple: Improving Diver & Robot Interactions in a 3D Underwater Environment
Abstract: You would think pointing to communicate would be simple. If you pointed to a photograph hanging on the wall to your right, your coworker in the room would understand that you’re pointing to the photograph. A robot, however, might think you’re pointing to the wall behind you, not the photograph. This is because the robot inherently perceives the world in 2D, whereas you and I perceive it in 3D. Thus, my work this summer has revolved around improving human-robot interactions, specifically in underwater environments where the human is a diver and the robot is an AUV, an autonomous underwater vehicle. In these environments, a diver may need to point to coral or a floating piece of trash for the robot to note. So, what I work on is taking the images from the robot and running a pose detector on the diver in the images to get the XY coordinates of the diver’s elbows and wrists. I then use a colleague’s code to find the Z coordinate. From the now-3D coordinates of the diver’s elbows and wrists, we will give the robot a 3D area of interest to search for the object the diver points to.
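The geometric core of this idea can be sketched in a few lines. The snippet below is a minimal illustration rather than the project's actual code: it assumes the 3D elbow and wrist coordinates have already been recovered, defines the pointing ray from elbow through wrist, and keeps only the points of a cloud that fall inside a cone-shaped region of interest along that ray. The function names, thresholds, and example values are hypothetical.

```python
import numpy as np

def pointing_ray(elbow, wrist):
    """Unit direction of the pointing gesture, from elbow through wrist.

    elbow, wrist: 3D (x, y, z) coordinates in the robot's camera frame,
    e.g. 2D pose-detector keypoints combined with a depth (Z) estimate.
    """
    elbow, wrist = np.asarray(elbow, float), np.asarray(wrist, float)
    direction = wrist - elbow
    norm = np.linalg.norm(direction)
    if norm < 1e-6:
        raise ValueError("Elbow and wrist coincide; cannot infer a direction.")
    return wrist, direction / norm  # ray origin and unit direction

def points_in_cone(points, origin, direction, max_range=5.0, half_angle_deg=15.0):
    """Subset of a point cloud inside a cone-shaped region of interest."""
    points = np.asarray(points, float)
    offsets = points - origin
    dists = np.linalg.norm(offsets, axis=1)
    cos_angles = offsets @ direction / np.maximum(dists, 1e-9)  # angle to ray
    mask = (dists <= max_range) & (cos_angles >= np.cos(np.deg2rad(half_angle_deg)))
    return points[mask]

# Example: search a synthetic point cloud along the diver's pointing ray.
origin, direction = pointing_ray(elbow=[0.1, 0.0, 2.0], wrist=[0.3, 0.0, 2.2])
candidates = points_in_cone(np.random.rand(1000, 3) * 5.0, origin, direction)
```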

Katherine Hartley

Home Institution: University of Florida
Advisor: Victoria Interrante
Title: Acknowledging the Gap Between Real and Virtual Nature
Abstract: It might seem like nature is everywhere, but for several populations, like the elderly in care homes and the incarcerated, it’s a special occasion to experience full immersion and be surrounded by biomass. Current virtual reality systems are capable of completely replacing the visual and auditory stimuli available to a VR user, albeit at a level of fidelity that does not (yet) match that of an actual real-world experience. Progress in virtually replicating haptic, olfactory, and other complex stimuli is ongoing. This experiment investigates the impact of these missing factors on the restorative benefits that a user derives from immersion in a VR-based virtual nature environment. After completing the Trier Social Stress Test, participants were immersed in computer-generated virtual environments both outside in real nature and indoors. We compared multiple restorative outcomes across the four conditions, as well as preference ratings among all participants.

Duc Hoa Nguyen

Home Institution: University of California, Los Angeles
Advisor: Karthik Desingh
Title: Generating Precise Grasp Locations and Controlling UR5 Robot Arm for Object Manipulation in Recycling Robotics
Abstract: Our project aims to revolutionize recycling through innovative robotics, using the UR5 robot arm. We focus on creating precise capture points for diverse objects and efficiently controlling the arm for accurate grasping. To achieve this, we employ a D515 camera for scanning, obtaining XYZ coordinates and depth information to create a detailed point cloud map of the environment and targeted objects. From this map, we compute multiple potential capture positions for the robot arm. Our key approach involves identifying the optimal capture point based on object shape, orientation, and stability to ensure successful grasping. This enables the robot arm to handle objects with precision and efficiency. We implement this technology using the UR5 robot arm and a control program that guides it smoothly toward the capture point. Through a combination of real-time feedback and precise motion planning, the robot arm performs a smooth and accurate grasp, securely lifting the object and dropping it in a designated area. The project's ultimate goal is to enable the robot arm to autonomously pick up recyclable plastic bottles and shells, significantly improving recycling efficiency and promoting sustainability.
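As a rough illustration of how a grasp might be derived from a segmented object's point cloud, the sketch below computes the cloud's centroid and principal axes and closes the gripper across the direction of smallest extent. This is a generic heuristic under assumed conventions, not the project's actual grasp-planning code, and all names and values are illustrative.

```python
import numpy as np

def estimate_grasp_pose(object_points):
    """Estimate a simple grasp for one segmented object from its point cloud.

    object_points: (N, 3) XYZ points belonging to a single object, e.g.
    extracted from the camera's point cloud after segmentation.
    Returns the grasp position (the centroid), the axis along which to close
    the gripper (direction of smallest extent), and the object's long axis
    (useful for aligning the gripper with the object's orientation).
    """
    pts = np.asarray(object_points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # Principal component analysis of the object's spatial extent.
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    closing_axis = eigvecs[:, 0]   # narrowest direction: close gripper here
    long_axis = eigvecs[:, -1]     # longest direction: object orientation
    return centroid, closing_axis, long_axis

# Example with a synthetic, bottle-like point cloud (tall and thin).
rng = np.random.default_rng(0)
cloud = rng.normal(scale=[0.02, 0.02, 0.10], size=(500, 3))
position, closing_axis, long_axis = estimate_grasp_pose(cloud)
print("grasp position:", position, "close gripper along:", closing_axis)
```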

Ayooluwa Odeyinka

Home Institution: Williams College
Advisor: Stevie Chancellor 
Title: Communities of Support in YouTube Comments
Abstract: Have you ever been watching a YouTube video only to find yourself more interested in the comments than the video itself? Well, that’s due to a number of factors, one of which is the communities that tend to form under these videos, especially ones that mention mental health. My research focuses on analyzing the communities that form in the comment section, as well as the influencers themselves, to see how the opinions about mental health put out by YouTube influencers result in positive or stigmatizing frames of mental illness. The data is gathered using YouTube's Application Programming Interface (API) and is broken down using Python and pandas. The data we gather from the API is then analyzed using natural language processing and other methods.
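For readers curious about the data-collection step, the sketch below shows one common way to pull top-level comments for a video through the YouTube Data API v3 using the google-api-python-client library and load them into a pandas DataFrame. The API key, video ID, and selected fields are placeholders, and this is not necessarily the project's exact pipeline.

```python
import pandas as pd
from googleapiclient.discovery import build  # pip install google-api-python-client

API_KEY = "YOUR_API_KEY"   # placeholder: requires a Google Cloud API key
VIDEO_ID = "VIDEO_ID"      # placeholder: ID of the video being analyzed

def fetch_top_level_comments(video_id, max_pages=5):
    """Collect top-level comments for one video via the YouTube Data API v3."""
    youtube = build("youtube", "v3", developerKey=API_KEY)
    rows, page_token = [], None
    for _ in range(max_pages):
        params = dict(part="snippet", videoId=video_id,
                      maxResults=100, textFormat="plainText")
        if page_token:
            params["pageToken"] = page_token
        response = youtube.commentThreads().list(**params).execute()
        for item in response["items"]:
            snippet = item["snippet"]["topLevelComment"]["snippet"]
            rows.append({
                "author": snippet["authorDisplayName"],
                "text": snippet["textDisplay"],
                "likes": snippet["likeCount"],
                "published": snippet["publishedAt"],
            })
        page_token = response.get("nextPageToken")
        if not page_token:
            break
    return rows

# Load into a pandas DataFrame for downstream natural language processing.
comments = pd.DataFrame(fetch_top_level_comments(VIDEO_ID))
print(comments.head())
```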

Ella Rider

Home Institution: University of Wisconsin-Oshkosh
Advisor: Lana Yarosh 
Title: VolumiVive: Adding Interactive Elements to 360 Degree Video 
Abstract: While Virtual Reality experiences are meant to be immersive and interactive, most 360° video technologies in use only provide panoramic views of a scene—making use of two dimensions despite masquerading as three. Volumetric video is a video type that captures a scene in full 3D, enabling a viewer to see 'what's going on' from any angle they like. We use our recording system, composed of six Microsoft Kinect cameras, six laptop computers, and one five-sided calibration marker, to record volumetric videos for VolumiVive. VolumiVive is a program that allows the user to add custom interactive elements to any video from inside the virtual space, to increase personalized and meaningful interactions with the content. Combining volumetric video's immersion with the freedom to design one's own interactions, VolumiVive invites users to fully step into the scene, choose their own paths and feel more in tune with their virtual surroundings than previously possible. Future research into the feasibility of its implementation into fields such as education, as a tool to help teach material, and sports, to support athlete training, is a next step for this project, following the official study.
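As a simplified illustration of the fusion step such a multi-camera system needs, the sketch below merges per-camera point clouds into one world-frame cloud using calibrated extrinsics, here with the Open3D library. The file layout, transform source, and parameters are assumptions for illustration rather than VolumiVive's actual implementation.

```python
import numpy as np
import open3d as o3d  # pip install open3d

def merge_frame(cloud_paths, extrinsics, voxel_size=0.01):
    """Fuse per-camera point clouds for one time step into a single cloud.

    cloud_paths: point-cloud files, one per Kinect, captured at the same instant.
    extrinsics:  4x4 camera-to-world transforms for each camera, obtained by
                 calibrating against the shared five-sided marker.
    """
    merged = o3d.geometry.PointCloud()
    for path, transform in zip(cloud_paths, extrinsics):
        cloud = o3d.io.read_point_cloud(path)
        cloud.transform(np.asarray(transform))  # move into the common world frame
        merged += cloud
    # Down-sample so each volumetric frame stays light enough for playback.
    return merged.voxel_down_sample(voxel_size)

# Hypothetical usage: six cameras, identity extrinsics as stand-ins.
# fused = merge_frame([f"cam{i}_frame0042.ply" for i in range(6)],
#                     [np.eye(4)] * 6)
```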

Revanth Krishna Senthilkumaran

Home Institution: Purdue University
Advisor: Karthik Desingh
Title: Training a Mobile Manipulation Agent Towards Furniture Organization 
Abstract: Companies such as Boston Dynamics have shown that robots can navigate through and manipulate objects in their environment. Even with the ability to move and manipulate, learning which actions are necessary to fulfill a task autonomously remains a challenging problem for robots. Through this work, we focused on training an agent, in our case the Boston Dynamics Spot robot, to identify and manipulate a single chair by selecting a grasp location and a destination. For this task, we build on the PerAct vision-language model, which has previously been used for table-top manipulation. PerAct uses a voxel representation of 3D space to infer waypoints, which are intermediate states that an end-effector has to sequentially reach in order to complete a task. To construct this voxel space, we develop a system to collect and process data from Spot. The data collected from Spot includes imagery from six different cameras on the robot, its location, and a state indicating whether it has completed the task. PerAct is then trained, and an evaluation script is used to assess how well the model learns.
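To make the voxel representation concrete, the sketch below shows a naive way to convert a fused, colored point cloud from the robot's cameras into occupancy and color grids over the workspace. The bounds, resolution, and function names are illustrative assumptions and are not drawn from the project's code or from PerAct itself.

```python
import numpy as np

def voxelize(points, colors, bounds, resolution=100):
    """Convert a fused, colored point cloud into dense voxel grids.

    points: (N, 3) XYZ coordinates aggregated from the robot's cameras.
    colors: (N, 3) RGB values in [0, 1], aligned with `points`.
    bounds: ((xmin, ymin, zmin), (xmax, ymax, zmax)) workspace limits.
    Returns an occupancy grid of shape (R, R, R) and an averaged color grid
    of shape (R, R, R, 3), where R is the resolution.
    """
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    idx = ((np.asarray(points, float) - lo) / (hi - lo) * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)

    occupancy = np.zeros((resolution,) * 3, dtype=np.float32)
    color_sum = np.zeros((resolution,) * 3 + (3,), dtype=np.float32)
    counts = np.zeros((resolution,) * 3, dtype=np.float32)
    for (i, j, k), rgb in zip(idx, np.asarray(colors, float)):
        occupancy[i, j, k] = 1.0
        color_sum[i, j, k] += rgb
        counts[i, j, k] += 1.0
    color_grid = color_sum / np.maximum(counts[..., None], 1.0)
    return occupancy, color_grid
```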

Kathleen Shea

Home Institution: Colorado College 
Advisor: Lana Yarosh 
Title: Technology-Mediated Disclosures for Sensitive Information 
Abstract: This project aims to find ways in which technology can mitigate the burden of public stigma that people living with Human Immunodeficiency Virus (HIV) experience. Using participatory and speculative design principles, my advisor, Fernando Maestre, worked with this population to identify the ways in which they want technology to support them. Through co-design workshops, participants identified the need for technology to assist them in disclosing their HIV diagnosis. This led to the creation of an application prototype that allows people living with HIV to disclose their diagnosis, express their emotions, provide educational resources, and be explicit about what they need from their disclosure recipient. The study has recently concluded its piloting stage and has shown promising results in fostering supportive interactions during HIV disclosure. In particular, participants in the pilot (n=8) identified generalizability, customizability, access to educational resources, and user-friendly design as positive and important features of the application. Participants also identified ideas for future iterations of the app, which will be reflected in the user study that will be conducted in the fall. Ultimately, the study aspires to expand the app's utility beyond HIV disclosures to address other sensitive topics that people find challenging to discuss.

Independent Study Students

Athreyi Badithela

Home Institution: University of Minnesota 
Advisor: Karthik Desingh 
Title: Targeted Sorting of Objects in Clutter - Segmentation 
Abstract: This project involves creating a segmentation model that is capable of segmenting novel, unseen objects using interactive segmentation. One example of unusual objects in a scene arises in recycling, where one can encounter transparent, deformable, and opaque objects. Our model uses Meta's SAM to create initial segments of objects. The objects are then moved by a robot while maintaining clutter. We use XMem, a long-term video object segmentation model, to track the segments as the objects move. SAM is not capable of perfectly segmenting novel objects. By tracking the over-segmented objects, we gain additional information about them. Based on this, a couple of heuristic functions were implemented to collapse the over-segmented objects. The obtained segments are then used for grasping the objects. From our experiments, we note that there are many possible metrics for deciding whether two masks need to be collapsed. We specifically use the distance between the centroids of the segments as well as the direction of motion of the centroids. Our results show that this method works very well on opaque, multi-colored objects. Future work would involve improving the heuristic to be capable of working on transparent and deformable objects as well.
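The merging heuristic can be illustrated with a small sketch: two segments are collapsed when their centroids are close and they move in nearly the same direction between frames. The thresholds and function names below are hypothetical and simplified relative to the project's implementation.

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (row, col) of a boolean segmentation mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def should_merge(mask_a, mask_b, motion_a, motion_b,
                 max_centroid_dist=40.0, min_direction_sim=0.9):
    """Decide whether two segments likely belong to the same object.

    motion_a, motion_b: 2D displacement of each segment's centroid between
    consecutive frames, i.e. how each segment moved when the robot pushed
    the objects. Segments are merged when their centroids are close and
    they move in nearly the same direction.
    """
    dist = np.linalg.norm(mask_centroid(mask_a) - mask_centroid(mask_b))
    norm_a, norm_b = np.linalg.norm(motion_a), np.linalg.norm(motion_b)
    if norm_a < 1e-6 or norm_b < 1e-6:
        return False  # a static segment provides no motion evidence
    direction_sim = float(np.dot(motion_a, motion_b) / (norm_a * norm_b))
    return dist <= max_centroid_dist and direction_sim >= min_direction_sim
```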

Ryan Diaz

Home Institution: University of Minnesota
Advisor: Karthik Desingh 
Title: Imitation Learning for Spatio-Geometry Driven Assembly Task with Dual-Arm Manipulator 
Abstract: Part assembly tasks, such as the peg-in-hole task, are a challenging class of problems in the field of robotic manipulation. This project focuses specifically on the dual-arm peg-in-hole task of aligning and assembling two objects with geometric intrusions and extrusions. As this manipulation task requires a high degree of precision, accurate detection of the peg and hole and their orientations is important. We employ a general framework in visuomotor policy learning that utilizes visual pretraining models as vision encoders. This study investigates the robustness of this framework against grasp variations within a dual-arm setup, both in simulation and in the real world. Qualitative analysis of experiments in simulation shows that a visual encoder trained from scratch consistently outperforms frozen pretrained models. We then apply this model architecture to real-world experiments, finding that while the model readily adapts to translational variations in grasp, it still needs additional signals to handle rotational variations, which may be provided by wrist-view cameras.
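As a generic illustration of the visuomotor policy learning setup described above, the sketch below defines a small from-scratch image encoder with an MLP action head in PyTorch and runs one behavior-cloning update on placeholder data. The architecture, action dimension, and image size are assumptions, not the project's actual model.

```python
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    """Image encoder trained from scratch plus an MLP action head."""

    def __init__(self, action_dim=14):  # e.g. pose targets for two arms
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, images):
        return self.head(self.encoder(images))

# One behavior-cloning update on placeholder (image, expert action) pairs.
policy = VisuomotorPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
images = torch.randn(8, 3, 128, 128)   # stand-in camera observations
expert_actions = torch.randn(8, 14)    # stand-in demonstration actions
optimizer.zero_grad()
loss = nn.functional.mse_loss(policy(images), expert_actions)
loss.backward()
optimizer.step()
```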