GroupLens: A Human-Centered Approach to AI

Since the internet boom of the late 1990s, people have spent more and more of their daily lives on the World Wide Web. We shop, search, connect with friends, binge-watch shows, and follow news online. The digital systems we use every day rely increasingly on artificial intelligence (AI) to optimize the user experience. While many computer scientists apply AI techniques to social problems, researchers in the GroupLens Center for Social and Human-Centered Computing at the University of Minnesota are taking a human-centered approach to computing that is grounded in current practices and experiment-based innovation.

“We run experiments on systems and put together specific scenarios to make sure we are studying something people actually do and care about rather than an abstract idea,” said Joseph Konstan, professor and associate dean for research in the College of Science and Engineering. “Whether you are looking at social media disruption and moderation, shared content collaboration, or expert explanations and their use in different contexts, we feel that this work is enriched when the data you are gathering is coming from people who actually care about the thing you are studying.”

GroupLens is a research lab in the Department of Computer Science & Engineering (CS&E) at the U of M with a mission to advance the theory and practice of social computing by building and understanding systems used by real people. Konstan, Stevie Chancellor, Harmanpreet Kaur, Loren Terveen, and Lana Yarosh lead human-centered AI research efforts in areas including recommender systems, online communities, mental health, ethics, and more.

“Human centeredness is about embedding humans and human-centered values at every point in a machine-learning or AI lifecycle,” said Kaur, the newest member of the GroupLens lab and an assistant CS&E professor. “What makes GroupLens unique is that we truly look at the entirety of that lifecycle. Every time something new happens with AI, we start looking at a human-centered design, as well as experimentation and results that we can measure in the process.”

With the emergence of publicly available AI tools like ChatGPT, the world is abuzz with the possibilities and risks that AI brings to the table. Embedding human values into AI systems is a necessary and complicated step that involves a variety of technical and societal factors.

“When you are dealing with people, there are trade-offs because you might be giving people what they want, but are you giving them what they need and what is good for society?” said Terveen, professor and associate department head in CS&E. “Inevitably, when you start to think about the effects of AI systems on people, it becomes very complicated, and we look at those intricacies.”

Konstan’s work on recommender systems aims to move past simply giving people what they want and to steer them toward something new and unexpected. With nearly 30 years of experience working with real users on his movie and news recommender systems, he aims to delight users and broaden their interests.

“A lot of what I have been doing with MovieLens, now that we have had more than 300,000 users in the program since 1997, is experiments on things like broadening people’s exposure and consumption,” said Konstan. “It’s easy to get people to see the blockbusters. But do we care about the diversity of writers and directors and actors, and who ultimately gets the most exposure? Or do I only care about what’s popular and giving the people what they want? Once people trust the system based on things they do like, we can borrow some of that trust to recommend something new.”
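
As a rough illustration of how a recommender might “borrow trust” to broaden exposure, the sketch below re-ranks candidate movies by blending a predicted rating with a simple novelty bonus, so lesser-seen titles can surface alongside safe picks. The scoring formula, weights, and example data here are illustrative assumptions, not the actual MovieLens algorithm.

```python
# Illustrative sketch only: blend a predicted rating with a novelty bonus
# so that both familiar favorites and less-seen titles can be recommended.
# The weights, fields, and example data are hypothetical.
from dataclasses import dataclass


@dataclass
class Candidate:
    title: str
    predicted_rating: float  # model's estimate on a 1-5 scale
    popularity: float        # fraction of users who have already seen it (0-1)


def rerank(candidates, novelty_weight=0.3):
    """Order candidates by a score that trades off predicted rating against novelty."""
    def score(c):
        novelty = 1.0 - c.popularity  # rarer titles earn a larger bonus
        return (1 - novelty_weight) * (c.predicted_rating / 5.0) + novelty_weight * novelty
    return sorted(candidates, key=score, reverse=True)


if __name__ == "__main__":
    picks = rerank([
        Candidate("Blockbuster sequel", predicted_rating=4.3, popularity=0.90),
        Candidate("Acclaimed indie drama", predicted_rating=4.0, popularity=0.10),
        Candidate("Foreign-language documentary", predicted_rating=3.8, popularity=0.05),
    ])
    for c in picks:
        print(c.title)
```

Raising the hypothetical novelty weight pushes the list further from the blockbusters; lowering it falls back to pure predicted preference, which is the trade-off Konstan describes.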

Building on the trust he has established in the field, Konstan also hopes to nudge more academic researchers towards human-centered computing experiments.

“I’m working on a large grant from the National Science Foundation to build a recommender for news designed from the bottom up so it can be dispatched to other people’s experiments,” said Konstan. “A researcher could come in and say, ‘I want to borrow 150 users for three weeks to try out this idea.’ We are still in the design and production of this system.”

Working with real people is also at the core of Kaur’s work in human-centered AI explainability and interpretability. While building AI systems that are grounded in human values, her work helps people better interact with and understand AI.

“Increasingly, AI is shaping our experiences online and becoming a partner in decision-making settings,” said Kaur. “As AI gets more complicated, people are using it without fully understanding what it’s doing. We might not be aware of how much our online experiences are changing because of AI, and we might not understand why it is recommending certain things or nudging us towards one decision or another. My work focuses on helping people better understand what AI is doing so they can make more informed decisions and trust it appropriately.”

Another part of this equation is designing AI outputs that are easy for humans to understand and that take into account how humans naturally interpret data and information.

“A significant part of my research has to do with how people interpret AI outputs and determining ways to make outputs even better,” said Kaur. “One problem we run into is that people aren’t always rationally internalizing information, so how do we account for potential cognitive and social biases that people have when they are taking in new information?”

Human biases are not a new topic in the world of Wikipedia, a focal point of Terveen’s work on community-based information systems. The encyclopedia that anyone can edit is now being updated by humans and AI alike, bringing a new set of challenges into the community.

“The problem with people producing knowledge is that it does not scale,” said Terveen. “Wikipedia can’t keep up with all of the bad edits that people make. So now we are incorporating AI to help with those tasks, which makes it a member of that community. What happens when the AI systems disagree with what people think? Who wins? Ultimately, when you are deciding what is a good answer or edit in these AI systems, you have to consider whose beliefs are behind deciding whether or not this is a good answer. Individual human values are being reflected in AI systems, so we need to find ways to build systems that incorporate values and perspectives from the full community that is affected.”
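
One way to picture AI joining the Wikipedia community as a helper rather than a final arbiter is a triage step that scores incoming edits and routes doubtful ones to human reviewers. The features, thresholds, and routing rule below are hypothetical, offered only as a sketch of the general idea, not Wikipedia’s actual tooling.

```python
# Illustrative sketch only: score incoming wiki edits with simple features and
# send low-scoring ones to human review instead of reverting them automatically.
# Features, weights, and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class Edit:
    editor_is_anonymous: bool
    chars_removed: int
    chars_added: int
    contains_profanity: bool


def quality_score(edit: Edit) -> float:
    """Higher means more likely to be a good-faith, constructive edit."""
    score = 1.0
    if edit.editor_is_anonymous:
        score -= 0.2
    if edit.contains_profanity:
        score -= 0.5
    if edit.chars_removed > 500 and edit.chars_added < 50:
        score -= 0.3  # large deletions with little added text look suspicious
    return max(score, 0.0)


def route(edit: Edit, review_threshold: float = 0.6) -> str:
    """The model only flags edits; a person still makes the final call."""
    return "send to human review" if quality_score(edit) < review_threshold else "auto-accept"


if __name__ == "__main__":
    print(route(Edit(True, 800, 10, False)))    # flagged for review
    print(route(Edit(False, 20, 120, False)))   # accepted
```

Whoever sets those thresholds and chooses those features is, in effect, deciding whose idea of a “good edit” the system enforces, which is exactly the question Terveen raises.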

The merging of human and artificial intelligence also amplifies the threat of misinformation. Terveen stressed that AI can be used as a tool, but humans must remain at the center of information systems.

“Systems like ChatGPT don’t know what is factual and what is not factual, so we need people to treat anything that AI produces as a first draft,” said Terveen. “That suggests that we are moving towards a world where things people used to do - answer questions, produce content - might now have AI playing a larger role. However, things will go south if we don’t have some humans supporting and verifying that information. That ties back to explainability and trust. It’s a new layer of media literacy.”

Chancellor also uses AI as a tool to tap the wealth of data on social media platforms and help people exhibiting high-risk or dangerous health behaviors. Her human-centered AI systems aim to predict and intervene in these cases.

“While developing an AI solution to figure out when someone is in crisis and when to intervene, all of the decisions you make with people’s data must consider the individuals who will be impacted by the system,” said Chancellor. “I believe you can do this by involving people throughout the whole process through labeling or evaluating AI systems. Recently, we had end-users designing what they think the system should ultimately do and what the style of intervention should be for each high-risk behavior.”

Refining what these large data sets can reveal is central to success in Chancellor’s area of interest.

“Having some clinical grounding in the mental health space is really important,” said Chancellor. “It’s really easy to get lots of data that has no context for what that might mean for someone’s mental health status. There is a difference between being clinically depressed and having other mental illnesses. Unfortunately, when we get more granular with high-quality labels, our data points go from millions down to less than 200. That is not big enough to generalize or build a stable model. When it comes to large language models, I am interested in examining how much they really help or hurt when that kind of inference occurs, and their ability to translate a small dataset into a larger space.”
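
To see why a couple hundred high-quality labels make for an unstable model, the sketch below measures how much cross-validation accuracy fluctuates as the sample grows; the spread typically shrinks with more data. The synthetic data and simple classifier are purely illustrative assumptions, not Chancellor’s pipeline or a clinical dataset.

```python
# Illustrative sketch only: synthetic data showing how model estimates become
# noisier when high-quality labels are scarce (e.g., ~200 examples).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def cv_spread(n_samples, seed=0):
    """Return the mean and standard deviation of 5-fold accuracy at a given sample size."""
    X, y = make_classification(n_samples=n_samples, n_features=20,
                               n_informative=5, random_state=seed)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    return scores.mean(), scores.std()


if __name__ == "__main__":
    for n in (200, 2000, 20000):
        mean, std = cv_spread(n)
        print(f"n={n:>6}: accuracy {mean:.3f} +/- {std:.3f}")
```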

A human-centered approach to AI places ethics at the center of every project and experiment. But what does ethics look like in this new, unregulated AI environment? When data is used as the fuel to make AI systems run, that data can no longer be considered neutral. GroupLens researchers understand that transparency and human input are critical to ensuring systems are trustworthy and working in a way that makes sense in the given context.

“When you think about the modern AI systems, they are trained on data people produced on Reddit, Wikipedia, open-source software systems, etc.,” said Terveen. “Nobody thought about how their data was going to be used to train an AI system and the implications of that when they originally created content. It’s completely legal to do that, but is it ethical, and is it how we want the world to work? That’s something that we really think about in our work.”

As AI continues to embed itself into the day-to-day lives of the population, a human-centered approach will safeguard the values of society.

“There are a lot of new, interesting methods coming out, which is why this line of work is so much fun,” said Konstan. “The field is moving while we are doing the work. Part of moving the field forward is doing the work while it is still hard; those are the people who allow it to eventually become easy.”

Learn more about GroupLens and human-centered AI at their website.