CS&E Excels at 2026 ACM SIGCHI Conference with 13 Accepted Papers
Bottom row (l-r): Loren Terveen, Victoria Interrante, Evan Suma Rosenberg, Lana Yarosh
Department of Computer Science & Engineering human-centered computing researchers will have a strong presence at the upcoming CHI 2026 Conference in Barcelona, Spain. CHI, the ACM Conference on Human Factors in Computing Systems, is the premier international conference in the field of human-computer interaction.
The University of Minnesota contingent heads into the conference with 13 accepted papers (one of which earned a Best Paper Honorable Mention) and three poster presentations, and will also host two workshops. Additionally, Professor and Associate Dean for Research Joseph Konstan will be honored with the ACM SIGCHI Lifetime Research Award at the event.
“This is the biggest venue in our field, so it's great to see our department have a huge presence,” said Assistant Professor Harmanpreet Kaur. “The University of Minnesota has always had a strong human-centered computing program. This year's papers reflect the continued diversity of human-centered computing research happening here at UMN, highlighting the many ways in which we are making an impact in the field.”
“Getting papers in venues like this helps us change the way people think and behave with artificial intelligence (AI),” said Assistant Professor Stevie Chancellor. “CHI is such an important and prestigious venue. These papers help us make the case for responsible and effective development of AI tools across many domains.”
Kaur and Chancellor are both members of the GroupLens Research Lab, and are featured in a number of papers, workshops, and posters over the course of the conference. They are co-organizing a workshop titled “AI CHAOS! 2nd Workshop on the Challenges for Human Oversight of AI Systems”, and they are both authors on the paper that earned a Best Paper Honorable Mention: “Opportunities and Barriers for AI Feedback on Meeting Inclusion in Socioorganizational Teams”.
In addition to Kaur and Chancellor, Assistant Professor Zhu-Tian Chen also has a strong showing at the conference with three accepted papers that explore AI and human collaboration in augmented reality (AR), with the goal of one day enhancing human capabilities in the physical world.
“I am happy to see our work recognized by the community, and we hope that this research helps shape how humans and AI collaborate to make decisions in the physical world,” Chen said. “Our goal is to move AR beyond the lab and into real-world settings, where it can meaningfully augment human capabilities.”
In all, CS&E has 24 researchers represented at CHI 2026, including faculty members Chancellor, Chen, Professor Victoria Interrante, Kaur, Konstan, Associate Professor Evan Suma Rosenberg, Professor and Department Head Loren Terveen, and Associate Professor Lana Yarosh.
Explore each of the accepted papers, posters, and workshops detailed below. Visit the CHI 2026 website for more information!
Accepted Papers
An Expert Schema for Evaluating Large Language Model Errors in Scholarly Question-Answering Systems
Authors: Anna Martin-Boyle, William Humphreys, Martha Brown, Cara Leckey, Harmanpreet Kaur
Scholars everywhere recognize that LLM outputs are prone to errors – but what are these errors? We asked scientists to evaluate LLM answers to questions about their own papers, then catalogued every mistake they found. The result is a schema of 20 error types across 7 categories: from hallucinated citations and invented equations, to synthesis failures across multiple papers, to answers that are technically correct but dangerously incomplete. Standard automated benchmarks miss most of this; you need real experts evaluating real outputs to see how LLMs actually fail in scholarly work.
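For readers who want a concrete picture of what such a schema looks like in use, here is a minimal sketch of an error-annotation record in Python. The category and type names are illustrative, drawn only from the examples named above; the paper's full 7 categories and 20 types are not reproduced here.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative categories only; the abstract names hallucinated citations,
# invented equations, synthesis failures, and incomplete answers -- the
# rest of the paper's schema is not reproduced here.
class ErrorCategory(Enum):
    HALLUCINATION = "hallucination"    # e.g., invented citations or equations
    SYNTHESIS = "synthesis_failure"    # errors combining multiple papers
    INCOMPLETENESS = "incompleteness"  # correct but dangerously partial answers

@dataclass
class ErrorAnnotation:
    """One expert-identified error in an LLM answer."""
    answer_id: str          # which LLM answer was evaluated
    span: str               # the offending text
    category: ErrorCategory
    error_type: str         # fine-grained type within the category
    note: str = ""          # the expert's free-text justification

# Example: an expert flags a fabricated reference in a hypothetical answer.
flag = ErrorAnnotation(
    answer_id="qa-17",
    span="(Smith et al., 2021)",
    category=ErrorCategory.HALLUCINATION,
    error_type="hallucinated_citation",
    note="No such paper exists in the cited venue.",
)
print(flag.category.value, "-", flag.error_type)
```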
Beyond Content Exposure: Systemic Factors Driving Moderators’ Mental Health Crisis in Africa
Authors: Nuredin Ali Abdelkadir, Tianling Yang, Shivani Kapania, Kauna Ibrahim Malgwi, Fasica Berhane Gebrekidan, Adio-Adet Dinika, Elaine O. Nsoesie, Milagros Miceli, Stevie Chancellor
Recent reports have documented the terrible mental health conditions of African content moderators working for Meta and TikTok contractors. In a survey of 134 moderators, and interviews with 15 of them, we found that these moderators experience significantly worse mental health outcomes than their counterparts in other regions. Former moderators show even higher distress levels, suggesting long-term harm. The root causes appear to be systemic, tied to the labor conditions, poor pay, and poor mental health support of these workers.
Can AR Embedded Visualizations Foster Appropriate Reliance on AI in Spatial Decision-Making? A Comparative Study of AR X-Ray vs. 2D Minimap
Authors: Xianhao Carton Liu, Difan Jia, Tongyu Nie, Evan Suma Rosenberg, Victoria Interrante, Chen Zhu-Tian
Artificial Intelligence (AI) and indoor sensing increasingly support decision-making in spatial environments. However, traditional visualization methods impose a substantial mental workload when viewers translate this digital information into real-world spaces, leading to inappropriate reliance on AI. Embedded visualizations in Augmented Reality (AR), by integrating information into physical environments, may reduce this workload and foster more appropriate reliance on AI. To assess this, we conducted an empirical study (N = 32) comparing an AR embedded visualization (X-ray) and 2D Minimap in AI-assisted, time-critical spatial target selection tasks. Surprisingly, evidence shows that the embedded visualization led to greater inappropriate reliance on AI, primarily as over-reliance, due to factors like perceptual challenges, visual proximity illusions, and highly realistic visual representations. Nonetheless, the embedded visualization showed benefits in spatial mapping. We conclude by discussing empirical insights, design implications, and directions for future research on human-AI collaborative decision-making in AR.
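For context, "appropriate reliance" in studies like this is typically operationalized per trial: over-reliance means following the AI when it is wrong, under-reliance means rejecting it when it is right. A minimal sketch of that bookkeeping (field names here are hypothetical, not the study's actual measures):

```python
# Minimal sketch of over-/under-reliance rates from trial logs.
# Trial structure is hypothetical; the study's actual measures may differ.
trials = [
    # (ai_was_correct, participant_followed_ai)
    (True, True),    # appropriate reliance
    (False, True),   # over-reliance: followed wrong AI advice
    (True, False),   # under-reliance: rejected correct AI advice
    (False, False),  # appropriate skepticism
]

ai_wrong = [followed for correct, followed in trials if not correct]
ai_right = [followed for correct, followed in trials if correct]

over_reliance = sum(ai_wrong) / len(ai_wrong)           # followed AI when wrong
under_reliance = ai_right.count(False) / len(ai_right)  # rejected AI when right
print(f"over-reliance: {over_reliance:.0%}, under-reliance: {under_reliance:.0%}")
```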
“I’ll Just Do It”: Designing for the Hidden Work of Collaborative Family Caregiving
Authors: Shichen Liang, Alvina Salim, Svetlana Yarosh, Amanda Johnson, Ji Youn Shin
Family caregivers of patients with chronic conditions take on many responsibilities, often at the expense of their own health. Previous studies have examined technological supports, such as information access and online social groups. While these approaches are promising, they primarily focus on easing the load of a single caregiver. Even when interventions are designed to support collaborative caregiving, they often overlook which tasks are shareable and what influences caregivers’ decisions to delegate. To address these gaps, we conducted activity-based qualitative interviews with 10 family caregivers of patients who underwent life-threatening medical procedures. Our findings show that collaboration occurs when tasks are shareable in practice, including tangible and logistical help, and when caregivers judge that the benefits of receiving support outweigh the work of seeking help. We propose that technologies should support caregivers in determining when collaboration is feasible by recognizing task types, addressing the trade-offs embedded in activation costs, and enabling both proactive and reactive assistance as situations evolve.
Opportunities and Barriers for AI Feedback on Meeting Inclusion in Socioorganizational Teams (Best Paper Honorable Mention)
Authors: Mo Houtti, Moyan Zhou, Daniel Runningen, Surabhi Sunil, Leor Porat, Harmanpreet Kaur, Loren Terveen, Stevie Chancellor
We built an AI agent to improve inclusion through better feedback exchange. The agent uses the Induced Hypocrisy procedure, a well-grounded technique from social psychology that prompts people to change their behavior by highlighting hypocrisies in their own conduct. An experimental study shows that the agent works in the lab, but organizational factors led people to use the tool for personal reflection rather than for improving meeting inclusion.
PaperTrail: A Claim-Evidence Interface for Grounding Provenance in LLM-based Scholarly Q&A
Authors: Anna Martin-Boyle, Cara Leckey, Martha Brown, Harmanpreet Kaur
We built a system called PaperTrail to help researchers verify whether LLM-generated answers to scholarly questions are actually supported in the source papers. Instead of just showing source links or citations, PaperTrail breaks both the LLM's answer and the source documents into specific claims and evidence, showing you what's supported, what's missing, and what might be made up. The system made researchers more skeptical of the AI, but that skepticism didn't change how much they relied on the LLM. We outline a trust-behavior gap: knowing an LLM might be wrong isn't enough to make people act on that suspicion.
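As a rough illustration of the claim-evidence idea, the core loop can be sketched as: decompose the answer into claims, retrieve candidate supporting passages, and label each claim. This is a schematic reconstruction, not PaperTrail's actual pipeline; the function names and toy retrieval heuristic are hypothetical.

```python
# Schematic reconstruction of a claim-evidence interface's core loop.
# Not PaperTrail's actual implementation; all names are hypothetical.

def split_into_claims(answer: str) -> list[str]:
    """Naive claim decomposition: one claim per sentence."""
    return [s.strip() for s in answer.split(".") if s.strip()]

def find_evidence(claim: str, passages: list[str]) -> list[str]:
    """Toy retrieval: passages sharing enough content words with the claim."""
    words = {w.lower() for w in claim.split() if len(w) > 4}
    return [p for p in passages if len(words & {w.lower() for w in p.split()}) >= 2]

def label_claims(answer: str, passages: list[str]) -> dict[str, str]:
    labels = {}
    for claim in split_into_claims(answer):
        evidence = find_evidence(claim, passages)
        labels[claim] = "supported" if evidence else "unsupported"
    return labels

source = ["The proposed method improves accuracy on benchmark tasks."]
answer = "The method improves benchmark accuracy. It also halves training cost."
print(label_claims(answer, source))
# {'The method improves benchmark accuracy': 'supported',
#  'It also halves training cost': 'unsupported'}
```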
Unraveling Entangled Feeds: Rethinking Social Media Design to Enhance User Well-being
Authors: Ashlee Milton, Dan Runningen, Loren Terveen, Harmanpreet Kaur, Stevie Chancellor
We found that social media recommendation algorithms, like the one behind TikTok's For You Page, are effectively impossible for people to control when they need to avoid harmful content. Through workshops with social media users, we created a process model describing why this loss of control happens and how it affects people. The paper offers a clearer mechanism and explanation for how these algorithms elude user control, along with potential design solutions to these problems.
Challenges in Synchronous & Remote Collaboration Around Visualization
Authors: Matthew Brehmer, Maxime Cordeil, Christophe Hurter, Takayuki Itoh, Wolfgang Büschel, Mahmood Jasim, Arnaud Prouzeau, David Saffo, Lyn Bartram, Sheelagh Carpendale, Chen Zhu-Tian, Andrew Cunningham, Tim Dwyer, Samuel Huron, Masahiko Itoh, Alark Joshi, Kiyoshi Kiyokawa, Hideaki Kuzuoka, Bongshin Lee, Gabriela Molina León, Harald Reiterer, Bektur Ryskeldiev, Jonathan Schwabish, Brian A. Smith, Yasuyuki Sumi, Ryo Suzuki, Anthony Tang, Yalong Yang, Jian Zhao
We characterize 16 challenges faced by those investigating and developing remote and synchronous collaborative experiences around visualization. Our work reflects the perspectives and prior research efforts of an international group of 29 experts from across human-computer interaction and visualization sub-communities. The challenges are anchored around five collaborative activities that exhibit a centrality of visualization and multimodal communication. These activities include exploratory data analysis, creative ideation, visualization-rich presentations, joint decision making grounded in data, and real-time data monitoring. The challenges also reflect the changing dynamics of these activities in the face of recent advances in extended reality (XR) and artificial intelligence (AI). As an organizing scheme for future research at the intersection of visualization and computer-supported cooperative work, we align the challenges with a sequence of four sets of research and development activities: technological choices, social factors, AI assistance, and evaluation.
“How would I know what I would want from or with them?”: Supporting A-Spec Approaches to Developing Relationships Through Online Platforms
Authors: Kelly Wang, Ashlee Milton, Leah Namisa Rosenbloom, Erika Melder, Ada Lerner, Michael Ann DeVito
Online platforms have become a key avenue for forming new relationships, especially for queer individuals. However, some individuals, such as those in asexual and aromantic communities (A-Spec), seek forms of relationships that trouble existing frameworks assumed by online platforms, such as dating apps. To investigate A-Spec needs, we conducted an 8-week asynchronous remote community (ARC) study with 38 A-Spec participants who have used online platforms for developing relationships. Participants described a mismatch between the design of dating apps and their approach to building relationships, suggesting platform design that combines affordances of dating apps and other social platforms. We thus outline a “process-oriented” paradigm for relationship-building platforms inspired by community design suggestions, supporting participants' process of first establishing a low-stakes relationship and then co-constructing its properties. We also argue for a “pluralized” approach to defining identity and relationship in the design of online systems, upsetting default assumptions surrounding any given label.
(Re)mediators of Epistemic Injustice: Generative AI and Hermeneutic Resource Provision in Intimate Partner Violence
Authors: Jasmine C Foriest, Leah Ajmani, Munmun De Choudhury
Intimate partner violence (IPV) is defined as “abuse or aggression that occurs in a romantic relationship.” IPV survivors face barriers when help-seeking, such as epistemic injustice: secondary victimization from dismissal and indifference when disclosing, misdirection, or inappropriate interventions. Survivors may leverage generative AI to make sensitive disclosures and access hermeneutic resources. However, these tools mediate outcomes for IPV survivors through novel manifestations of epistemic injustice. Using mixed methods, we investigated hermeneutic resource provision by large language models (LLMs). We evaluated LLM responses to IPV disclosures on three axes: hermeneutic resource provision, readability, and risk. Prompts were derived from a content analysis of IPV and generative AI discussions in 5 abuse subreddits. We contribute a taxonomy of 7 uses of generative AI in the experience of IPV, empirical illustration of epistemic inequity, and considerations for evaluating epistemic harm in generative AI. Content Warning: This study contains descriptions of abuse and violence.
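Of the three evaluation axes, readability is the most mechanical; a common proxy in NLP work is the Flesch Reading Ease score. The sketch below shows that computation with a crude syllable heuristic; the paper's actual readability measure is not specified here, so treat this only as an illustration of the axis.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier-to-read text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# Hypothetical LLM response to a disclosure, scored for readability.
response = "You are not alone. Support is available, and what happened is not your fault."
print(f"{flesch_reading_ease(response):.1f}")
```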
'The plan is just survival': Data Work in Kenya and the Regime of Entrapment
Authors: Shivani Kapania, Tianling Yang, Nuredin Ali, Morgan Klaus Scheuerman, Milagros Miceli, Alex S Taylor, Sarah E Fox
The rapid expansion of the AI industry relies heavily on the production, verification, and maintenance of data, otherwise known as "data work". Companies outsource and offshore this work through global AI supply chains that operate under exploitative conditions. Drawing on semi-structured interviews with Kenyan data workers across platforms and business process outsourcing firms (BPOs), this paper examines how such conditions take shape and persist. We argue that workers are caught within a regime of entrapment, a system of interconnected mechanisms that make it difficult for workers to leave or improve their positions. These mechanisms include the push to invest in the promise of ‘AI’ jobs, the use of precarious contracts to govern workers, the capture of regulatory institutions, and the exploitation of global labor arbitrage. Using complementary lenses of neoliberal governmentality, precarity, and supply chain capitalism, we analyze why labor mobilization in this sector remains uniquely constrained. We conclude by outlining an orientation for research and scholarly practice that can support workers' organizing efforts and contest the structural conditions sustaining this regime.
ViSTAR: Virtual Skill Training with Augmented Reality with 3D Avatars and LLM coaching agent
Authors: Chunggi Lee, Hayato Saiki, Tica Lin, Eiji Ikeda, Kenji Suzuki, Chen Zhu-Tian, Hanspeter Pfister
We present ViSTAR, a Virtual Skill Training system in AR that supports self-guided basketball skill practice, with feedback on balance, posture, and timing. From a formative study with basketball players and coaches, the system addresses three challenges: understanding skills, identifying errors, and correcting mistakes. ViSTAR follows the Behavioral Skills Training (BST) framework—instruction, modeling, rehearsal, and feedback. It provides feedback through visual overlays, rhythm and timing cues, and an AI-powered coaching agent using 3D motion reconstruction. We generate verbal feedback by analyzing spatio-temporal joint data and mapping features to natural-language coaching cues via a Large Language Model (LLM). A key novelty is this feedback generation: motion features become concise coaching insights. In two studies (N=16), participants generally preferred our AI-generated feedback to coach feedback and reported that ViSTAR helped them notice posture and balance issues and refine movements beyond self-observation.
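The feedback-generation step the authors highlight, turning spatio-temporal joint data into coaching language, can be sketched as: compute a joint-angle feature, render it as a natural-language observation, and hand that to an LLM as prompt context. This is a hypothetical reconstruction; the feature, threshold, and prompt wording below are illustrative, not ViSTAR's actual values.

```python
import math

# Hypothetical reconstruction of motion-feature extraction feeding an LLM
# coaching prompt; joints, values, and wording are illustrative only.

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 2D points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Toy frame: shoulder, elbow, wrist positions at ball release.
shoulder, elbow, wrist = (0.0, 1.5), (0.3, 1.2), (0.5, 1.4)
angle = joint_angle(shoulder, elbow, wrist)

# Map the numeric feature to a natural-language observation, then hand it
# to an LLM as context for generating one concise coaching cue.
observation = (
    f"Elbow angle at release was {angle:.0f} degrees; "
    "a fuller extension (closer to 180) is typical of good form."
)
prompt = (
    "You are a basketball shooting coach. "
    f"Observation: {observation} Give one short, actionable coaching cue."
)
print(prompt)  # this prompt would be sent to the LLM
```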
Who Is At Risk? Examining the Prevalence of Digital-Safety Attacks and Contextual Risk Factors in the United States
Authors: Sharon Heung, Claire Florence Weizenegger, Mo Houtti, Sunny Consolvo, Patrick Gage Kelley, Tara Matthews, Renee Shelby, Kurt Thomas, Ashley Marie Walker
A growing body of qualitative research has identified contextual risk factors that elevate people’s chances of experiencing digital-safety attacks. However, the lack of quantitative data on the population-level distribution of these risk factors prevents policymakers and tech companies from developing targeted, evidence-based interventions to improve digital safety. To address this gap, we surveyed 5,001 adults in the United States to analyze: (1) the frequency of and relationship between digital-safety attacks (e.g., scams, harassment, account hacking), and (2) how these attacks align with 10 contextual risk factors. Nearly half of our respondents identify as resource-constrained, which significantly correlates with higher likelihood of experiencing four common attacks. We also present qualitative insights to expand our understanding of the factors beyond the existing literature (e.g., “prominence” included high-visibility roles in local communities). This study provides the first large-scale quantitative analysis correlating digital-safety attacks with contextual risk factors and demographics.
Awards
ACM SIGCHI Lifetime Research Award
Winners: Lorrie Cranor, Joseph Konstan
Best Paper Honorable Mention Award
Winner: "Opportunities and Barriers for AI Feedback on Meeting Inclusion in Socioorganizational Teams"
Authors: Mo Houtti, Moyan Zhou, Daniel Runningen, Surabhi Sunil, Leor Porat, Harmanpreet Kaur, Loren Terveen, Stevie Chancellor
Posters
From the Lab to the Field: The Role of Motivation in Rigorous XAI User Studies
Authors: Malik Khadar, Harmanpreet Kaur
Explainable AI (XAI) is facing a measurement crisis, as the intended benefits of transparency often fail to translate from laboratory settings to real-world practice. We argue this crisis stems from a fundamental distinction between typical XAI user study participants and actual XAI end users, as XAI researchers rely on paid crowdworkers. In this paper, we center motivation as key to bridging this difference and improving the rigor of XAI user studies. We propose a study evaluating XAI use in citizen science, a complementary context where participants are driven by more self-determined motivation with a real stake in their XAI usage outcomes. By comparing this population with a conventional sample of paid crowdworkers completing the same tasks, we aim to investigate whether typical XAI research practices generalize to a wider range of XAI use contexts. We are excited to discuss the study methodology to improve its ability to resolve XAI's measurement crisis.
Input–Envelope–Output: Auditable Generative Music Rewards in Sensory-Sensitive Context
Authors: Cong Ye, Songlin Shang, Xiaoxu Ma, Xiangbo Zhang
Generative feedback in sensory-sensitive contexts poses a core design challenge: large individual differences in sensory tolerance make it difficult to sustain engagement without compromising safety. This tension is exemplified in autism spectrum disorder (ASD), where auditory sensitivities are common yet highly heterogeneous. Existing interactive music systems typically encode safety implicitly within direct input–output (I–O) mappings, which can preserve novelty but make system behavior hard to predict or audit. We instead propose a constraint-first Input–Envelope–Output (I–E–O) framework that makes safety explicit and verifiable while preserving action–output causality. I–E–O introduces a low-risk envelope layer between user input and audio output to specify safe bounds, enforce them deterministically, and log interventions for audit. From this architecture, we derive four verifiable design principles and instantiate them in MusiBubbles, a web-based prototype. Contributions include the I–E–O architecture, MusiBubbles as an exemplar implementation, and a reproducibility package to support adoption in ASD and other sensory-sensitive domains.
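The core of the I–E–O idea is that generated parameters never reach the output stage directly: a deterministic envelope clamps them to per-user safe bounds and logs every intervention for audit. A minimal sketch, with assumed parameter names and bounds rather than MusiBubbles' actual ones:

```python
from dataclasses import dataclass, field

# Minimal sketch of an Input-Envelope-Output safety layer; parameter
# names and bounds are assumptions, not MusiBubbles' actual values.

@dataclass
class Envelope:
    max_loudness_db: float = -20.0   # per-user safe ceiling
    max_tempo_bpm: float = 100.0
    log: list = field(default_factory=list)

    def enforce(self, params: dict) -> dict:
        """Deterministically clamp generated parameters; log interventions."""
        safe = dict(params)
        if safe["loudness_db"] > self.max_loudness_db:
            self.log.append(("loudness_db", safe["loudness_db"], self.max_loudness_db))
            safe["loudness_db"] = self.max_loudness_db
        if safe["tempo_bpm"] > self.max_tempo_bpm:
            self.log.append(("tempo_bpm", safe["tempo_bpm"], self.max_tempo_bpm))
            safe["tempo_bpm"] = self.max_tempo_bpm
        return safe

envelope = Envelope()
generated = {"loudness_db": -10.0, "tempo_bpm": 90.0}  # from the generative model
print(envelope.enforce(generated))  # loudness clamped to -20.0
print(envelope.log)                 # audit trail of interventions
```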
MetaMate: Understanding How Educational Researchers Experience AI-Assisted Data Extraction for Systematic Reviews
Authors: Xue Wang, Gaoxiang Luo
Systematic reviews are essential for evidence synthesis in education, yet data extraction remains a bottleneck: labor-intensive and error-prone. Large language models offer automation potential, but questions remain about AI performance compared to human coders and how researchers experience these tools in practice. We present MetaMate, an open-access web-based tool for automated data extraction in educational systematic reviews. Our mixed-methods evaluation combines a quantitative validation study benchmarking MetaMate against trained human coders across 32 studies and 20 data elements with a qualitative user study involving six educational researchers using think-aloud protocols. MetaMate achieves precision (81-96%), recall (90-100%), and F1 scores (88-96%) comparable to or exceeding human coders, with strengths in mathematical reasoning and semantic comprehension. Qualitative findings reveal insights about trust calibration, verification behaviors, usability challenges, and human-AI collaboration. We contribute empirical evidence on LLM extraction capabilities and design implications for AI-assisted research tools balancing automation with human oversight. MetaMate is available at https://metamate.online.
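The reported precision/recall/F1 figures follow the standard definitions against a human-coded gold standard; as a quick reminder of how such benchmarking works (the counts below are toy numbers, not the paper's data):

```python
# Standard precision/recall/F1 against a human-coded gold standard.
# Counts are toy numbers, not MetaMate's actual results.
tp, fp, fn = 90, 8, 5   # extracted-and-correct, extracted-but-wrong, missed

precision = tp / (tp + fp)  # how much of what was extracted is right
recall = tp / (tp + fn)     # how much of the gold standard was found
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.0%} recall={recall:.0%} f1={f1:.0%}")
```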
Workshops
AI CHAOS! 2nd Workshop on the Challenges for Human Oversight of AI Systems
Authors: Malik Khadar, Julia Cecil, Leon Van Der Neut, Nikola Banovic, Kevin Baum, Stevie Chancellor, Enrico Costanza, Motahhare Eslami, Anna Maria Feit, Susanne Gaube, Ujwal Gadiraju, Harmanpreet Kaur
As AI systems are increasingly adopted in high-stakes domains such as healthcare, autonomous driving, and criminal justice, their failures may threaten human safety and rights. Human oversight of AI systems is therefore critically important as a potential safeguard to prevent harmful consequences in high-risk AI applications. The global regulatory and policy landscape for AI governance remains understandably fragmented and diverse. While frameworks like the European AI Act require human oversight for high-risk AI systems, there is currently a lack of well-defined methodologies and conceptual clarity to operationalize such oversight effectively. Independent of policy and regulation, poorly designed oversight can create dangerous illusions of safety while obscuring accountability. This interdisciplinary workshop aims to bring together researchers from various disciplines, including AI, HCI, psychology, law, and policy, to address this critical gap. We will explore the following questions: (1) What are the greatest challenges to achieving effective human oversight of AI systems? (2) How can we design AI systems that enable meaningful human oversight? (3) How do we assign responsibilities to and support the various stakeholders involved in oversight? Through talks and interactive group discussions, participants will identify oversight challenges; examine stakeholder roles; discuss supporting tools, methods, and regulatory frameworks; and establish a collaborative research agenda. Our central goal is to further a roadmap that enables effective human oversight for the responsible deployment of AI in society.
Social and Emotional Uses of AI
Authors: Emily Tseng, Daniel A. Adler, Ashley Marie Walker, Renee Shelby, Stevie Chancellor, Eugenia Kim, Sachin R Pendse, Renwen Zhang
More and more people look to generative AI for social and emotional support — presenting profound interpersonal and societal risks. In this workshop, we invite HCI researchers across the sub-communities of digital safety, digital mental health and well-being, and responsible AI to come together and articulate a shared research agenda for HCI to lead the design, governance, and safeguarding of social and emotional uses of AI. Workshop participants will engage in a series of talks and group discussions focused on defining and addressing foundational, methodological, and translational challenges towards safer AI use.