Interfaces Volume 2 (2021)
Essays and Reviews in Computing and Culture
Interfaces publishes short essay articles and essay reviews connecting the history of computing/IT studies with contemporary social, cultural, political, economic, or environmental issues. It seeks to be an interface between disciplines, and between academics and broader audiences.
Co-Editors-in-Chief: Jeffrey R. Yost and Amanda Wick
Managing Editor: Melissa J. Dargay
2021 (Vol. 2) Table of Contents
Before the Byte, There Was the Word: The Computer Word and Its Many Histories -- J. Rodgers
Early “Frictions” in the Transition towards Cashless Payments -- B. Bátiz-Lazo and T. R. Buckley
Top 10 Signs We Are Talking About IBM’s Corporate Culture -- J. Cortada
NFTs, Digital Scarcity, and the Computational Aura -- A. Vee
Everyday Information Studies: The Case of Deciding Where to Live -- M. Ocepek and W. Aspray
The Cloud, the Civil War, and the “War on Coal” -- P.E. Ceruzzi
Everyday Information Studies: The Case of Deciding Where to Live
Melissa G. Ocepek and William Aspray
Abstract: This essay introduces everyday information studies to historians of computing. This topic falls within the subdiscipline of information behavior, one of the main subject areas in information studies. We use our recent edited book, Deciding Where to Live (Rowman & Littlefield, 2021), to illustrate the kinds of topics addressed and methods used in everyday information studies. We also point the reader to some other leading examples of scholarship in this field and to two books that present an overview of the study of information behavior.
This essay introduces everyday information studies to historians of computing. The story of this field of study and its history are too large to tell in detail here. This topic falls within the subdiscipline of information behavior, one of the main subject areas in information studies – a field that began to be studied between the two world wars and took off in the 1960s. The reader interested in information behavior studies more generally should examine two well-regarded reference works on this subject (Case and Given 2016; Fisher, Erdelez, and McKechnie 2005).
Information Study Approaches
The early research on information behavior focused on human behavior in structured information environments, such as when a person went to a library to seek information or interacted with a database. But, of course, there were other, less structured environments for finding information, such as conversations with friends and family; consultations with religious or civic leaders, or with area specialists such as financial advisors; and consumption of the media. With the coming of the Internet and portable information devices, one could seek information anywhere, anytime, on any subject profound or frivolous. Information seeking, consumption, and analysis became an increasingly everyday part of ordinary people's lives. The field expanded over time to include not only information needs, wants, and seeking, but also information avoidance and overload, and various kinds of affective as well as cognitive responses to information.
In fact, the everyday aspects of information were studied not only by information scholars but also by sociologists, communications scholars, and media scholars beginning as early as the 1930s. These studies about the roles information plays in one's everyday life draw upon theorizing by such scholars as Michel de Certeau (1984), Henri Lefebvre (2008/1947), Dorothy E. Smith (1987), and Carolyn Steedman (1987). For an overview of the relevant theorizing, see Highmore (2001), Bakardjieva (2005, Chs. 1 and 2), and Haythornthwaite and Wellman (2002). Highmore also includes writing selections from many of these theorists. To make this introduction to everyday information studies more manageable, we focus here on our own work and primarily on our recent edited book, Deciding Where to Live (Ocepek and Aspray 2021). For a sample of other everyday information studies, see for example the work of Denise Agosto (with Sandra Hughes-Hassell, 2005), Karen Fisher (née Pettigrew, 1999), Tim Gorichanaz (2020), Jenna Hartel (2003), Pam McKenzie (2003), and Reijo Savolainen (2008).
Our personal involvement with research on everyday information studies began with Everyday Information (Aspray and Hayes 2011), which injected historical scholarship into studies on everyday information. In a long study of "100 Years of Car Buying," one of us (Aspray, pp. 9–70 in Aspray and Hayes 2011) introduced a historical model, showing how endogenous forces (e.g., the dealership model for selling automobiles, or the introduction of foreign automobiles into the American market) and exogenous forces (e.g., war, or women entering the workforce) shaped the information questions that people were interested in and sometimes even the information sources they consulted. This volume, presenting a historical approach to everyday information behavior, included contributions by the noted historians of computing James Cortada, Nathan Ensmenger, and Jeffrey Yost.
Our collaboration began when the two of us, together with our colleague George Royer (today a game designer in Austin, TX), wrote two books about food from the perspective of information studies. We did not follow the typical approaches of food scholars, studying such topics as food pathways or food security, but instead applied the lens of information studies to this topic of wide popular interest. In the two short books that we produced, Food in the Internet Age (Aspray, Royer, and Ocepek 2013) and Formal and Informal Approaches to Food Policy (Aspray, Royer, and Ocepek 2014), we discussed a wide variety of topics, such as the online grocer Webvan (the largest loser of venture capital in the dot-com crash of 2001); the harms that Yelp, OpenTable, and Groupon created for small brick-and-mortar businesses and customers; the different ways in which the Internet has been used to represent and comment upon food and food organizations; the regulation of advertising of sweetened cereals to children; and the strategies of informal, bully-pulpit persuasion compared to formal regulation of food and nutrition, carried out through a pair of studies: one of Eleanor and Franklin Roosevelt, and the other of Michelle and Barack Obama.
This work on food, and some of our subsequent research, falls into the field of information studies. We carefully use that term instead of information science because our work is more informed by humanities (critical theory, cultural studies) and social science disciplines (sociology, psychology, organizational and management studies, work and labor studies) than by computer science, natural science, and engineering disciplines. We both have worked in information schools, part of a movement toward the interdisciplinary study of computing and information that has emerged in the past quarter century out of (1) library schools becoming more technical, (2) computer science departments becoming more interested in people, human institutions, and their social impact, and (3) newly created interdisciplinary enterprises. These information schools offer a big tent for many different kinds of methods, theories, and approaches. The breadth of these studies can be seen in the wide range of papers delivered at the annual meeting of ASIS&T (for example, https://www.conftool.org/asist2020/index.php?page=browseSessions&path=adminSessions) and the annual "iConference" (https://ischools.org/Program). Also see the types of scholarship presented at the biennial specialty conference on "Information Seeking in Context" (ISIC, e.g., http://www.isic2018.com).
So far, there is little cross-usage of methods or approaches by people studying everyday information (e.g., a traditional information studies scholar who studies information literacy incorporating research from data science or ubiquitous computing), but this cross-fertilization is just beginning to happen. In our own research, we do the next best thing through edited volumes that include chapters using a variety of approaches, so as to gain multiple perspectives on an issue. This is true, for example, in our book on where to live (discussed in detail below) and the book on information issues in aging (mentioned below).
Deciding Where to Live
In our recent edited book, Deciding Where to Live, we continue our study of everyday phenomena through an information lens. We describe this book in some detail here to give our readers a better sense of the ways in which information studies scholars operate. All of the chapters in this book were written by people associated with leading information schools in the United States (Colorado, Illinois, Indiana, Syracuse, Texas, Washington). As with our food studies, we have taken multiple perspectives – all drawn from information studies – to investigate various aspects of housing. These studies employ, for example, work studies and business history; cultural and affective approaches to information issues; community studies; information behavior; and privacy studies.
Information scholars are often interested in the results of scholarship by labor, management, and organization scholars, and sometimes they adopt their theories and methods. These scholars are interested in such issues as the growing number of information occupations, the increased percentage of a person's job tasks devoted to information activities, and the ways in which tools of communication and information have changed firm strategies and industry structures. Everyday information scholars, too, are interested in these results, but primarily for what they have to say about the everyday or work lives of individuals.
The work of real estate firms, realtors, and home buyers and sellers has been profoundly changed by the massive adoption of information and communication technologies in recent years. Let us consider two chapters, by James Cortada and Steve Sawyer, from the Deciding Where to Live book. One major change in the 21st century has been the rise of websites, such as Zillow and Realtor.com, that enable individuals to access detailed information about housing without having to rely upon a realtor or the Multiple Listing Service. Using a business history approach, Cortada shows how these developments have changed the structure of the real estate industry, altered the behavior of individual firms, made buyers and sellers more informed shoppers, lowered commissions on house sales, and introduced new business models, such as Zillow buying homes itself and not just providing information about them. Some people believe that the rise of companies such as Zillow means that the imbalance between the information held by realtors and buyers is largely a thing of the past, that intermediation by realtors is also largely over, and that the need for realtors is greatly diminished – and that we will see a radical shrinking of this occupation in the same way that the numbers of telephone operators and travel agents have plummeted. (See Yost 2008.)
Sawyer argues, however, that the work of the real estate agent is evolving rather than being eliminated. As he states his argument: “real estate agents have been able to maintain, if not further secure, their role as market intermediaries because they have shifted their attention from being information custodians to being information brokers: from providing access to explaining.” (Sawyer 2021, p. 35) As he notes, the buying of a house is a complex process, involving many different steps and many different participants (selecting the neighborhood and the particular house, inspecting the property, checking on title and transferring it, obtaining financing, remediating physical deficiencies in the property, etc.). One might say that it takes a village to sell a house in that village; and an important role of the real estate agent is to inform the buyers of the many steps in the process and use their network of specialists to help the buyers to carry out each step in a professional, timely, and cost-effective way.
How do these changes affect the everyday life of the individual? There are more than 2 million active real estate agents in the United States. Their work has changed profoundly as they adopt real-estate-oriented websites and apps. Even though most real estate agents work through local real estate firms, they act largely as independent, small businesspeople who carry out much of their work from their cars and homes as much as from their offices. So, they rely on websites and apps not only for information about individual homes, but also for lead generation, comparative market analysis, customer relationship management, tracking business expenses such as mileage, access to virtual keys, video editing of listings, mounting marketing campaigns, and a multitude of other business functions. Buyers and sellers, for their part, can use Zillow or its online competitors to become informed before ever meeting with a real estate agent: learning how much their current home is worth, figuring out how large a mortgage they can qualify for, checking out multiple potential neighborhoods not only for housing prices but also for quality of schools and crime rates, checking out photos and details of numerous candidate houses, and estimating the total cost of home ownership. Interestingly, many individuals who are not looking to buy or sell a home in the near term are regular users of Zillow. It is a way to spy on neighbors, try out possible selves, plan for one's future, or just have a good time. In our introductory chapter, we address these issues.
Another chapter, by Philip Doty, reflects upon the American dream of the smart home. Drawing upon scholarship on surveillance capitalism (Shoshana Zuboff 2019), feminist scholarship on privacy (Anita Allen 1988; Patricia Boling 1996; Catharine MacKinnon 1987), gender studies in the history of science and technology (Ruth Cowan 1983), the geography of surveillance (Liisa Mäkinen 2016), and other scholarly approaches, Doty reflects on the rhetorical claims and technological enthusiasm related to smart cities and smart homes, and discusses some of the privacy and, in particular, surveillance issues that arise in smart homes.
Information is not merely used by people in cognitive ways; it can also bring joy, sadness, anxiety, and an array of other emotions. Deciding where to live can be an exciting, fraught, and stressful experience for many people. When one is searching for a home in a particularly competitive housing market, the addition of time pressures can amp up the emotional toll of house hunting and discourage even the most excited home buyer. In her chapter, Carol Landry recounts how the high-stakes decision making of home buying becomes even more complicated when time pressure and emotions come into play. Her research is based on an empirical study of home buyers in the highly competitive Seattle real estate market. The chapter describes the experience of several home buyers dealing with bidding wars that required quick decision making and many failed attempts at securing a home. The stories shared in this chapter highlight the despair and heartbreak that made continuing the home search difficult for participants, who are described as going from enthusiastic information seekers to worn-out information avoiders. This chapter highlights how internal and external factors can affect the home buying process and the information behaviors associated with it.
A competitive real estate market is but one of myriad experiences that can further complicate the process of deciding where to live. There are times in most people's lives when the unique attributes of a life stage play an outsized role in decision-making around housing; one of these times is retirement. In Aspray's chapter, the realities of retirement confront individuals lucky enough to be able to retire with new considerations that shape their decision making. Retirement adds new complexity to deciding where to live because the stability of work that binds many people's lives is no longer there, creating exciting new opportunities as well as constraints. Different elements shape questions around where to live for retired people, including the emotional ties to their current homes, the financial realities of retirement income, and the physical limitations of aging.
During times of societal uncertainty, a home can be a comforting shelter that keeps the external world at bay, even when much of the uncertainty stems from the housing market itself, as it did during the housing crisis of 2007 and the recession that followed. As more and more people lost their homes to foreclosure or struggled to pay their mortgages, home and garden entertainment media provided a pleasant, comfortable escape for millions of Americans. Ocepek, in her chapter on home and garden sources, found that, throughout the housing crisis, recession, and recovery, home and garden sources grew or maintained their popularity with viewers and readers – likely due to the social phenomenon of cocooning, or taking shelter in one's space when the world outside becomes uncertain and scary. Both home and garden magazines and HGTV made changes to their content to represent the new home realities of many of their readers and viewers, but they also largely stayed the same, presenting comforting content about making whatever space one calls home as comfortable as possible.
The financial hardships of the housing crisis, recession, and recovery were not experienced by all Americans in equal measure. Several authors in the book present examples in which housing policies, economic conditions, and social unrest disproportionately affected marginalized communities throughout the United States. One is Pintar's chapter about Milwaukee, mentioned below. Although some of the legal frameworks built to segregate cities and communities throughout the country have changed, the experience of deciding where to live for Black and African Americans adds additional layers of complexity to the already complicated process. Drawing on critical race theory, Jamillah Gabriel delineates how Black and African American house searchers (renters and buyers) create information seeking and search strategies to overcome the historic and contemporary discriminatory policies and practices of housing segregation. The chapter analyzes specialized information sources that provide useful information to help this group of house searchers find safer communities where they have the greatest chance to prosper. These sources include lists of the best and worst places for African American and Black individuals and families to live. The lists draw on research that compares communities based on schools, employment, entertainment, cost of living, housing market, quality of life, and diversity. Drawing on historic and contemporary accounts, the analysis provided in this chapter highlights that "the housing industry can be a field of land mines for African Americans in search of home" (Gabriel 2021, p. 274).
It is often said that information and information tools are neither inherently good nor bad, but that they can be used for both good and bad purposes. Two chapters in the book illustrate this point. In a study of the city of Milwaukee, Judith Pintar shows how Home Owners' Loan Corporation (HOLC) maps, which were created to assess the stability of neighborhoods, were used to reinforce the racist practice of redlining. In another chapter, by Hannah Weber, Vaughan Nagy, Janghee Cho, and William Aspray, the authors show how information tools were used by the city of Arvada, Colorado and various groups (such as builders, realtors, parents, activists, and the town council) to improve the city's quality of life in the face of rapid growth and its attendant issues, such as traffic problems, rising housing prices, the need to build on polluted land, and the desire to protect the traditional look and feel of this small town. A third chapter, by David Hopping, shows how an experiment in Illinois was able to repurpose military housing for non-military purposes for the social good. His empirical study is seen through the lens of the theoretical constructs of heterotopia (Foucault 1970), boundary objects (Star and Griesemer 1989), and pattern language (Alexander 1977).
Conclusions
Both of us are continuing to pursue work on everyday information issues. One of us (Aspray) is continuing this work through an edited book currently in progress on information issues related to older Americans (Aspray, forthcoming in 2022). This book ranges from traditional library and information science approaches to health information literacy and insurance for older Americans, the variety of information provided by AARP and its competitors, and the use of information and communication technologies to improve life in elderly communities, to more technologically oriented studies on ubiquitous computing, human-computer interaction, and the Internet of Things for older people. Meanwhile, Ocepek is building on her doctoral dissertation (Ocepek 2016), which examined the everyday activity of grocery shopping from both social science and cultural approaches. Her new study examines what has happened to grocery shopping during the pandemic.
We are pleased to see the broadening in mission of the Babbage Institute to consider not only the history of computing but also the history and cultural study of information. For example, many scholars (including some computer historians) since 2016 have been studying misinformation. (See, for example, Cortada and Aspray 2019; Aspray and Cortada 2019.) This study of everyday information is another way in which the Babbage Institute can carry out its broadened mission today.
In particular, there are a few lessons for computer historians that can be drawn from the scholarship we have discussed here, although many readers of this journal may already be familiar with and practicing them:
- One can study information as well as information technology. On the history of information, see for example Blair (2010), Headrick (2000), Cortada (2016), and Blair et al. (2021). For a review of this scholarship, see Aspray (2015).
- One can study everyday uses of information and information technology, even if they may be regarded by some as quotidian – expensive, complex, socially critical systems are not the only kinds of topics involving information technology that are worth studying.
- This past year has taught all of us how an exogenous force, the COVID-19 pandemic, can quickly and radically reshape our everyday lives. In the opening chapter of our book, we briefly discuss the earliest changes the pandemic brought to real estate. We are also seeing the grocery industry, as well as millions of consumers, learning, adapting, and changing their information behaviors around safely acquiring food.
- In order to study both historical and contemporary issues about information and information technology, one can blend historical methods with other methods from computer science (e.g., human-computer interaction, data science), social science (qualitative and quantitative approaches from sociology, psychology, economics, and geography), applied social science (labor studies, management and organization studies), and the humanities disciplines (cultural studies, critical theory).
These are exciting times for the historians of computing and information!
Bibliography
Agosto, Denise E. and Sandra Hughes-Hassell. (2005). "People, places, and Questions: An Investigation of the Everyday Life Information-Seeking Behaviors of Urban Young Adults." Library & Information Science Research, vol. 27, no. 2, pp. 141-163.
Alexander, Christopher. (1977). A Pattern Language. Oxford University Press.
Allen, Anita L. (1988). Uneasy Access: Privacy for Women in a Free Society. Rowman & Littlefield.
Aspray, William. (2015). "The Many Histories of Information." Information & Culture, vol. 50, no. 1, pp. 1-23.
Aspray, William. (forthcoming 2022). Information Issues for Older Americans. Rowman & Littlefield.
Aspray, William and James Cortada. (2019). From Urban Legends to Political Fact-Checking. Springer.
Aspray, William and Barbara M. Hayes. (2011). Everyday Information. MIT Press.
Aspray, William, George W. Royer, and Melissa G. Ocepek. (2013). Food in the Internet Age. Springer.
Aspray, William, George W. Royer, and Melissa G. Ocepek. (2014). Formal and Informal Approaches to Food Policy. Springer.
Bakardjieva, Maria. (2005). Internet Society: The Internet in Everyday Life. Sage.
Blair, Ann. (2010). Too Much to Know. Yale University Press.
Blair, Ann, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton, eds. (2021). Information: A Historical Companion. Princeton University Press.
Boling, Patricia. (1996). Privacy and the Politics of Intimate Life. Cornell University Press.
Case, Donald O. and Lisa M. Given. (2016). Looking for Information. 4th ed. Emerald.
Cortada, James W. (2016). All the Facts: A History of Information in the United States since 1870. Oxford University Press.
Cortada, James and William Aspray. (2019). Fake News Nation. Rowman & Littlefield.
Cowan, Ruth Schwartz. (1983). More Work for Mother. Basic Books.
De Certeau, Michel. (1984). The Practice of Everyday Life. Translated by Steven F. Rendall. University of California Press.
Fisher, Karen E., Sandra Erdelez, and Lynne McKechnie, eds. (2005). Theories of Information Behavior. Information Today.
Foucault, Michel. (1970). The Order of Things. Routledge.
Gorichanaz, Tim (2020). Information Experience in Theory and Design. Emerald Publishing.
Hartel, Jenna. (2003). "The Serious Leisure Frontier in Library and Information Science: Hobby Domains." Knowledge Organization, vol. 30, No. 3-4, pp. 228-238.
Haythornthwaite, Caroline and Barry Wellman, eds. (2002). The Internet in Everyday Life. Wiley-Blackwell.
Headrick, Daniel. (2000). When Information Came of Age. Oxford University Press.
Highmore, Ben ed. (2001). The Everyday Life Reader. Routledge.
Lefebvre, Henri. (2008). Critique of Everyday Life. vol. 1, 2nd ed. Translated by John Moore. Verso.
MacKinnon, Catharine A. (1987). Feminism Unmodified. Harvard University Press.
Mäkinen, Liisa A. (2016). "Surveillance On/Off: Examining Home Surveillance Systems from the User's Perspective." Surveillance & Society, vol. 14.
McKenzie, Pamela J. (2003). "A Model of Information Practices in Accounts of Everyday‐Life Information Seeking." Journal of Documentation, vol. 59, no. 1, pp. 19-40.
Ocepek, Melissa G. (2016). "Everyday Shopping: An Exploration of the Information Behaviors of the Grocery Shoppers." Ph.D. dissertation, School of Information, University of Texas at Austin.
Pettigrew, Karen E. (1999). "Waiting for Chiropody: Contextual Results from an Ethnographic Study of the Information Behaviour Among Attendees at Community Clinics." Information Processing & Management, vol. 35, no. 6, pp. 801-817.
Ocepek, Melissa G. and William Aspray, eds. (2021). Deciding Where to Live. Rowman & Littlefield.
Savolainen, Reijo. (2008). Everyday Information Practices: A Social Phenomenological Perspective. Scarecrow Press.
Smith, Dorothy E. (1987). The Everyday World as Problematic: A Feminist Sociology. Northeastern University Press.
Star, Susan Leigh and James R. Griesemer. (1989). "Institutional Ecology, Translations, and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39." Social Studies of Science, vol. 19, no. 3, pp. 387-420.
Steedman, Carolyn. (1987). Landscape for a Good Woman: A Story of Two Lives. Rutgers University Press.
Yost, Jeffrey R. (2008). “Internet Challenges for Nonmedia Industries, Firms, and Workers.” pp. 315-350 in William Aspray and Paul Ceruzzi, eds., The Internet and American Business. MIT Press.
Zuboff, Shoshana. (2019). The Age of Surveillance Capitalism. PublicAffairs.
Aspray, William and Ocepek, Melissa G. (April 2021). “Everyday Information Studies: The Case of Deciding Where to Live." Interfaces: Essays and Reviews in Computing and Culture Vol. 2, Charles Babbage Institute, University of Minnesota, 27-37.
About the authors:
Melissa G. Ocepek is an Assistant Professor at the University of Illinois Urbana-Champaign in the School of Information Sciences. Her research draws on ethnographic methods and institutional ethnography to explore how individuals use information in their everyday lives. Her research interests include everyday information behavior, critical theory, and food. Recently, she co-edited Deciding Where to Live (Rowman & Littlefield, 2021) with William Aspray. Previously she published two books that address the intersection of food, information, and culture: Food in the Internet Age and Formal and Informal Approaches to Food Policy (both with William Aspray and George Royer). Dr. Ocepek received her Ph.D. at the University of Texas at Austin in the School of Information.
William Aspray is Senior Research Fellow at CBI. He formerly taught in the information schools at Indiana, Texas, and Colorado; and served as a senior administrator at CBI, the IEEE History Center, and Computing Research Association. He is the co-editor with Melissa Ocepek of Deciding Where to Live (Rowman & Littlefield, 2021). Other recent publications include Computing and the National Science Foundation (ACM Books, 2019, with Peter Freeman and W. Richards Adrion); and Fake News Nation and From Urban Legends to Political Fact-Checking (both with James Cortada in 2019, published by Rowman & Littlefield and Springer, respectively).
Of Mice and Mentalité: PARC Ways to Exploring HCI, AI, Augmentation and Symbiosis, and Categorization and Control
Jeffrey R. Yost, Charles Babbage Institute, University of Minnesota
Abstract: This think-piece essay comparatively explores history and mindsets in human-computer interaction (HCI) and artificial intelligence (AI)/machine learning (ML). It draws on oral history, archival, and other research to reflect on the institutional, cultural, and intellectual history of HCI (especially the Card, Moran, and Newell team at Xerox PARC) and AI. It posits that the HCI mindset (focused on augmentation and human-machine symbiosis, as well as iterative maintenance) could be a useful framing to rethink dominant design and operational paradigms in AI/ML that commonly spawn, reinforce, and accelerate algorithmic biases and societal inequality.
This essay briefly recounts the 1982 professional organizational founding of the field of Human-Computer Interaction (HCI) before reflecting on the two prior decades of interactive computing—HCI's prehistory/early history—and its trajectory since. It comparatively explores history and mindsets in HCI and artificial intelligence (AI). For both HCI and AI, "knowing users" is a common target, but also a point of divergent departure.
For AI—especially large-scale, deployed systems in defense, search, and social networking—knowing users tends to involve surveillance, data collection, and analytics to categorize and control in the service of capital and power. Even when aims are purer, algorithmic biases frequently extend from societal biases. Machines can be programmed to discriminate, or they can learn it from data and data practices.
For HCI—from idealistic 1960s beginnings through 1980s professionalization and beyond—augmenting users and human-machine symbiosis have been its core. While an HCI-type mindset offers no magic bullet for AI's ills, this essay posits that it can be a useful framing, a reminder toward proper maintenance, stewardship, and structuring of data, design, code (software), and codes (legal, policy, and cultural). HCI systems, of course, can be ill designed, perform in unforeseen ways, or be misapplied by users, but this likely is less common and certainly of lesser scale and impact than with AI. Historians and sociologists must research the vast topics of AI and HCI more fully in many contexts and settings.
HCI and Solidifying the Spirit of Gaithersburg
In mid-March 1982, ITT Programming Technology Center's Bill Curtis and the University of Maryland's Ben Shneiderman held the first "Human Factors in Computing Systems" conference in Gaithersburg, Maryland. The inspiring event far exceeded the organizers' expectations, attracting more than 900 attendees. It was the pivotal leap forward in professionalizing HCI.
Rich content filled the three-day program, while impactful organizational work occurred at an evening, small-group side meeting. At the latter, Shneiderman, Curtis, UCSD's Don Norman, Honeywell's Susan Dray, Northwestern's Loraine Borman, Xerox PARC's (Palo Alto Research Center) Stuart Card and Tom Moran, and others strategized about HCI's future and possibilities for forming an association within a parent organization. Borman, an information retrieval specialist in a leadership role at ACM SIGSOC (Social and Behavioral Computing), and Shneiderman, a computer scientist, favored the Association for Computing Machinery (ACM). Insightfully seeing an expedient workaround, Borman proposed that SIGSOC transform itself—new name and mission—bypassing the need for approval of a new SIG.
Cognitive scientist Don Norman questioned whether ACM should be the home, believing computer science (CS) might dominate. After debate, Shneiderman and Borman's idea prevailed. Dray recalls that the sentiment was "we can't let the spirit of Gaithersburg die," and for most, SIGSOC's metamorphosis seemed a good strategy (Dray 2020). Borman orchestrated transforming SIGSOC into SIGCHI (Computer-Human Interaction). The CHI tail essentially became the dog (SOC's shrinking base mainly fit under HCI's umbrella). Interestingly, "Computer" comes first in the acronym, but likely just to achieve a pronounceable word in the ACM SIG style, as "HCI" appeared widely in early papers at CHI (SIGCHI's annual conference).
Norman’s concerns proved prescient. SIGCHI steadily grew reaching over 2,000 attendees by the 1990 Seattle CHI, but in its first decade, it principally furthered CS research and researchers. Scholarly standards rose, acceptance rates fell, and some practitioners felt crowded out. In 1991, practitioners formed their own society, User Experience Professional Association (UXPA). In the 1990s and beyond, SIGCHI blossomed into an increasingly (academic) discipline diverse organization.
As with all fields/subfields, HCI has a prehistory or an earlier less organizationally defined history (for HCI, the 1960s and 1970s). SIGCHI’s origin lay in the confluence of: past work in human factors; university “centers of excellence” in interactive computing created through 1960s Advanced Research Projects Agency (ARPA) Information Processing Techniques Office (IPTO) support; two particularly impactful laboratories (PARC and SRI’s ARC); Systems Group artists in the UK; and the promise of Graphical User Interface (GUI) personal computers (PCs).
The nonprofit corporation SRI's Augmentation Research Center (ARC) and Xerox's PARC were at the forefront of GUI and computer mouse developments in the 1970s and 1980s. Neither the GUI nor mouse R&D was secret at PARC; in the 1970s, many visitors saw Alto demos, including, in 1979, a Steve Jobs-led Apple Computer team. In 1980 Apple hired away PARC's Larry Tesler and others. Jobs launched the Apple Lisa effort (completed in 1983, priced at $10,000), which, like the even more expensive Xerox Star (1981), possessed a GUI and mouse. The 1984 Apple Macintosh, retailing at $2,500, initiated an early mass market for GUI personal computers—inspiring imitators, most notably Microsoft Windows 2.0 in 1987.
In early 2020, I conducted in-person oral history interviews with three of HCI's foremost intellectual and organizational pioneers—the pilot for a continuing ACM/CBI project. These included UCSD Professor Don Norman (SIGCHI Lifetime Research Awardee; Benjamin Franklin Medalist), Xerox PARC Scientist and Stanford Professor Stuart Card (SIGCHI Lifetime Research Awardee; National Academy of Engineering), and Dr. Susan Dray (SIGCHI Lifetime Practice Awardee; UXPA Lifetime Achievement Awardee).
Don Norman is well known both within and outside CS—a renown extending from his 1988 book The Psychology of Everyday Things (POET), re-released as the wide-selling The Design of Everyday Things. A student of Duncan Luce (University of Pennsylvania), he was among the first doctorates in mathematical psychology. Early in his career, he joined the UCSD Psychology Department as an associate professor. After stints at Apple and Hewlett-Packard, and at Northwestern, he returned to lead the UCSD Design Laboratory. Norman helped take design from its hallowed ground of aesthetics to establish it in science, and he greatly advanced the understanding and practice of usability engineering.
Norman stressed to me that there is one scientist so consistently insightful that Norman never misses his talks at events he attends: PARC's Stuart Card. Card was the top doctoral student of Carnegie Mellon Professor of Cognitive Psychology and Computer Science Allen Newell. While these two interviews took place in California, my interview with Dr. Susan Dray was in Minneapolis, with the scientist who pioneered the first corporate usability laboratory outside the computer industry (IBM and DEC had ones), at American Express Financial Advisors (AEFA).
Dray took a different path after her doctorate in psychology from UCLA: human factors work on classified Honeywell Department of Defense (DoD) projects. In the early 1980s, Honeywell, a pioneering firm in control systems, computers, and defense contracting, had a problem with ill-adapted computing for clerical staff in its headquarters, which Dray evaluated. This became path-defining for her career, steering it toward computer usability. After pioneering HCI work at Honeywell, Dray left for American Express, and she later became a successful and impactful HCI consultant/entrepreneur. She applied observation, ethnographic interviewing, and the science of design to improve interaction, processes, and human-machine symbiosis in cultures globally, from the U.S., South Africa, Egypt, and Jordan to India, Panama, and France.
Earlier, in the late 1980s, at American Express, Dray was seeking funds for a usability lab, and she creatively engaged in surreptitious user research. She bought a "carton" of Don Norman's POET, had copies delivered to all AEFA senior executives on the top (29th) floor, and rode up and down the elevator starting at 6 am for a couple of hours each morning for weeks, listening to conversations concerning this mysteriously distributed book on the science of design. Well informed, she pitched successfully, gaining approval for her usability lab.
This essay is informed by the Norman, Card, and Dray oral histories; another HCI interview I recently conducted, with artist Dr. Ernest Edmonds; my prior interview with Turing Awardee Butler Lampson of Alto fame; preparation for these five interviews; and AI and HCI research at the CBI, MIT, and Stanford University archives.
For AI and HCI, Is There a Season?
Microsoft Research Senior Scientist Jonathan Grudin—in his valuable From Tool to Partner (2017) on HCI's history—includes a provocative argument that HCI thrives during AI Winters and suffers during AI's other seasons. The usefulness of the widespread Winter metaphor is debatable, since it is based on changing funding levels at elite schools (Mendon-Plasek 2021, p. 55), but Grudin's larger point, that only one of the two fields thrives at a time, hints at a larger truth: HCI and AI have major differences. The fields overlap, with some shared scientists and some common work, but have distinct mindsets. Ironically, AI, once believed to be long on promises and short on deliveries (the rationalized basis for AI Winters), is now delivering more strongly, and likely more harmfully, than ever, given algorithmic and data biases in far-reaching corporate and government systems.
Learning How Machines Learn Bias
More and more of our devices are "smart," a distracting euphemism obscuring how AI (in increasingly interconnected sensor/IoT/cloud/analytics systems) reinforces and extends biases based on race, ethnicity, gender, sexuality, and disability. Recent interdisciplinary scholarship is exposing the roots of discriminatory code (algorithms/software) and codes (laws, policy, culture), including deeply insightful keynotes at the Charles Babbage Institute's (CBI) "Just Code" Symposium (a virtual, major event with 345 attendees in October 2020) by Stephanie Dick, Ya-Wen Lei, Kenneth Lipartito, Josh Lauer, and Theodora Dryer. Their work contributes to an important conversation also extended in important scholarship by Ruha Benjamin, Safiya Noble, Matt Jones, Charlton McIlwain, Danielle Allen, Jennifer Light (MIT; and CBI Sr. Research Fellow), Mar Hicks, Virginia Eubanks, Lauren Klein, Catherine D'Ignazio, Amanda Menking, Aaron Mendon-Plasek (Columbia; and current CBI Tomash Fellow), and others.
AI did not merely evolve from a benevolent past to a malevolent present. Rather, it has been used for a range of different purposes at different times. The geometric expansion of the number of transistors on chips—the (partially) self-fulfilling fabrication trajectory of Moore's Law—enabled computers and AI to become increasingly powerful and pervasive. Jennifer Light's insightful scholarship on the RAND Corporation's 1950s and 1960s operations research, systems engineering, and AI, created in the defense community and later misapplied to social welfare, counters notions of an early benevolent age. Even if chess is the drosophila of AI—a phrase of John McCarthy's from the 1990s—its six-decade history is one of consequential games, power contests. Work in computer rooms in the Pentagon's basement and at RAND harmfully escalated Cold War policies, as DoD and its contractors simulated and supported notions of the U.S. rapidly "winning" the Vietnam War; earlier, C-E-I-R (founded by ex-RAND scientists) used input/output-economics algorithmic systems to determine optimal bomb targets to decimate the Soviet Union industrially (Yost 2017).
What helped pull AI out of its first long (1970s) Winter were successes and momentum with expert systems—the pioneering late-1960s Dendral of Turing Awardee Stanford AI scientist Edward Feigenbaum and molecular biologist and Nobel Laureate Joshua Lederberg, built to advance organic chemistry, and the early-1970s MYCIN of Feigenbaum and others, for medical diagnostics and therapeutics. These AI scientific triumphs stood out and lent momentum to expert systems, as did fears of Japan's Fifth Generation project (an early 1980s government-industry partnership in AI systems). In the 1980s, elite US CS departments again received strong federal support for AI. Work in expert systems in science, medicine, warfare, and computer intrusion detection abounded (Yost 2016).
Some AI systems are born biased; others learn it—from algorithmic tweaks to expert system inference engines to biased data. Algorithmic bias is just one of the many problematic byproducts of valuing innovation over maintenance (Vinsel and Russell 2020; Yost 2017).
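As a minimal, hypothetical sketch of the second path, bias learned from data, consider the following toy scoring rule in Python. The neighborhoods, approval rates, and threshold are all invented for illustration and are not drawn from any study discussed here.

    import random

    random.seed(0)

    # Invented historical records: (neighborhood, approved). Past decisions
    # were biased: applicants from neighborhood "B" were approved far less
    # often, for reasons unrelated to their actual qualifications.
    history = [("A", random.random() < 0.7) for _ in range(1000)] + \
              [("B", random.random() < 0.3) for _ in range(1000)]

    # Tally historical approval rates per neighborhood.
    counts = {}
    for hood, approved in history:
        n, k = counts.get(hood, (0, 0))
        counts[hood] = (n + 1, k + int(approved))

    # A naive "learned" rule: approve future applicants if their
    # neighborhood's historical approval rate exceeds 50%. No protected
    # attribute is used directly, yet the rule reproduces the old
    # disparity through the neighborhood proxy.
    model = {hood: k / n > 0.5 for hood, (n, k) in counts.items()}
    print(model)  # {'A': True, 'B': False} -- bias in, bias out

The point is not the arithmetic but the mechanism: nothing in the code mentions race or any protected category, yet the learned rule faithfully encodes the discrimination embedded in its training data.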
Human Factors and Ergonomics
The pre-history/early history of human-machine interaction dates back many decades, to the control of workers and soldiers to maximize efficiency. The late-1950s-spawned Human Factors Engineering Society grew out of late inter-war period organizational work of the Southern California aerospace industry. In the first half of the 20th century, human factors had meaningful roots in the scientific management thought, writings, and consulting of Frederick Winslow Taylor. This tradition defined the worker as an interchangeable part, a cog within the forces of production to efficiently serve capital. At Taylorist-inspired and organized factories, management oppressed laborers, and human factors has a mixed record in its targets, ethics, and outcomes. However, at HCI's organizational start in the early 1980s, the mantra was not merely efficiency; it was the frequently uttered "know the user." This, importantly, was a setting of personal computing and GUI idealism, a trajectory insightfully explored by Stanford's Fred Turner in From Counterculture to Cyberculture.
We’re on a Road to Intertwingularity, Come on Inside
Years before the National Science Foundation (NSF) took the baton as the leading federal funder of basic CS research at universities, ARPA's IPTO (following founding director J.C.R. Licklider's 1962 vision) changed the face of computing toward interaction. The well-known philosopher and sociologist Ted Nelson, a significant HCI contributor of the 1960s and 1970s, creatively coined the term "intertwingularity" for the symbiosis of everything being intertwined or connected (networking; text, through his term/concept of "hypertext"; human users with interactive computing). It aptly describes the multifaceted HCI work of IPTO-funded SRI's ARC in the 1960s and Xerox PARC in the 1970s.
The 1970-enacted Mansfield Amendment required a direct and defined defense function for all DoD research funding. It left a federal funding vacuum for years, until NSF could ramp up to become a roughly comparable funder of the basic research in interactive computing that IPTO had started. The vacuum, however, was largely filled by a short golden age of corporate industrial research in interactive computing at Xerox, a firm with a capital war chest (much dry powder) from its past photocopier patent-based monopoly, seeking to develop the new, new thing(s). Xerox looked to its 1970-launched PARC to invent the office of the future. PARC hired many previously IPTO-supported academic computer scientists and produced and employed a cadre of Turing Awardees, an unprecedented team far exceeding any single university's CS department in talent or resources.
Inside the PARC Homeruns
Douglas Engelbart and the earliest work on the first mouse, designed by him and SRI's Bill English, are addressed by the French sociologist Thierry Bardini in Bootstrapping, a biography of Engelbart. Journalists, such as Michael Hiltzik, have covered some major contours of technical innovation at PARC.
Central to Bardini’s and Hiltzik’s and others’ narratives is the important HCI work of Turing Awardees Douglas Engelbart at SRI; and Butler Lampson, Alan Kay, Charles Thacker, and Charles Simonyi at PARC. In this essay I look beyond oft-told stories and famed historical actors in GUIs and mice to briefly discuss a hitherto largely overlooked, highly impressive small PARC research team composed of Newell, Card, and Moran, and a larger team that Card later led. The incredible accomplishments of Lampson and others changed the world with the GUI. They hit the ball out park, so to speak—"a shot heard round the world” (1951 Bobby Thompson Polo Grounds, Don DeLillo immortalized, homerun sense) that very visibly revolutionized interactive computing
Newell is one of the most famous of the first-generation AI scientists, a principal figure at John McCarthy’s famed Dartmouth Summer 1956 Workshop, in which McCarthy, Newell, Herbert Simon, Marvin Minsky, and others founded and gave name to the field—building upon earlier work of Alan Turing. On a project launched in 1955, Newell, as lead, co-invented (with Simon and Clifford Shaw) “The Logic Theorist” in 1956, the first engineered, automated logic or AI program. Many historian and STS colleagues I have spoken with associate Newell solely with AI, and they are unaware of his PARC HCI work. Unlike Turing and Simon, Newell does not have a major biography documenting the full breadth of his work. Newell’s HCI research has been neglected by historians, as has that of his two top students, Card and Moran. They published many seminal HCI papers in Communications of the ACM and other top journals.
This oversight (by historians; they were revered by fellow scientists), especially the neglect of the career-long contributions of Card and Moran, is a myopic favoring of first-recognized invention over subsequent ones, missing key innovations and devaluing maintenance. It was not merely the dormouse (mouse co-inventors Engelbart and English, the recognized revolution), but multiple dormice (the science and engineering behind optimizing mice for users). Remember(ing) what the dormice said (and with an open ear of historical research), Card and Moran clearly conducted brilliant scientific research spawning many quiet revolutions.
Rookie Card to All-Star Card, Pioneering HCI Scientist Stuart Card
Stuart Card was first author of a classic textbook, The Psychology of Human-Computer Interaction, with co-authors Moran and Newell. Card progressed through various research staff grades and in 1986 became a PARC Senior Research Scientist. Two years later, he became Team Leader of PARC's User Interface Research Group. The breadth of Card's and PARC's HCI research from the 1970s to the 1990s spans both theory and practice. The work fell into three broad categories—HCI models, information visualization, and information retrieval—and the major contributions in each are breathtaking. One early contribution in HCI models was the team's analysis of the mouse and its performance using an information-theoretic model of motor movement, Fitts' Law. Using a processing-rate parameter of 10 bits/sec, roughly the same performance ability as the hand, they demonstrated that pointing performance was limited in speed not by the device/mouse but by the hand itself; the mouse was thus shown to be well optimized for interaction with humans. That finding influenced the development of the Xerox Star mouse in 1981 and the earliest computer mice developed by Apple Computer. Card's, and his team's, work was equally profound in information visualization, in areas such as the Attentive-Reactive Visualizer and visualizer transfer functions. In information retrieval, they advanced Information Foraging Theory.
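A back-of-the-envelope sketch of the Fitts' Law reasoning described above, in Python; the distance and target-width values are invented for illustration, and the device intercept is assumed to be zero.

    import math

    def fitts_movement_time(distance, width, a=0.0, bits_per_sec=10.0):
        """Fitts' Law: MT = a + ID / rate, where ID = log2(2D/W) bits.

        bits_per_sec=10.0 reflects the ~10 bits/sec processing-rate
        parameter mentioned above; a is a device constant (assumed 0).
        """
        index_of_difficulty = math.log2(2 * distance / width)  # in bits
        return a + index_of_difficulty / bits_per_sec          # in seconds

    # Invented example: move a cursor 160 mm to a 10 mm-wide target.
    # ID = log2(320/10) = 5 bits, so MT = 5 / 10 = 0.5 seconds.
    print(fitts_movement_time(160, 10))

Because roughly the same bit rate fit hand movement with or without the device, the mouse itself was not the bottleneck, which is the sense in which the team argued the mouse was near-optimal for human pointing.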
While staying at PARC for decades, Card concurrently served as a Stanford University Professor of Computer Science. He became a central contributor to SIGCHI and was tremendously influential among academic, industrial, and government scientists.
In listening to Card’s interview responses (and deeply influenced by my Norman, Dray, and Butler Lampson interviews also, as well as by my past research), I reflected that many AI scientists could learn much from such a mindset of valuing users, all users—knowing users to help augment, for symbiosis, not to control. AI scientists, especially on large scale systems in corporations and government (much ethical AI research is done at universities), could benefit in not merely technical ways, as Steve Jobs and others did from their day in the PARC, but from Card and his team’s ethos and ethics.
Professionalizing HCI: Latent Locomotion to Blissful Brownian Motion
While SIGCHI unintentionally pushed out many non-scientists in the 1980s, it and the HCI field shed a strictly computer science and cognitive science focus to become ever more inclusive of a wide variety of academic scientists, engineers, social scientists, humanities scholars, artists, and others from the 1990s forward. CHI grew from about 1,000 attendees at the first events in Gaithersburg and Boston to more than 3,600 at some recent annual CHI meetings (and SIGCHI now has more than two dozen smaller conferences annually). The SIGCHI/CHI programs and researchers are constantly evolving and exploring varying creative paths that from a 30,000-foot vantage might seem to be many random walks, Brownian motion. The research, designing to better serve users, contributes to many important trajectories. The diversity of disciplines and approaches can make communication more challenging, but also more rewarding, and to a high degree a Galison-like trading zone exists in interdisciplinary SIGCHI and HCI.
One example is the Creativity and Cognition Conference, co-founded by artists/HCI scientists Ernest Edmonds and Linda Candy in 1993, which became a SIGCHI event in 1997. It brings together artists, scientists, engineers, and social scientists to share research and work on human-computer interaction in art and systems design. As Edmonds related to me, communication and trust between artists and scientists take time to build, but are immensely valuable. Edmonds is an unparalleled figure in computer generative and interactive art, and a core member of the Systems Group of principally UK computer generative artists. In addition to many prestigious art exhibitions in the 1970s (and beyond), Edmonds published on adaptive software development, with a critique of the waterfall method. His work—in General Systems in 1974—anticipated and helped to define adaptive techniques, later referred to as agile development. Edmonds, through his artist, logician, and computer science lenses, insightfully saw interactive and iterative processes, a new paradigm in programming technique, art, and other design.
HCI research, and its applications, certainly is not always in line with societal good, but it has an idealistic foundation and values diversity and interdisciplinarity. Historians still are in the early innings of HCI research. Elizabeth Petrick has done particularly insightful scholarship on HCI and disability (2015).
Coding and Codifying, Fast and Slow
Nobel Laureate Daniel Kahneman has published ideas on human cognition that are potentially useful to ponder with regard to AI and HCI. Kahneman studies decision-making and judgment, and how different aspects of these arise from how we think: both fast (emotionally, unconsciously, and instinctively) and slow (more deeply and analytically).
Programming projects for applications and systems implementation are often behind schedule and over budget. Code, whether newly developed or recycled, often is applied without an ethical evaluation of its inherent biases.
HCI often involves multiple iterations with users, usability labs, observation in various settings, ethnographic interviewing, and an effective blend of both emotional-response fast thinking and, especially, deep reflective slow thinking. This slow, analytical thinking and iterative programming (especially maintenance and endless debugging) could be helpful in beginning to uproot underlying algorithmic biases. Meanwhile, slow and careful reflection on how IT laws, practices, policies, culture, and data are codified is instructive. All of this involves ethically interrogating the what, how, why, and by and for whom of innovation, and valuing maintenance labor and processes, not shortchanging maintenance in budget, respect, or compensation.
Beyond “Laws” to Local Knowledge
In 1967 the computer scientist Melvin Conway noted what became christened Conway's Law: computer architecture reflects the communication structure of the organization that developed it (made famous by Tracy Kidder's The Soul of a New Machine). Like Moore's Law, Conway's Law is really an observation, and a self-fulfilling prophecy. Better understanding and combatting biases at the macro level is critical. Also essential is evaluation and action at the local and organizational levels. How does organizational culture structure algorithms/code? What organizational policies give rise to what types of code? What do (end) users, including and especially marginalized individuals and groups, have to say about bias? How do decisions at the organizational level reinforce AI/ML algorithmic and data biases, and reinforce and accelerate societal inequality? These are vital questions to consider through many future detailed case studies in settings globally. The goal should not be a new "law," but rather a journey to gain local knowledge and learn how historical, anthropological, and sociological cases inform code and codes, toward policies, designs, maintenance, and structures that are more equitable.
“Why Not Phone Up Robinhood and Ask Him for Some Wealth Distribution”
The lyric above, from the 1978 reggae-infused song "(White Man) In Hammersmith Palais" by The Clash, might be updated to "why not open a Robinhood app…" (at least until it suspended trading). How historians will later assess the so-called Robinhood/Reddit "Revolution," a transfer of $20 billion away from hedge funds/banks/asset managers over several weeks in early 2021 (punishing bearish GameStop shorting by bidding up shares to force short covering), remains to be seen. Is it a social movement, and of what demographic makeup and type? For many, it likely is, at least in part, a stand against Wall Street, and thus Zuccotti Park comparisons seem apropos. Eighty percent of stock trading volume is automated—algorithmic/programmed (AI/ML)—contributing to why a 2021 CNBC poll showed 64 percent of Americans believe Wall Street is rigged. Like capitalism, equities markets and computers combine as a potent wealth-concentrating machine—one turbocharged in pandemic times and fueled by accommodative monetary policy. "Smart" systems/platforms in finance, education, health, and policing have all accelerated longstanding wealth, health, and incarceration gaps and divergences to hitherto unseen levels. Not to dismiss volatility or financial risk to the Reddit "revolutionaries," but the swiftness of regulatory calls by powerful leaders is telling. It begs questions about priorities: regulation for whom, of what, when, and why? U.S. IT giants' use of AI to surveil, and to dominate with anti-competitive practices, has gone largely unregulated (as has fintech) for years. Given differential surveillance, Black, Indigenous, and People of Color (BIPOC) suffer differentially. The U.S. woefully lags Europe on privacy protections and corporate taxes on personal data. U.S. racial violence and murders by police disgracefully dwarf those of other democratic nations, and America stands out for its embrace (by police and courts) of racially biased facial recognition technology (FRT) and recidivism-predicting AI—such as Clearview's FRT and Northpointe's (now Equivant) Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).
Meanwhile, the parallel Chinese IT giants Baidu, Alibaba, and Tencent (dominant in search, e-commerce, and social networking, respectively) use intrusive AI. Ironically, these firms, fostered by the government, are also contributing to platforms that enable a “contentious public sphere” (Lei 2017).
At times, users can appropriate digital computing tools against the powerful in unforeseen ways. Such historical agency is critical to document and analyze. History informs us that AI/ML, like many technologies, left unchecked by laws, regulations, and ethical scrutiny, will continue to be a powerfully accelerating tool of oppression.
Raging Against Machines That Learn
U.S.-headquartered, AI-based IT corporate giants’ record on data and analytics policy and practice has garnered increasing critique from journalists, academics, legislators, activists, and others. The New York Times has reported on clampdowns on employees expressing themselves on social and ethical issues. Timnit Gebru, co-leader of Google’s Ethical AI group, tweeted in late 2020 that she was fired for sending an email encouraging minority hiring and drawing attention to bias in artificial intelligence. Her email included: “Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset.” (Metz and Wakabayashi 2020).
On June 30, 2020, U.S. Senators Robert Menendez, Mazie Hirono, and Mark Warner wrote to Facebook CEO Mark Zuckerberg, critiquing his company for failing to “rid itself of white supremacist and other extremist content” (Durkee 2020). A subsequent Facebook internal audit called for better AI—a tech fix. Deep into 2019, Zuckerberg sought to defend Facebook’s policies on the basis of free speech (with a lack of clarity, as at Georgetown in October 2019). More concerning than his inability to execute free-speech arguments are the lack of transparency and the power wielded by a platform with 2.5 billion users: immense power to subvert democracy and to inflict differential harm. Facebook has a clear record of profits over principles. In mid-2020 Color of Change, the NAACP, the National Hispanic Media Coalition, and others launched the “Stop Hate for Profit” boycott of Facebook advertising for July 2020; more than 1,200 organizations participated. Pivoting PR in changing political winds, Zuckerberg now seeks to shift responsibility to Congress, asking it to regulate (while Facebook’s legal team likely will defend the bottom line).
Data for Black Lives, led by Executive Director Yeshimabeit Milner, is an organization and movement of activists and mathematicians. It focuses on fighting for ways data can be used to address societal problems and fighting against injustices, stressing that “discrimination is a high-tech enterprise.” It recently launched Abolish Big Data, “a call to action to reject the concentration of Big Data in the hands of the few, to challenge the structures that allow data to be wielded as a weapon…” (www.d4bl.org). The organization is an exemplar both of the vital work for change underway and of the immense challenge ahead, given the power of corporations and government entities (NSA, CIA, FBI, DoD, police, courts).
HCI, never the concentrating force AI has become, continues to grow steadily as a field—intellectually, in diversity, and in importance. It has a record of embracing diversity and of helping to augment and advance human-computer symbiosis. More historical work on HCI is needed, but it offers a useful mindset.
Given AI historical scholarship to date, we know its record has been mixed from the start. From its first decades, the 1950s and 1960s, to today, the DoD, NSA, CIA, FBI, police, and criminal justice systems have been frequent funders, deployers, and users of AI systems plagued with algorithmic biases that discriminate against BIPOC, women, LGBTQIA people, and the disabled. Some of the most harmful systems have been in facial recognition and predictive policing. Yet, properly designed, monitored, and maintained, AI offers opportunities for science, medicine, and social services (especially at universities and nonprofits).
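To make “monitored” slightly more concrete for readers outside computer science, consider a minimal, purely illustrative sketch in Python of one such audit check. The data, group names, and threshold here are hypothetical, and demographic parity is only one of many (contested) fairness measures; this is not the method of any system named above.

    # Illustrative sketch: a demographic-parity audit of a classifier's outputs.
    # All data below is hypothetical; 1 means the model assigned a "high-risk" label.

    def positive_rate(labels):
        """Fraction of cases given the positive ("high-risk") label."""
        return sum(labels) / len(labels)

    def parity_gap(labels_by_group):
        """Largest difference in positive-label rates across groups."""
        rates = {group: positive_rate(labels) for group, labels in labels_by_group.items()}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical audit data for two demographic groups.
    audit = {
        "group_a": [1, 0, 1, 1, 0, 1, 1, 0],  # 62.5% labeled high risk
        "group_b": [0, 0, 1, 0, 0, 1, 0, 0],  # 25.0% labeled high risk
    }

    gap, rates = parity_gap(audit)
    print(rates, f"gap = {gap:.3f}")  # a gap of 0.375 would flag the model for review

A persistent gap of this size across audits would not by itself prove discrimination, but it is exactly the kind of signal that routine monitoring and maintenance, of the sort critics have found lacking in deployed systems such as COMPAS, is meant to surface.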
The social sciences, humanities, and arts can have a fundamental positive role in the design, structuring, and policies of AI/ML. A handful of universities recently have launched interdisciplinary centers to focus on AI, history, and society, including the AI Now Institute at NYU (2017) and the Institute for Human-Centered AI at Stanford (2019). The Charles Babbage Institute has made the interdisciplinary social study of AI and HCI a focus (with “Just Code” and beyond)—research, archives, events, oral histories, and publications. In computer science, ACM’s Conference on Fairness, Accountability, and Transparency (FAccT), launched in 2018, offers a great forum. Outside academe many are doing crucial research, policy, and activist work—a few examples: Data for Black Lives; Blacks in Technology; NCWIT; AnitaB.org; Algorithmic Justice League; Indigenous AI.Net; and the Algorithmic Bias Initiative (U. of Chicago).
The lack of U.S. regulation to date, discrimination and bias, corporate focus on and faith in tech fixes, inadequate transparency, corporate imperialism, and the overpowering of employees and competitors all have many historical antecedents inside and outside computing. History—the social and policy history of AI and HCI, as well as other labor, race, class, gender, and disability history—has much to offer. It can be a critical part of a broad toolkit to understand, contextualize, and combat power imbalances—to better ensure just code, and to ethically shape and structure the ghost in the machine that learns.
Acknowledgments: Deep thanks to Bill Aspray, Gerardo Con Diaz, Andy Russell, Loren Terveen, Honghong Tinn, and Amanda Wick for commenting on a prior draft.
Bibliography
Allen, Danielle and Jennifer S. Light. (2015). From Voice to Influence: Understanding Citizenship in a Digital Age. University of Chicago Press.
Alexander, Jennifer. (2008). The Mantra of Efficiency: From Waterwheel to Social Control. Johns Hopkins University Press.
Bardini, Thierry. (2000). Bootstrapping: Coevolution and the Origins of Personal Computing. Stanford University Press.
Benjamin, Ruha. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
Card, Stuart K., Thomas Moran, and Allen Newell (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates.
Card, Stuart K., Oral History (2020). Conducted by Jeffrey R. Yost, Los Altos Hills, CA, February 17, 2020. CBI, UMN.
Dick, Stephanie. (2020). “NYSIIS, and the Introduction of Modern Digital Computing to American Policing.” Just Code: Power, Inequality, and the Global Political Economy of IT (Symposium presentation: Oct. 23). [Hereafter “Just Code” Symposium]
D’Ignazio, Catherine and Lauren Klein. (2020). Data Feminism. MIT Press.
Dray, Susan, Oral History (2020). Conducted by Jeffrey R. Yost, CBI, Minneapolis, Minnesota, January 28, 2020. CBI, UMN.
Durkee, Alison. (2020). “Democratic Senators Demand Facebook Answer For Its White Supremacist Problem.” Forbes. June 30. (accessed online at Forbes.com).
Dryer, Theodora. (2020). “Streams of Data, Streams of Water: Encoding Water Policy and Environmental Racism.” “Just Code” Symposium.
Edmonds, Ernest. (1974). “A Process for the Development of Software for Non-Technical Users as an Adaptive System.” General Systems 19, 215-218.
Eubanks, Virginia. (2019). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Picador.
Galison, Peter. (1999) “Trading Zone: Coordinating Action and Belief.” In The Science Studies Reader, ed. by Mario Biagioli. Routledge. 137-160.
Grudin, Jonathan. (2017). From Tool to Partner: The Evolution in Human-Computer Interaction. Morgan and Claypool.
Hiltzik, Michael. (2009). Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age. HarperCollins.
Kahneman, Daniel. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Kidder, Tracy. (1981). The Soul of a New Machine. Little, Brown and Company.
Lampson, Butler, Oral History (2014). Conducted by Jeffrey R. Yost, Cambridge, Massachusetts, December 11, 2014. CBI, UMN.
Lauer, Josh and Kenneth Lipartito. (2020). “Infrastructures of Extraction: Surveillance Technologies in the Modern Economy.” “Just Code” Symposium.
Light, Jennifer S. (2005). From Warfare to Welfare: Defense Intellectuals and Urban Problems in Cold War America. University of Chicago Press.
McIlwain, Charlton. (2020). Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter. Oxford University Press.
Mendon-Plasek, Aaron. (2021). “Mechanized Significance and Machine Learning: Why It Became Thinkable and Preferable to Teach Machines to Judge the World.” In J. Roberge and M. Castelle, eds. The Cultural Life of Machine Learning. Palgrave Macmillan, 31-78.
Menking, Amanda and Jon Rosenberg. (2020). “WP:NOT, WP:NPOV, and Other Stories Wikipedia Tells Us: A Feminist Critique of Wikipedia's Epistemology.” Science, Technology, & Human Values, May, 1-25.
Metz, Cade and Daisuke Wakabayashi. (2020). “Google Researcher Says She was Fired Over Paper Highlighting Bias in AI.” New York Times, Dec. 2, 2020.
Norman, Don, Oral History. (2020). Conducted by Jeffrey R. Yost, La Jolla, California, February 12, 2020. CBI, UMN.
Noble, Safiya Umoja. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Petrick, Elizabeth. (2015). Making Computers Accessible: Disability Rights and Digital Technology. Johns Hopkins University Press.
Turner, Fred. (2010). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University of Chicago Press.
Vinsel, Lee and Andrew L. Russell. (2020). The Innovation Delusion: How our Obsession with the New has Disrupted the Work That Matters Most. Currency.
Yost, Jeffrey R. (2016). “The March of IDES: Early History of Intrusion Detection Expert Systems.” IEEE Annals of the History of Computing 38:4, 42-54.
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry. MIT Press.
Yost, Jeffrey R. (March 2021). “Of Mice and Mentalité: PARC Ways to Exploring HCI, AI, Augmentation and Symbiosis, and Categorization and Control.” Interfaces: Essays and Reviews in Computing and Culture Vol. 2, Charles Babbage Institute, University of Minnesota, 12-26.
About the author: Jeffrey R. Yost is CBI Director and HSTM Research Professor at the University of Minnesota. He has published six books (and dozens of articles), most recently Making IT Work: A History of the Computer Services Industry (MIT Press, 2017) and FastLane: Managing Science in the Internet World (Johns Hopkins U. Press, 2016) [co-authored with Thomas J. Misa]. He is a past EIC of IEEE Annals of the History of Computing, and current Series Co-Editor [with Gerard Alberts] of Springer’s History of Computing Book Series. He has been a principal investigator of a half dozen federally sponsored projects (NSF and DOE) on computing/software history totaling more than $2 million. He is Co-Editor [with Amanda Wick] of Interfaces: Essays and Reviews in Computing & Culture.
The Cloud, the Civil War, and the “War on Coal”
Paul E. Ceruzzi, National Air and Space Museum, Smithsonian Institution
Abstract: The term “The Cloud” has entered the lexicon of computer-speak along with “cyberspace,” “the Matrix,” “the ether,” and other terms suggesting the immateriality of networked computing. Cloud servers, which store vast amounts of data and software accessible via the Internet, are located around the globe. This essay argues that this “matrix” has an epicenter, namely the former rural village of Ashburn, Virginia. Ashburn’s significance is the result of several factors, including northern Virginia’s historic role in the creation of the Internet and its predecessor, the ARPANET. The Cloud servers located there also exist because of the availability of sources of electric power, including a grid of power lines connected to wind turbines and gas- and coal-fired plants located to its west—a “networking” of a different type but just as important.
In his recent book, Making IT Work, Jeffrey Yost quotes a line from Joni Mitchell’s famous song “Both Sides, Now” (from her album Clouds): “I really don’t know clouds at all.” He also quotes the Rolling Stones’ hit “[Hey, you,] Get Off of My Cloud.” Why should a business or government agency trust its valuable data to a third party whose cloud servers are little understood? No thank you, said the Rolling Stones; not until you can explain to me just what the Cloud is and where it is. Yost gives an excellent account of how cloud servers have come to the fore in current computing. Yet Joni Mitchell’s words still ring true. Do we really know what constitutes the “Cloud”?
A common definition of the Cloud is that of sets of high-capacity servers, scattered across the globe, using high-speed fiber to connect the data stored therein to computing installations. These servers supply data and programs to a range of users, from mission-critical business customers to teenagers sharing photos on their smartphones. What about that definition is cloud-like? Our imperfect understanding of the term is related to misunderstanding of similar terms also in common use. One is “cyberspace,” whose popularity is attributed to the science fiction author William Gibson, in his novel Neuromancer, published in 1984. Another is “the Matrix”: the title of a path-breaking book on networking by John Quarterman, published in 1990 at the dawn of the networked age. The term came into common use after the award-winning 1999 Warner Brothers film starring Keanu Reeves. (Quarterman was flattered that Hollywood used the term, but he is not sure whether the producers of the film knew of his book.) In the early 1970s, Robert Metcalfe, David Boggs, and colleagues at the Xerox Palo Alto Research Center developed a local area networking system they called “Ethernet,” a name suggesting the “luminiferous aether” once believed to carry light through the cosmos.
These terms suggest an entity divorced from physical objects—pure software independent of underlying hardware. They imply that one may dismiss the hardware component as a given, just as we assume that fresh, drinkable water comes out of the tap when we are thirsty. The residents of Flint, Michigan, know that a robust water and sewerage infrastructure is hardly a given, and Nathan Ensmenger has reminded us that the “Cloud” requires a large investment in hardware, including banks of disk drives, air conditioning, fiber connections to the Internet, and above all, a supply of electricity. Yet the perception persists that the Cloud, like cyberspace, is out there in the “ether.”
Most readers of this journal know about the physical infrastructure that sustains Ethernet, cyberspace, and the Cloud. I will go a step further: not only does the Cloud have a physical presence, it also has a specific location on the globe: Ashburn, Virginia.
A map of Northern Virginia prepared by the Union Army in 1862 shows the village of Farmwell, and nearby Farmwell Station on the Alexandria, Loudoun, and Hampshire Railroad. The town later changed its name to Ashburn, and it lies just to the north of Washington Dulles International Airport. In the early 2000s, as I was preparing my study of high technology in northern Virginia, Ashburn was still a farming community. By the year 2000 the old station area formed Ashburn’s modest center: a collection of buildings centered on a general store. The railroad had been abandoned in 1968 and was now the Washington & Old Dominion rail-trail, one of the most popular and heavily traveled rails-to-trails conversions in the country. Thirsty hikers and cyclists could get refreshment at the general store, which had also served neighboring farmers with equipment and supplies.
Cycling along the trail west of Route 28 in 2020, one saw a series of enormous low buildings, each larger than a football field, surrounded by a mad frenzy of construction, with heavy equipment trucks chewing up the local roads. Overhead was a tangle of high-tension electrical transmission towers, with large substations along the way distributing the power. The frenzy of construction suggested what it must have been like in Virginia City, Nevada, after the discovery and extraction of the Comstock Lode silver. The buildings themselves had few or no markings on them, but a Google search revealed that one of the main tenants was Equinix, a company that specializes in networking. The server tenants try to avoid publicity, but the local chambers of commerce, politicians, and real estate developers are proud to showcase the economic dynamo of the region. A piece on the local radio station WTOP on November 17, 2020, announced that “Equinix further expands its big Ashburn data center campus,” quoting a company spokesperson saying that “…its Ashburn campus is the densest interconnection hub in the United States.” An earlier WTOP broadcast, reporting on the activities of a local real estate developer, noted that “Northern Virginia remains the ‘King of the Cloud.’” In addition to Equinix, the report mentioned several other tenants, including Verizon and Amazon Web Services.
These news accounts are more than just hyperbole from local boosters. Other evidence indicates that, although cloud servers are scattered across the globe, Ashburn is indeed the navel of the Internet.
In my 2008 study of Tysons Corner, Virginia, I mentioned several factors that led to the rise of what I then called “Internet Alley.” One was the development of the ARPANET at the Pentagon, and later at a DARPA office on Wilson Blvd. in Rosslyn. Another was the rise of the proto-Internet company AOL, headquartered in Tysons Corner. Also in Tysons Corner was “MAE-East”—a network hub that carried a majority of Internet traffic in the network’s early days. The root servers of the dot-com and dot-org registries were once located in the region, with the “a” root server in Herndon, later moved to Loudoun County. The region thus had a skilled workforce of network-savvy electrical and computer engineers, plus local firms such as SAIC and Booz Allen that supported networking as it evolved from its early incarnations.
Around the year 2000, while many were relieved that the “Y2K” bug had little effect on mainframe computers, the dot-com frenzy collapsed. The AOL-Time Warner merger was a mistake. But there was an upside to the boom and bust. In the late 19th and early 20th centuries the nation experienced a similar boom and bust of railroad construction. Railroads went bankrupt and people lost fortunes, but the activity left behind a robust, if overbuilt, network of railroads that served the nation well during the mid and late 20th century. During the dot-com frenzy, small firms like Metropolitan Fiber dug up many of the roads and streets of Fairfax and Loudoun Counties and laid fiber-optic cables, which offered high-speed Internet connections. After the bust these became unused—“dark fiber,” as it was called. Here was the basis for establishing Cloud servers in Ashburn. By 2010 little land was available in Tysons Corner, Herndon, or Reston, but a little farther out along the W&OD rail-trail there was plenty.
That leaves the other critical factor in establishing Cloud servers—the availability of electric power. While some Cloud servers are located near sources of wind, solar, or hydroelectric power, such as in the Pacific Northwest, Northern Virginia has few of those resources. The nearest large-scale hydroelectric plant, at the Conowingo Dam, lies about 70 miles to the north, but its power primarily serves the Philadelphia region. (That plant was the focus of the classic work on electric power grids, Networks of Power, by Thomas Parke Hughes.) To answer the question of the sources of power for Ashburn, we return to the Civil War map and its depiction of the Alexandria, Loudoun, and Hampshire, later known as the Washington and Old Dominion Railroad.
The origins of that line go back to the 1840s, when freight, especially coal, from the western counties of Virginia was being diverted to Baltimore, Maryland, over the Baltimore and Ohio Railroad. In response, Virginians chartered a route west over the Blue Ridge to the mineral- and timber-rich areas of Hampshire County. (In 1866, Mineral County was carved out of Hampshire County in the new state of West Virginia.) The Civil War interrupted progress in construction, and after several challenges to its financial structure, the line was incorporated as the Washington and Old Dominion Railway Company in 1911. It never reached farther than the summit of the Blue Ridge, and the proposed route to the west would have had to cross rugged topography. The line could never have competed with the B&O’s water-level route. The shortened line soldiered on until finally being abandoned in 1968, making way for the rail-trail conversion. One interesting exception was a short spur in Alexandria, which carried coal to a power plant on the shore of the Potomac. That plant was decommissioned in 2014, thus ending the rail era of the Alexandria, Loudoun, and Hampshire.
In 1968, the rails-to-trails movement was in its infancy. Most of the freight once carried by rail was now moving by truck, and there was little room for rail-dependent industries to survive in a region of fast-growing residential suburban towns. There was every reason to assume that the right of way would revert to local landowners and be developed for commercial and residential use. That was the fate of the line west from Purcellville to the summit of the Blue Ridge at Snickers Gap. But the rest of the right of way was preserved. Shortly before abandonment, the Virginia Electric and Power Company entered into an agreement with the Virginia Highway Department to purchase most of the remaining right of way as a conduit for high-voltage power lines, which would supply electric power to Northern Virginia. The agreement was criticized at the time, but among its results was the preservation of the right of way, making way for the establishment of the W&OD rail-trail by the Northern Virginia Regional Park Authority. As mentioned above, the trail is very popular for hiking, cycling, and horseback riding. Most of its users do not mind the power lines overhead. Given the rapid growth of suburbia in Fairfax and Loudoun counties, the trail could not have retained the rural character common to many rail-trails in the country.
The power lines tell us how electric power gets to the Cloud servers. Where the power comes from is more complex. At the time Virginia Electric was negotiating for the right of way, the engineering firm Stone & Webster was building a power plant at Mount Storm, in Grant County, West Virginia. The plant was located in the heart of rich coal deposits. Upon its completion, the plant had a capacity of 1,600 megawatts. Beginning in the early 2000s, the plant’s output was supplemented by a set of wind turbines located along the Allegheny Front—the divide between waters that flow directly to the Atlantic and those that flow to the Ohio and Mississippi Rivers. These turbines supply an additional 264 megawatts of power.
The Alexandria, Loudoun, and Hampshire Railroad was never completed far enough west to carry coal from the western mountains. Its charter, however, has been fulfilled, as the right of way now carries energy in the form of electricity generated by coal and wind from those mountains. The railroad’s founders were not thinking of Cloud servers, but today’s Cloud is powered, at least in part, by coal.
Coal mining in West Virginia and western Maryland is in a precipitous decline. Within a few years it may vanish altogether. Those involved with the construction and management of data centers in Loudoun County have stated that those centers will reduce their dependency on coal to zero by the next decade. In addition to converting to natural gas, described below, Virginia is supporting the further development of wind turbines, increasingly located offshore as well as in the mountains. Data centers are also exploring the use of geothermal resources.
These efforts will help reverse disturbing trends of global climate change, but the decline of the coal industry has been devastating to the economies of western Virginia, western Maryland, and West Virginia, which are experiencing layoffs and unemployment among miners and railroad workers. The cause is not the so-called “war on coal,” allegedly waged by Washington politicians. The primary cause is the development of hydraulic fracturing, or “fracking,” of rock, which allows rapid unlocking of natural gas deposits in Appalachia. The technique does require labor, but not on the scale of traditional coal mining. And the gas is transported not by rail but by pipelines—buried under the ground and largely invisible. Natural gas burns much cleaner than coal. Fracking has allowed natural gas to supplant coal for most new power plants. It has also hastened the conversion of older, coal-fired plants to gas. An 800-megawatt plant in Dickerson, Maryland, across the Potomac from Loudoun County and another major supplier of energy to the region, converted from coal to gas at the end of 2020. A similar conversion has taken place at the Chalk Point, Maryland, plant. As of this writing, the Mount Storm plant remains coal-fired.
In 2017, the “Panda Stonewall” power plant came online. It is located south of Leesburg, a few miles west of Ashburn. The primary market for its 778-megawatt output is the cloud complex at Ashburn. In promotional literature, the plant’s owner, Panda Power Funds of Dallas, Texas, touts its clean-burning natural gas fuel. The gas is transmitted by pipeline from the Marcellus Shale deposits centered in western Pennsylvania. To handle this new source of power, new substations and overhead lines were built over and beside the W&OD trail from Leesburg to Ashburn.
Conclusion
The center of the Cloud is in Ashburn, Virginia. It runs on a variety of energy sources, including coal, wind, and Marcellus Shale gas. Cloud servers are indeed scattered across the globe, but in Ashburn one can observe first-hand the dramatic transformation of computing. The servers require electric power, the sources of which (wind, solar, hydro, coal, and gas) all have environmental impacts. In his study of the Cloud, Jeffrey Yost mentioned the two songs by Joni Mitchell and the Rolling Stones. To those I would add a third: the jazz album by the Czech bassist Miroslav Vitous, “Mountain in the Clouds.” The title suggests the serenity and ethereal nature of the cloud, but the music is quite different: a cacophony of clashing instruments, driven by a frenzied drummer and bass line, suggesting the frenzy of cloud construction in Northern Virginia.
Bibliography
Bechtel Corporation, “Virginia Power Plant is one of the nation’s cleanest.” https://www.bechtel.com/projects/stonewall-energy-center/. Accessed 12/10/2020.
Ceruzzi, Paul E. (2008). Internet Alley: High Technology in Tysons Corner, 1945-2005. Cambridge, MA: MIT Press.
Ensmenger, Nathan. (October 2018). “The Environmental History of Computing,” Technology & Culture, 59/4 Supplement, pp. S7-S33.
Equinix Corporation, “Equinix further expands its big Ashburn data center campus.” https://www.equinix.com/data-centers/americas-colocation/. Accessed 12/10/2020.
“Crushing it: The world is finally burning less coal. It now faces the challenge of using almost none at all,” The Economist, December 5, 2020, pp. 25-28.
Hughes, Thomas Parke. (1983). Networks of Power: Electrification in Western Society, 1880-1930. Baltimore: Johns Hopkins University Press.
National Public Radio, “Supreme Court Says Pipeline May Cross Underneath Appalachian Trail,” Broadcast June 15, 2020, 6:09 PM ET. https://www.npr.org/2020/06/15/877643195/supreme-court-says-pipeline-may-cross-underneath-appalachian-trail. Accessed 12/10/2020.
Vitous, Miroslav, “Mountain in the Clouds,” Atlantic Records, SD 1622, 1975. Hear a YouTube recording of the initial track: https://www.youtube.com/watch?v=zafIe4Aduus. Accessed 12/21/2020.
WTOP Radio, “Northern Virginia remains the ‘King of the Cloud.’” https://wtop.com/business-finance/2020/09/northern-virginia-remains-the-king-of-the-cloud/. Accessed 12/10/2020.
Williams, Ames W. (1984). Washington & Old Dominion Railroad, 1847-1968. Meridian Sun Press, p. 109.
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry. Cambridge, MA: MIT Press.
Paul E. Ceruzzi (January 2021). “The Cloud, the Civil War, and the ‘War on Coal.’” Interfaces: Essays and Reviews in Computing and Culture Vol. 2, Charles Babbage Institute, University of Minnesota, 1-11.
About the Author:
Paul Ceruzzi is Emeritus Curator of Aerospace Electronics at the Smithsonian Institution’s National Air and Space Museum. He is the author of several books on the history of computing and aerospace, most recently GPS (MIT Press, 2018). His book on high technology in Northern Virginia, Internet Alley, was published in 2008. He lives with his family in the Maryland suburbs of Washington, DC.