Interfaces publishes short essay articles and essay reviews connecting the history of computing/IT studies with contemporary social, cultural, political, economic, or environmental issues. It seeks to be an interface between disciplines, and between academics and broader audiences.
2021, Volume 2
Melissa G. Ocepek and William Aspray
Abstract: This essay introduces everyday information studies to historians of computing. This topic falls within the subdiscipline of information behavior, one of the main subject areas in information studies. We use our recent edited book, Deciding Where to Live (Rowman & Littlefield, 2021), as a means to illustrate the kinds of topics addressed and methods used in everyday information studies. We also point the reader to some other leading examples of scholarship in this field and to two books that present an overview of the study of information behavior.
This essay introduces everyday information studies to historians of computing. The story of this field of study and its history are too large to tell in detail here. This topic falls within the subdiscipline of information behavior, one of the main subject areas in information studies – a field that began to be studied between the two world wars and took off in the 1960s. The reader interested in information behavior studies more generally should examine two well-regarded reference works on this subject (Case and Given 2016; Fisher, Erdelez, and McKechnie 2005).
Information Study Approaches
The early research on information behavior focused on human behavior in structured information environments, such as when a person went to a library to seek information or interacted with a database. But, of course, there were other, less structured environments for finding information, such as through conversations with friends and family; consulting religious or civic leaders, or area specialists such as financial advisors; and through consumption of the media. With the coming of the Internet and portable information devices, one could seek information anywhere, anytime, on any subject profound or frivolous. Information seeking, consumption, and analysis became an increasingly everyday part of ordinary people's lives. The field expanded over time to include not only information needs, wants, and seeking, but also information avoidance and overload, and various kinds of affective as well as cognitive responses to information.
In fact, the everyday aspects of information were studied not only by information scholars but also by sociologists, communications scholars, and media scholars beginning as early as the 1930s. These studies about the roles information plays in one's everyday life draw upon theorizing by such scholars as Michel de Certeau (1984), Henri Lefebvre (2008/1947), Dorothy E. Smith (1987), and Carolyn Steedman (1987). For an overview of the relevant theorizing, see Highmore (2001), Bakardjieva (2005, Chs. 1 and 2), and Haythornthwaite and Wellman (2002). Highmore also includes writing selections from many of these theorists. To make this introduction to everyday information studies more manageable, we focus here on our own work and primarily on our recent edited book, Deciding Where to Live (Ocepek and Aspray 2021). For a sample of other everyday information studies, see for example the work of Denise Agosto (with Sandra Hughes-Hassell, 2005), Karen Fisher (née Pettigrew, 1999), Tim Gorichanaz (2020), Jenna Hartel (2003), Pam McKenzie (2003), and Reijo Savolainen (2008).
Our personal involvement with research on everyday information studies began with Everyday Information (Aspray and Hayes 2011), which injected historical scholarship into studies of everyday information. In a long study of "100 Years of Car Buying," one of us (Aspray, pp. 9-70 in Aspray and Hayes 2011) introduced a historical model, showing how endogenous forces (e.g., the dealership model for selling automobiles, or the introduction of foreign automobiles into the American market) and exogenous forces (e.g., war, or women entering the workforce) shaped the information questions that people were interested in and sometimes even the information sources they consulted. This volume, presenting a historical approach to everyday information behavior, included contributions by the noted historians of computing James Cortada, Nathan Ensmenger, and Jeffrey Yost.
Our collaboration began when the two of us, together with our colleague George Royer (today a game designer in Austin, TX), wrote two books about food from the perspective of information studies. We did not follow the typical approaches of food scholars, studying such topics as food pathways or food security, but instead applied the lens of information studies to this topic of wide popular interest. In the two short books that we produced, Food in the Internet Age (Aspray, Royer, and Ocepek 2013) and Formal and Informal Approaches to Food Policy (Aspray, Royer, and Ocepek 2014), we discussed a wide variety of topics, such as: the online grocer Webvan (the largest loser of venture capital in the dot-com crash of 2001); the harms that Yelp, OpenTable, and Groupon created for small brick-and-mortar businesses and customers; the different ways in which the Internet has been used to represent and comment upon food and food organizations; the regulation of advertising of sweetened cereals to children; and the strategies of informal, bully-pulpit persuasion compared to formal regulation of food and nutrition – carried out through a pair of studies: one of Eleanor and Franklin Roosevelt, and the other of Michelle and Barack Obama.
This work on food, and some of our subsequent research, falls into the field of information studies. We carefully use that term instead of information science because our work is more informed by humanities (critical theory, cultural studies) and social science disciplines (sociology, psychology, organizational and management studies, work and labor studies) than by computer science, natural science, and engineering disciplines. We both have worked in information schools, part of a movement toward the interdisciplinary study of computing and information that has emerged in the past quarter century out of (1) library schools becoming more technical, (2) computer science departments becoming more interested in people, human institutions, and social impact, and (3) newly created interdisciplinary enterprises. These information schools offer a big tent for many different kinds of methods, theories, and approaches. The breadth of these studies can be seen in the wide range of papers delivered at the annual meeting of ASIST (for example, https://www.conftool.org/asist2020/index.php?page=browseSessions&path=adminSessions) and the annual "iConference" (https://ischools.org/Program). Also see the types of scholarship presented at the specialty biennial conference on "Information Seeking in Context" (ISIC, e.g., http://www.isic2018.com).
So far, there has been little cross-usage of methods or approaches among people studying everyday information (for example, a traditional information studies scholar of information literacy incorporating research from data science or ubiquitous computing), but this cross-fertilization is just beginning to happen. In our own research, we do the next best thing through edited volumes whose chapters use a variety of approaches, so as to gain multiple perspectives on an issue. This is true, for example, of our book on where to live (discussed in detail below) and the book on information issues in aging (mentioned below).
Deciding Where to Live
In our recent edited book, Deciding Where to Live, we are continuing our study of everyday phenomena through an information lens. We describe this book in some detail here to give our readers a better sense of the ways in which information studies scholars operate. All of the chapters in this book were written by people associated with leading information schools in the United States (Colorado, Illinois, Indiana, Syracuse, Texas, Washington). As with our food studies, we have taken multiple perspectives – all drawn from information studies – to investigate various aspects of housing. These studies, for example, employ work studies and business history; information, culture, and affective aspects of information issues; community studies; information behavior; and privacy.
Information scholars are often interested in the results of scholarship by labor, management, and organization scholars; and sometimes they adopt their theories and methods. These scholars are interested in such issues as the growing number of information occupations, the increased percentage of a person's job tasks devoted to information activities, and the ways in which tools of communication and information have changed firm strategies and industry structures. Everyday information scholars, too, are interested in these results, but primarily for what they have to say about the everyday or work lives of individuals.
The work of real estate firms, realtors, and home buyers and sellers has been profoundly changed by the massive adoption of information and communication technologies in recent years. Let us consider two chapters from Deciding Where to Live, by James Cortada and Steve Sawyer. One major change in the 21st century has been the rise of websites, such as Zillow and Realtor.com, that enable individuals to access detailed information about housing without having to rely upon a realtor or the Multiple Listing Service. Using a business history approach, Cortada shows how these developments have altered the structure of the real estate industry and the behavior of individual firms, made buyers and sellers more informed shoppers, lowered commissions on house sales, and introduced new business models, such as Zillow buying homes itself and not just providing information about them. Some people believe that the rise of companies such as Zillow means that the information imbalance between realtors and buyers is largely a thing of the past, that intermediation by realtors is largely over, and that the need for realtors is greatly diminished – and that we will see a radical shrinking of this occupation in the same way that the numbers of telephone operators and travel agents have plummeted. (See Yost 2008.)
Sawyer argues, however, that the work of the real estate agent is evolving rather than being eliminated. As he states his argument: “real estate agents have been able to maintain, if not further secure, their role as market intermediaries because they have shifted their attention from being information custodians to being information brokers: from providing access to explaining.” (Sawyer 2021, p. 35) As he notes, the buying of a house is a complex process, involving many different steps and many different participants (selecting the neighborhood and the particular house, inspecting the property, checking on title and transferring it, obtaining financing, remediating physical deficiencies in the property, etc.). One might say that it takes a village to sell a house in that village; and an important role of the real estate agent is to inform the buyers of the many steps in the process and use their network of specialists to help the buyers to carry out each step in a professional, timely, and cost-effective way.
How do these changes affect the everyday life of the individual? There are more than 2 million active real estate agents in the United States. Their work has changed profoundly as they adopt real-estate-oriented websites and apps. Even though most real estate agents work through local real estate firms, they act largely as independent small businesspeople who carry out much of their work from their cars and homes as much as from their offices. So they rely on websites and apps not only for information about individual homes, but also for lead generation, comparative market analysis, customer relationship management, tracking business expenses such as mileage, access to virtual keys, video editing of listings, mounting marketing campaigns, and a multitude of other business functions. Buyers and sellers, for their part, can use Zillow or its online competitors to become informed before ever meeting with a real estate agent: learning how much their current home is worth, figuring out how large a mortgage they can qualify for, checking out multiple potential neighborhoods not only for housing prices but also for quality of schools and crime rates, examining photos and details of numerous candidate houses, and estimating the total cost of home ownership. Interestingly, many individuals who are not looking to buy or sell a home in the near term are regular users of Zillow. It is a way to spy on neighbors, try out possible selves, plan for one's future, or just have a good time. In our introductory chapter, we address these issues.
Another chapter, by Philip Doty, reflects upon the American dream of the smart home. Drawing upon scholarship on surveillance capitalism (Shoshana Zuboff 2019), feminist scholarship on privacy (Anita Allen 1988; Patricia Boling 1996; Catharine MacKinnon 1987), gender studies in the history of science and technology (Ruth Cowan 1983), the geography of surveillance (Liisa Mäkinen 2016), and other scholarly approaches, Doty reflects on the rhetorical claims of technological enthusiasm related to smart cities and smart homes, and discusses some of the privacy and, in particular, surveillance issues that arise in smart homes.
Information is not merely used by people in cognitive ways; it can also bring joy, sadness, anxiety, and an array of other emotions. Deciding where to live can be an exciting, fraught, and stressful experience for many people. When one is searching for a home in a particularly competitive housing market, the addition of time pressures can amp up the emotional toll of house hunting and discourage even the most excited home buyer. In her chapter, Carol Landry recounts how the high-stakes decision making of home buying becomes even more complicated when time pressure and emotions come into play. Her research is based on an empirical study of home buyers in the highly competitive Seattle real estate market. The chapter describes the experience of several home buyers dealing with bidding wars that required quick decision making and many failed attempts at securing a home. The stories shared in this chapter highlight the despair and heartbreak that made continuing the home search difficult; participants described going from enthusiastic information seekers to worn-out information avoiders. The chapter shows how internal and external factors can impact the home buying process and the information behaviors associated with it.
A competitive real estate market is but one of myriad experiences that can further complicate the process of deciding where to live. There are times in most people's lives when the unique attributes of a life stage play an outsized role in decision making around housing; one of these times is retirement. Aspray's chapter examines how retirement complicates the lives of individuals lucky enough to be able to retire, introducing new considerations that shape their decision making. Retirement adds new complexity to deciding where to live because the stability of work that binds many people's lives is no longer there, creating exciting new opportunities as well as constraints. Different elements shape questions around where to live for retired people, including the emotional ties to their current homes, the financial realities of retirement income, and the physical limitations of aging.
During times of societal uncertainty, a home can be a comforting shelter that keeps the external world at bay, even when much of the uncertainty stems from the housing market itself, as it did during the housing crisis of 2007 and the recession that followed. As more and more people lost their homes to foreclosure or struggled to pay their mortgages, home and garden entertainment media provided a pleasant, comfortable escape for millions of Americans. Ocepek, in her chapter on home and garden sources, found that, throughout the housing crisis, recession, and recovery, home and garden sources grew or maintained their popularity with viewers and readers – likely due to the social phenomenon of cocooning, or taking shelter in one's space when the world outside becomes uncertain and scary. Both home and garden magazines and HGTV made changes to their content to represent the new home realities of many of their readers and viewers, but they also largely stayed the same, presenting comforting content about making whatever space you call home the most comfortable.
The financial hardships of the housing crisis, recession, and recovery were not experienced by all Americans in equal measure. Several authors in the book present examples where housing policies, economic conditions, and social unrest disproportionately affected marginalized communities throughout the United States. One is Pintar's chapter about Milwaukee, mentioned below. Although some of the legal frameworks built to segregate cities and communities throughout the country have changed, the experience of deciding where to live for Black and African Americans adds additional layers of complexity to the already complicated process. Drawing on critical race theory, Jamillah Gabriel delineates how Black and African American house searchers (renters and buyers) create information seeking and search strategies to overcome the historic and contemporary discriminatory policies and practices of housing segregation. The chapter analyzes specialized information sources that provide useful information to help this group of house searchers find safer communities where they have the greatest chance to prosper. These sources include lists of the best and worst places for African American and Black individuals and families to live. The lists draw on research that compares communities based on schools, employment, entertainment, cost of living, housing market, quality of life, and diversity. Drawing on historic and contemporary accounts, the analysis provided in this chapter highlights that "the housing industry can be a field of land mines for African Americans in search of home" (Gabriel 2021, p. 274).
It is often said that information and information tools are neither inherently good nor bad, but that they can be used for both good and bad purposes. Two chapters in the book illustrate this point. In a study of the city of Milwaukee, Judith Pintar shows how HOLC maps, which were created to assess the stability of neighborhoods, were used to reinforce the racist practice of redlining. In another chapter, Hannah Weber, Vaughan Nagy, Janghee Cho, and William Aspray show how information tools were used by the city of Arvada, Colorado and various groups (such as builders, realtors, parents, activists, and the town council) to improve the city's quality of life in the face of rapid growth and its attendant issues, such as traffic problems, rising housing prices, the need to build on polluted land, and the desire to protect the traditional look and feel of this small town. A third chapter, by David Hopping, shows how an experiment in Illinois repurposed military housing for non-military purposes for the social good. His empirical study is seen through the lens of the theoretical constructs of heterotopia (Foucault 1970), boundary objects (Star and Griesemer 1989), and pattern language (Alexander 1977).
Both of us are continuing to pursue work on everyday information issues. One of us (Aspray) is continuing this work through an edited book currently in progress on information issues related to older Americans (Aspray, forthcoming in 2022). This book ranges from traditional Library and Information Science approaches concerning health information literacy on insurance for older Americans, the variety of information provided by AARP and its competitors, and the use of information and communication technologies to improve life in elderly communities; to more technologically oriented studies on ubiquitous computing, human-computer interaction, and the Internet of Things for older people. Meanwhile, Ocepek is building on her doctoral dissertation (Ocepek 2016), which examined the everyday activity of grocery shopping from both social science and cultural approaches. Her new study examines what has happened to grocery shopping during the pandemic.
We are pleased to see the broadening in mission of the Babbage Institute to consider not only the history of computing but also the history and cultural study of information. For example, many scholars (including some computer historians) since 2016 have been studying misinformation. (See, for example, Cortada and Aspray 2019; Aspray and Cortada 2019.) This study of everyday information is another way in which the Babbage Institute can carry out its broadened mission today.
In particular, there are a few lessons for computer historians that can be drawn from the scholarship we have discussed here, although many readers of this journal may already be familiar with and practicing them:
- One can study information as well as information technology. On the history of information, see for example Blair (2010), Headrick (2000), Cortada (2016), and Blair et al. (2021). For a review of this scholarship, see Aspray (2015).
- One can study everyday uses of information and information technology, even if they may be regarded by some as quotidian – expensive, complex, socially critical systems are not the only kinds of topics involving information technology that are worth studying.
- This past year has taught all of us how an exogenous force, the COVID-19 pandemic, can quickly and radically reshape our everyday lives. In the opening chapter of our book, we briefly discuss the earliest changes the pandemic brought to real estate. We are also seeing the grocery industry as well as the millions of consumers learning, adapting, and changing their information behaviors around safely acquiring food.
- In order to study both historical and contemporary issues about information and information technology, one can blend historical methods with other methods from computer science (e.g., human-computer interaction, data science), social science (qualitative and quantitative approaches from sociology, psychology, economics, and geography), applied social science (labor studies, management and organization studies), and the humanities disciplines (cultural studies, critical theory).
These are exciting times for the historians of computing and information!
Agosto, Denise E. and Sandra Hughes-Hassell. (2005). "People, Places, and Questions: An Investigation of the Everyday Life Information-Seeking Behaviors of Urban Young Adults." Library & Information Science Research, vol. 27, no. 2, pp. 141-163.
Alexander, Christopher. (1977). A Pattern Language. Oxford University Press.
Allen, Anita L. (1988). Uneasy Access: Privacy for Women in a Free Society. Rowman & Littlefield.
Aspray, William. (2015). "The Many Histories of Information." Information & Culture, vol. 50, no. 1, pp. 1-23.
Aspray, William. (forthcoming 2022). Information Issues for Older Americans. Rowman & Littlefield.
Aspray, William and James Cortada. (2019). From Urban Legends to Political Fact-Checking. Springer.
Aspray, William and Barbara M. Hayes. (2011). Everyday Information. MIT Press.
Aspray, William, George W. Royer, and Melissa G. Ocepek. (2013). Food in the Internet Age. Springer.
Aspray, William, George W. Royer, and Melissa G. Ocepek. (2014). Formal and Informal Approaches to Food Policy. Springer.
Bakardjieva, Maria. (2005). Internet Society: The Internet in Everyday Life. Sage.
Blair, Ann. (2010). Too Much to Know. Yale University Press.
Blair, Ann, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton, eds. (2021). Information: A Historical Companion. Princeton University Press.
Boling, Patricia. (1996). Privacy and the Politics of Intimate Life. Cornell University Press.
Case, Donald O. and Lisa M. Given. (2016). Looking for Information. 4th ed. Emerald.
Cortada, James and William Aspray. (2019). Fake News Nation. Rowman & Littlefield.
Cowan, Ruth Schwartz. (1983). More Work for Mother. Basic Books.
de Certeau, Michel. (1984). The Practice of Everyday Life. Translated by Steven F. Rendall. University of California Press.
Fisher, Karen E., Sanda Erdelez, and Lynne McKechnie, eds. (2005). Theories of Information Behavior. Information Today.
Foucault, Michel. (1970). The Order of Things. Routledge.
Gorichanaz, Tim. (2020). Information Experience in Theory and Design. Emerald.
Hartel, Jenna. (2003). "The Serious Leisure Frontier in Library and Information Science: Hobby Domains." Knowledge Organization, vol. 30, no. 3-4, pp. 228-238.
Haythornthwaite, Caroline and Barry Wellman, eds. (2002). The Internet in Everyday Life. Wiley-Blackwell.
Headrick, Daniel. (2000). When Information Came of Age. Oxford University Press.
Highmore, Ben, ed. (2001). The Everyday Life Reader. Routledge.
Lefebvre, Henri. (2008/1947). Critique of Everyday Life, vol. 1, 2nd ed. Translated by John Moore. Verso.
MacKinnon, Catharine A. (1987). Feminism Unmodified. Harvard University Press.
Mäkinen, Liisa A. (2016). "Surveillance On/Off: Examining Home Surveillance Systems from the User's Perspective." Surveillance & Society, vol. 14.
McKenzie, Pamela J. (2003). "A Model of Information Practices in Accounts of Everyday‐Life Information Seeking." Journal of Documentation, vol. 59, no. 1, pp. 19-40.
Ocepek, Melissa G. (2016). "Everyday Shopping: An Exploration of the Information Behaviors of the Grocery Shoppers." Ph.D. dissertation, School of Information, University of Texas at Austin.
Pettigrew, Karen E. (1999). "Waiting for Chiropody: Contextual Results from an Ethnographic Study of the Information Behaviour Among Attendees at Community Clinics." Information Processing & Management, vol. 35, no. 6, pp. 801-817.
Ocepek, Melissa G. and William Aspray, eds. (2021). Deciding Where to Live. Rowman & Littlefield.
Savolainen, Reijo. (2008). Everyday Information Practices: A Social Phenomenological Perspective. Scarecrow Press.
Smith, Dorothy E. (1987). The Everyday World as Problematic: A Feminist Sociology. Northeastern University Press.
Star, Susan Leigh and James R. Griesemer. (1989). "Institutional Ecology, 'Translations,' and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39." Social Studies of Science, vol. 19, no. 3, pp. 387-420.
Steedman, Carolyn. (1987). Landscape for a Good Woman: A Story of Two Lives. Rutgers University Press.
Yost, Jeffrey R. (2008). “Internet Challenges for Nonmedia Industries, Firms, and Workers.” pp. 315-350 in William Aspray and Paul Ceruzzi, eds., The Internet and American Business. MIT Press.
Zuboff, Shoshana. (2019). The Age of Surveillance Capitalism. PublicAffairs.
Aspray, William and Melissa G. Ocepek. (April 2021). "Everyday Information Studies: The Case of Deciding Where to Live." Interfaces: Essays and Reviews in Computing and Culture, vol. 2, Charles Babbage Institute, University of Minnesota, pp. 27-37.
About the authors:
Melissa G. Ocepek is an Assistant Professor at the University of Illinois Urbana-Champaign in the School of Information Sciences. Her research draws on ethnographic methods and institutional ethnography to explore how individuals use information in their everyday lives. Her research interests include everyday information behavior, critical theory, and food. Recently, she co-edited Deciding Where to Live (Rowman & Littlefield, 2021) with William Aspray. Previously she published two books that address the intersection of food, information, and culture: Food in the Internet Age and Formal and Informal Approaches to Food Policy (both with William Aspray and George Royer). Dr. Ocepek received her Ph.D. at the University of Texas at Austin in the School of Information.
William Aspray is Senior Research Fellow at CBI. He formerly taught in the information schools at Indiana, Texas, and Colorado; and served as a senior administrator at CBI, the IEEE History Center, and Computing Research Association. He is the co-editor with Melissa Ocepek of Deciding Where to Live (Rowman & Littlefield, 2021). Other recent publications include Computing and the National Science Foundation (ACM Books, 2019, with Peter Freeman and W. Richards Adrion); and Fake News Nation and From Urban Legends to Political Fact-Checking (both with James Cortada in 2019, published by Rowman & Littlefield and Springer, respectively).
Of Mice and Mentalité: PARC Ways to Exploring HCI, AI, Augmentation and Symbiosis, and Categorization and Control
Jeffrey R. Yost, Charles Babbage Institute, University of Minnesota
Abstract: This think piece essay comparatively explores history and mindsets in human-computer interaction (HCI) and artificial intelligence (AI)/machine learning (ML). It draws on oral history, archival, and other research to reflect on the institutional, cultural, and intellectual history of HCI (especially the Card, Moran, and Newell team at Xerox PARC) and AI. It posits that the HCI mindset (focused on augmentation and human-machine symbiosis, as well as iterative maintenance) could be a useful framing to rethink dominant design and operational paradigms in AI/ML that commonly spawn, reinforce, and accelerate algorithmic biases and societal inequality.
This essay briefly recounts the 1982 founding of a professional organization for the field of Human-Computer Interaction (HCI) before reflecting on the two decades prior in interactive computing—HCI's prehistory/early history—and its trajectory since. It comparatively explores history and mindsets in HCI and artificial intelligence (AI). For both HCI and AI, "knowing users" is a common target, but also a point of divergent departure.
For AI—especially large-scale, deployed systems in defense, search, and social networking—knowing users tends to involve surveillance, data collection, and analytics to categorize and control in the service of capital and power. Even when aims are purer, algorithmic biases frequently extend from societal biases. Machines can be programmed to discriminate or learn it from data and data practices.
For HCI—from idealistic 1960s beginnings through 1980s professionalization and beyond—augmenting users and human-machine symbiosis have been its core. While an HCI-type mindset offers no magic bullet for AI's ills, this essay posits that it can be a useful framing, a reminder toward proper maintenance, stewardship, and structuring of data, design, code (software), and codes (legal, policy, and cultural). HCI systems, of course, can be ill designed, perform in unforeseen ways, or be misapplied by users, but this likely is less common and certainly of lesser scale and impact relative to AI. Historians and sociologists must research the vast topics of AI and HCI more fully in many contexts and settings.
HCI and Solidifying the Spirit of Gaithersburg
In mid-March 1982, ITT Programming Technology Center's Bill Curtis and the University of Maryland's Ben Shneiderman held the first "Human Factors in Computing Systems" conference in Gaithersburg, Maryland. The inspiring event far exceeded the organizers' expectations, attracting more than 900 attendees. It was the pivotal leap forward in professionalizing HCI.
Rich content filled the three-day program, while impactful organizational work occurred at an evening, small-group side meeting. At the latter, Shneiderman, Curtis, UCSD's Don Norman, Honeywell's Susan Dray, Northwestern's Loraine Borman, Xerox PARC's (Palo Alto Research Center) Stuart Card and Tom Moran, and others strategized about HCI's future and the possibility of forming an association within a parent organization. Borman, an information retrieval specialist in a leadership role at ACM SIGSOC (Social and Behavioral Computing), and Shneiderman, a computer scientist, favored the Association for Computing Machinery (ACM). Insightfully seeing an expedient workaround, Borman proposed that SIGSOC transform itself—new name, new mission—bypassing the need for approval of a new SIG.
Cognitive scientist Don Norman questioned whether ACM should be the home, believing computer science (CS) might dominate. After debate, Shneiderman and Borman's idea prevailed. Dray recalls that the sentiment was "we can't let the spirit of Gaithersburg die," and for most, SIGSOC's metamorphosis seemed a good strategy (Dray 2020). Borman orchestrated transforming SIGSOC into SIGCHI (Computer-Human Interaction). The CHI tail essentially became the dog (SOC's shrinking base mainly fit under HCI's umbrella). Interestingly, "Computer" comes first in the acronym, but likely just to achieve a pronounceable word in the ACM SIG style, as "HCI" appeared widely in early papers at CHI (SIGCHI's annual conference).
Norman's concerns proved prescient. SIGCHI grew steadily, reaching over 2,000 attendees by the 1990 Seattle CHI, but in its first decade it principally furthered CS research and researchers. Scholarly standards rose, acceptance rates fell, and some practitioners felt crowded out. In 1991, practitioners formed their own society, the Usability Professionals' Association (now the User Experience Professionals Association, UXPA). In the 1990s and beyond, SIGCHI blossomed into an organization increasingly diverse in its academic disciplines.
As with all fields and subfields, HCI has a prehistory, or an earlier, less organizationally defined history (for HCI, the 1960s and 1970s). SIGCHI's origin lay in the confluence of past work in human factors; university "centers of excellence" in interactive computing created through 1960s Advanced Research Projects Agency (ARPA) Information Processing Techniques Office (IPTO) support; two particularly impactful laboratories (PARC and SRI's ARC); the Systems Group artists in the UK; and the promise of Graphical User Interface (GUI) personal computers (PCs).
The nonprofit SRI's Augmentation Research Center (ARC) and Xerox's PARC were at the forefront of GUI and computer mouse developments in the 1970s and 1980s. Neither the GUI nor mouse R&D was secret at PARC; in the 1970s, many visitors saw Alto demos, including, in 1979, Steve Jobs and an Apple Computer team. In 1980 Apple hired away PARC's Larry Tesler and others. Jobs launched the Apple Lisa effort (completed in 1983, priced at $10,000), which, like the even more expensive Xerox Star (1981), possessed a GUI and mouse. The 1984 Apple Macintosh, retailing at $2,500, initiated an early mass market for GUI personal computers—inspiring imitators, most notably Microsoft Windows 2.0 in 1987.
In early 2020, I conducted in-person oral history interviews with three of HCI's foremost intellectual and organizational pioneers—the pilot for a continuing ACM/CBI project. This included UCSD Professor Don Norman (SIGCHI Lifetime Research Awardee; Benjamin Franklin Medalist), Xerox PARC Scientist and Stanford Professor Stuart Card (SIGCHI Lifetime Research Awardee; National Academy of Engineering), and Dr. Susan Dray (SIGCHI Lifetime Practice Awardee; UXPA Lifetime Achievement Awardee).
Don Norman is well known both within and outside CS—a renown extending from his 1988 book The Psychology of Everyday Things (POET), re-released as the wide-selling The Design of Everyday Things. A student of Duncan Luce (University of Pennsylvania), he was among the first doctorates in mathematical psychology. Early in his career, he joined the UCSD Psychology Department as an associate professor. After stints at Apple and Hewlett-Packard, and at Northwestern, he returned to lead the UCSD Design Laboratory. Norman helped take design from its hallowed ground of aesthetics to establish it in science, and greatly advanced the understanding and practice of usability engineering.
Norman stressed to me that there is one scientist so consistently insightful that he never misses his talks at events they both attend: PARC's Stuart Card. Card was the top doctoral student of Carnegie Mellon Professor of Cognitive Psychology and Computer Science Allen Newell. While these two interviews were in California, my interview with Dr. Susan Dray was in Minneapolis, with the scientist who pioneered the first corporate usability laboratory outside the computer industry (IBM and DEC had ones), at American Express Financial Advisors (AEFA).
Dray took a different path after her doctorate in psychology from UCLA, working in human factors on classified Honeywell Department of Defense (DoD) projects. In the early 1980s, Honeywell, a pioneering firm in control systems, computers, and defense contracting, had a problem with ill-adapted computing for clerical staff in its headquarters, which Dray evaluated. This became path defining for her career, toward computer usability. After pioneering HCI work at Honeywell, Dray left for American Express, and later became a successful and impactful HCI consultant and entrepreneur. She applied observation, ethnographic interviewing, and the science of design to improve interaction, processes, and human-machine symbiosis in cultures globally, from the U.S., South Africa, Egypt, and Jordan to India, Panama, and France.
Earlier, in the late 1980s at American Express, Dray was seeking funds for a usability lab, and she creatively engaged in surreptitious user research. She bought a "carton" of Norman's POET, had copies delivered to all AEFA senior executives on the top (29th) floor, and rode up and down the elevator starting at 6 am for a couple of hours each morning for weeks, listening to conversations concerning this mysteriously distributed book on the science of design. Well informed, she pitched successfully, gaining approval for her usability lab.
This essay is informed by the Norman, Card, and Dray oral histories; another HCI interview I recently conducted, with artist Dr. Ernest Edmonds; my prior interview with Turing Awardee Butler Lampson of Alto fame; preparation for these five interviews; and AI and HCI research at the CBI, MIT, and Stanford University archives.
For AI and HCI, Is There a Season?
Microsoft Research Senior Scientist Jonathan Grudin—in his valuable From Tool to Partner (2017) on HCI's history—includes a provocative argument that HCI thrives during AI Winters and suffers during AI's other seasons. The usefulness of the widespread Winter metaphor is debatable, as it is based on changing funding levels at elite schools (Mendon-Plasek 2021, p. 55), but Grudin's larger point, that only one of the two fields thrives at a time, hints at a larger truth: HCI and AI have major differences. The fields overlap, with some scientists and some common work, but have distinct mindsets. Ironically, AI, once believed to be long on promises and short on deliveries (the rationalized basis for AI Winters), is now delivering more strongly, and likely more harmfully, than ever, given algorithmic and data biases in far-reaching corporate and government systems.
Learning How Machines Learn Bias
More and more of our devices are "smart," a distracting euphemism obscuring how AI (in ever more interconnected sensor/IoT/cloud/analytics systems) reinforces and extends biases based on race, ethnicity, gender, sexuality, and disability. Recent interdisciplinary scholarship is exposing the roots of discriminatory code (algorithms/software) and codes (laws, policy, culture), including deeply insightful keynotes at the Charles Babbage Institute's (CBI) "Just Code" Symposium (a major virtual event with 345 attendees in October 2020) by Stephanie Dick, Ya-Wen Lei, Kenneth Lipartito, Josh Lauer, and Theodora Dryer. Their work contributes to a conversation also extended in important scholarship by Ruha Benjamin, Safiya Noble, Matt Jones, Charlton McIlwain, Danielle Allen, Jennifer Light (MIT; and CBI Sr. Research Fellow), Mar Hicks, Virginia Eubanks, Lauren Klein, Catherine D'Ignazio, Amanda Menking, Aaron Mendon-Plasek (Columbia; and current CBI Tomash Fellow), and others.
AI did not merely evolve from a benevolent past to a malevolent present. Rather, it has been used for a range of different purposes at different times. The geometric expansion of the number of transistors on chips—the partly self-fulfilling trajectory of Moore's Law—enabled computers and AI to become increasingly powerful and pervasive. Jennifer Light's insightful scholarship on the RAND Corporation's 1950s and 1960s operations research, systems engineering, and AI, created in the defense community and later misapplied to social welfare, counters notions of an early benevolent age. Even if chess is the drosophila of AI (a phrase of John McCarthy's from the 1990s), its six-decade history is one of consequential games, power contests. Work in computer rooms in the Pentagon's basement and at RAND harmfully escalated Cold War policies, as DoD and its contractors simulated and supported notions of the U.S. rapidly "winning" the Vietnam War; earlier, C-E-I-R (founded by ex-RAND scientists) used input-output-economics algorithmic systems to determine optimal bomb targets to decimate the Soviet Union industrially (Yost 2017).
What helped pull AI out of its first long (1970s) Winter were successes and momentum with expert systems—the pioneering work of Turing Awardee and Stanford AI scientist Edward Feigenbaum and molecular biologist and Nobel Laureate Joshua Lederberg on late-1960s Dendral, to advance organic chemistry, and Feigenbaum and others' early-1970s MYCIN in medical diagnostics and therapeutics. These AI scientific triumphs stood out and lent momentum to expert systems, as did fears of Japan's Fifth Generation (an early-1980s government and industry partnership in AI/systems). In the 1980s, elite US CS departments again received strong federal support for AI. Work in expert systems in science, medicine, warfare, and computer intrusion detection abounded (Yost 2016).
Some AI systems are born biased; others learn it—from algorithmic tweaks to expert system inference engines to biased data. Algorithmic bias is just one of the many problematic byproducts of valuing innovation over maintenance (Vinsel and Russell 2020, Yost 2017).
Human Factors and Ergonomics
The prehistory and early history of human-machine interaction dates back many decades, to the control of workers and soldiers to maximize efficiency. The Human Factors Engineering Society, spawned in the late 1950s, grew out of late interwar-period organizational work of the Southern California aerospace industry. In the first half of the 20th century, human factors had meaningful roots in the scientific management thought, writings, and consulting of Frederick Winslow Taylor. This tradition defined the worker as an interchangeable part, a cog within the forces of production to efficiently serve capital. At Taylorist-inspired and -organized factories, management oppressed laborers, and human factors has a mixed record in its targets, ethics, and outcomes. However, at HCI's organizational start in the early 1980s, the mantra was not merely efficiency; it was the frequently uttered "know the user." This, importantly, was a setting of personal computing and GUI idealism, a trajectory insightfully explored by Stanford's Fred Turner in From Counterculture to Cyberculture.
We’re on a Road to Intertwingularity, Come on Inside
Years before the National Science Foundation (NSF) took the baton as the leading federal funder of basic CS research at universities, ARPA's IPTO, following the vision of its 1962 founding director J.C.R. Licklider, changed the face of computing toward interaction. Well-known philosopher and sociologist Ted Nelson, a significant HCI contributor of the 1960s and 1970s, creatively coined the term "intertwingularity" for symbiosis and all being intertwined or connected (networking; text, through his term/concept "hypertext"; human users with interactive computing). It can aptly describe the multifaceted HCI work of 1960s IPTO-funded SRI ARC and 1970s Xerox PARC.
The 1970-enacted Mansfield Amendment required a direct and defined DoD function for all DoD research funding. It left a federal funding vacuum for years until NSF could ramp up to become a roughly comparable basic funder for the interactive computing that IPTO started. The vacuum, however, was largely filled by a short golden age of corporate industrial research in interactive computing at Xerox, a firm with a capital war chest, much dry powder, from its past photocopier patent-based monopoly, and seeking to develop the new, new thing(s). Xerox looked to its 1970-launched PARC to invent the office of the future. It hired many previously IPTO-supported academic computer scientists and produced and employed a cadre of Turing Awardees, an unprecedented team far exceeding any single university's CS department in talent or resources.
Inside the PARC Homeruns
Douglas Engelbart and the earliest work on the first mouse, designed by him and SRI's Bill English, are addressed by French sociologist Thierry Bardini in Bootstrapping, a biography of Engelbart. Journalists, such as Michael Hiltzik, have covered some major contours of technical innovation at PARC.
Central to Bardini's, Hiltzik's, and others' narratives is the important HCI work of Turing Awardees Douglas Engelbart at SRI, and Butler Lampson, Alan Kay, Charles Thacker, and Charles Simonyi at PARC. In this essay I look beyond oft-told stories and famed historical actors in GUIs and mice to briefly discuss a hitherto largely overlooked, highly impressive small PARC research team composed of Newell, Card, and Moran, and a larger team that Card later led. The incredible accomplishments of Lampson and others changed the world with the GUI. They hit the ball out of the park, so to speak—"a shot heard round the world" (in the sense of Bobby Thomson's 1951 Polo Grounds home run, immortalized by Don DeLillo) that very visibly revolutionized interactive computing.
Newell is one of the most famous of the first-generation AI scientists, a principal figure at John McCarthy's famed Dartmouth Summer 1956 Workshop, at which McCarthy, Newell, Herbert Simon, Marvin Minsky, and others founded and gave name to the field—building upon the earlier work of Alan Turing. On a project launched in 1955, Newell, as lead, co-invented (with Simon and Clifford Shaw) the Logic Theorist in 1956, the first engineered, automated logic or AI program. Many historians and STS colleagues I have spoken with associate Newell solely with AI and are unaware of his PARC HCI work. Unlike Turing and Simon, Newell does not have a major biography documenting the full breadth of his work. Newell's HCI research has been neglected by historians, as has that of his two top students, Card and Moran. They published many seminal HCI papers in Communications of the ACM and other top journals.
This oversight (by historians; the three were revered by fellow scientists), especially the neglect of the career-long contributions of Card and Moran, is a myopic favoring of first-recognized invention over subsequent ones, missing key innovations and devaluing maintenance. It was not merely the dormouse (mouse co-inventors Engelbart and English, the recognized revolution), but multiple dormice (the science and engineering behind optimizing mice for users). Remember(ing) what the dormice said (and with an open ear of historical research), Card and Moran clearly conducted brilliant scientific research spawning many quiet revolutions.
Rookie Card to All-Star Card, Pioneering HCI Scientist Stuart Card
Stuart Card was first author of a classic textbook, The Psychology of Human-Computer Interaction, with co-authors Newell and Moran. Card progressed through various research staff grades and in 1986 became a PARC Senior Research Scientist. Two years later, he became Team Leader of PARC's User Interface Research Group. The breadth of Card's and PARC's HCI research from the 1970s to the 1990s is wide in both theory and practice. The work fell into three broad categories—HCI models, information visualization, and information retrieval—and the major contributions in each are breathtaking. One early contribution in HCI models was Card and the team's analysis of mouse performance using Fitts' Law, an information-theoretic model of motor movement. Measuring a processing rate of roughly 10 bits/sec, about the same as that of the hand itself, they demonstrated that pointing speed was limited not by the device but by the hand, proving the mouse was well optimized for human interaction. This impacted the development of the Xerox Star mouse in 1981 and the earliest computer mice developed by Apple Computer. Card's, and his team's, work was equally profound on information visualization, in areas such as the Attentive-Reactive Visualizer and visualizer transfer functions. In information retrieval, they advanced Information Foraging Theory.
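For readers unfamiliar with Fitts' Law, the reasoning above can be sketched numerically. The minimal sketch below uses the common Shannon formulation of the law; the function name and the constants a and b are illustrative assumptions of mine, not Card's published figures, though a slope of 0.1 s/bit corresponds to the roughly 10 bits/sec processing rate his team measured for both hand and mouse.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.1):
    """Predicted pointing time (seconds) under Fitts' Law.

    Shannon formulation: MT = a + b * log2(D/W + 1), where the
    index of difficulty ID = log2(D/W + 1) is measured in bits and
    1/b is the limb/device processing rate in bits per second.
    The constants a and b here are illustrative, not measured values.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# Quadrupling the distance to a target of the same width adds about
# 1.6 bits of difficulty, i.e. roughly 0.16 s at 10 bits/sec.
mt_near = fitts_movement_time(distance=4.0, width=2.0)   # ID ~ 1.58 bits
mt_far  = fitts_movement_time(distance=16.0, width=2.0)  # ID ~ 3.17 bits
```

The key empirical point was that the fitted rate for the mouse matched the hand's own rate, so no faster pointing device could help much: the human, not the hardware, was the bottleneck.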
While staying at PARC for decades, Card concurrently served as a Stanford University Professor of Computer Science. He became a central contributor to SIGCHI and was tremendously influential to academic, industrial, and government scientists.
In listening to Card's interview responses (and deeply influenced also by my Norman, Dray, and Butler Lampson interviews, as well as by my past research), I reflected that many AI scientists could learn much from such a mindset of valuing users, all users—knowing users to help augment, for symbiosis, not to control. AI scientists, especially those working on large-scale systems in corporations and government (much ethical AI research is done at universities), could benefit in not merely technical ways, as Steve Jobs and others did from their day in the PARC, but from Card and his team's ethos and ethics.
Professionalizing HCI: Latent Locomotion to Blissful Brownian Motion
While SIGCHI unintentionally pushed out many non-scientists in the 1980s, it and the HCI field shed a strictly computer science and cognitive science focus to become ever more inclusive of a wide variety of academic scientists, engineers, social scientists, humanities scholars, artists, and others from the 1990s forward. CHI grew from about 1,000 attendees at the first events in Gaithersburg and Boston to more than 3,600 at some recent annual CHI meetings (and SIGCHI now has more than two dozen smaller conferences annually). The SIGCHI/CHI programs and researchers are constantly evolving and exploring varying creative paths that from a 30,000-foot vantage might seem to be many random walks, Brownian motion. The research, designing to better serve users, contributes to many important trajectories. The diversity of disciplines and approaches can make communication more challenging, but also more rewarding, and to a high degree a Galison-like trading zone exists in interdisciplinary SIGCHI and HCI.
One example is the Creativity and Cognition Conference, co-founded by artists/HCI scientists Ernest Edmonds and Linda Candy in 1993, which became a SIGCHI event in 1997. It brings together artists, scientists, engineers, and social scientists to share research on human-computer interaction in art and systems design. As Edmonds related to me, communication and trust between artists and scientists take time to build, but are immensely valuable. Edmonds is an unparalleled figure in computer generative and interactive art, and a core member of the Systems Group of principally UK computer generative artists. In addition to mounting many prestigious art exhibitions in the 1970s (and beyond), Edmonds published on adaptive software development, with a critique of the waterfall method. His work—in General Systems in 1974—anticipated and helped to define adaptive techniques, later referred to as agile development. Edmonds, through his artist's, logician's, and computer scientist's lenses, insightfully saw interactive and iterative processes, a new paradigm in programming technique, art, and other design.
HCI research, and its applications, certainly is not always in line with societal good, but it has an idealistic foundation and values diversity and interdisciplinarity. Historians still are in the early innings of HCI research. Elizabeth Petrick has done particularly insightful scholarship on HCI and disability (2015).
Coding and Codifying, Fast and Slow
Nobel Laureate Daniel Kahneman has published ideas on human cognition that are potentially useful to ponder with regard to AI and HCI. Kahneman studies decision-making and judgment, and how different aspects of these arise from how we think—both fast (emotionally, unconsciously, and instinctively) and slow (more deeply and analytically).
Programming projects for applications and implementation of systems are often behind schedule and over-budget. Code, whether newly developed or recycled, often is applied without an ethical evaluation of its inherent biases.
HCI often involves multiple iterations with users, usability labs, observation in various settings, ethnographic interviewing, and an effective blend of both inspiring emotional-response fast thinking and, especially, deep, reflective slow thinking. This slow, analytical thinking and iterative programming (especially maintenance and endless debugging) could be helpful in beginning to uproot underlying algorithmic biases. Meanwhile, slow and careful reflection on how IT laws, practices, policies, culture, and data are codified is instructive. All of this involves ethically interrogating the what, how, why, and by and for whom of innovation, and valuing maintenance labor and processes, not shortchanging maintenance in budget, respect, or compensation.
Beyond “Laws” to Local Knowledge
In 1967 computer scientist Melvin Conway noted what became christened Conway's Law: computer architecture reflects the communication structure of the organization where it was developed (made famous by Tracy Kidder's The Soul of a New Machine). Like Moore's Law, Conway's Law is really an observation, and a self-fulfilling prophecy. Better understanding and combatting biases at the macro level is critical. Also essential is evaluation and action at the local and organizational levels. How does organizational culture structure algorithms and code? What organizational policies give rise to what types of code? What do (end) users, including and especially marginalized individuals and groups, have to say on bias? How do decisions at the organizational level reinforce AI/ML algorithmic and data biases, and reinforce and accelerate societal inequality? These are vital questions to consider through many future detailed case studies in settings globally. The goal should not be a new "law," but rather a journey to gain local knowledge and learn how historical, anthropological, and sociological cases inform on code and codes, toward policies, designs, maintenance, and structures that are more equitable.
“Why Not Phone Up Robinhood and Ask Him for Some Wealth Distribution”
The lyric above, from The Clash's 1978 reggae-inflected song "(White Man) In Hammersmith Palais," might be updated to: why not open a Robinhood app… (at least until it suspended trading). How historians later assess the so-called Robinhood/Reddit "Revolution" (a transfer of some $20 billion away from hedge funds, banks, and asset managers over several weeks in early 2021, punishing bearish GameStop shorting by bidding up shares to force short covering) remains to be seen. Is it a social movement, and of what demographic makeup and type? For many, it likely is, at least in part, a stand against Wall Street, and thus Zuccotti Park comparisons seem apropos. Eighty percent of stock trading volume is automated—algorithmic/programmed (AI/ML)—contributing to why a 2021 CNBC poll showed 64 percent of Americans believe Wall Street is rigged. Like capitalism, equities markets and computers combine as a potent wealth-concentrating machine—one turbocharged in pandemic times and fueled by accommodative monetary policy. "Smart" systems and platforms in finance, education, health, and policing have all accelerated longstanding wealth, health, and incarceration gaps to hitherto unseen levels. Not to dismiss volatility or financial risk to the Reddit "revolutionaries," but the swiftness of regulatory calls by powerful leaders is telling. It begs questions on priorities: regulation for whom, of what, when, and why? U.S. IT giants' use of AI to surveil, and to dominate with anti-competitive practices, has gone largely unregulated (as has fintech) for years. Given differential surveillance, Black, Indigenous, and People of Color (BIPOC) suffer differentially. The U.S. woefully lags Europe on privacy protections and corporate taxes on personal data. U.S. racial violence and murders by police disgracefully dwarf those of other democratic nations, and America stands out for its embrace (by police and courts) of racially biased facial recognition technology (FRT) and recidivism-predicting AI—such as Clearview FRT and Northpointe's (now Equivant) Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).
Meanwhile, parallel Chinese IT giants Baidu, Alibaba, and Tencent, dominant in search, e-commerce, and social networking respectively, use intrusive AI. These firms (fostered by the government), ironically, are also contributing to platforms enabling a "contentious public sphere" (Lei 2017).
At times, users can appropriate digital computing tools against the powerful in unforeseen ways. Such historical agency is critical to document and analyze. History informs us that AI/ML, like many technologies, left unchecked by laws, regulations, and ethical scrutiny will continue to be powerfully accelerating tools of oppression.
Raging Against Machines That Learn
U.S.-headquartered AI-based IT corporate giants' record on data and analytics policy and practices has garnered increasing critique from journalists, academics, legislators, activists, and others. The New York Times has reported on clamp-downs on employees expressing themselves on social and ethical issues. Timnit Gebru, co-leader of Google's Ethical AI group, tweeted in late 2020 that she was fired for sending an email encouraging minority hiring and drawing attention to bias in artificial intelligence. Her email included: "Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset" (Metz and Wakabayashi 2020).
On June 30, 2020, U.S. Senators Robert Menendez, Mazie Hirono, and Mark Warner wrote Facebook CEO Mark Zuckerberg critiquing his company for failing to "rid itself of white supremacist and other extremist content" (Durkee 2020). A subsequent Facebook internal audit called for better AI—a tech fix. Deep into 2019, Zuckerberg (with a lack of clarity, as at Georgetown in October 2019) sought to defend Facebook's policies on the basis of free speech. More concerning than his inability to execute free speech arguments is the lack of transparency at a platform with 2.5 billion users; it wields immense power to subvert democracy and to harm differentially, and it has a clear record of profits over principles. In mid-2020, Color of Change, the NAACP, the National Hispanic Media Coalition, and others launched the "Stop Hate for Profit" boycott of Facebook advertising for July 2020; more than 1,200 organizations participated. Pivoting PR in changing political winds, Zuckerberg is seeking to shift responsibility to Congress, asking it to regulate (Facebook's legal team likely will defend the bottom line).
Data for Black Lives, led by Executive Director Yeshimabeit Milner, is an organization and movement of activists and mathematicians. It focuses on fighting for possibilities to use data to address societal problems and fighting against injustices, stressing that "discrimination is a high-tech enterprise." It recently launched Abolish Big Data, "a call to action to reject the concentration of Big Data in the hands of the few, to challenge the structures that allow data to be wielded as a weapon…" (www.d4bl.org). This organization is an exemplar of the vital work for change underway, and also of the immense challenge ahead given the power of corporations and government entities (NSA, CIA, FBI, DoD, police, courts).
HCI, never the concentrating force AI has become, continues to grow steadily as a field—intellectually, in diversity, and in importance. It has a record of embracing diversity and helping to augment and advance human-computer symbiosis. More historical work on HCI is needed, but it offers a useful mindset.
Given AI historical scholarship to date, we know its record has been mixed from the start. From its first decades in the 1950s and 1960s to today, the DoD, NSA, CIA, FBI, police, and criminal justice systems have been frequent funders, deployers, and users of AI systems plagued with algorithmic biases that discriminate against BIPOC, women, LGBTQIA people, and the disabled. Some of the most harmful systems have been in facial recognition and predictive policing. Yet, properly designed, monitored, and maintained, AI offers opportunities for science, medicine, and social services (especially at universities and nonprofits).
The social sciences, humanities, and arts can have a fundamentally positive role in the design, structuring, and policies of AI/ML. A handful of universities recently have launched interdisciplinary centers to focus on AI, history, and society, including the AI Now Institute at NYU (2017) and the Institute for Human-Centered AI at Stanford (2019). The Charles Babbage Institute has made the interdisciplinary social study of AI and HCI a focus (with "Just Code" and beyond)—research, archives, events, oral histories, and publications. In CS, ACM's Conference on Fairness, Accountability, and Transparency (FAccT), launched in 2018, offers a great forum. Outside academe, many are doing crucial research, policy, and activist work—a few examples: Data for Black Lives; Blacks in Technology; NCWIT; AnitaB.org; Algorithmic Justice League; Indigenous AI.Net; and the Algorithmic Bias Initiative (University of Chicago).
The lack of U.S. regulation to date, discrimination and bias, corporate focus on and faith in tech fixes, inadequate transparency, corporate imperialism, and the overpowering of employees and competitors all have many historical antecedents inside and outside computing. History—the social and policy history of AI and HCI, as well as other labor, race, class, gender, and disability history—has much to offer. It can be a critical part of a broad toolkit to understand, contextualize, and combat power imbalances—to better ensure just code and to ethically shape and structure the ghost in the machine that learns.
Acknowledgments: Deep thanks to Bill Aspray, Gerardo Con Diaz, Andy Russell, Loren Terveen, Honghong Tinn, and Amanda Wick for commenting on a prior draft.
Allen, Danielle and Jennifer S. Light. (2015). From Voice to Influence: Understanding Citizenship in a Digital Age. University of Chicago Press.
Alexander, Jennifer. (2008). The Mantra of Efficiency: From Waterwheel to Social Control. Johns Hopkins University Press.
Bardini, Thierry. (2000). Bootstrapping: Coevolution and the Origins of Personal Computing. Stanford University Press.
Benjamin, Ruha. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
Card, Stuart K., Thomas Moran, and Allen Newell (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates.
Card, Stuart K., Oral History (2020). Conducted by Jeffrey R. Yost, Los Altos Hills, CA, February 17, 2020. CBI, UMN.
Dick, Stephanie. (2020). “NYSIIS, and the Introduction of Modern Digital Computing to American Policing.” Just Code: Power, Inequality, and the Global Political Economy of IT (Symposium presentation: Oct. 23). [Hereafter “Just Code” Symposium]
D’Ignazio, Catherine and Lauren Klein. (2020). Data Feminism. MIT Press.
Dray, Susan, Oral History (2020). Conducted by Jeffrey R. Yost, CBI, Minneapolis, Minnesota, January 28, 2020. CBI, UMN.
Durkee, Alison. (2020). “Democratic Senators Demand Facebook Answer For Its White Supremacist Problem.” Forbes. June 30. (accessed online at Forbes.com).
Dryer, Theodora. (2020). “Streams of Data, Streams of Water: Encoding Water Policy and Environmental Racism.” “Just Code” Symposium.
Edmonds, Ernest. (1974). “A Process for the Development of Software for Non-Technical Users as an Adaptive System.” General Systems 19, 215-218.
Eubanks, Virginia. (2019). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Picador.
Galison, Peter. (1999) “Trading Zone: Coordinating Action and Belief.” In The Science Studies Reader, ed. by Mario Biagioli. Routledge. 137-160.
Grudin, Jonathan. (2017). From Tool to Partner: The Evolution in Human-Computer Interaction. Morgan and Claypool.
Hiltzik, Michael. (2009). Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age. HarperCollins.
Kahneman, Daniel. (2011). Thinking, Fast and Slow. Farrar, Straus, and Giroux.
Kidder, Tracy. (1981). Soul of a New Machine. Little, Brown, and Company.
Lampson, Butler, Oral History (2014). Conducted by Jeffrey R. Yost, Cambridge, Massachusetts, December 11, 2014. CBI, UMN.
Lauer, Josh and Kenneth Lipartito. (2020). “Infrastructures of Extraction: Surveillance Technologies in the Modern Economy.” “Just Code” Symposium.
Light, Jennifer S. (2005). From Warfare to Welfare: Defense Intellectuals and Urban Problems in Cold War America. University of Chicago Press.
McIlwain, Charlton. (2020). Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter. Oxford University Press.
Mendon-Plasek, Aaron. (2021). “Mechanized Significance and Machine Learning: Why It Became Thinkable and Preferable to Teach Machines to Judge the World.” In J. Roberge and M. Castelle, eds. The Cultural Life of Machine Learning. Palgrave Macmillan, 31-78.
Menking, Amanda and Jon Rosenberg. (2020). “WP:NOT, WP:NPOV, and Other Stories Wikipedia Tells Us: A Feminist Critique of Wikipedia's Epistemology.” Science, Technology, & Human Values, May, 1-25.
Metz, Cade and Daisuke Wakabayashi. (2020). “Google Researcher Says She was Fired Over Paper Highlighting Bias in AI.” New York Times, Dec. 2, 2020.
Norman, Don, Oral History. (2020). Conducted by Jeffrey R. Yost, La Jolla, California, February 12, 2020. CBI, UMN.
Noble, Safiya Umoja. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Petrick, Elizabeth. (2015). Making Computers Accessible: Disability Rights and Digital Technology. Johns Hopkins University Press.
Turner, Fred. (2010). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University of Chicago Press.
Vinsel, Lee and Andrew L. Russell. (2020). The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most. Currency.
Yost, Jeffrey R. (2016). “The March of IDES: Early History of Intrusion Detection Expert Systems.” IEEE Annals of the History of Computing 38:4, 42-54.
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry. MIT Press.
Yost, Jeffrey R. (March 2021). “Of Mice and Mentalité: PARC Ways to Exploring HCI, AI, Augmentation and Symbiosis, and Categorization and Control.” Interfaces: Essays and Reviews in Computing and Culture Vol. 2, Charles Babbage Institute, University of Minnesota, 12-26.
About the author: Jeffrey R. Yost is CBI Director and HSTM Research Professor at the University of Minnesota. He has published six books (and dozens of articles), most recently Making IT Work: A History of the Computer Services Industry (MIT Press, 2017) and FastLane: Managing Science in the Internet World (Johns Hopkins U. Press, 2016) [co-authored with Thomas J. Misa]. He is a past EIC of IEEE Annals of the History of Computing, and current Series Co-Editor [with Gerard Alberts] of Springer’s History of Computing Book Series. He has been a principal investigator of a half dozen federally sponsored projects (NSF and DOE) on computing/software history totaling more than $2 million. He is Co-Editor [with Amanda Wick] of Interfaces: Essays and Reviews in Computing & Culture.
2021 (Vol. 2) Articles
Paul E. Ceruzzi, National Air and Space Museum, Smithsonian Institution
Abstract: The term “The Cloud” has entered the lexicon of computer-speak along with “cyberspace,” “the Matrix,” the “ether,” and other terms suggesting the immateriality of networked computing. Cloud servers, which store vast amounts of data and software accessible via the Internet, are located around the globe. This essay argues that this “matrix” has an epicenter, namely the former rural village of Ashburn, Virginia. Ashburn’s significance is the result of several factors, including northern Virginia’s historic role in the creation of the Internet and its predecessor, the ARPANET. The Cloud servers located there also exist because of the availability of sources of electric power, including a grid of power lines connected to wind turbines and gas- and coal-fired plants located to its west—a “networking” of a different type but just as important.
In his recent book, Making IT Work, Jeffrey Yost quotes a line from the famous Joni Mitchell song, “Clouds”: “I really don’t know clouds at all.” He also quotes the Rolling Stones’ hit, “[Hey, you,] Get off my Cloud.” Why should a business or government agency trust its valuable data to a third party whose cloud servers are little understood? No thank you, said the Rolling Stones; not until you can explain to me just what the Cloud is and where it is. Yost gives an excellent account of how cloud servers have come to the fore in current computing. Yet Joni Mitchell’s words still ring true. Do we really know what constitutes the “Cloud”?
A common definition of the Cloud is that of sets of high-capacity servers, scattered across the globe, using high-speed fiber to connect the data stored therein to computing installations. These servers supply data and programs to a range of users, from mission-critical business customers to teenagers sharing photos on their smartphones. What about that definition is cloud-like? Our imperfect understanding of the term is related to the misunderstanding of similar terms also in common use. One is “cyberspace,” whose popularity is attributed to the science fiction author William Gibson, from his novel Neuromancer, published in 1984. Another is “The Matrix”: the title of a path-breaking book on networking by John Quarterman, published in 1990 at the dawn of the networked age. The term came into common use after the award-winning 1999 Warner Brothers film starring Keanu Reeves. (Quarterman was flattered that Hollywood used the term, but he is not sure whether the producers of the film knew of his book.) In the early 1970s, Robert Metcalfe, David Boggs, and colleagues at the Xerox Palo Alto Research Center developed a local area networking system they called “Ethernet,” suggesting the “luminiferous aether” that was once believed to carry light through the cosmos.
These terms suggest an entity divorced from physical objects—pure software independent of underlying hardware. They imply that one may dismiss the hardware component as a given, just as we assume that fresh, drinkable water comes out of the tap when we are thirsty. The residents of Flint, Michigan know that assuming a robust water and sewerage infrastructure is hardly a given, and Nathan Ensmenger has reminded us that the “Cloud” requires a large investment in hardware, including banks of disk drives, air conditioning, fiber connections to the Internet, and above all, a supply of electricity. Yet the perception persists that the cloud, like cyberspace, is out there in the “ether.”
Most readers of this journal are aware of the physical infrastructure that sustains Ethernet, cyberspace, and the Cloud. I will go a step further: not only does the Cloud have a physical presence, but it also has a specific location on the globe: Ashburn, Virginia.
A map prepared by the Union Army in 1862 of Northern Virginia shows the village of Farmwell, and nearby Farmwell Station on the Alexandria, Loudoun, and Hampshire railroad. The town later changed its name to Ashburn, and it lies just to the north of Washington Dulles International Airport. In the early 2000s, as I was preparing my study of high technology in northern Virginia, Ashburn was still a farming community. Farmwell Station was by the year 2000 a modest center of Ashburn: a collection of buildings centered on a general store. The railroad had been abandoned in 1968 and was now the Washington and Old Dominion rail-trail, one of the most popular and heavily traveled rails-to-trails conversions in the country. Thirsty hikers and cyclists could get refreshment at the general store, which had also served neighboring farmers with equipment and supplies.
Cycling along the trail west of Route 28 in 2020, one saw a series of enormous low buildings, each larger than a football field, surrounded by a mad frenzy of construction, with heavy equipment trucks chewing up the local roads. Overhead was a tangle of high-tension electrical transmission towers, with large substations along the way distributing the power. The frenzy of construction suggested what it was like to have been in Virginia City, Nevada, after the discovery and extraction of the Comstock Lode silver. The buildings themselves had few or no markings on them, but a Google search revealed that one of the main tenants was Equinix, a company that specializes in networking. The tenants of the servers try to avoid publicity, but the local chambers of commerce, politicians, and real estate developers are proud to showcase the economic dynamo of the region. A piece on the local radio station WTOP on November 17, 2020, announced that “Equinix further expands its big Ashburn data center campus,” quoting a company spokesperson saying that “…its Ashburn campus is the densest interconnection hub in the United States.” An earlier broadcast on WTOP, reporting on the activities of a local real estate developer, noted that “Northern Virginia remains the ‘King of the Cloud.’” In addition to Equinix, the report mentioned several other tenants, including Verizon and Amazon Web Services.
These news accounts are more than just hyperbole from local boosters. Other evidence indicates that, although cloud servers are scattered across the globe, Ashburn is indeed the navel of the Internet.
In my 2008 study of Tysons Corner, Virginia, I mentioned several factors that led to the rise of what I then called “Internet Alley.” One was the development of ARPANET at the Pentagon, and later at a DARPA office on Wilson Blvd. in Rosslyn. Another was the rise of the proto-Internet company AOL, headquartered in Tysons Corner. Tysons Corner was also the location of “MAE-East”—a network hub that carried a majority of Internet traffic in its early days. The root servers of the dot.com and dot.org registry were once located in the region, with the a: root server in Herndon, later moved to Loudoun County. The region thus had a skilled workforce of network-savvy electrical and computer engineers, plus local firms such as SAIC and Booz-Allen who supported networking as it evolved from its early incarnations.
Around the year 2000, while many were relieved that the “Y2K” bug had little effect on mainframe computers, the dot.com frenzy collapsed. The AOL-Time Warner merger was a mistake. But there was an upside to the boom-and-bust. In the late 19th and early 20th century the nation experienced a similar boom and bust of railroad construction. Railroads went bankrupt and people lost fortunes. But the activity left behind a robust, if overbuilt, network of railroads that served the nation well during the mid and late 20th century. During the dot.com frenzy, small firms like Metropolitan Fiber dug up many of the roads and streets of Fairfax and Loudoun Counties and laid fiber optic cables, which offered high-speed Internet connections. After the bust these went unused—“dark fiber,” as it was called. Here was the basis for establishing Cloud servers in Ashburn. By 2010, little land was available in Tysons Corner, Herndon, or Reston, but a little farther out along the W&OD rail-trail was plenty of available land.
That leaves the other critical factor in establishing Cloud servers—the availability of electric power. While some Cloud servers are located near sources of wind, solar, or hydroelectric power, such as in the Pacific Northwest, Northern Virginia has few of those resources. The nearest large-scale hydroelectric plant, at the Conowingo Dam, lies about 70 miles to the north, but its power primarily serves the Philadelphia region. (That plant was the focus of the classic work on electric power grids, Networks of Power, by Thomas Parke Hughes.) To answer the question of the sources of power for Ashburn, we return to the Civil War map and its depiction of the Alexandria, Loudoun, and Hampshire, later known as the Washington and Old Dominion Railroad.
The origins of that line go back to the 1840s, when freight, especially coal, from the western counties of Virginia was being diverted to Baltimore, Maryland over the Baltimore and Ohio Railroad. In response, Virginians chartered a route west over the Blue Ridge to the mineral- and timber-rich areas of Hampshire County. (In 1866, part of Hampshire County became Mineral County, in the new state of West Virginia.) The Civil War interrupted progress in construction, and after several challenges to its financial structure, the line was incorporated as the Washington and Old Dominion Railway Company in 1911. It never reached farther than the summit of the Blue Ridge, and the proposed route to the west would have had to cross rugged topography. The line could never have competed with the B&O’s water-level route. The shortened line soldiered on until finally being abandoned in 1968, making way for the rail-trail conversion. One interesting exception was a short spur in Alexandria, which carried coal to a power plant on the shore of the Potomac. That plant was decommissioned in 2014, thus ending the rail era of the Alexandria, Loudoun, and Hampshire.
Margaret Dykens, MLIS, MS, Curator and Director of the Research Library San Diego Natural History Museum
With preface by Amanda Wick, Interim Archivist, Charles Babbage Institute Archives
Who was Charles Babbage?
Charles Babbage, Victorian scientist and mathematician, was born on December 26, 1791 to a family of London bankers. Fascinated with mathematics, and especially algebra, he studied the subject at Trinity College, Cambridge. While attending Cambridge, he co-founded the Analytical Society for promoting continental mathematics and reforming traditional teaching methodologies of the time. Many of these methods are still used in some form today in the instruction of algebra.
Following completion of his degree, Babbage worked as a mathematician for the insurance industry. He was elected a Fellow of the Royal Society in 1816 and played a prominent part in the foundation of the Astronomical Society (later Royal Astronomical Society) in 1820. As a member of the Royal Society during the heady days of the early 1800s, Babbage came into contact with a number of great thinkers and engaged in a robust correspondence with fellow mathematicians, naturalists, and philosophers—including Sir William Herschel, Charles Darwin, and Ada Lovelace.
In 1821 Babbage conceived the first of his two calculating machines, the Difference Engine, which would quickly become his singular passion and focus. The Difference Engine was intended to compile mathematical tables, and after completing a working portion of it in 1832, he began work on a more complex and multifunctional machine that could perform any kind of calculation. This was the Analytical Engine (1856), and its invention is widely considered to mark the founding of modern computing.
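The Difference Engine's principle—tabulating polynomial values using nothing but repeated addition of finite differences—can be sketched in a few lines. This is an illustrative modern rendering of the mathematical method, not Babbage's own notation; the function name and interface are our own:

```python
def tabulate(initial_diffs, steps):
    """Extend a table of polynomial values by repeated addition,
    as Babbage's Difference Engine was designed to do.

    initial_diffs: [f(0), Δf(0), Δ²f(0), ...] — the starting value
    and its leading finite differences, which are constant beyond
    the polynomial's degree.
    """
    diffs = list(initial_diffs)
    table = [diffs[0]]
    for _ in range(steps):
        # Each column is updated by adding the next-higher difference:
        # no multiplication is ever needed.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
        table.append(diffs[0])
    return table

# For f(x) = x², the differences are f(0)=0, Δf(0)=1, Δ²f(0)=2:
print(tabulate([0, 1, 2], 4))  # → [0, 1, 4, 9, 16]
```

Because every entry follows from additions alone, the method suited a machine built from gears and carry levers—precisely why mechanical tabulation of polynomials was feasible with 1820s engineering.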
Today, little remains of Babbage's prototype computing machines; unfortunately, the critical tolerances his machines required exceeded the level of technology available at the time. Though Babbage's work was formally recognized by respected scientific institutions, the British government suspended funding for his Difference Engine in 1832 and, after an agonizing waiting period, ended the project in 1842. Though Babbage's work was continued by his son, Henry Prevost Babbage, after his death in 1871, the Analytical Engine was never successfully completed and ran only a few "programs," each with embarrassingly obvious errors.
Despite his many achievements in mathematics, scientific philosophy, and his leadership in contemporary social movements, Babbage’s failure to construct his calculating machines left him a disappointed and embittered man. He died at his home in London on October 18, 1871.
What’s in a name?
The calculating engines of English mathematician Charles Babbage (1791-1871) are among the most celebrated icons in the prehistory of computing. Babbage’s Difference Engine No. 1 was the first successful automatic calculator and remains one of the finest examples of precision engineering of the time. Babbage is sometimes referred to as "father of computing." The International Charles Babbage Society (later the Charles Babbage Institute) took his name to honor his intellectual contributions and their relation to modern computers.
Where is Babbage in the Archives?
Materials related to Charles Babbage are scattered around the world, with the vast majority of his personal papers and library held at the Science Museum of London and the British National Library. Although the Charles Babbage Institute is named after Charles Babbage, we actually have very little material originating with our namesake. What we do have are first editions of many of his books and journal articles and a number of these are inscribed with dedications to his patrons by the author. These rare materials constitute the earliest materials in our repository and, while used in classroom settings and on exhibit, rarely leave our vault. Our holdings of Babbage’s work include the following:
- Babbage, Charles. On a Method of Expressing by Signs the Action of Machinery. London: [Royal Society of London], 1826.
- Babbage, Charles. Reflections on the Decline of Science in England, and on Some of Its Causes. London: Printed for B. Fellowes (Ludgate Street); and J. Booth (Duke Street, Portland Place), 1830.
- Babbage, Charles. On the Economy of Machinery and Manufactures. London: C. Knight, 1832.
- Babbage, Charles. Passages from the Life of a Philosopher. London: Longman, Green, Longman, Roberts, & Green, 1864.
- Babbage, Charles. The Ninth Bridgewater Treatise: A Fragment. Second Edition., Reprinted. ed. Cass Library of Science Classics; No. 6. London: Cass, 1967.
What is the Ninth Bridgewater Treatise?
One uniquely significant title in Babbage’s oeuvre is the Ninth Bridgewater Treatise. This volume presents Babbage’s perspective on the eight Bridgewater Treatises—a series of works by multiple influential thinkers of the era on natural history, philosophy, and theology. Babbage’s contribution was not officially affiliated with the eight-volume series and was merely his own considerations on the topic. In his volume, which he titled the Ninth Bridgewater Treatise, he discusses his calculating machines and posits the idea of God as a divine programmer who established the rigid natural laws that govern humanity and civilization; in many ways it presents a case for a deus ex machina.
As a fragmentary piece, and one that does not dwell on mathematical or scientific subjects, it is a rarity amongst Babbage materials. Our copy is a second edition and, while in excellent condition, it is not especially rare. Recently, Margaret Dykens, Curator and Director of the Research Library at the San Diego Natural History Museum, experienced one of those once-in-a-lifetime finds when she reviewed an anomaly within their catalog: an edition of Babbage’s Ninth Bridgewater Treatise that appeared to be a galley proof. As she notes in the following article, close examination of the item by both herself and the noted Babbage scholar Dr. Doron Swade yielded several incredible finds.
Charles Babbage Institute. (10 June 2020). “About Charles Babbage.” Charles Babbage Institute web site. http://www.cbi.umn.edu/about/babbage.html.
Swade, Doron. (12 June 2020). "Babbage, Charles (1791–1871), mathematician and computer pioneer." Oxford Dictionary of National Biography. 23 September 2004. https://www.oxforddnb.com/view/10.1093/ref:odnb/9780198614128.001.0001/odnb-9780198614128-e-962.
Amanda Wick (July 2020). “Charles Babbage’s Ninth Bridgewater Treatise.” Interfaces: Essays and Reviews in Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 17-22.
About the author: Amanda Wick is the interim archivist at the Charles Babbage Institute Archives (CBIA) at the University of Minnesota. Prior to working at CBIA, Amanda led major processing projects at the University of Minnesota and managed the archives of the Theatre Historical Society. She obtained her Bachelor’s degree in Environmental Studies from Lawrence University (Appleton, WI) and her Masters in Library and Information Science from Dominican University (River Forest, IL).
Charles Babbage’s Ninth Bridgewater Treatise in the SDNHM Library
Margaret Dykens, MLIS, MS, Curator and Director of the Research Library San Diego Natural History Museum
Abstract: As a foundational figure in the history of science, Charles Babbage is best known for his contributions to computing. In fact, his mechanical, programmable calculating machines are considered precursors to modern computers. These accomplishments were the primary reason for the naming of the Charles Babbage Institute, and its archivists have sought to honor its namesake through the purchase of rare books authored and inscribed by him. One such book is a fragmentary oddity, the Ninth Bridgewater Treatise, and a copy owned by the San Diego Natural History Museum that was recently examined by curatorial staff and prominent Babbage scholar, Dr. Doron Swade, holds curious clues to Babbage's approach to natural philosophy. (KW: Babbage, Charles; Swade, Doron; computing history; rare books; antiquities; archives.)
The Research Library of the San Diego Natural History Museum (SDNHM), founded in 1874, has extensive holdings of rare and antiquarian books, including natural history volumes dating back to 1514. The majority of these books were donated by various naturalists and philanthropists over the past one hundred years. One such naturalist was General Anthony Wayne Vogdes (1843-1923), a career Army officer with an active secondary career as a geologist and paleontologist. Vogdes was also an avid bibliophile and donated his extensive scientific library to the SDNHM after his death in 1923. One of the books from Vogdes’ library was a first edition of Charles Babbage’s Ninth Bridgewater Treatise (1837).
This particular volume was mentioned in a newspaper article published on January 11, 1896 in the San Francisco Bulletin, which described many of the most important books in Vogdes’ personal library. Babbage’s Ninth Bridgewater Treatise is mentioned in the list with the comment that it contained “annotations by the author.” The book in question appears to be a galley proof with wide margins and many hand-written pencil annotations, as well as marginalia likely written by the author.
There is also a portion of a hand-written letter bound into the book itself—Vogdes was an amateur book-binder and his library consists almost exclusively of his own bindings, many of which have notes, letters, images, or other memorabilia that he collected and bound into the text.
I was intrigued by the hand-written annotations and marginalia in Vogdes’ copy of the Ninth Bridgewater Treatise and contacted Dr. Doron Swade, preeminent Babbage scholar and retired curator of the Charles Babbage collection at the Science Museum of London, for verification of the handwriting. After emailing Dr. Swade several images of the annotations, he replied to me that it was highly likely that they were in Charles Babbage’s own hand, both because of the style of writing as well as the content itself. To quote Dr. Swade:
Having gone through the 7,000 manuscript sheet (ms) of Babbage Scribbling Books the handwriting in what is visible on the folded manuscripts interleaved on page 128, and in the third image, looks very much like Babbage’s, as do the pencilled annotations.
But there is stronger evidence for the annotations and ms being his: in the preface ‘advertisement’ to the second edition Babbage states that the chapter ‘On Hume’s Argument Against Miracles’ has been ‘nearly rewritten’. The first image you sent with the pencilled annotations, which are surely from the first edition, correspond to changes made in the second edition. It is not credible that anyone other than Babbage would have made what are essentially editorial instructions, and editorial amendments, that were carried through to the second edition.
There is even more conclusive evidence in the sample page 131 where the pencilled annotations appear verbatim in the second edition, and the several pencilled deletions have also been carried through.
The ms in the third of the images you sent starts with the same opening sentence that appears in the second edition at the top of page 127 though what follows has been edited and amended. It could be that this is a sheet from the original manuscript for the first edition though not having access to a first edition I am unable to confirm this.
It is fair to conclude that the annotations are Babbage’s. It is difficult to see any other explanation.
Although I do not know how General Vogdes came to have this particular annotated first edition of the Ninth Bridgewater Treatise in his collection, I am not surprised as his entire library constituted over seven thousand scientific volumes on topics related to geology, paleontology, and other scientific and philosophical disciplines. Indeed, his personal library included works by Darwin, Hume, Dana, Agassiz, and Lyell as well as many other well-known natural historians and intellectuals.
We are hopeful that this unique source might be of interest to some Babbage researcher or historian. Any scholars interested in pursuing this topic further should feel free to contact me directly at the SDNHM Research Library.
Swade, Doron, Dr. “’Ninth Bridgewater Treatise.’ Message requesting assistance in authenticating possible rare volume by Charles Babbage.” Message to Margaret N. Dykens. January 2020. E-mail.
Margaret N. Dykens (July 2020). “Charles Babbage’s Ninth Bridgewater Treatise in the SDNHM Library.” Interfaces: Essays and Reviews in Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 17-22.
About the author: Margaret N. Dykens received her Master’s degree in Biology from the College of William and Mary, Williamsburg, Virginia in 1980. Upon graduation, she was hired as Staff Illustrator at the Harvard University Herbarium. Margaret went on to earn a second graduate degree in Library Science from the University of Michigan School of Information in 1993. In 1997, she became the Director of the Research Library for the San Diego Natural History Museum (SDNHM). In addition to her work directing SDNHM, she has served as curator for two exhibitions; the first was The California Legacy of A.R. Valentien, based on the Museum’s fine art collection, where she toured with this exhibition to numerous venues across the U.S. In 2016, she also curated the permanent exhibition, Extraordinary Ideas from Ordinary People: A History of Citizen Science, based on fine art works, historical objects, and rare books from the Research Library.
Alejandro Ramirez, PhD, Sprott School of Business – Carleton University
Abstract: A series of wrong decisions precipitated the Y2K crisis: adopting the 6-digit date format, using COBOL as the standard in business computing and discontinuing COBOL-teaching in many American universities shortly after it was adopted. Did we learn anything from this crisis? (KW: Y2K crisis, COBOL, Internet history, Outsourcing.)
Twenty years ago, we averted the Y2K crisis. When we talk about it now, people are genuinely puzzled that it was such an expensive affair. They have a distorted idea of a crisis that did not happen: it was supposed to be the end of the world, yet in the end nothing actually happened. Then they wonder whether something similar could happen again. That is really the crux of the matter: what did we learn from the Y2K crisis?
Knowing the history of this crisis is an important and serious endeavour. It helps us understand how computer usage evolved and what forces shaped our technology, our practices, and computers’ contribution to society. History is an indispensable light guiding us in this understanding.
What were they thinking?
The use of computers in business became widespread in North America with the introduction of the IBM 1401 in 1959. Before then, machine-based data processing, if any, was generally executed by electromechanical accounting machines; calendar dates, if needed, were fed in via punched cards indicating the date appropriate for that job. When programmers from the late 1950s to the mid-1960s decided, in order to save on memory costs (McCallum 2019), to use only the last two digits of the year—60 instead of 1960—they never imagined that their programs would still be running at the end of the 20th century. After all, 40 years seemed a very long time, especially since they were saving approximately $16.00 USD per date by saving two bytes, 16 bits, of core memory valued at about one dollar per bit.
When IBM announced their new, more powerful System/360, with many innovative features compared to their 1400-series technology, they also decided—in the interest of compatibility—that their system’s date would be a 6-digit date. To cement this practice, on November 1, 1968, the U.S. Department of Commerce, National Bureau of Standards, issued a Federal Information Processing Standard which specified the use of 6-digit dates for all information exchange among federal agencies (FIPS 1968). The standard became effective on January 1, 1970, enshrining the 6-digit date in government bureaucracy, again with little to no thought of the year 2000.
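The failure mode baked into the 6-digit YYMMDD format is easy to demonstrate. The sketch below is illustrative, not code from any period system: with only two year digits, "00" sorts before "99", so any comparison or sort on the stored date inverts at the century rollover. The `expand_year` "windowing" repair shown afterward was one common remediation strategy, with the pivot value here chosen only for illustration:

```python
# Two 6-digit YYMMDD dates as a 1960s program would store them,
# saving two bytes per record:
dec_1999 = "991231"
jan_2000 = "000101"

# Lexicographic order matched chronological order for 40 years —
# then inverted: the program now "knows" Jan 2000 precedes Dec 1999.
assert jan_2000 < dec_1999

def expand_year(yy, pivot=70):
    """Windowing fix: interpret a 2-digit year around a cutoff.
    The pivot of 70 is illustrative, not a universal standard."""
    return 1900 + yy if yy >= pivot else 2000 + yy

print(expand_year(99))  # → 1999
print(expand_year(0))   # → 2000
```

Windowing avoided rewriting stored data, but only postponed the ambiguity to the next pivot; the durable fix was widening the field to a 4-digit year.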
It took about fifteen years for someone to realize that the 6-digit date might be a problem. Unknown to all but a few programmers, Jerome and Marilyn Murray published their call to arms, Computers in Crisis: How to Avert the Coming Worldwide Computer Systems Collapse, in 1984. They credited their daughter Rosanne, a senior research analyst at Systemhouse, Ltd., of Ottawa, for the origins of the book: “This book may not have been undertaken were it not for a lengthy telephone discussion of the dating problem with Rosanne…Her interest and encouragement have been unflagging” (Murray & Murray 1984, p. xix).
Shortly after the book was published, Spencer Bolles posted on January 18, 1985, from his computer in Reed College in Oregon, the first recorded mention of the Year 2000 problem on a Usenet group: “I have a friend that raised an interesting question that I immediately tried to prove wrong. He is a programmer and has this notion that when we reach the year 2000, computers will not accept the new date” (Bolles 1985).
In March 1959 Burroughs Corporation computer scientist Mary Hawes called for an industry and government consortium to develop a standard programming language for business—promoting greater portability for organizational users transitioning between mainframe computers. With the appearance of Autocode, FLOW-MATIC, FORTRAN, ALGOL-58, and other 1950s programming languages, she recognized the high cost of proliferation.
Feeding the Beast
From the 1960s into the 1990s, many universities offered COBOL courses, as did companies and vocational schools like Control Data Institutes. Today, in an age where AI/analytics, games, robotics, cloud, and the internet of things are foremost for many computer science students, few consider learning legacy systems and legacy languages. Accordingly, COBOL courses are scarce. A Slate article quoted Prof. John Zeanchock of Robert Morris University stating that just 37 colleges and universities globally have a “mainframe course” in the curriculum; most schools’ faculty are unable to suggest legacy-specialist students or graduates when banks or local governments call (Botella 2020). In our culture, innovation is revered and maintenance is not. In IT there is a myopic attention to the latest technology and a failure to recognize that IT maintenance requires great skill and can itself be innovative (new processes, new fixes, etc.). Privileging innovation over maintenance is also in part tied to gender stereotypes and discrimination: historically, women have had greater opportunity in the critical areas of services, maintenance (of both machines and code), and programming (from plugboards to languages), and fewer opportunities in computer and software engineering (Yost, 2011, 2017).
The percentage of women among computer science majors declined sharply over the past quarter century, from more than 35 percent in the 1980s to 18.1 percent in 2014, varying only slightly since (nsf.gov/statistics). The reasons are varied, but gender stereotyping, a male-dominated computing culture, and educational and workplace discrimination are all factors (Abbate, 2012; Hicks, 2017; Misa, 2011). This has deepened labor shortages (in all areas, including legacy systems) and held back computer science. Such shortages become all the more profound in times of crisis, including the current health and economic crisis.
More than a Jersey Thing
On April 6, 2020, New Jersey Governor Phil Murphy made a public plea for volunteer “Cobalt” programmers (meaning COBOL) to help fix glitches in an overburdened unemployment benefits computer system more than 40 years old. New Jersey was struggling to process unemployment payments to the flood of new filers in a timely way. The increased burden (in volume and parameters) on the unemployment system was a major bottleneck, or to borrow Thomas Hughes’ term, a reverse salient, impeding timely and accurate data processing for those in need (Hughes, 1983).
This sparked an onslaught of news articles as well as many Twitter, Facebook, and other social media posts. The critiques ranged from Governor Murphy and New Jersey maintaining an antiquated unemployment insurance computer system to the state calling for volunteers from the population segment most susceptible to COVID-19 risk: the elderly. Meanwhile, social media erupted with ageist jokes and images depicting elderly individuals as the potential volunteers.
Other states, including Connecticut and Kansas, had similar shortages of trained COBOL experts to confront unemployment insurance system challenges. Understandably, unemployed workers waiting for benefits are extremely frustrated and angry, and have expressed as much on the Kansas Department of Labor (KDoL) platform. Much is the matter with Kansas’ system, which has its origins in the 1970s and has received inadequate updates for flexibility and scale. In late April, KDoL indicated a timeline in which processing could occur by late May (for many filers, that pushes the wait to months). For states that have prioritized investing in updating other computer systems, but not unemployment insurance, it amounts to neglecting infrastructure that serves the most vulnerable in society.
Why do so many states have IT systems ill-equipped for unemployment benefits processing? Replacing long-existing systems is complex and expensive (hundreds of millions of dollars). Change is also disruptive to existing labor and existing skill sets. Unemployment systems serve those lacking political power, and federal and state governments deprioritize them. Further, systems (in all their technical, political, economic, and other contexts) become entrenched, or to use Hughes’ concept, gain momentum (Hughes, 1983). Failures and pressures can redirect momentum: some states scrambled for cloud solutions once systems crashed in April. That was possibly the least bad option, but the timing was suboptimal; standing up new systems and processes on the fly is especially difficult. Regardless, the problem is one of infrastructure, of not valuing maintenance, labor, and recipients. It is not merely COBOL versus the cloud; in fact, COBOL can and does integrate with the AWS, Azure, and IBM clouds, and hybrid cloud deployments are common.
State IT Workers and Hired Guns’ Heroic Efforts
North Texas’ COBOL Cowboys staffing firm, larger IT services enterprises, and COBOL-skilled independent contractors are in great demand. Governors, state departments of labor, and state CIOs are doing their best to staff up to address the problems. For the systems analysts, programmers, and other state employees and contractors, the hours are long, the work is difficult, and the efforts are truly heroic. The federal CARES Act’s unemployment benefits (PUA/PEUC) allow states to extend the duration of benefits and to include those usually not eligible, such as the self-employed. This adds greatly to both volume and complexity. In my playful title, “play” refers both to where work is performed (fewer coders choosing legacy systems) and to coders’ creativity, in the spirit of computer science metaphors like “sandbox” for building non-live code.
Domestic and Global Digital Divides
In the coming year, the overall percentage of Americans below the poverty line will peak higher than at any time in more than 50 years; the impact on African-American, Hispanic, and Native-American populations is particularly severe. Disparities in access to health insurance, banking, loans, and information technology, as well as in exposure to risk and in COVID-19 incidence and mortality, highlight extreme and growing race and class inequality in the United States.
Washington D.C.’s unemployment platform urges benefits filers to use Microsoft Internet Explorer, a browser Microsoft stopped supporting in January 2016; an unsupported version remains available for computers, but not for smartphones. A 2019 Pew Research Center survey showed that 54 percent of Americans with incomes under $30,000 a year have a computer, while 71 percent have a smartphone; of those making over $100,000, 94 percent have a computer and broadband at home (Anderson and Kumar, 2019). Only 58 percent of African-Americans have a computer, versus 82 percent of whites (Perrin and Turner, 2019). In digital infrastructure, just as in education, healthcare, and housing, there are two Americas.
Y2K: Why to Care
An earlier crisis largely involving COBOL, one with a long and visible runway, is both consequential context for and instructive to current challenges. About a quarter century ago, governments and corporations began seriously addressing the pending Y2K crisis (caused by the widespread storage of years as two digits, often in COBOL code) in order to avert risks to life and the economy, to make it a nonevent.
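To make the failure mode concrete, here is a minimal sketch (my illustration in Python for readability, not code drawn from any actual system) of how two-digit year arithmetic breaks at the century rollover, alongside the “windowing” technique many Y2K remediations applied; the pivot value of 50 is an assumption chosen for illustration:

```python
def years_elapsed_naive(start_yy, end_yy):
    # Naive two-digit arithmetic, as in much pre-remediation code:
    # year "00" (2000) minus year "99" (1999) yields -99 instead of 1.
    return end_yy - start_yy

def years_elapsed_windowed(start_yy, end_yy, pivot=50):
    # A common Y2K fix ("windowing"): interpret YY values below the
    # pivot as 20YY and the rest as 19YY, without widening stored fields.
    def expand(yy):
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

print(years_elapsed_naive(99, 0))     # -99: an account "ages" backward
print(years_elapsed_windowed(99, 0))  # 1: correct across the rollover
```

Windowing deferred rather than eliminated the problem (the ambiguity returns as dates approach the pivot), which is one reason remediation choices made in the 1990s still matter to maintainers today.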
Investments and global cooperation were key, and the International Y2K Cooperation Center played a meaningful role in fostering collaboration. The shortage of programmers knowledgeable in COBOL, along with the lower expense and the overwhelming volume of code, led to outsourcing to an emerging Indian IT services industry. This lent momentum to that trade, and to a shifting geography of IT work that remains impactful (though corporate decision-makers are now accelerating artificial intelligence applications, producing further labor transformations detrimental to Indian IT laborers, developments that standout ABD sociologist and CBI IDF Fellow Devika Narayan is insightfully analyzing). Gartner Inc. estimated U.S. government and business expenditures at up to $225 billion, a breathtaking sum indicative of the costs of putting off maintenance until a time-sensitive crisis. Passing into the new millennium with few major problems lent credence to two diverging interpretations: that heavy investment in maintenance had been necessary to avert catastrophe, or, more commonly (and less accurately), that it was an overhyped problem leading to squandered funds on preparation and maintenance fixes. Offshoring saved money in the short run, but perhaps not in the longer run; it left a legacy of less and less current on-shore COBOL expertise (for maintenance, updates, security, etc.), a workforce and talent pool helpful in global crises, particularly ones in which unfortunate (U.S.) nationalistic tendencies and policies have inhibited international cooperation.
CONNECT and Disconnects
Maintaining infrastructure is important. Anemic IT budgets have hurt not only opportunities to move to innovative new solutions, but also efforts to maintain existing systems well and to better assure their ability to perform, and to perform at scale, in both normal times and crises. The reverse salient certainly is not always COBOL, or COBOL alone. State auditors warned Florida Governor Ron DeSantis that Florida’s unemployment site, its “CONNECT” cyberinfrastructure, had more than 600 system errors in need of fixing, but that state officials had “no process to evaluate and fix” them (Mower, 2020). CONNECT is a $77 million system launched in 2013, which, DeSantis is quick to point out, his administration inherited. This underlines the challenge not just in Florida but in many states: inadequate infrastructure is the predecessors’ fault, not the current leaders’ problem, and fixes can be left to successors. The (now) multi-hundred-million-dollar cost typical of a major upgrade to a new unemployment insurance system (and its ongoing refinement) is often difficult to bear without federal assistance. Florida’s CONNECT is a reminder of damaging disconnects and of leaders’ inattention to infrastructure for vulnerable people. The problem is also one of meager and dwindling federal support. Federal aid for state unemployment administration has been dropping for a quarter century, with severe cuts in 2018 and 2019. In one pre-COVID-19 survey, more than half of the states responded that their unemployment system problems were “serious” or “critical” (Botella 2020).
Neglected Infrastructure and Crashes
Working two-tenths of a mile from the site of the 2007 Interstate 35 West Mississippi River Bridge collapse in Minneapolis, I am frequently reminded that strong, safe, and well-maintained infrastructure is essential. Twenty-eight percent of infrastructure project funding at the state level comes from federal grants (primarily for physical infrastructure); states’ invisible software infrastructure, especially unemployment systems, is starved. Hopefully the COVID-19 pandemic leads not only to evaluating our medical preparedness (ICUs, PPE, and unmet needs in free-enterprise insurance and healthcare), but also to greater evaluation of IT infrastructures. Ideally, these developments will lead all governors with poorly performing unemployment insurance systems to the same conclusion Governor Murphy reached about the need for post-mortems on digital infrastructure. As he put it, “how the heck did we get here when we literally needed COBOL programmers”; learning from the past is important.
One thing clear from the two COBOL crises is that history and archives matter. My thoughts here have at best scratched the surface of fundamental IT infrastructure and contexts that someone could analyze with tremendous depth using Charles Babbage Institute resources. CBI’s archival and oral history resources (most transcripts online, all free) for studying the Y2K crisis and the history of CODASYL and COBOL (and many other topics and themes in the history and social study of computing) are the finest and most extensive in the world. A talented University of Pennsylvania doctoral candidate in the History and Sociology of Science, Zachary Loeb, has drawn on CBI’s International Y2K Cooperation Center Records for his important dissertation on the cultural, political, and technical history of Y2K.
Over the years, a number of researchers have used our Conference on Data Systems Languages (CODASYL) Records. While that collection stands out in documenting COBOL and the group’s work with databases (in 1959 and far beyond), we hold many other COBOL materials in a variety of collections. One recent acquisition is our largest overall collection at more than 500 linear feet, the Jean Sammet Papers; Sammet may have been the single most important developer of COBOL. Likewise, our Frances E. (“Betty”) Holberton Papers contain rich material on CODASYL and COBOL. There is also great COBOL content in our Burroughs Corporate Records, Control Data Corporation Records, Gartner Group Records, Auerbach and Associates Market and Product Reports, IBM SHARE, Inc., HOPL 1978, Charles Phillips Papers, Jerome Garfunkel Papers, Warren G. Simmons Papers, National Bureau of Standards Computer Literature, Computer Manuals, and many other collections. COBOL’s history is one of government, industry, and intermediary partnerships, standards, maintenance, labor, gender, politics, culture, and much more. In a technical area that always seems focused on the new, new thing, its 60-year past and continuing presence deserve greater study.
Abbate, Janet. (2012). Recoding Gender: Women’s Changing Participation in Computing, MIT.
Allyn, Bobby. (2020). “COBOL Cowboys Aim to Rescue the Sluggish State Unemployment Systems." NPR, April 22, 2020.
Anderson, Monica and Madhumitha Kumar. (2019). “Digital Divide Persists…” Pew Research Center, May 7, 2019.
Botella, Ella. (2020). “Why New Jersey’s Unemployment System Uses a 60-Year-Old Programming Language.” Slate, April 9, 2020.
Charles Babbage Institute Archives (finding aids to the collections mentioned in final paragraph).
Hicks, Marie. (2017). Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, MIT Press.
Hughes, Thomas P. (1983). Networks of Power: Electrification in Western Society, 1880 to 1930, Johns Hopkins University Press.
Kennelly, Denis. (2019) “Three Reasons Companies are only 20% Into Cloud Transformation.” IBM.com, March 5, 2019.
King, Ian. (2020). “An Ancient Computer System is Slowing Giant Stimulus.” Bloomberg.com, April 13, 2020.
Mazmanian, Adam. (2014). “DoD Plans Upgrade to COBOL-based Contract System” FCW, July 7, 2014.
Misa, Thomas J., ed. (2011). Gender Codes: Why Women are Leaving Computing, Wiley.
Mower, Lawrence. (2020). “Ron DeSantis…” Tampa Bay Times, March 31, 2020.
Perrin, Andrew and Erika Turner. (2019) “Smartphones Help Blacks and Hispanics Bridge Some—But Not All—Digital Gaps with Whites,” Pew Research Center, August 20, 2019.
Yost, Jeffrey R. (2011). “Programming Enterprise: Women Entrepreneurs in Software and Computer Services,” in Misa, ed. [full cite above].
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry, MIT Press.
Special thanks to CBI Acting Archivist Amanda Wick for discussion/insights on COBOL and our collections.
Jeffrey R. Yost (May 2020). “Where Dinosaurs Roam and Programmers Play: Reflections on Infrastructure, Maintenance, and Inequality.” Interfaces: Essays and Reviews on Computing and Culture Vol. 1, Charles Babbage Institute, University of Minnesota, 1 - 8.
About the author: Jeffrey R. Yost is CBI Director and HSTM Research Professor at the University of Minnesota. He has published six books (and dozens of articles), most recently Making IT Work: A History of the Computer Services Industry (MIT Press, 2017) and FastLane: Managing Science in the Internet World (Johns Hopkins U. Press, 2016) [co-authored with Thomas J. Misa]. He is a past EIC of IEEE Annals of the History of Computing, and current Series Co-Editor [with Gerard Alberts] of Springer’s History of Computing Book Series. He has been a principal investigator of a half dozen federally sponsored projects (NSF and DOE) on computing/software history totaling more than $2 million. He is Co-Editor [with Amanda Wick] of Interfaces: Essays and Reviews in Computing & Culture.