Interfaces publishes short essay articles and essay reviews connecting the history of computing/IT studies with contemporary social, cultural, political, economic, or environmental issues. It seeks to be an interface between disciplines, and between academics and broader audiences.
With its tendency to grip popular imaginaries and utopian fantasies, artificial intelligence has crystallized the enduring hope for easy technological solutions to the world’s greatest problems and fears (Haigh & Ceruzzi, 2021; Plotnick, 2018). It has been hailed as “the magic wand destined to rescue the global capitalist system from its dramatic failures” (Brevini, 2022, p. 28), and has been positioned as the linchpin of modern civil society. But, while developments in artificial intelligence technologies are commonly considered among the most important factors shaping the modern condition, they have also exacerbated inequality, ushered in a new era of discrimination (D’Ignazio & Klein, 2020; Benjamin, 2019; Radin, 2017), left irreversible environmental damage (Brevini, 2022; Dauvergne, 2021), worsened labour struggles (Frey, 2021; Pasquale, 2020; Gray & Suri, 2019), and concentrated power – and wealth – in the hands of the privileged elite (Brevini, 2022; Crawford, 2021; Frey, 2021). As such, critically studying artificial intelligence requires a multifaceted understanding of it as being both controllable and controlling, dependent and autonomous, minimal and robust, submissive and authoritative, and determined and determinable.
To fully understand these binaries and their implications, artificial intelligence research undertaken in the humanities and social sciences warrants a long-term, historical approach that views artificial intelligence in the broader context of technological development, including the social, political, environmental, and cultural forces impacting it. This is especially the case given that the so-called “artificial intelligence boom” in academia has led to a bias towards works published in the last couple of years. But, if properly informed by the past, artificial intelligence research is more likely to prepare users for the future while also shedding light on the ways that we must act differently in the face of technological change.
Technology development and usage carries the imprint of political, ontological, and epistemological ideologies, such that every modern technology, including and especially artificial intelligence, is a microcosm not just of what users know, but of how users come to know. Insofar as the humanities and social sciences are interested in technology as an instigator of cultural change, these disciplines must center its historical and epistemological dimensions, and investigate how, at every major historical moment in the development of modern technology and artificial intelligence/computational systems, users have adapted to new forms of knowledge-making.
Although most research in humanities and social sciences exhibits some kind of historical immediacy, it tends to be detached from larger epistemological considerations that align with major historical moments of change. Understanding, at each major technological juncture, how technology users come to know, may be crucial to developing better knowledge about technology (including artificial intelligence), its users, and the world.
This research would involve a multifaceted, interdisciplinary methodology that is both “anti-modern” and philosophical. Edwards (2003), for example, suggests that any historical and archival approach to technological inquiry necessarily avoids falling into the trap of “technological determinism” that plagues so much current artificial intelligence research, especially research conducted through short-term analyses. Selective attention primarily to the “modern” aspects of infrastructures can produce blindness to other aspects that may, in fact, be “anti-modern”; as Golumbia (2009) contends, irrespective of “new” technologies, human societies remain bound by the same fundamental forces that they always have been, so technological changes are best seen as shifts in degree rather than in kind. For this reason, technology ought to be assessed with reference to the past, especially because the computer apparatus leaves “intact many older technologies, particularly technologies of power, yet puts them into a new perspective” (Bolter, 1984, p. 8).
This approach to artificial intelligence research would model a different kind of temporal orientation for the humanities and social sciences that is rooted in the recognition that both ethereal, “cloud-like” technologies and “resolutely material media” (Mattern, 2017) have always co-existed. Because the old and the new necessarily overlap, it is important to draw archival linkages to produce more precise and comprehensive evaluations of technology and technological change. As Chun (2011) notes, new media races simultaneously towards the future and the past in that “the digital has proliferated, not erased, [historical] media types” (pp. 11, 139).
An historical way forward may also be key to confronting and dismantling algorithmic coloniality, the idea that colonial logics are replicated in computational systems, including in how sovereignty and land exploitation are embedded in the digital territory of the information age (Mohamed, Isaac, & Png, cited in Acemoglu, 2021; Lewis et al., 2020; Radin, 2017). Algorithmic coloniality suggests that the dominance and manipulative power of the world’s largest technology corporations mirror the traditional strategies of imperial colonizers (Brevini, 2022, p. 95). While the benefits of technological innovation accelerate economic gains for the privileged elite, Mohamed, Isaac, and Png (cited in Acemoglu, 2021) argue that any pathway to shared prosperity must address colonial legacies and three distinct forms of algorithmic harm: algorithmic oppression, exploitation, and dispossession (p. 61). Doing so is not only consequential for people who identify as Indigenous; it may provide the tools necessary for intervening in the perpetuation of discrimination generally (Radin, 2017). This, Lewis et al. (2020) claim, forms a powerful foundation to support Indigenous futurity (p. 65) while injecting artificial intelligence development with new ontologies whose imaginations and frameworks are better suited to sustainable computational futures (p. 6).
Extending from this, an historical approach may also be key to recognizing “non-Western,” alternative ways of knowing and being, including how “non-Western” technology may influence future iterations of artificial intelligence technologies. This is made clear in the Indigenous Protocol and Artificial Intelligence Working Group’s explanation of the potential links between artificial intelligence technologies and both the Octopus Bag – a multisensorial computing device – and Dentalium – tusk-like shells filled with “computational fluid dynamics simulations” (Lewis et al., 2020, pp. 58-69). This approach, however, may present methodological challenges as researchers try to embrace the nourishing aspects of traditional value systems while still accommodating modernity.
An historical approach may also serve environmental considerations well, especially in the context of the humanities and social sciences. Adequate research on renewability, ecofuturisms, and the environmental costs of artificial intelligence should span the entire production chain, including the historical circumstances in which those “productive” relationships arose. This view is critically important to exposing the environmental effects of technology, while recognizing that the ecological and social precarity caused by technology is not just a timely and urgent concern, but also one with a rich history. Too much recent, short-term research looks at the ecological impacts of artificial intelligence as a “new” phenomenon, rather than one that replicates historical trends, albeit through modern consumption rates (which make environmental effects seem historically unique). Informed by the past, environmental research about technology is more likely to prepare users for the future while also shedding light on the ways that we may want and need to act differently in the face of technological change.
An historical approach to studying artificial intelligence may also help us to: 1) re-evaluate the consumptive ideologies underpinning environmental AI discourse; 2) begin to view data as a non-renewable resource; 3) construct a new genealogy of contemporary technological culture that centers bodily subjects; and, 4) perhaps even consider acting against technological progressivism by halting the production of new “innovations” that “datafy” manual or semi-manual sectors and technologies, merely for the sake of it.
These suggestions would challenge the dominance of artificial intelligence technologies, provide different ways to imagine technological innovation and its cultural implications, and re-envision a world that may not rely on technology to solve the most pressing social, environmental, and political questions. These perspectives could also drastically change our view of the relationship between people, energy, and information. Although these considerations may seem radical and aspirational, they are necessary if we want to reorient perspectives in artificial intelligence research and think about the agents – both human and non-human – that are served and impacted by today’s dominant visions for the future of technology.
Utopian and idealistic views of artificial intelligence are justified by a host of corporate, governmental, and civil actors, who have four major reasons for supporting the continued use and development of artificial intelligence:
- Leveraging computational speed to make work more efficient;
- Appearing to improve the perceived accuracy, fairness, or consistency of decision-making (in light of so-called “human fallibility”);
- Similarly, appearing to depoliticize decision-making by placing it out of reach of human discretion; and,
- Deploying artificial intelligence technologies to solve pressing environmental issues.
These motivations, especially when devoid of historical consideration, have led to an automation bias whereby humans tend to trust computational tools more than they probably should. This raises distinct concerns about oversight and responsibility, and about the ability to seek recourse in the wake of computational error. In other words, any motivation to use and deploy artificial intelligence technologies necessarily presses up against regulatory, legal, and ethical questions because, at its core, artificial intelligence can distort people’s perceptions of each other and of the structures and systems that govern their lives. This is especially true when such technology is viewed as being inherently modern, rather than merely part of a longer, historical lineage of technological advancement.
In this sense, studying artificial intelligence with an historical orientation is as much about people, culture, and the world as it is about the technology itself. Artificial intelligence is people-populated. It is reliant on human bodies and brains. It is dependent on human hands and eyes. It is fueled by us. But technochauvinism and techno-optimism (both inherently modernist ideologies) hinder our ability to see this. Instead, artificial intelligence perpetuates the fantasy of ever-flowing, uninterrupted, and curated streams of information, technological solutionism, and optimism about artificial intelligence’s ability to solve the world’s most pressing questions – as long as it’s designed with “humans in the loop.” This framing, though, constrains human agency and autonomy by positioning humans as a mere appendage to the machine. It relies only on small tweaks to the current automated present and fails to account for artificial intelligence imaginaries informed by the past that may better address the harms and inequities perpetuated by current artificial intelligence systems.
A strictly modernist approach to artificial intelligence and automation in general has hampered people’s ability to imagine alternatives to artificial intelligence systems, despite overwhelming evidence that the integration of those systems into our everyday lives disproportionately benefits the wealthy elite and creates undue harm to vulnerable groups (Acemoglu, 2021; D’Ignazio & Klein, 2020; Benjamin, 2018; Radin, 2017). This is because, without an historical orientation, it is natural – and easy – to view artificial intelligence as not only representative of the future, but also as actively shaping it by both opening and closing imaginative possibilities of what the world can become with the “help” of new technologies.
Instead, I’d like to draw attention to an alternative vision: what if we resist the urge to build, deploy, and use new computational systems? What if we begin to realize that technology might not be our world’s saviour? What if we choose to slow down and work intentionally and mindfully instead of quickly? These questions are not meant to elide the important computational work currently carried out by and through artificial intelligence systems, including and especially in medical applications and in services that are too dangerous for human actors to perform. Instead, this alternative vision for the future, which is deeply rooted in historicity, simply resists viewing technology as determined, and instead sees it as being determinable. It reorients power in the favour of human agents rather than technological ones.
Perhaps the “AI question” can only be solved when people are empowered to imagine futures beyond the dominance of techno-utopianism. After all, new imaginaries are most dangerous to those who profit from the way things currently are. Alternative futurisms have the power to show that the status quo is fleeting, non-universal, and unnecessary, and although artificial intelligence has changed the world, people have the ultimate power to shape it.
Acemoglu, D. (2021). Redesigning AI: Work, democracy, and justice in the age of automation. Cambridge, MA: MIT Press.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Cambridge: Polity Press.
Bolter, J. (1984). Turing’s man: Western culture in the computer age. Chapel Hill, NC: University of North Carolina Press.
Brevini, B. (2022). Is AI good for the planet? Cambridge: Polity Press.
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. Cambridge, MA: MIT Press.
Chun, W. (2011). Programmed visions: Software and memory. Cambridge, MA: MIT Press.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press.
Dauvergne, P. (2021). AI in the wild: Sustainability in the age of artificial intelligence. Cambridge, MA: MIT Press.
D’Ignazio, C., & Klein, L.F. (2020). Data feminism. Cambridge, MA: MIT Press.
Edwards, P.N. (2003). Infrastructure and modernity: Force, time, and social organization in the history of sociotechnical systems. In T.J. Misa, P. Brey, & A. Feenberg (Eds.), Modernity and technology. Cambridge, MA: MIT Press.
Frey, C.B. (2021). The technology trap: Capital, labor, and power in the age of automation. Princeton, NJ: Princeton University Press.
Golumbia, D. (2009). The cultural logic of computation. Cambridge, MA: Harvard University Press.
Gray, M., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Mariner Books.
Haigh, T., & Ceruzzi, P.E. (2021). A new history of modern computing. Cambridge, MA: MIT Press.
Lewis, J. et al. (2020). Indigenous protocol and artificial intelligence: Position paper. Indigenous Protocol and Artificial Intelligence Working Group. https://www.indigenous-ai.net/position-paper/
Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Cambridge, MA: Harvard University Press.
Radin, J. (2017). “Digital natives”: How media and Indigenous histories matter for big data. Osiris, 32(1).
Schwab, K. (2017). The fourth industrial revolution. New York: Penguin.
Helen A. Hayes (May 2022). “New Approaches to Critical AI Studies: A Case for Anti-Modernism and Alternative Futurisms.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 45-53.
About the Author: Helen A. Hayes is a Ph.D. Candidate at McGill University, studying the relationship between artificial intelligence, its computational analogs, and the Canadian resource economy. She is also a Policy Fellow at the Centre for Media, Technology, and Democracy at the Max Bell School of Public Policy. She can be reached at email@example.com or on Twitter at helen__hayes.
Cybernetics, an intellectual movement that emerged during the 1940s and 1950s, conceived of the body as an informational entity. This separation of the mind and body, and the prioritization of the mind as a unit of information, became a liberating quality as the capitalist world of industrialism, with its mechanical and earthly labor, bound the liberal subject in shackles. The cybernetic subject, in contrast, as “a material-information entity whose boundaries undergo continuous construction and reconstruction,” floated in the permeable ether of information and technology (How We Became Posthuman, 3). Marxist issues of social alienation and scarcity were resolved by the interconnectedness of information-based beings, and hierarchical labor relations were replaced with more communal forms of exchange. A new utopia was thus formed with the advent of digital communication (Brick, 348).
This dematerialized, cybernetic body converged with the creation of technology through the work of the industrial designer Henry Dreyfuss. Dreyfuss, who drafted what can be considered early user personas out of data collected from the military, utilized these imagined bodies for the testing of physical products. Dreyfuss’ designs, which he labeled “Joe and Josephine,” quantified the human experience. This model of testing and iterating designs based on dematerialized conceptions of the body was later incorporated into the development of technology by computer scientists such as Ben Shneiderman, who claimed in a 1979 paper that Dreyfuss’ emphasis on the human experience must be considered by engineers and designers. As scholars such as Terry Winograd and John Harwood claim, Dreyfuss’ methodology became the model for user testing and has remained relevant for interaction designers ever since the publication of Designing for People in 1955.
However, as Katherine Hayles argues in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999), the dematerialized body as conceived of by Dreyfuss is problematic. To put it simply, “equating humans and computers is especially easy” if the mind is both an egoless and informational resource to be shared (How We Became Posthuman, 2). Yet, this sort of epistemology neglects embodied and subjective experiences. Race, class, and gender relations cannot be erased by what she labels the “posthuman,” and while Hayles published her book over two decades ago, this issue is still pressing in the field of design. As Sasha Costanza-Chock describes in their book Design Justice: Community-Led Practices to Build the Worlds We Need, a “nonbinary, trans, femme-presenting person” is unable to walk through an airport scanner without getting stopped because the system has been built to represent “the particular sociotechnical configuration of gender normativity” (Costanza-Chock, 2020). The system, in identifying and classifying the body as information, misses crucial identities. In a paper published in 2018 titled “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” authors Timnit Gebru and Joy Buolamwini examined a similar problem of bodily erasure (Buolamwini and Gebru, 2018). Gebru and Buolamwini found that facial analysis systems trained on biased data sets representing faces of mostly white men will, unsurprisingly, become biased. The bodies of women and people of color, in this example, are made invisible through their translation into information. As Aimi Hamraie writes in their book Building Access: Universal Design and the Politics of Disability:
Ask an architect about their work, and you may learn more about the style, form, materials, structure, and cost of a building than the bodies or minds meant to inhabit it. Examine any doorway, window, toilet, chair, or desk in that building, however, and you will find the outline of a body meant to use it. From a doorframe’s negative space to the height of shelves and cabinets, inhabitants’ bodies are simultaneously imagined, hidden, and produced by the design of built worlds. (Hamraie, 19)
Architects, industrial designers, and interaction designers wield power when they craft who they imagine will use their built environments, and when they ignore their own biases, designs are built to reify hegemonic systems. There is thus a larger issue of disembodiment which needs to be researched as it relates to the contemporary methodologies of interaction designers.
The relationship between designers and human bodies has a long history. As Christina Cogdell argues in Eugenic Design, the “scientific possibilities of controlling heredity through environmental manipulation inspired reform-minded architects and designers” during the early twentieth century, specifically (Cogdell, 16). Cogdell finds that early industrial designers such as Dreyfuss were swept up in a movement to “streamline” design much in the same way that eugenicists looked to “streamline” the human body (Cogdell, 52-53). Cogdell cites examples such as the 1939 New York World’s Fair, which featured Dreyfuss’ work against a backdrop that used streamlining as a medium through which to promote democracy (Cogdell, 2004). Nor was the industrial desire to create “perfect” environments and “perfect” bodies unique to the United States. In Eugenics in the Garden: Transatlantic Architecture and the Crafting of Modernity, Fabiola López-Durán argues that Lamarckian evolution was “an invitation to social engineering” for Latin American nations at the turn of the twentieth century (López-Durán, 4). This form of evolutionary theory fostered an “orientation toward environmental-genetic interaction, empowering an apparatus that made race, gender, class, and the built environment critical measurements of modernity and progress” (López-Durán, 4). While Dreyfuss was engaged with this period of industrial design, this paper departs from these histories by situating Dreyfuss within the post-war era. Nevertheless, this paper recognizes that Dreyfuss’ connection to streamlined bodies may have informed his notion of user-testing, and this is an important consideration when reviewing images of Joe and Josephine.
In this essay, I will explore the cybernetic conception of the body as it relates to the development of technology. More specifically, I will argue that user testing practices, conceived within the historical and cultural context of cybernetics, envisioned that any human figure might represent all human figures. However, as examined previously, this perception of the body as universal ignores the subjective, material, and embodied experiences of users, contributing to the biased systems we see today. This paper will begin with an exploration of the cybernetic notion of the body. It will then explore how this concept converged with the advent of user testing practices and the development of user personas, or skeuomorphic designs used for the creation of digital products. It will, lastly, attempt to correct the histories of industrial design and interaction design by reconfiguring the work of Dreyfuss. These efforts will hopefully extend contemporary literature such as the work of Costanza-Chock, Gebru, Buolamwini, and Hamraie, through a re-examination of history.
Cybernetics emerged as a dominant field in the 1950s through the work of Norbert Wiener and the publication of The Human Use of Human Beings (1950). In The Human Use of Human Beings, Wiener describes a type of communicative society in which humans act as individual, informational units, or automata. These informational, monadic systems relay messages to one another and, through the process of feedback, establish homeostasis. There is thus both a teleological and a biological aspect to early descriptions of cybernetics. Like a beehive which has been disrupted, or a flock of geese attempting to take flight, all units must find their place through the interaction and exchange of information with others. This artful dance prioritizes utilitarianism and positivism. The gathering of information through interaction is essential, and in this way, each monad learns to operate as a collective, resisting natural entropic dissolution. The body is thus an extension of and harbor for information. As Wiener writes:
...the individuality of the body is that of a flame rather than that of a stone, of a form rather than of a bit of substance. This form can be transmitted or modified and duplicated...When one cell divides into two, or when one of the genes which carries our corporeal and mental birthright is split in order to make ready for a reduction division of a germ cell, we have a separation in matter which is conditioned by the power of a pattern of living tissue to duplicate itself. Since this is so, there is no absolute distinction between the types of transmission which we can use for sending a telegram from country to country and the types of transmission which at least are theoretically possible for transmitting a living organism such as a human being. (Wiener, 102-103)
The mechanisms of the body, and their ability to maintain life and homeostasis, provide inspiration for the natural, organic order of cybernetics, but nothing more. Information, messages, and communication are key, while the embodied experience, insofar as it is not used to relay messages, is inconsequential.
As Katherine Hayles argues in her article “Boundary Disputes: Homeostasis, Reflexivity, and the Foundations of Cybernetics,” this divorce of the body from information was essential in the first wave of cybernetics. Hayles outlines three waves of cybernetics, the first two of which concern our argument here. The first wave, from 1945 to 1960, “marks the foundational stage during which cybernetics was forged as an interdisciplinary framework that would allow humans, animals, and machines to be constituted through the common denominators of feedback loops, signal transmission, and goal-seeking behavior” (“Boundary Disputes,” 441-467). This stage was established at the Macy conferences, held between 1946 and 1953, where, Hayles argues, humans and machines were “understood as information-processing systems” (“Boundary Disputes,” 442). It is also within this first wave that homeostasis was perceived as the goal of informational units. Following the chaos and disillusionment of World War II, first-wave cyberneticians found stability to be paramount. The Macy conferences were thus focused on this homeostatic vision.
However, psychoanalytical insight at the conference helped sow ideas for second-wave cybernetics. If man is to be viewed as a psychological subject, in translating the output of one machine into commands for another, he introduces noise into the teleological goal of homeostasis. This reflexive process, or one in which the autonomy of both subjects is to be considered, disrupted the first-wave one-directional model. As Hayles describes, Lawrence Kubie, a psychoanalyst from the Yale University Psychiatric Clinic “enraged other participants [at the conference] by interpreting their comments as evidence of their psychological states rather than as matters for scientific debate” (“Boundary Disputes,” 459). Nevertheless, while the issue of reflexivity may not have won at the Macy conferences, it later triumphed over homeostasis through the work of biologist Humberto Maturana. Maturana rescued the notion of reflexivity by emphasizing that through the rational observer, the black box of the human mind might be quantified. This new feedback process introduced an autopoietic version of reflexivity in which both man and machine might improve through interaction, resolving the threat of subjectivity. Through both waves of cybernetics, cyberneticians instantiated the concept of the body as immaterial.
Designing for People, Joe, and Josephine
The cybernetic body converged with the development of technology in the 1950s through the work of the industrial designer Henry Dreyfuss. Dreyfuss, who was considered to be one of the most influential designers of his time, developed a model for user-testing through skeuomorphic designs which quantified the human experience. While Dreyfuss was not the first to conceive of user testing, he was the first to develop popular user personas. As Jeffrey Yost notes in Making IT Work: A History of the Computer Services Industry, the RAND Corporation’s Systems Research Laboratory conducted a simulation study labeled Project Casey that used twenty-eight students to test man-machine operations for early warning air defense (Yost, 2017). The practice of interviewing early adopters of a system continued into the 1960s in time-sharing projects such as Project MAC, in which psychologists such as Ulric Neisser interviewed users about their phenomenological experience with a computer system. It was Dreyfuss, however, who developed pseudo-users that might be used on a wide scale. While command and control computing and human factors research demanded testing for specialized users, Dreyfuss aimed, as an industrial designer, to create products for the masses. He therefore looked to craft images of what he deemed lay people for the creation of physical products.
First introduced in his book Designing for People (1955), Joe and Josephine represent Dreyfuss’ perception of the “average” man and woman. They have preferences and desires, they are employed, and most importantly, they are forms of a Platonic ideal that can be used for testing products. Like cyberneticians such as Maturana, Dreyfuss seems to have recognized the reflexivity between man and machine. Using Joe and Josephine, Dreyfuss tested the interaction between a product and its imagined user in order to improve its usability. Dreyfuss’ book was met with much praise, attesting to the importance of his new model. A review in The New York Times from 1955 titled “The Skin Men and the Bone Men” credits Dreyfuss for being a “skin man” who hides the complexity of a mechanism behind its skin (Blake, 1955). In a review in The Nation from the same year, author Felix Augenfeld also praises Dreyfuss for “his fantastic organization and an analysis of his approach to the many-sided problems the designer must face” (Augenfeld, 1955). Joe and Josephine were thus considered innovative figures upon their publication.
As machine-like entities, Joe and Josephine reflect the discussions of the Macy conferences, and as models for user-testing, they resemble second-wave reflexivity. However, it is unclear what interactions Dreyfuss had with cybernetics during the 1950s. In an article titled “A Natural History of a Disembodied Eye: The Structure of Gyorgy Kepes's ‘Language of Vision,’” author Michael Golec describes letters between the cybernetician Gyorgy Kepes and Dreyfuss from the early 1970s (Golec, 2002). Dreyfuss also illustrated a chapter of Kepes’ book Sign, Image, Symbol (1966), indicating another touch point between the designer and the cybernetician (Blakinger, 2019). The cybernetician Buckminster Fuller wrote the introduction to a publication by Dreyfuss titled Symbol Sourcebook: An Authoritative Guide to International Graphic Symbols (1972), providing a final touch point between Dreyfuss and cybernetics. Nevertheless, there is no direct evidence that Dreyfuss knew of the Macy conferences, and this question needs more research.
Despite the open question of Dreyfuss’ interaction with cybernetics, his new model was adopted into software and hardware development processes by the 1970s. In a paper by computer scientist Ben Shneiderman titled “Human Factors Experiments in Designing Interactive Systems” (1979), Shneiderman cites Dreyfuss as someone who provides “useful guidance” for the development of computer systems (Shneiderman, 9). Shneiderman also credits Dreyfuss with a user-centered approach that prioritizes the friendliness and compatibility of computer systems with their human users. He advocates for “personalizing” the computer by using human testers, and while he does not directly mention Joe and Josephine, he does state that designers should know their users (Shneiderman, 11). Shneiderman, additionally, cites various cybernetic articles, merging Dreyfuss with cybernetics once again. This process of crafting personas to test prototypes, outlined by Shneiderman, is a practice which has continued into the present day.
The work of scholars such as John Harwood and Terry Winograd demonstrates the permanence of Joe and Josephine in the history of technology. In The Interface: IBM and the Transformation of Corporate Design, 1945-1976, Harwood describes The Measure of Man, a 1959 publication by Dreyfuss which expounded on Joe and Josephine. Harwood finds that The Measure of Man is the primary source for graphic and ergonomic standards within the United States, England, and Canada, calling it “the first and most important, comprehensive collection of human engineering or ergonomic data produced explicitly for architects and industrial designers” (Harwood, 94). Winograd echoes Harwood’s claims in an article titled “Discovering America: Reflections on Henry Dreyfuss and Designing for People.” Winograd notes that Dreyfuss has been a key figure in the creation of courses for Stanford’s d.school, as he is understood as having created the model for empathizing with the user via Joe and Josephine (Winograd, 2008). Both Winograd and Harwood reflect a common perception that Dreyfuss initiated a Kuhnian paradigm shift in the field of design. Through Joe and Josephine, Dreyfuss assisted designers in moving away from the linear development model of Fordism and towards one of circular, iterative feedback. Yet, it is precisely this heroic view of Dreyfuss that I wish to contest, for although Dreyfuss’ work is significant, Joe and Josephine introduced the use of biased data into product development. Indeed, Winograd mentions this flaw when he notes that with Joe and Josephine we must also “keep visible reminders of the subtler and less easily depictable social and cultural differences that determine the compatibility of people with products and interfaces…” (Winograd, 2008). However, I argue there is a deeper issue here which is emboldened by cybernetic theory and hidden in the construction of Joe and Josephine. While Joe and Josephine represent the “average” man and woman according to Dreyfuss, they also reflect his bias as a designer and his inability to recognize the quantified body as subjective.
The Designer as World Builder
In tracing the transition from homeostasis to reflexivity, Hayles makes note of a complication which elucidates this issue. In analyzing the work of Humberto Maturana and Francisco Varela, two second-order cyberneticians, she finds that they were system builders who created a system by drawing boundaries that determined what was to be included inside, and what was left out (How We Became Posthuman, 188). As Hayles writes, “Consistent with their base in the biological sciences, Maturana and Varela tend to assume rational observers…Granting constructive power to the observer may be epistemologically radical, but it is not necessarily politically or psychologically radical, for the rational observer can be assumed to exercise restraint” (How We Became Posthuman, 188). The solution to reflexivity conceived in second-order cybernetics is therefore flawed. If the rational observer can quantify the human subject, who is it that edits the observer? An image by computer scientist Jonathan Grudin visualizes this idea. In “The Computer Reaches Out: The Historical Continuity of Interface Design,” Grudin sketches the feedback process between the user and the computer (Grudin, 1989). In the image, a computer reaches out to a user, and the user reaches back. The user is also connected to a wider network of users, who reach back to the user, and therefore to the computer as well. In this system, there is an endless chain of interaction between user and observer, calling into question who is observing whom. As such, no one user can claim to be a world-builder, as they are enmeshed in a socio-material environment.
Dreyfuss, however, claims this title. Joe and Josephine not only represent universal versions of man and woman like Adam and Eve, but they are the “hero” and “heroine” of Designing for People. Yet, as Russell Flinchum writes in the book Henry Dreyfuss, Industrial Designer: The Man in the Brown Suit, a “hodgepodge” of information was interpreted by Dreyfuss’ designer Alvin Tilley to construct Joe and Josephine (Flinchum, 87). Additionally, while the exact reports that Dreyfuss drew from are unclear, we can surmise which ones he likely used. In an oral history with Niels Diffrient, one of Dreyfuss’ designers who later iterated on Joe and Josephine, Diffrient states:
...Henry himself had the brilliance, after the Second World War, in which he had done some wartime work of carrying on what he'd learned about human factors engineering...You see, a lot of the war equipment had gotten so complex that people didn't fit into things and couldn't operate things well, like fighter planes, all the controls and everything...So a specialty grew up — it had been there, but hadn't gone very far — called human factors engineering...we found out about these people who were accumulating data on the sizes of people and began to get a storehouse, a file, on data pulled together from Army records, the biggest of which, by the way, and the start of a lot of human factors data, was the information they had for doing uniforms because they had people of all sizes and shapes. (Oral History with Niels Diffrient, Archives of American Art, 2010).
In a later letter to Tilley, Tilley was asked about the specific type of Army data, helping to track which files Dreyfuss may have obtained. The inquirer states that “‘...Douglas Aircraft called to ask if it [The Measure of Man] was available...He asked if the source or sources from which all this data was gathered has been noted’” (Archives of American Art, 2010). Dreyfuss, who had worked on projects for the Vultee Aircraft company during the war, is therefore likely to have used Air Force data as a major source for Joe and Josephine (Flinchum, 1997). A report on military anthropometric practices validates this claim. The report, titled “Sampling and Data Gathering Strategies for Future USAF Anthropometry,” mentions that the work of Francis Randall at Wright Field was an excellent example of proper data collection practices during WWII (Churchill & McConville, 1976). Randall’s document, “Human Body Size in Military Aircraft and Personal Equipment,” includes countless drawings of fighter pilot dimensions (Randall, 1946). In the book The Measure of Man and Woman, which improved on the designs of Joe and Josephine, Dreyfuss’ team appears to have been inspired by the depictions of fighter pilots in Randall’s work. A comparison of an image of Joe in a compartment with images of fighter pilots demonstrates how closely aligned Dreyfuss was to military practices.
However, Randall’s report also indicates the long-standing practice of classifying and quantifying bodies based on normative standards prevalent within a specific cultural moment. The manipulation of bodies for military data collection practices, and the exclusion of bodies that do not fit a certain “norm” from these data sets, has a long history that cannot be revisited here, but it indicates that the inspiration for Joe and Josephine was based on biased data. Consequently, the shapes of the Joe and Josephine personas, which heavily influenced both industrial design and computer design practices, represent biased images. There must be continued investigation into which reports Dreyfuss gathered, but it appears likely that he used skewed data to construct his influential designs.
It is difficult to measure the outcome of such flawed practices, but the work of Dreyfuss has resonated throughout the century. The ripple effect of Joe and Josephine, and the countless products drafted from these designs, brings forth a new variable to consider in the construction of digital products. This paper is therefore a response to the many accounts which have canonized Dreyfuss within the history of industrial design and, consequently, the history of interaction design. As demonstrated through the reference to Winograd, Dreyfuss’ efforts are taught in the classroom. However, through the conception of both real and imagined spaces, designers envision an ideal user, and this user can either represent the multiplicity of complex, messy, and beautiful bodies, or it can represent a “universal” ideal which never truly existed. Tracing the genealogy of these imagined users to their origins is essential for improving the testing practices of our modern moment.
Augenfeld, F. (1955, August 6). Masterpieces for Macy's. The Nation.
Blake, P. (1955, May 15). The Skin Men and the Bone Men. The New York Times.
Blakinger, J. R. (2019). Gyorgy Kepes: Undreaming the Bauhaus. Cambridge, MA: The MIT Press.
Brick, H. (1992). Optimism of the mind: Imagining postindustrial society in the 1960s and 1970s. American Quarterly, 44(3), 348. doi:10.2307/2712981
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Conference on Fairness, Accountability, and Transparency, Proceedings of Machine Learning Research.
Churchill, E., & McConville, J. T. (1976). Sampling and data gathering strategies for future USAF anthropometry. Wright-Patterson Air Force Base, OH: Aerospace Medical Research Laboratory.
Cogdell, C. (2004). Eugenic design: Streamlining America in the 1930s. Philadelphia, PA: University of Pennsylvania Press.
Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. Cambridge, MA: The MIT Press.
Dreyfuss, H. (1976). The measure of man. New York, NY: Watson-Guptill.
Dreyfuss, H. (2012). Designing for People. New York, NY: Allworth Press.
Dreyfuss, H. (2014). Posters, The Measure of Man (Male and Female) [Cooper Hewitt Design Museum]. Retrieved 2022, from https://collection.cooperhewitt.org/objects/51497617
Winograd, T. (2008). Discovering America: Reflections on Henry Dreyfuss, Designing for People. In T. Erickson & D. McDonald (Eds.), HCI Remixed: Essays on Works That Have Influenced the HCI Community. Cambridge, MA: MIT Press.
Flinchum, R. (1997). Henry Dreyfuss, Industrial designer: The man in the brown suit. New York: Rizzoli.
Golec, M. (2002). A Natural History of a Disembodied Eye: The Structure of Gyorgy Kepes's Language of Vision. Design Issues, 18(2), 3-16. doi:10.1162/074793602317355747
Grudin, J. (1989). The Computer Reaches Out: The Historical Continuity of Interface Design. DAIMI Report Series, 18(299). doi:10.7146/dpb.v18i299.6693
Hamraie, A. (2017). Building access: Universal design and the Politics of Disability. Minneapolis, MN: University of Minnesota Press.
Harwood, J. (2016). The interface: IBM and the transformation of corporate design, 1945-1976. Minneapolis, MN: University of Minnesota Press.
Hayles, N. K. (1994). Boundary disputes: Homeostasis, reflexivity, and the foundations of cybernetics. Configurations, 2(3), 441-467. doi:10.1353/con.1994.0038
Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press.
López-Durán, F. (2019). Eugenics in the garden: Transatlantic architecture and the crafting of modernity. Austin, Texas: University of Texas Press.
Oral history interview with Niels Diffrient. (2010). Retrieved March 7, 2022, from https://www.aaa.si.edu/collections/interviews/oral-history-interview-niels-diffrient-15875
Randall, F. E. (1946). Human body size in military aircraft and personal equipment. Dayton, OH: Army Air Forces Air Materiel Command.
Shneiderman, B. (1979). Human Factors Experiments in Designing Interactive Systems. Computer, 12(12), 9-19. doi:10.1109/mc.1979.1658571
Tilley, A. R., & Henry Dreyfuss Associates. (1993). Drawing 36. The Measure of Man and Woman.
Vultee Aircraft, Inc., military aircraft. (n.d.). Retrieved March 7, 2022, from https://www.loc.gov/item/2003690505/.
Wiener, N. (1967). The Human Use of Human Beings: Cybernetics and Society. New York, NY: Avon Books.
Yost, J. R. (2017). Making IT Work: A History of the Computer Services Industry. Cambridge, MA: MIT Press.
Caitlin Cary Burke (March 2022). “Henry Dreyfuss, User Personas, and the Cybernetic Body.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 32-44.
About the author: Caitlin Burke is a Communication PhD student at Stanford University, where she studies user experience design, design ethics, media history, and human-computer interaction.
The first finding is that long before computers, the Internet, or social media became available, people on both sides of the Atlantic were heavily dependent on organized (usually published) information on a regular basis. Studies about the history of the week, children’s education, work-related activities, and devotion to religious and community practices have made that abundantly clear. The challenge for historians now, therefore, is to determine how best to catalog, categorize, and study this sprawling subject of information studies in some integrative, rational fashion. Do we continue to merely study the history of specific pieces of software or machinery, ephemera such as newspapers and books, or providers of information such as publishers and Google?
A Framework for Studying Information’s History
In my three-volume Digital Hand (2004-2008) and subsequently in All the Facts: A History of Information in the United States Since 1870 (2016), I shifted partially away from exploring the role of providers of information and its ephemera toward how people used facts—information and data. In the research process, categories of everyday information began to emerge, and so did periods (think epochs, eras). As with most historical eras, these overlapped, signaling a changing world.
The same held true for the history of the types and uses of information and, of course, the technologies underpinning them. We still use landlines alongside smartphones; we still fill out paper forms asking for information requested for a century, as well as online ones; and we use both First-Class mail and e-mail. Publishers Weekly routinely reports that only 20 percent of readers consume some e-books; 80 percent of all readers still rely on paper books, so old norms still apply. Apple may post a user manual on its website, but buy an HP printer and you are likely to find a paper manual in its shipping box.
All the Facts reported types of information ephemera that existed from the 1800s to the present, supplemented over time by additions that did not replace earlier formats. Obvious new additions were electrified information, such as the telegraph, telephone, radio, and TV. Paper-based information was produced more efficiently by people using typewriters and better-quality pens and pencils, and stored in wooden, later metal, file cabinets and on 3 x 5 cards, still later in computers, PCs, smartphones, and now digital doorbell databases. Each improvement also made it easier to store information in logical ways, such as on 3 x 5 cards or in folders.
The volume of their use grew massively; humble photographs of the interiors of homes and offices taken over the past 150 years illustrate that behavior, as does the evolution of the camera, which too is an information-handling device. Commonly used ephemera across the entire period include newspapers, magazines, books, telegraphy, telephones, radios, television, personal computers, smartphones, and other digital devices, all arriving in that order. So, any chronology or framework should take into account their use. If you are reading this essay in the 2020s, you are familiar with the myriad ways you have relied on information and appropriated these devices, with the probable exception of the telegraph, which passed into history by the early 1970s.
A second category of activities that any framework needs to incorporate, because it remained a constant topic of concern across the entire two centuries, is the information people needed to lead their personal lives, such as medical information to cure illnesses, political information to inform their opinions and voting practices, and so forth. Historians now better understand that work-related activities required massively increased uses of information to standardize work processes, run ever-larger organizations, and provide new products and services. I, and others, continue to study those realms of information use, because they kept evolving and expanding across the past two centuries—a process that shows no signs of slowing. The historical evidence points, however, to several categories of information evident in private life in this period. These include consulting published—I call it organized—information on taking care of one’s home and raising children, sports and hobbies, vacations, and interacting with one’s church, community and non-profit organizations, and with government agencies at all levels. Participation in civic and religious institutions, in particular, represented a “growth industry” for information across the two centuries. Sales volumes for books and magazines provide ample evidence of this, just as sales statistics for PCs and smartphones do today. People also relied on information available in public spaces: public libraries; billboard advertisements; government signs and messages along roads and highways, both painted and digitized; advertisements on the sides of buildings; and a massive increase in the use of maps available from publishers, from state highway departments, and as metal signs on roads. Users worried about privacy issues, a concern expressed in North America as early as the 1600s and still with us today.
Role of the Internet
But what about the Internet? By now the reader will have concluded that everything already mentioned had rapidly migrated to the Internet too, certainly by the very early 2000s. We have already created frameworks for phases in the development and use of the Internet, such that we accept 1994-1996 as phase one of wide use (adoption or appropriation), 1997-1998 as a second phase with the ability to conduct interactive information exchanges, a third, beginning in about 2002-2003, with the introduction of feedback loops, and yet another involving the adoption of social media applications soon after. Each had its applications: Phase 1 with product brochures, mailing addresses, telephone numbers, and some e-mail; Phase 2 with intranets, databases, order taking, and organizational news; Phase 3 with seeking feedback, customer engagement, and business partner collaboration; and Phase 4 with the posting of personal information (remember the photographs of cats on Facebook?), communities of practice and customers sharing information, including churches, civic organizations, and clubs, and the rise of information analytics. Historical data documented the rapid diffusion of these practices, such that over half the world today uses the Internet to share information (more on that later). Usage became a new central facet of people’s daily lives.
Regarding the Internet’s use, the most widely sought-after types of Internet-sourced information in its early stages, which continue to the present, were political information and, even more, pornography and health. Increasingly, too, people seek out games and, as always, “how to” advice. Libraries became spaces one could go to for access to the Internet. Starting in 2007, people across the world were able to access information more quickly and more often than before, due to the introduction of the smartphone.
In All the Facts we published a photograph of a public library in San Antonio, Texas, from 2013 that had no books; rather, it looked like an Apple Store with rows of screens. Today, such spaces are common in most public, school, and university libraries in scores of countries. Increasingly since the early 2000s, people have received growing amounts of news from sites on the Internet, and today news aggregators pull that together according to a user’s preferences for topics and timelines. Religion and raising children are widely covered by information sources on the Internet. In fact, by about 2015 the public expected that any organization had to have a presence on the Internet: civic organizations, every government agency one can imagine, schools, universities, industry and academic associations, stores (including brick-and-mortar versions), political parties, clubs, neighborhood associations, and even children’s playgroups. I found few exceptions to this statement when writing All the Facts.
Historians began to catalog the types of information that became available from these organizations beginning in the 1950s. Following the lead of librarians, who in the 1800s had started the cataloging process we are familiar with today, these catalogs included types of ephemera (e.g., books, magazines, journals) and topics (e.g., physics, history, economics). Historians are now beginning to go further: William Aspray and I are currently researching the types of fake information and their impact on truth and authenticity, while others are exploring what information people seek through the Internet and how people use information on social media.
As to categories of information: by 2014, for example, the Ford Motor Company was providing online information about the company, news, its products, the role of innovation, people and careers, media postings, space for customer feedback, contact addresses, stock data and investor facts, a social media sub-site, facts about customer support, automotive industry facts, and declarations about privacy policies. Meticulously documenting these categories of information for thousands of such organizations demonstrates the diversity—and similarity—of the types of information one came to expect. Note, however, that the information and functions cataloged about Ford had been available in paper-based forms since the 1920s, just not as easily or quickly accessible.
Returning to the pre-Internet era, it helps to think in terms of eras (phases) that go beyond when some form of ephemera became available. The ephemera or technologies themselves added diversity, convenience, speed, less expensive communications, and the capability of moving ever-increasing volumes of information. Historians have done considerable research on these five features. However, information historians are just beginning to realize that by focusing on the information itself, pivoting away from the technologies (e.g., books and computers), they can see the endurance of some topics—access to medical information, facts about raising children, or cooking recipes—regardless of format or technology used.
Thinking this way expands our appreciation for the extent of a society’s use of information and, just as relevant, how individuals used it too. In a series of books produced by Aspray, one could see how data-intensive the lives of people of all ages, socio-economic statuses, and interests became over time. I have argued in All the Facts and elsewhere that this kind of behavior, that is to say, ever-increasing reliance on organized information, had been on the rise since the early 1800s.
Recent Findings and Thinking
While All the Facts lays out the case for concluding that we live in a second information age—not THE Information Age of the post-World War II period—that book was published in 2016, and much has happened since then. Rapid changes in the realities facing historians of information keep pounding the shores of their intellectual endeavors on three beaches: Internet usage, fake news and misinformation, and the changing forms of information.
In 2021 the Pew Research Center reported that 95 percent of American adults living in urban centers used the Internet, as did 94 percent of suburban and 90 percent of rural residents. In comparison, in 2015, when writing All the Facts wrapped up, urbanites were at 89 percent, suburbanites at 90 percent, and rural residents at 81 percent. Since 2000, users have doubled as a percent of the total population; the overall number of Americans using the Internet in 2021 had reached 93 percent. Smartphone usage also increased and is now one of the top drivers of Internet usage, thanks to both the increased availability and affordability of this technology. Similar overall statistics could be cited for other OECD, Asian, and South American societies. Convenience and affordability combined are driving use all over the world, no longer just in the wealthiest societies.
Other surveys conducted in the United States by the Pew Research Center reported that in 2012 over 70 percent of residents thought the information they obtained was accurate and trustworthy. This was just before the furor over misinformation became a major concern in American society, expressed by both the politically energized Right and Left, by students of misinformation, and by many in the media and in senior government positions. But the types of information people sought were the same as in prior decades.
The problems survey respondents expressed emanated from where fake news or misinformation resided. Fake news and misinformation were not confined to sources on the Internet; they appeared in books, television programs, magazines, and radio programs, often promulgated by agents operating across multiple digital and paper-based platforms. Information scholars are increasingly turning their attention to this problem, as have Aspray and I, reporting our results in a series of books and papers. However, as he and I have emphasized and documented, this has been a concern and an overt activity since the eighteenth century.
In Fake News Nation (2019) we focused largely on political and industry-centered examples, while in a sequel, From Urban Legends to Political Fact-Checking: Online Scrutiny in America (2019), we began documenting the nation's response to this growing problem. The physical world's battles over politics and such issues as the role of tobacco, oil, and environmental damage had moved to the Internet, but they represented terrain fought over long before the web. If anything, the role of misinformation has spilled over into myriad issues important to today's citizens: health, vaccines, historical truths, racism, and product endorsements and descriptions, among others. Trusted sources for impartial news and information competed for attention with mischievous purveyors of misinformation and with people at large opining on all manner of subjects. These activities disseminating false or misleading information represent a new development of the past decade because of their sheer volume, even though their patterns are increasingly familiar to historians studying earlier decades, even centuries.
But perhaps the most interesting new research interest for historians is the nature of how information changes. To make All the Facts successful, it was enough, and highly revelatory, to document carefully the existence, extent, and use of information across essentially all classes, ethnic and racial groups, and ages, and to present a framework for gaining control over what otherwise were massive collections of organized information. That exercise made it possible to argue that any short list of research priorities for modern society (i.e., since at least the start of the Second Industrial Revolution) had to include the role of information in all manner of activity. During that project, however, two things became evident. First, information itself (or what constituted information) was changing, not simply increasing or becoming more diverse and voluminous. Second, that transformation of information and new bodies of fact were leading to the emergence of new professions and disciplines, along with their social infrastructures, such as professorships, associations, and literature.
For example, regarding changing information: it became increasingly electrified, beginning with the telegraph in the 1840s and continuing to today's "signals" that computer scientists and even biologists explore. There are biologists and other scientists who argue that information is a ubiquitous component of the universe, just as we have accepted that same idea regarding the presence of energy. Intelligence could no longer be limited to the anthropomorphic definitions that humans had embraced, emblematically called artificial intelligence. Trees communicate with each other, as do squirrels and birds, about matters relevant to their daily lives.
Regarding the second point, the development of new professions: before the 1870s there was insufficient knowledge about electricity to create the profession of electrician, but by the 1880s it existed, rapidly developed its own body of information and professional practices, and became a licensed trade. In the years that followed, medical disciplines, plumbing, accounting, business management, scientists in all manner of fields, and, even later, airplane pilots, radio engineers, and astronauts became part of modern society. They all developed their associations, published specialized magazines and journals, held annual conventions and other profession-centered meetings, and so forth. Probably every reader of this essay is a product of that kind of transformation.
Prior to the mid-nineteenth century, most professions had been relatively stable for millennia, as had the percentages of populations engaged in subsistence agriculture, law, religion, warfare, and the tiny cohort of artisans. That reality has been thoroughly documented by economic historians, such as Angus Maddison in his voluminous statistical collections (2005, 2007), who pointed out that national income levels, economic productivity, and population did not change radically until more and different information began arriving. This was not a coincidence.
Understanding how information transformed, and its effects on society, is a far more important subject to investigate than what went into All the Facts because, like the investigations underway about misinformation, it reaches into the very heart of how today's societies are shaped and function. The earlier book was needed to establish that there was a great deal more for historians to communicate than what could be learned by limiting their studies to the history of books or newspapers, or to the insufficient number of studies done about academic and discipline-centered institutions.
Now we will need to explore more carefully how information changed. I propose that this be done initially by exploring the history of specific academic disciplines and the evolution of their knowledge bases. That means understanding the role of, for instance, economics, physics, chemistry, biology, history, engineering, computer science, and librarianship, and then comparing each with other disciplines. This is a tall order, but essential if one is to understand patterns of emerging collections of information and how they were deployed, before we can realistically jump to conclusions about their impact. Too often "thought leaders" and "influencers" do just that, in the process selling many books and articles but without the empirical understanding that the topic warrants.
That is one opinion about next steps. Another is that the democratization of information creation and dissemination is more important. The argument holds that professionals and academics are no longer the main generators of information; millions of ordinary people are instead. There are two problems with this logic, however. First, such an observation is about today's activities, while historians want to focus on earlier ones, such as information generation prior to the availability of social media. Second, there is a huge debate underway about whether all of today's "information generators" are producing information or misinformation, or are simply opining. As a historian and an avid follower of social media experts, I would argue that the issue has not been authoritatively settled, and so the actions of the experts still endure, facilitated by the fact that they control governments, businesses, and civic organizations.
I am close to completing the first of two books dealing precisely with the issue of how information transformed. It took me 40-plus years of studying the history of information to realize that how it changed was perhaps the most important aspect of that history to understand. That realization had been obscured by a lack of precision in understanding what information existed. We historians approached the topic in too fragmented a way; I am guilty, too, as charged. That is not to say that the history of information technology, my home sub-discipline of history and work, should be diminished, but rather that IT's role is far more important to understand because it is situated in a far larger ecosystem, one that even transcends the activities of human beings.
Aspray, William (2022). Information Issues for Older Americans. Rowman & Littlefield.
Aspray, William and James W. Cortada (2019). From Urban Legends to Political Fact-Checking: Online Scrutiny in America. Rowman & Littlefield.
Aspray, William and Barbara M. Hayes (2011). Everyday Information. MIT Press.
Bakardjieva, Maria (2005). Internet Society: The Internet in Everyday Life. Sage.
Blair, Ann, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton, eds. (2021). Information: A Historical Companion. Princeton.
Chandler, Alfred D., Jr. and James W. Cortada, eds. (2002). A Nation Transformed by Information. Oxford.
Cortada, James W. (2016). All the Facts: A History of Information in the United States Since 1870. Oxford.
Cortada, James W. (2021). Building Blocks of Society. Rowman & Littlefield.
Cortada, James W. (2004-2008). The Digital Hand. Oxford.
Cortada, James W. (2020). Living with Computers. Springer.
Cortada, James W. (2002). Making the Information Society. Financial Times & Prentice Hall.
Cortada, James W. and William Aspray (2019). Fake News Nation. Rowman & Littlefield.
Gorichanaz, Tim (2020). Information Experience in Theory and Design. Emerald Publishing.
Haythornthwaite, Caroline and Barry Wellman, eds. (2002). The Internet in Everyday Life. Wiley-Blackwell.
Maddison, Angus (2007). Contours of the World Economy, 1-2030 AD. Oxford.
Maddison, Angus (2005). Growth and Interaction in the World Economy: The Roots of Modernity.
Ocepek, Melissa G. and William Aspray, eds. (2021). Deciding Where to Live. Rowman & Littlefield.
Zuboff, Shoshana (2019). The Age of Surveillance Capitalism. Public Affairs.
James W. Cortada (February 2022). “What We Are Learning About Popular Uses of Information, The American Experience.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 19-31.
About the author: James W. Cortada is a Senior Research Fellow at the Charles Babbage Institute, University of Minnesota—Twin Cities. He conducts research on the history of information and computing in business. He is the author of IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). He is currently conducting research on the role of information ecosystems and infrastructures.
Editors' note: This is a republication of an essay (the second one) from Blockchain and Society, a newly launched blog of essays by CBI Director Jeffrey Yost. As a one-time-only crossover at the launch of the blog and site, Interfaces is republishing an essay Yost wrote on gender inequity and disparity in participation in the development and use of cryptocurrency. This one-time republication is to introduce Interfaces readers to the blog, and its topic is an especially good fit with Interfaces' mission. Please consider also subscribing to the blog: https://blockchainandsociety.com/
Few Women on the Block: Legacy Codes and Gendered Coins
Jeffrey R. Yost
Abstract: Computing and software overall already exhibit major gender disparities in participation, but the decentralized cryptocurrency industry and space is far more skewed, at roughly 90 percent men and 10 percent women (computer science overall is around 20 percent women). This article explores the history of gender in computing, gender in access control systems, gender in intrusion detection systems, and the gender culture of the Cypherpunks to historically contextualize, and seek to better understand, contemporary gender disparity and inequities in cryptocurrency.
Given that decentralization is at the core of the design and rhetoric of cryptocurrency projects, the field often highlights, or hints at, small to mid-sized flat organizations and a dedication to inclusion. Crypto coin and platform projects' report cards on diversity, however, are uneven. While an overall diversity of BIPOC exists in cryptocurrency, it is quite unequal: the founding and leadership of Bitcoin (its team, that is; the creator is anonymous) and of the top 30 altcoins (alternatives to Bitcoin) are disproportionately white North Americans, Europeans, and Australians, along with East Asians. With gender, inequalities in participation and resources are especially pronounced. The half dozen or so surveys I found, spanning the past few years, suggest (in composite) that women's participation in the crypto workforce is at slightly less than 10 percent. There are few women on the block, far fewer percentagewise in cryptocurrency than the already quite gender-skewed low ratios in computing and software. On the adoption side, twice as many men own cryptocurrency as women.
This essay, on women in cryptocurrency, concentrates on gender inequities, as well as intersectionality. It discusses early research in this area, standout women leaders, and organizational efforts to address gender imbalances and biases. It places this discussion in larger historical contexts, including women in computing, women in security, women in cryptography, and women in, or funded by, venture capital. It also highlights the rare instances of women CEOs of cryptocurrency projects. Achieving greater gender balance is a critically important ethical issue. It also is good business: many studies show that corporations with gender balance on boards and women in top executive positions outperform. My essay posits that historical, terminological, spatial, and cultural partitions and biases block gender inclusion and amplify inequality in cryptocurrency development, maintenance, and use.
Major Gender Disparities in Cryptocurrency
A major study by the international news organization Quartz surveyed the 378 cryptocurrency projects between 2012 and 2018 that received venture capital funding (Hao, 2018). Many cryptocurrency projects do not have this luxury or take this path; they raise funds from friends and family, bootstrap, or rely on other means at the start. Venture capital funded projects tend to have greater resources and key connections to grow. Most of the largest coin projects have taken on venture capital support at some point in their young histories. The dynamic is self-reinforcing: rich projects tend to grow richer through R&D, marketing, and the momentum of network effects, per Metcalfe's Law (the value of a network is proportional to the square of its number of users), while under-resourced coin projects often cease within several years as capitalizations descend toward zero.
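To make the network-effect arithmetic concrete, here is a minimal sketch in Python; the proportionality constant and user counts are hypothetical, chosen only to show how Metcalfe's Law compounds a funding and user-base lead.

    def metcalfe_value(users: int, k: float = 1.0) -> float:
        """Estimate network value under Metcalfe's Law: V = k * n**2.

        k is an arbitrary proportionality constant; only the ratios
        between networks matter in this illustration.
        """
        return k * users ** 2

    # Two hypothetical coin communities, one ten times larger than the other.
    small = metcalfe_value(10_000)
    large = metcalfe_value(100_000)

    # A 10x lead in users implies a 100x lead in estimated network value,
    # which is why well-funded projects tend to pull further ahead.
    print(f"value ratio: {large / small:.0f}x")  # prints "value ratio: 100x"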
Of these 378 venture-funded projects, only 8.5 percent had a woman founder or co-founder. Venture capital (VC) is dominated by men, at about 90 percent, and in terms of partners and senior positions at major VC firms the disparities are even starker (as reported in the NBER Digest, September 2017). The venture domain is also very heavily biased toward funding projects of white male entrepreneurs, and this is even more skewed in terms of capital offered or deployed. To illustrate, a study by the software and finance data firm PitchBook found that in 2018 women founders received just 2.3 percent of the total venture capital funding raised in the crypto universe, as reported by Forbes (Gross, 2019).
In the information technology (IT) field broadly, roughly 18 percent of projects have a woman leader or co-leader. Even against this quite low IT baseline, crypto is disturbingly lower; in fact, it is well under half that level.
On the adoption and use side the pattern differs from that for BIPOC, whose adoption rate nearly doubles that of whites in the US (measured as the participation rate of owners at any level, not as crypto wealth): women holders of crypto are only half as numerous as men. Men are two-thirds of crypto holders/users and women are just one-third.
Looking Backward at Backward, Gendered Computing Cultures
Computing is a field that has had substantial and important technical contributions by women from the start. This dates to the women who programmed the ENIAC, the first meaningful electronic digital computer, in the mid-1940s to early 1950s. At the same time, the field and the industry have been held back by discrimination in hiring, and environments have been heavily male gendered from the beginning. This has been true in the U.S., as documented in the tremendous scholarship of Janet Abbate (Abbate, 2012) and others, and in the United Kingdom, as shown in the masterful work of Mar Hicks (Hicks, 2017).
Gender in IT remains substantially understudied, especially in some geographies. There is also a dearth of literature regarding some industry segments, corporations, and organizations on the production side, as well as regarding much of the maintenance and use domains. Discriminatory practices against women and transgender people have been and remain pronounced in the military, national and local governments, industry, national laboratories, nonprofits, universities, and beyond.
Thomas Misa's pathbreaking, deeply researched work, published in Communications of the ACM and part of a larger book project, indicates there was no golden age of women's participation in the early years, but rather continuously evident, steady, low, and range-bound participation, between the high single digits and the upper teens percentagewise, from the mid-1950s to the mid-1970s (Misa, 2021). His research draws on the best available data for the early years, user groups (in combining Misa's totals for the groups for the figures above, I give extra weight to IBM's user group SHARE, Inc., since IBM held 60-plus percent of the industry and its nearest competitor was always under 15 percent). Following this two-decade span was a gradual upward trend that ramped up in the 1980s; late in that decade, women's participation in computer science education and the workforce peaked at 37 to 38 percent. In the 1990s it fell sharply, as Misa and other authors explored in his important edited volume, Gender Codes (2010).
Participation, environment, culture, and advancement are all important. My own work has helped to show gender inequality in promotion to leadership roles in software and services companies in the US, especially pre-2000 (Yost, 2017). In recent years and decades, women's participation as computer science majors at US universities has hovered around 20 percent. Why the huge drop, and a recovery to only about half the former peak? The widespread adoption of PCs, the gendering of early PCs, gendered gaming (especially popular shooter games), the rise of male geek culture, and inhospitable environments for women are among the likely factors, as the publications of Abbate, Hicks, Misa, and others richly discuss. More attractive opportunities in law, medicine, and business outside IT are likely factors too, as participation in these areas rose while computing participation fell. These professional areas were far from free of discrimination, but on a relative basis they may have had less.
Gender in Different Computer Security Environments
A half decade ago I co-led a major computer security history project for the National Science Foundation (NSF Award Number: 1116862), and I am thrilled that, just yesterday, we received another multiyear NSF grant, on privacy and security, a CBI project I am co-leading with G. Con Diaz. From the earlier project I published "The March of IDES: The Advent and Early History of the Intrusion Detection Expert Systems" (Yost, 2016), in which I highlighted gender in one important area of computer security, intrusion detection. Early intrusion detection involved manually printing out logs and painstakingly reviewing the printouts, as the eyes of the security officers, auditors, and systems administrators who did this work glazed over. As computer use grew, fan-folded printouts rose in multiple stacks toward the ceiling at many computer centers, and the volume soon overwhelmed manual review. In the 1980s, automated systems were developed to flag anomalies for selective review by humans, and the artificial intelligence of expert systems was first applied in pioneering work to help meet these growing challenges (Yost, 2016).
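The core idea of those early automated detectors, computing a simple per-user statistic and surfacing outliers for human review, can be suggested in a few lines of Python. This is purely an illustrative sketch, not any historical system's actual logic; the log format, field names, and threshold are hypothetical.

    from collections import Counter

    def flag_anomalies(log_lines, threshold=1.5):
        """Flag users whose failed-login counts far exceed the average.

        A toy stand-in for early automated intrusion detection: compute
        a simple per-user statistic, then surface outliers so a human
        reviews only the flagged cases rather than every log line.
        """
        failures = Counter(
            line.split()[0]                  # hypothetical "user action" format
            for line in log_lines
            if "FAILED_LOGIN" in line
        )
        if not failures:
            return []
        mean = sum(failures.values()) / len(failures)
        return [user for user, count in failures.items()
                if count > threshold * mean]

    logs = ["alice FAILED_LOGIN", "bob LOGIN"] + ["mallory FAILED_LOGIN"] * 20
    print(flag_anomalies(logs))  # prints "['mallory']"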
The National Security Agency had a very important pioneering research program in the 1980s and 1990s to fund outside intrusion detection work, called Computer Misuse and Anomaly Detection, or CMAD. This program was led by Rebecca Bace. The dollar amounts were not huge (they did not need to be), and Bace, with great energy and skill, expertly worked with the community to get much pioneering work off the ground toward impactful R&D at universities, national labs, and nonprofit research corporations like SRI. In conducting oral histories with Dorothy Denning, Teresa Lunt, and Becky Bace (full texts of these published interviews are at the CBI Website/UL Digital Conservancy), I got a sense of the truly insightful scientific and managerial leadership of the three of them (Yost, 2016).
The accelerating, sometimes playful, but also quite malicious and dangerous hacking of the 1970s and 1980s (for those Gen Xers and boomers reading this, remember War Games, and some non-fictional scares written about in newspapers?) became a serious problem. The US Government often was a core target of individual and state-sponsored hackers in the Cold War. This fostered a need (and federal contracts) for the field of intrusion detection systems. As such, over time there increasingly were funds and contracts to complement the modest grants, often under $100,000, provided by Bace's NSA CMAD program.
This resulted in essentially a new computer science specialty opening at universities in the 1980s and 1990s: intrusion detection, a subset of computer security. There were some standout male scientists also, but at the origin, and for years to follow, women computer scientists disproportionately were the core intellectual and project leaders. Women scientists such as Denning, Lunt, and Bace, as well as Kathleen Jackson (NADIR at Los Alamos) and others, headed the top early projects and provided the insightful technical and managerial leadership for this computer security and computer science specialty to thrive (Yost, 2016).
Another computer security area I researched for NSF was access control systems and standards. This was all about knowing how operating systems worked, secure kernels, and the like. It was by far the largest computer security field in terms of participants, papers, funding, and standard-setting efforts, and it was overwhelmingly male. Operating systems (OS) was an established research area prior to access control becoming a key domain within it. Access control arose within the larger OS domain in response to breaches in the earliest time-sharing systems in government and at universities. MIT's pioneering early-1960s Compatible Time-Sharing System (CTSS) had little security; with its successor of the late 1960s and beyond, MULTICS, project leader Fernando Corbato and other top computer scientists at MIT, like Jerome Saltzer, made security design central to the effort.
Operating systems research and development, in academia, industry, the DoD, DOE, etc. was overwhelmingly male and very well-funded. It followed that access control became an overwhelmingly male specialty of computer security and received strong federal research program and contract support.
Reflecting on this prior scholarship, with women as the key leaders of the new (1980s) intrusion detection area and men the leaders of many of the most important operating system and access control projects, I have been pondering whether it provides any context or clues as to why, to date, the founders of cryptocurrency projects have largely been men. At the very least, I think it is suggestive regarding established and new specialties, connections between them, historical trajectories, and gender opportunities and participation. A wholly new area, existing alongside a dominant, more visible, and better-funded one, can at times offer greater opportunities to newcomers, including women.
Following from this, I have begun to consider a related question: to what extent is cryptocurrency a new area offering new demographics and dynamics, and to what extent is it a continuation of the evolving field of cryptography? And how was this influenced by older cryptography and its renaissance in impactful new form, its new direction?
In the mid-1970s and 1980s, with the emergence and rapid growth of a new form of cryptography, public key cryptography developed a strong intellectual and institutional foundation, especially thanks to the work of six men who would later win the Turing Award: early crypto pioneers Whitfield Diffie and Martin Hellman (authors of the landmark 1976 "New Directions…" paper); Ron Rivest, Adi Shamir, and Leonard Adleman, the three from MIT known as RSA; and Silvio Micali, also of MIT. In addition to the RSA algorithm, Rivest, Shamir, and Adleman would start a company, RSA Data Security, which launched a pivotal event, the RSA Conference, and spun off an important part of its business, authentication, as Verisign, Inc. After some initial managerial and financial stumbles, the highly skilled James Bidzos would successfully lead RSA Data Security and, as Chair of the Board, Verisign.
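For readers new to public key cryptography, the central idea Diffie and Hellman introduced, that two parties can derive a shared secret over an open channel, can be shown with a toy sketch in Python. The prime and the private keys below are hypothetical and absurdly small; real systems use enormous parameters and vetted libraries.

    # Toy Diffie-Hellman key exchange (illustrative only; these parameters
    # are far too small for real security).
    p, g = 23, 5                  # public prime and generator, known to all

    a = 6                         # Alice's private key (hypothetical)
    b = 15                        # Bob's private key (hypothetical)

    A = pow(g, a, p)              # Alice publishes g^a mod p
    B = pow(g, b, p)              # Bob publishes g^b mod p

    shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
    shared_bob = pow(A, b, p)     # Bob computes (g^a)^b mod p

    # Both arrive at the same secret without ever transmitting it.
    assert shared_alice == shared_bob
    print(shared_alice)           # prints 2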
In addition to his Turing Award, Micali had earlier won the Gödel Prize. In 2017, Micali founded Algorand, a "Proof-of-Stake" altcoin project now capitalized at more than $10 billion; along with running it, he is a computer science professor at MIT. Algorand offers much in being environmentally sound (low energy to mine) and scalable, and it possesses strong security.
Cryptocurrency: Both a New and an Older Space
The excellent book by Finn Brunton, Digital Cash (2019), and other articles and books addressing the cypherpunks, the cryptographic activists focused on privacy who sought to retake control through programming and systems, overwhelmingly feature male actors. In addition to Diffie and Hellman, appropriately revered for inventing public key cryptography (in the open community), most of the high-profile cypherpunks are male: Timothy May, Eric Hughes, John Gilmore, etc.
Yet it was one of the co-founders, Judith Milhon, known as "St. Jude," who coined the term Cypherpunk. The cypherpunks, whom journalist Steven Levy referred to as the "Code Rebels" in his book Crypto, were inspired in part by the work of Diffie and Hellman. The response of the National Security Agency (NSA) was to try to prevent private communications it could not surveil, and to thwart or restrict the development and proliferation of crypto it could not easily break. This included its work with IBM to keep the key length at a lower threshold for the Data Encryption Standard, or DES, which made the standard subject to the "brute force" of NSA's unparalleled computing power. Further, it is widely believed that NSA also worked to have a back door in IBM's DES, code containing a concealed and secret way into the cryptosystem, to enable surveillance of the public.
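A few lines of arithmetic show why key length was worth fighting over: each added bit doubles the brute-force search space. DES's 56-bit key is from the historical record; the tester speed below is a hypothetical round number, used only to make the scale of the arithmetic visible.

    # Each added key bit doubles the number of keys an attacker must try.
    keyspace_56 = 2 ** 56             # DES keyspace: about 7.2e16 keys
    keyspace_128 = 2 ** 128           # a modern-scale keyspace, for contrast

    keys_per_second = 1e9             # hypothetical brute-force tester speed
    seconds_per_year = 3600 * 24 * 365

    years_56 = keyspace_56 / keys_per_second / seconds_per_year
    years_128 = keyspace_128 / keys_per_second / seconds_per_year

    print(f"56-bit worst case:  {years_56:,.1f} years")   # roughly 2.3 years
    print(f"128-bit worst case: {years_128:.2e} years")   # roughly 1.1e22 years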
St. Jude: A Creative Force Among Early Cypherpunks
Born in Washington, DC in 1939, St. Jude was a self-taught programmer, hacker, activist, and writer. As a young adult she lived in Cleveland and was part of its Beat scene. She volunteered in organizing efforts and took part in the Selma to Montgomery Civil Rights March in 1965, for which she was arrested and jailed. Her mug shot is a commonly published photo of her, symbolic of her commitment to civil rights throughout her life. She moved from the East Coast to San Francisco in 1968, embracing the counterculture movement of the Bay Area. In the late 1960s she was a programmer for the Berkeley Computer Company, an outgrowth of the University of California, Berkeley's famed time-sharing Project Genie.
Active in Computer Professionals for Social Responsibility (CBI has the records of this important group), she was an influential voice in that organization. She was also one of the leaders of Project One's Resource One, the first public computer network bulletin board in the US, based in the San Francisco area. She was known for her strong belief that network computer access should be a right, not a privilege. She was an advocate for women technologists and acutely aware of the relative lack of women "hackers" (the term then meant skilled programmer, not necessarily its later meaning associated with malicious hacks).
St. Jude was a widely recognized feminist in computing and activism circles. She was among the founders of the "code rebels," and her giving the group the name that stuck, cypherpunks, suggests she had a voice in this male space (her writing and interviews strongly suggest this as well), though this was not necessarily, and probably not, indicative of a general acceptance of women in the group. Some of St. Jude's views were at odds with academic feminism and gender studies but may have fit more with the cypherpunks' ethos. She abhorred the political correctness she saw in academic communities and among educational and political elites. She believed technology would fix many problems, including the social problems of gender bias and discrimination. "Girls need modems" was her answer, an oft-repeated motto and rallying statement; it was what she felt was needed to level the playing field (Cross, 1995).
The lack of women among the cypherpunks, and St. Jude's great frustration that more women did not adopt her active hacker approach and ethic, likely suggest a dominant, male, biased culture that opened only to someone with the great talent, creativity, and interactive style she possessed.
St. Jude became a senior editor and writer at Mondo 2000, a predecessor publication whose style of writing about information technology Wired drew from. She also was lead author (with co-authors R.U. Sirius and Bart Nagel; Random House, 1995) of The Cyberpunk Handbook: The Real Cyberpunk Fakebook (the subtitle a bit prophetic, without intending the terminology, given the later-formed Facebook and its profiteering off fake news), and along with her journalism she wrote science fiction. Judith "St. Jude" Milhon died of cancer in 2003.
There definitely is a need for more historical research on gender and the cypherpunks, as well as on the sociology of gender in recent cryptocurrency projects, related intermediaries, and investors and users. Rudimentary contours nonetheless can be gently and lightly sketched from what is known. Names from the cypherpunks mailing list appearing in articles and the handful of books addressing the topic are about 90 percent male. At the start, St. Jude was the sole woman in this core group. If limited to those directly interested in and investigating possibilities with digital currencies before the advent of Satoshi Nakamoto's Bitcoin in 2008, it was even more male dominated.
As such, women role models were very few in early public key efforts, and more broadly among the code rebels or cypherpunks overall. There are deep connections of the cypherpunks to Bitcoin, and to other early coins as well. The young crypto entrepreneurs and activists of recent years and of today of course were never part of the group, but they nonetheless often grew an interest in it, were motivated by its past activity, and had reverence for Tim May, Eric Hughes, John Gilmore, and others. This perhaps led to fewer opportunities perceived to be, or actually, open to women, and likely to less recognition and consideration among women of pursuing this space.
Of the two exceptions of women in the upper echelons of cryptocurrency, one came from an equally talented and active wife-and-husband team (the Breitmans). The other was a truly exceptional individual, possibly deserving the term genius, who like Vitalik Buterin (Ethereum's lead founder) achieved amazing things at a young age, was exposed to the potential need for crypto, and was driven by the goal of socially impactful career success.
Tezos Co-Founder Kathleen Breitman
There are more than 14,000 altcoins; the top 30 are currently capitalized at $4 billion or more (value of circulating coins), and those not in the top 200 generally are less than $40 million in capitalization and in a precarious spot if they do not rise at least five-fold in the coming years. Many in the investment community have pejoratively labeled lesser-capitalized altcoins (and, for some Bitcoin enthusiasts, all altcoins) as "sh*t coins." The cryptocurrency industry has given rise to a growing cadre of specialized trade and investment journalists, following Ethereum founder Vitalik Buterin's initial pre-Ethereum pursuit of coin journalism in creating Bitcoin Magazine. These include journalists, analysts, and evangelists (often all wrapped into one) at e-magazines such as The Daily HODL and Coin Telegraph, two of the more respected larger publications among many others. They write mainly on the top 50 coins, what most in the investment community care about, and thus are writing very heavily about men, a reinforcing mechanism hindering perceived and real opportunities for women.
Of the top 30 coins, only two have a woman founder or principal co-founder; none has a sole woman founder or an all-women leadership team, and many are all male at the top. A few coins have a longer founders' list, in the upper single digits. The two women principal co-founders of major altcoins are Kathleen Breitman of Tezos and Joyce Kim of Stellar Lumens. Tezos is $4 billion in capitalization and ranks 28th among altcoins; Stellar Lumens is at $4.8 billion and ranks 22nd.
Tezos, a "Proof-of-Stake"-modeled coin project, was co-founded by Kathleen Breitman and her husband Arthur Breitman in 2018, along with a Tezos foundation created by Johann Gevers. Kathleen Breitman studied at Cornell University before joining a hedge fund and working as a consultant; Arthur Breitman is a computer scientist who worked in quantitative finance prior to Tezos. A dispute with the foundation and Gevers led the Breitmans into a lawsuit, which delayed the launch and hurt the project; ultimately a payout settled the matter. Kathleen Breitman has stated that she has been underestimated in the industry, as some assume her husband is the real creator when they very much co-created Tezos, technically and organizationally.
Stellar Lumens' Co-Founder Joyce Kim
To say Joyce Kim's career is impressive is an understatement; stellar is in fact quite fitting. Kim, a second-generation Korean American, grew up in New York City attending the High School for the Humanities and graduated from Cornell University at age 19. Kim followed this with graduate school at Harvard University and law school at Columbia University. She became a corporate M&A attorney while also doing pro bono work for Sanctuary for Families and for the Innocence Project. Back in high school she witnessed the trouble and expense lower-income people globally faced in sending money to families; this likely was evident as well in her work at Sanctuary for Families.
After success with co-founding Simplehoney, a mobile ecommerce firm, as well as founding and serving as CEO of a Korean entertainment site, she became one of the rare (percentagewise) women in venture capital, working at Freestyle Capital. Focusing on the power of social capital, she soon partnered with Ripple founder Jed McCaleb in 2014 to found the open-source blockchain-based coin, network, and platform project Stellar Lumens, an effort of the nonprofit Stellar Development Foundation.
Kim's motivation and vision for Stellar were driven by the fact that 35 percent of adult women globally (World Bank statistics) do not have a bank account, despite many of them saving regularly. As such, they have trouble protecting, sending, and receiving funds, paying bills, and helping family. Stellar as a platform and network allows people to send funds, at low cost and in low sums, as easily as sending a DM or email. With 6.3 billion people in the world with smartphones, and perhaps as many as 20 percent of them without a bank account, Stellar Lumens addresses a critical problem and serves a great societal need. The coin Celo is also in this very important area, making a positive difference in the world. Stellar Lumens (and Celo) change lives and empower lower-income people, especially women, as women are less likely than men to have bank accounts due to discrimination and lesser resources. As Kim told Fast Company in an interview shortly after the founding, with Stellar she "found her true north" (Dishman, 2015). In addition to Stellar Lumens, Kim recently served as a Director Fellow at the famed MIT Media Laboratory.
Kim has since moved on from being Executive Director of Stellar and from the day-to-day of the coin, and, in addition to the prestigious MIT fellowship, she is having an impact, socially and financially, in the crypto venture capital arena, an area that could benefit from more women. Kim is the Managing Partner at SparkChain Capital.
Mining Deeper: Guapcoin’s Tavonia Evans and the African Diaspora Community
At coins not in the top 30, 50, or 100 in capitalization, project teams work toward, and hope, their technology and mission will one day carry them to much higher levels. There are people and projects behind the coins, and that is sometimes disrespectfully forgotten when investors or others refer to coins and projects in derogatory terms.
I wanted to research a coin in the middle third of the 14,000 or so coins out there in current capitalization, and I was deeply moved by learning about Guapcoin and its tremendous mission. It was founded in 2017 by African American data scientist Tavonia Evans. Evans, a mother of eight, had earlier founded a peer-to-peer platform company but was unsuccessful at getting venture funding. Venture capital is not a level playing field, and far less than one percent of venture funding goes to businesses led by African American women. At this intersection, African American women, societal bias in finance is particularly pronounced.
The inability to get funding for that business led her to move on and inspired her Guapcoin project, a cryptocurrency focused on addressing "the economic and financial concerns of the Global African Diaspora community." Evans's vision with Guapcoin goes beyond merely being a means of exchange for the Global African Diaspora community, and for "Buying Black"; it is also a property protection mechanism that combats gentrification and documents all forms of property ownership (from real estate, to copyright, to music licenses) so "the Black and Brown community will have its wealth protected by a transparent, immutable blockchain."
In 2019, Evans and Guapcoin founded the Guap Foundation to permanently ensure the mission of the coin project is carried out. Many altcoins have associated foundations to both further and to protect the integrity of the mission for generations to come (guapcoin.org).
It is with amazing, socially oriented, and green projects like Guapcoin, Stellar Lumens, and Celo that I realized my initial negative perspective on cryptocurrency from several years back, rooted in my very critical views on the environmental impact of Bitcoin, was sorely misguided for many of the altcoins launched in 2016 and later, and for Ethereum (launched in 2015), which is converting to Proof-of-Stake as a consensus model to become green.
“Meetups” and Standout Early Scholarship on Gender and Cryptocurrency
There are a mere handful of published scholarly studies to date examining gender and cryptocurrency. One stood out to me in being especially compelling in its creative methodology, insights, and importance. Simon Fraser University’s Philippa R. Adams, Julie Frizzo-Barker, Betty B. Ackah, and Peter A. Chow-White designed a project where they engaged in participant observation and interaction with over a half dozen “Meetup” events that were primarily, or at least in part, marketed to women, often to educate, encourage, or address gender disparity in cryptocurrency. All of these were in the Vancouver, British Columbia, metropolitan area.
Adams and her co-authors do a wonderful job of interpreting, analyzing, and eloquently conveying the meaning of these events. Some meetups were well designed and executed to support and empower women in this new industry and space. Others were far less effective, succumbing to the challenges of "trying to support adoption of a new technology," or ended up presenting more resistance than support. I urge you to read this excellent work of scholarship (Adams, et al.); the chapter is in the recommended-readings volume edited by Massimo Ragnedda and Giuseppe Destefanis (2019), which is an excellent book overall and one of the first quality social science books on the emerging Web 3.0.
Educational and Empowerment Organizations and Looking Forward
In addition to meetup events that are local in origin, a growing number of nationwide education and advocacy support organizations by and for women in cryptocurrency have emerged. Some foster local meetup events; others have other supportive programs.
In Brooklyn, New York, Maggie Love founded SheFi.org, seeing blockchain as a powerful tool for more inclusive and equitable financial tools and systems. It engages in education to advance understanding and opportunities for women in blockchain and decentralized finance.
Global Women in Blockchain Foundation is an umbrella international organization without a central physical headquarters (in the spirit of the technology and decentralization). It is designed to accelerate women’s leadership roles in blockchain education and technology. The websites for these two organizations can be found on this site in the list of organizations.
Efforts to reduce the tremendous gender gap in cryptocurrency development projects, and especially in founder roles and leadership posts, are extremely important, both ethically and for the creativity, success, and potential of this field. Further, blockchain, and its applications in crypto, are at the heart of Web 3.0, the future of digital technology. If the field remains 90 percent male, it will hurt the field of IT greatly by further reducing women's overall participation in IT, given blockchain's growing share of the whole of our digital world.
There is not only a large gender gap in computer science, but also in finance, hedge funds, and venture capital, all of which accentuate imbalances in power and opportunity in favor of men in crypto. The VC gender gap is especially problematic, as it reinforces hurdles to women and BIPOC, independently and especially at these important intersections, for both small companies and cryptocurrency projects.
Joyce Kim's leadership at SparkChain, funding crypto, is thus so refreshing. The firm's staff is notably diverse in terms of gender, race, and ethnicity. More women in the VC leadership world, and more VCs with a crypto focus, are incredibly important. It is also critical that education in both high school and college does not, directly or indirectly and inadvertently, create gendered spaces favoring men or inhospitable to women.
The excellent study by the team at Simon Fraser University looking at cryptocurrencies, and other studies looking at finance and hedge funds, have identified jargon and terminological barriers to entry. In crypto the barriers are many, from outright gender bias, to clubhouses, to other restrictive spaces, but terminology and cultures of exclusion are especially powerful in hindering inclusion, both intentionally and unintentionally.
One motivation for this blog and site, and especially the site's inclusion of a historical glossary of terms (continually added to) and a Cryptocurrency Historical Timeline, is to contribute in a small way to education and to first steps in removing barriers or blocks to inclusion based on terminology and cultural elements important to communication in this area. Anyone interested in this area and devoting time to it will soon move far beyond these resources, but they might help understanding a bit initially; at least that is a goal. I also see these as tools that can greatly benefit from the community.
I am continually learning from readings, correspondence, and meetings with others in this space. I have already added to the readings from useful comments and suggestions people sent me after my first post last week. I hope these resources accelerate as community-used and community-influenced tools, and thus I very much encourage and welcome feedback. I will take the timeline and glossary through additions and tweaks, and thus many editions or iterations, but for now they get at some of the technical and cultural terminology and basics. (Why does the mantra of HODL, Hold On for Dear Life, keep coming up as crypto coins currently plummet? The glossary provides historical context.)
[Republished with only slight adjustment from Blockchain and Society: Political Economy of Crypto (A Blog), January 25, 2022] http://blockchainandsociety.com
[Please consider subscribing to the free blog at the URL above]
Abbate, Janet (2012). Recoding Gender: Women’s Changing Participation in Computing, MIT Press.
Adams, Philippa R., Julie Frizzo-Barker, Betty B. Ackah, and Peter A. Chow-White (2019). In Ragnedda, Massimo and Giuseppe Destefanis, eds. Blockchain and Web 3.0: Social, Economic, and Technological Challenges, Routledge.
Brunton, Finn (2019). Digital Cash: The Unknown History of the Anarchists, Utopians, and Technologists Who Created Cryptocurrency, Princeton University Press.
Celo Website. www.celo.org
Cross, Rosie (1995). "Modem Grrrl." Interview with Judith "St. Jude" Milhon. Wired, February 1. www.wired.com/1995/02/st-jude/
Dishman, Lydia (2015). "The Woman Changing How Money Moves Around The World." Fast Company, February 6.
Hao, Karen. (2018). “Women in Crypto Are Reluctant to Admit There Are Very Few Women in Crypto.” Quartz (qz.com). May 5, 2018. https://www.qz.com
Hicks, Marie (2017). Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, MIT Press.
Guapcoin Website. www.guapcoin.org
Gross, Elana Lyn. (2019). “How to Close the Venture Capital Gender Gap Faster.” Forbes, May 20.
Klemens, Sam. (2021). “10 Most Influential People in Crypto: Kathleen Breitman.” Exodus. August 3.
Misa, Thomas J., Ed. (2010). Gender Codes: Why Women are Leaving Computing, Wiley-IEEE.
Misa, Thomas J. (2021). “Dynamics of Gender Bias in Computing.” Communications of the ACM 64: 6, 76-83.
St. Jude, R.U. Sirius, Bart Nagel (1995). The Cyberpunk Handbook, Random House.
Yost, Jeffrey R. (2015). "The Origin and Early History of the Computer Security Software Industry." IEEE Annals of the History of Computing, 37:2, April-June, 46-58.
Yost, Jeffrey R. (2016). “The March of IDES: The Advent and Early History of the Intrusion Detection Expert Systems.” IEEE Annals of the History of Computing, 38:4, October-December, 42-54.
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry, MIT Press.
Jeffrey R. Yost (January 2022). “Few Women on the Block: Legacy Codes and Gendered Coins,” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 1-18.
About the Author: Jeffrey R. Yost is CBI Director and HSTM Research Professor. He is Co-Editor of the Studies in Computing and Culture book series with Johns Hopkins University Press and is PI of the new CBI NSF grant Mining a Useful Past: Perspectives, Paradoxes and Possibilities in Security and Privacy. He has published six books and dozens of articles, has led or co-led ten sponsored projects (for NSF, Sloan, DOE, ACM, IBM, etc.), and has conducted hundreds of oral histories. He serves on committees for NAE and ACM, and on two journal editorial boards.
2021 (Vol. 2) Table of Contents
Melissa G. Ocepek and William Aspray
Abstract: This essay introduces everyday information studies to historians of computing. This topic falls within the subdiscipline of information behavior, one of the main subject areas in information studies. We use our recent edited book, Deciding Where to Live (Rowman & Littlefield, 2021), as a means to illustrate the kinds of topics addressed and methods used in everyday information studies. We also point the reader to some other leading examples of scholarship in this field and to two books that present an overview of the study of information behavior.
This essay introduces everyday information studies to historians of computing. The story of this field of study and its history are too large to tell in detail here. This topic falls within the subdiscipline of information behavior, one of the main subject areas in information studies – a field that began to be studied between the two world wars and took off in the 1960s. The reader interested in information behavior studies more generally should examine two well-regarded reference works on this subject (Case and Given 2016; Fisher, Erdelez, and McKechnie 2005).
Information Study Approaches
The early research on information behavior focused on human behavior in structured information environments, such as when a person went to a library to seek information or interacted with a database. But, of course, there were other, less structured environments for finding information, such as through conversations with friends and family; consulting religious or civic leaders, or area specialists such as financial advisors; and through consumption of the media. With the coming of the Internet and portable information devices, one could seek information anywhere, anytime, on any subject profound or frivolous. Information seeking, consumption, and analysis became an increasingly everyday part of ordinary people’s lives. The field expanded over time to not only include information needs, wants, and seeking, but also information avoidance and overload, and various kinds of affective as well as cognitive responses to information.
In fact, the everyday aspects of information were studied not only by information scholars but also by sociologists, communications scholars, and media scholars beginning as early as the 1930s. These studies about the roles information plays in one's everyday life draw upon theorizing by such scholars as Michel de Certeau (1984), Henri Lefebvre (2008/1947), Dorothy E. Smith (1987), and Carolyn Steedman (1987). For an overview of the relevant theorizing, see Highmore (2001), Bakardjieva (2005, Chs. 1 and 2), and Haythornthwaite and Wellman (2002). Highmore also includes writing selections from many of these theorists. To make this introduction to everyday information studies more manageable, we focus here on our own work and primarily on our recent edited book, Deciding Where to Live (Ocepek and Aspray 2021). For a sample of other everyday information studies, see for example the work of Denise Agosto (with Sandra Hughes-Hassell, 2005), Karen Fisher (née Pettigrew, 1999), Tim Gorichanaz (2020), Jenna Hartel (2003), Pam McKenzie (2003), and Reijo Savolainen (2008).
Our personal involvement with research on everyday information studies began with Everyday Information (Aspray and Hayes 2011), which injected historical scholarship into studies on everyday information. In a long study of "100 Years of Car Buying," one of us (Aspray, pp. 9-70 in Aspray and Hayes 2011) introduced a historical model, showing how endogenous forces (e.g., the dealership model for selling automobiles, or the introduction of foreign automobiles into the American market) and exogenous forces (e.g., war, or women entering the workforce) shaped the information questions that people were interested in, and sometimes even the information sources they consulted. This volume, presenting a historical approach to everyday information behavior, included contributions by the noted historians of computing James Cortada, Nathan Ensmenger, and Jeffrey Yost.
Our collaboration began when the two of us, together with our colleague George Royer (today a game designer in Austin, TX), wrote two books about food from the perspective of information studies. We did not follow the typical approaches of food scholars, studying such topics as food pathways or food security, but instead applied the lens of information studies to this topic of wide popular interest. In the two short books that we produced, Food in the Internet Age (Aspray, Royer, and Ocepek 2013) and Formal and Informal Approaches to Food Policy (Aspray, Royer, and Ocepek 2014), we discussed a wide variety of topics, such as: the online grocer Webvan (the largest loser of venture capital in the dot-com crash of 2001); the harms that Yelp, OpenTable, and Groupon created for small brick-and-mortar businesses and customers; the different ways in which the Internet has been used to represent and comment upon food and food organizations; the regulation of advertising of sweetened cereals to children; and the strategies of informal, bully pulpit persuasion compared to formal regulation of food and nutrition, carried out through a pair of studies: one of Eleanor and Franklin Roosevelt, and the other of Michelle and Barack Obama.
This work on food, and some of our subsequent research, falls into the field of information studies. We carefully use that term instead of information science because our work is more informed by humanities (critical theory, cultural studies) and social science disciplines (sociology, psychology, organizational and management studies, work and labor studies) than by computer science, natural science, and engineering disciplines. We both have worked in information schools, part of a movement toward the interdisciplinary study of computing and information that has emerged in the past quarter century out of (1) library schools becoming more technical, (2) computer science departments becoming more interested in people, human institutions, and social impact, and (3) newly created interdisciplinary enterprises. These information schools offer a big tent for many different kinds of methods, theories, and approaches. The breadth of these studies can be seen in the wide range of papers delivered at the annual meeting of ASIST (for example, https://www.conftool.org/asist2020/index.php?page=browseSessions&path=adminSessions) and the annual "iConference" (https://ischools.org/Program). Also see the types of scholarship presented at the specialty biennial conference on "Information Seeking in Context" (ISIC, e.g., http://www.isic2018.com).
So far, there is little cross-usage of methods or approaches by people studying everyday information (e.g., a traditional information studies scholar who studies information literacy incorporating research from data science or ubiquitous computing), but this cross-fertilization is just beginning to happen. In our own research, we do the next best thing through edited volumes that include chapters using a variety of approaches, so as to gain multiple perspectives on an issue. This is true, for example, in our book on where to live (discussed in detail below) and the book on information issues in aging (mentioned below).
Deciding Where to Live
In our recent edited book, Deciding Where to Live, we are continuing our study of everyday phenomena through an information lens. We describe this book in some detail here to give our readers a better sense of the ways in which information studies scholars operate. All of the chapters in this book were written by people associated with leading information schools in the United States (Colorado, Illinois, Indiana, Syracuse, Texas, Washington). As with our food studies, we have taken multiple perspectives – all drawn from information studies – to investigate various aspects of housing. These studies, for example, employ work studies and business history; information, culture, and affective aspects of information issues; community studies; information behavior; and privacy.
Information scholars are often interested in the results of scholarship by labor, management, and organization scholars, and sometimes they adopt their theories and methods. These scholars are interested in such issues as the growing number of information occupations, the increased percentage of a person's job tasks devoted to information activities, and the ways in which tools of communication and information have changed firm strategies and industry structures. Everyday information scholars, too, are interested in these results, but primarily for what they have to say about the everyday or work lives of individuals.
The work of real estate firms, realtors, and home buyers and sellers has been profoundly changed by the massive adoption of information and communication technologies in recent years. Let us consider two chapters, by James Cortada and Steve Sawyer, from the Deciding Where to Live book. One major change in the 21st century has been the rise of websites, such as Zillow and Realtor.com, that enable individuals to access detailed information about housing without having to rely upon a realtor or the Multiple Listing Service. Using a business history approach, Cortada shows how these websites have altered the structure of the real estate industry, changed the behavior of individual firms, made buyers and sellers more informed shoppers, lowered commissions on house sales, and introduced new business models, such as Zillow buying homes itself and not just providing information about them. Some people believe that the rise of companies such as Zillow means that the information imbalance between realtors and buyers is largely a thing of the past, that the disintermediation of realtors is largely complete, and that the need for realtors is greatly diminished – and that we will see a radical shrinking of this occupation in the same way that the numbers of telephone operators and travel agents have plummeted. (See Yost 2008.)
Sawyer argues, however, that the work of the real estate agent is evolving rather than being eliminated. As he states his argument: “real estate agents have been able to maintain, if not further secure, their role as market intermediaries because they have shifted their attention from being information custodians to being information brokers: from providing access to explaining” (Sawyer 2021, p. 35). As he notes, buying a house is a complex process, involving many different steps and many different participants (selecting the neighborhood and the particular house, inspecting the property, checking on the title and transferring it, obtaining financing, remediating physical deficiencies in the property, etc.). One might say that it takes a village to sell a house in that village; an important role of the real estate agent is to inform the buyers of the many steps in the process and to use their network of specialists to help the buyers carry out each step in a professional, timely, and cost-effective way.
How do these changes affect the everyday life of the individual? There are more than 2 million active real estate agents in the United States. Their work has changed profoundly as they adopt real-estate-oriented websites and apps. Even though most real estate agents work through local real estate firms, they act largely as independent, small businesspeople who carry out much of their work from their cars and homes as much as from their offices. So, they rely on websites and apps not only for information about individual homes, but also for lead generation, comparative market analysis, customer relationship management, tracking business expenses such as mileage, access to virtual keys, video editing of listings, mounting marketing campaigns, and a multitude of other business functions. Buyers and sellers, for their part, can use Zillow or its online competitors to become informed before ever meeting with a real estate agent: learning how much their current home is worth, figuring out how large a mortgage they can qualify for, checking out multiple potential neighborhoods not only for housing prices but also for quality of schools and crime rates, examining photos and details of numerous candidate houses, and estimating the total cost of home ownership. Interestingly, many individuals who are not looking to buy or sell a home in the near term are regular users of Zillow. It is a way to spy on neighbors, try out possible selves, plan for one’s future, or just have a good time. In our introductory chapter, we address these issues.
Another chapter, by Philip Doty, reflects upon the American dream of the smart home. Drawing upon scholarship on surveillance capitalism (Shoshana Zuboff, 2019), feminist scholarship on privacy (Anita Allen, 1988; Patricia Boling, 1996; Catharine MacKinnon, 1987), gender studies in the history of science and technology (Ruth Cowan, 1983), the geography of surveillance (Lisa Makinen, 2016), and other scholarly approaches, Doty reflects on the rhetoric of technological enthusiasm surrounding smart cities and smart homes, and discusses some of the privacy and, in particular, surveillance issues that arise in smart homes.
Information is not merely used by people in cognitive ways; it can also bring joy, sadness, anxiety, and an array of other emotions. Deciding where to live can be an exciting, fraught, and stressful experience for many people. When one is searching for a home in a particularly competitive housing market, the addition of time pressure can amplify the emotional toll of house hunting and discourage even the most excited home buyer. In her chapter, Carol Landry recounts how the high-stakes decision making of home buying becomes even more complicated when time pressure and emotions come into play. Her research is based on an empirical study of home buyers in the highly competitive Seattle real estate market. The chapter describes the experience of several home buyers dealing with bidding wars that required quick decisions and produced many failed attempts at securing a home. The stories shared in this chapter highlight the despair and heartbreak that made continuing the home search difficult; participants described going from enthusiastic information seekers to worn-out information avoiders. The chapter highlights how internal and external factors can impact the home buying process and the information behaviors associated with it.
A competitive real estate market is but one of myriad experiences that can further complicate the process of deciding where to live. There are times in most people’s lives when the unique attributes of a life stage play an outsized role in decision making around housing; one of these times is retirement. In Aspray’s chapter, the realities of retirement complicate the lives of individuals lucky enough to be able to retire, introducing new considerations that shape decision making. Retirement adds new complexity to deciding where to live because the stability of work that structures many people’s lives is no longer there, creating exciting new opportunities as well as constraints. Different elements shape questions around where to live for retired people, including the emotional ties to their current homes, the financial realities of retirement income, and the physical limitations of aging.
During times of societal uncertainty, a home can be a comforting shelter that keeps the external world at bay – even when much of the uncertainty stems from the housing market itself, as it did during the housing crisis of 2007 and the recession that followed. As more and more people lost their homes to foreclosure or struggled to pay their mortgages, home and garden entertainment media provided a pleasant, comfortable escape for millions of Americans. Ocepek, in her chapter on home and garden sources, found that, throughout the housing crisis, recession, and recovery, home and garden sources grew or maintained their popularity with viewers and readers – likely due to the social phenomenon of cocooning, or taking shelter in one’s space when the world outside becomes uncertain and scary. Both home and garden magazines and HGTV changed some of their content to reflect the new housing realities of many of their readers and viewers, but they also largely stayed the same, presenting reassuring content about making whatever space you call home as comfortable as possible.
The financial hardships of the housing crisis, recession, and recovery were not experienced by all Americans in equal measure. Several authors in the book present examples of housing policies, economic conditions, and social unrest disproportionately affecting marginalized communities throughout the United States. One is Pintar’s chapter about Milwaukee, discussed below. Although some of the legal frameworks built to segregate cities and communities throughout the country have changed, the experience of deciding where to live for Black and African Americans adds additional layers of complexity to an already complicated process. Drawing on critical race theory, Jamillah Gabriel delineates how Black and African American house searchers (renters and buyers) create information seeking and search strategies to overcome the historic and contemporary discriminatory policies and practices of housing segregation. The chapter analyzes specialized information sources that provide useful information to help this group of house searchers find safer communities where they have the greatest chance to prosper. These sources include lists of the best and worst places for African American and Black individuals and families to live. The lists draw on research that compares communities based on schools, employment, entertainment, cost of living, housing market, quality of life, and diversity. Drawing on historic and contemporary accounts, the analysis provided in this chapter highlights that, “the housing industry can be a field of land mines for African American in search of home” (Gabriel 2021, p. 274).
It is often said that information and information tools are neither inherently good nor bad, but that they can be used for both good and bad purposes. Two chapters in the book illustrate this point. In a study of the city of Milwaukee, Judith Pintar shows how HOLC maps, which were created to assess the stability of neighborhoods, were used to reinforce the racist practice of redlining. In another chapter, Hannah Weber, Vaughan Nagy, Janghee Cho, and William Aspray show how information tools were used by the city of Arvada, Colorado and various groups (such as builders, realtors, parents, activists, and the town council) to improve the city’s quality of life in the face of rapid growth and its attendant issues, such as traffic problems, rising housing prices, the need to build on polluted land, and the desire to protect the traditional look and feel of this small town. A third chapter, by David Hopping, shows how an experiment in Illinois was able to repurpose military housing for non-military purposes for the social good. His empirical study is seen through the lens of the theoretical constructs of heterotopia (Foucault 1970), boundary objects (Star and Griesemer 1989), and pattern language (Alexander 1977).
Both of us are continuing to pursue work on everyday information issues. One of us (Aspray) is continuing this work through an edited book in progress on information issues related to older Americans (Aspray, forthcoming in 2022). This book ranges from traditional library and information science approaches – health information literacy concerning insurance for older Americans, the variety of information provided by AARP and its competitors, and the use of information and communication technologies to improve life in elderly communities – to more technologically oriented studies of ubiquitous computing, human-computer interaction, and the Internet of Things for older people. Meanwhile, Ocepek is building on her doctoral dissertation (Ocepek 2016), which examined the everyday activity of grocery shopping from both social science and cultural approaches. Her new study examines what has happened to grocery shopping during the pandemic.
We are pleased to see the broadening of the Babbage Institute’s mission to consider not only the history of computing but also the history and cultural study of information. For example, many scholars (including some computer historians) have been studying misinformation since 2016. (See, for example, Cortada and Aspray 2019; Aspray and Cortada 2019.) The study of everyday information is another way in which the Babbage Institute can carry out its broadened mission today.
In particular, there are a few lessons for computer historians that can be drawn from the scholarship we have discussed here, although many readers of this journal may already be familiar with and practicing them:
- One can study information as well as information technology. On the history of information, see for example Blair (2010), Headrick (2000), Cortada (2016), and Blair et al. (2021). For a review of this scholarship, see Aspray (2015).
- One can study everyday uses of information and information technology, even if they may be regarded by some as quotidian – expensive, complex, socially critical systems are not the only kinds of topics involving information technology that are worth studying.
- This past year has taught all of us how an exogenous force, the COVID-19 pandemic, can quickly and radically reshape our everyday lives. In the opening chapter of our book, we briefly discuss the earliest changes the pandemic brought to real estate. We are also seeing the grocery industry, as well as millions of consumers, learning, adapting, and changing their information behaviors around safely acquiring food.
- In order to study both historical and contemporary issues about information and information technology, one can blend historical methods with other methods from computer science (e.g., human-computer interaction, data science), social science (qualitative and quantitative approaches from sociology, psychology, economics, and geography), applied social science (labor studies, management and organization studies), and the humanities disciplines (cultural studies, critical theory).
These are exciting times for the historians of computing and information!
Agosto, Denise E. and Sandra Hughes-Hassell. (2005). "People, Places, and Questions: An Investigation of the Everyday Life Information-Seeking Behaviors of Urban Young Adults." Library & Information Science Research, vol. 27, no. 2, pp. 141-163.
Alexander, Christopher. (1977). A Pattern Language. Oxford University Press.
Allen, Anita L. (1988). Uneasy Access: Privacy for Women in a Free Society. Rowman & Littlefield.
Aspray, William. (2015). The Many Histories of Information. Information & Culture, 50.1: 1-23.
Aspray, William. (forthcoming 2022). Information Issues for Older Americans. Rowman & Littlefield.
Aspray, William and James Cortada. (2019). From Urban Legends to Political Fact-Checking. Springer.
Aspray, William and Barbara M. Hayes. (2011). Everyday Information. MIT Press.
Aspray, William, George W. Royer, and Melissa G. Ocepek. (2013). Food in the Internet Age. Springer.
Aspray, William, George W. Royer, and Melissa G. Ocepek. (2014). Formal and Informal Approaches to Food Policy. Springer.
Bakardjieva, Maria. (2005). Internet Society: The Internet in Everyday Life. Sage.
Blair, Ann. (2010). Too Much to Know. Yale University Press.
Blair, Ann, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton, eds. (2021). Information: A Historical Companion. Princeton University Press.
Boling, Patricia. (1996). Privacy and the Politics of Intimate Life. Cornell University Press.
Case, Donald O. and Lisa M. Given. (2016). Looking for Information. 4th ed. Emerald.
Cortada, James W. (2016). All the Facts: A History of Information in the United States since 1870. Oxford University Press.
Cortada, James and William Aspray. (2019). Fake News Nation. Rowman & Littlefield.
Cowan, Ruth Schwartz. (1983). More Work for Mother. Basic Books.
De Certeau, Michel (1984). The Practice of Everyday Life. Translated by Steven F. Rendall. University of California Press.
Fisher, Karen E., Sandra Erdelez, and Lynne McKechnie. (2009). Theories of Information Behavior. Information Today.
Foucault, Michel. (1970). The Order of Things. Routledge.
Gorichanaz, Tim (2020). Information Experience in Theory and Design. Emerald Publishing.
Hartel, Jenna. (2003). "The Serious Leisure Frontier in Library and Information Science: Hobby Domains." Knowledge Organization, vol. 30, No. 3-4, pp. 228-238.
Haythornthwaite, Caroline and Barry Wellman, eds. (2002). The Internet in Everyday Life. Wiley-Blackwell.
Headrick, Daniel. (2000). When Information Came of Age. Oxford University Press.
Highmore, Ben, ed. (2001). The Everyday Life Reader. Routledge.
Lefebvre, Henri. (2008). Critique of Everyday Life. vol. 1, 2nd ed. Translated by John Moore. Verso.
MacKinnon, Catharine A. (1987). Feminism Unmodified. Harvard University Press.
Makinen, Lisa A. (2016). "Surveillance On/Off: Examining Home Surveillance Systems from the User’s Perspective." Surveillance & Society, 14.
McKenzie, Pamela J. (2003). "A Model of Information Practices in Accounts of Everyday‐Life Information Seeking." Journal of Documentation, vol. 59, no. 1, pp. 19-40.
Ocepek, Melissa G. (2016). "Everyday Shopping: An Exploration of the Information Behaviors of the Grocery Shoppers." Ph.D. dissertation, School of Information, University of Texas at Austin.
Pettigrew, Karen E. (1999). "Waiting for Chiropody: Contextual Results from an Ethnographic Study of the Information Behaviour Among Attendees at Community Clinics." Information Processing & Management, vol. 35, no. 6, pp. 801-817.
Ocepek, Melissa G. and William Aspray, eds. (2021). Deciding Where to Live. Rowman & Littlefield.
Savolainen, Reijo. (2008). Everyday Information Practices: A Social Phenomenological Perspective. Scarecrow Press.
Smith, Dorothy E. (1987). The Everyday World as Problematic: A Feminist Sociology. Northeastern University Press.
Star, Susan Leigh and James R. Griesemer. (1989). "Institutional Ecology, Translations, and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39." Social Studies of Science 19, 3: 387-420.
Steedman, Carolyn. (1987). Landscape for a Good Woman: A Story of Two Lives. Rutgers University Press.
Yost, Jeffrey R. (2008). “Internet Challenges for Nonmedia Industries, Firms, and Workers.” pp. 315-350 in William Aspray and Paul Ceruzzi, eds., The Internet and American Business. MIT Press.
Zuboff, Shoshana. (2019). The Age of Surveillance Capitalism. Public Affairs.
Aspray, William and Melissa G. Ocepek. (April 2021). "Everyday Information Studies: The Case of Deciding Where to Live." Interfaces: Essays and Reviews in Computing and Culture Vol. 2, Charles Babbage Institute, University of Minnesota, 27-37.
About the authors:
Melissa G. Ocepek is an Assistant Professor at the University of Illinois Urbana-Champaign in the School of Information Sciences. Her research draws on ethnographic methods and institutional ethnography to explore how individuals use information in their everyday lives. Her research interests include everyday information behavior, critical theory, and food. Recently, she co-edited Deciding Where to Live (Rowman & Littlefield, 2021) with William Aspray. Previously she published two books that address the intersection of food, information, and culture: Food in the Internet Age and Formal and Informal Approaches to Food Policy (both with William Aspray and George Royer). Dr. Ocepek received her Ph.D. at the University of Texas at Austin in the School of Information.
William Aspray is Senior Research Fellow at CBI. He formerly taught in the information schools at Indiana, Texas, and Colorado; and served as a senior administrator at CBI, the IEEE History Center, and Computing Research Association. He is the co-editor with Melissa Ocepek of Deciding Where to Live (Rowman & Littlefield, 2021). Other recent publications include Computing and the National Science Foundation (ACM Books, 2019, with Peter Freeman and W. Richards Adrion); and Fake News Nation and From Urban Legends to Political Fact-Checking (both with James Cortada in 2019, published by Rowman & Littlefield and Springer, respectively).
Of Mice and Mentalité: PARC Ways to Exploring HCI, AI, Augmentation and Symbiosis, and Categorization and Control
Jeffrey R. Yost, Charles Babbage Institute, University of Minnesota
Abstract: This think-piece essay comparatively explores history and mindsets in human-computer interaction (HCI) and artificial intelligence (AI)/machine learning (ML). It draws on oral history, archival, and other research to reflect on the institutional, cultural, and intellectual history of HCI (especially the Card, Moran, and Newell team at Xerox PARC) and AI. It posits that the HCI mindset (focused on augmentation and human-machine symbiosis, as well as iterative maintenance) could be a useful framing to rethink dominant design and operational paradigms in AI/ML that commonly spawn, reinforce, and accelerate algorithmic biases and societal inequality.
This essay briefly recounts the 1982 founding of a professional organization for the field of Human-Computer Interaction (HCI) before reflecting on the two decades of interactive computing that preceded it—HCI’s prehistory/early history—and its trajectory since. It comparatively explores history and mindsets in HCI and artificial intelligence (AI). For both HCI and AI, “knowing users” is a common target, but also a point of divergent departure.
For AI—especially large-scale, deployed systems in defense, search, and social networking—knowing users tends to involve surveillance, data collection, and analytics to categorize and control in the service of capital and power. Even when aims are purer, algorithmic biases frequently extend from societal biases. Machines can be programmed to discriminate, or they can learn it from data and data practices.
For HCI—from idealistic 1960s beginnings through 1980s professionalization and beyond—augmenting users and human-machine symbiosis have been at its core. While an HCI-type mindset offers no magic bullet for AI’s ills, this essay posits that it can be a useful framing, a reminder toward proper maintenance, stewardship, and structuring of data, design, code (software), and codes (legal, policy, and cultural). HCI systems, of course, can be ill designed, perform in unforeseen ways, or be misapplied by users, but this likely is less common and certainly of lesser scale and impact relative to AI. Historians and sociologists must research the vast topics of AI and HCI more fully, in many contexts and settings.
HCI and Solidifying the Spirit of Gaithersburg
In mid-March 1982, ITT Programming Technology Center’s Bill Curtis and the University of Maryland’s Ben Shneiderman held the first “Human Factors in Computing Systems” conference in Gaithersburg, Maryland. The inspiring event far exceeded the organizers’ expectations, attracting more than 900 attendees. It was the pivotal leap forward in professionalizing HCI.
Rich content filled the three-day program, while impactful organizational work occurred at a small evening side meeting. There, Shneiderman, Curtis, UCSD’s Don Norman, Honeywell’s Susan Dray, Northwestern’s Loraine Borman, Xerox PARC’s (Palo Alto Research Center) Stuart Card and Tom Moran, and others strategized about HCI’s future and the possibility of forming an association within a parent organization. Borman, an information retrieval specialist in a leadership role at ACM SIGSOC (Social and Behavioral Computing), and Shneiderman, a computer scientist, favored the Association for Computing Machinery (ACM). Insightfully seeing an expedient workaround, Borman proposed that SIGSOC transform itself—new name, new mission—bypassing the need for approval of a new SIG.
Cognitive scientist Don Norman questioned whether ACM should be the home, believing computer science (CS) might dominate. After debate, Shneiderman and Borman’s idea prevailed. Dray recalls the sentiment was “we can’t let the spirit of Gaithersburg die,” and for most, SIGSOC’s metamorphosis seemed a good strategy (Dray 2020). Borman orchestrated transforming SIGSOC into SIGCHI (Computer-Human Interaction). The CHI tail essentially became the dog (SOC’s shrinking base mainly fit under HCI’s umbrella). Interestingly, “Computer” comes first in the acronym, likely just to achieve a pronounceable word in the ACM SIG style; “HCI” appeared widely in early papers at CHI (SIGCHI’s annual conference).
Norman’s concerns proved prescient. SIGCHI steadily grew reaching over 2,000 attendees by the 1990 Seattle CHI, but in its first decade, it principally furthered CS research and researchers. Scholarly standards rose, acceptance rates fell, and some practitioners felt crowded out. In 1991, practitioners formed their own society, User Experience Professional Association (UXPA). In the 1990s and beyond, SIGCHI blossomed into an increasingly (academic) discipline diverse organization.
As with all fields/subfields, HCI has a prehistory, or an earlier, less organizationally defined history (for HCI, the 1960s and 1970s). SIGCHI’s origin lay in the confluence of: past work in human factors; university “centers of excellence” in interactive computing created through 1960s Advanced Research Projects Agency (ARPA) Information Processing Techniques Office (IPTO) support; two particularly impactful laboratories (PARC and SRI’s ARC); Systems Group artists in the UK; and the promise of Graphical User Interface (GUI) personal computers (PCs).
Nonprofit corporation SRI’s Augmentation Research Center (ARC), and Xerox’s PARC were at the forefront of GUI and computer mouse developments in the 1970s and 1980s. Neither the GUI nor mouse R&D were secret at PARC; in the 1970s, many visitors saw Alto demos, including, in 1979, Steve Jobs/Apple Computer team. In 1980 Apple hired away PARC’s Larry Tesler and others. Jobs launched the Apple Lisa effort (completed in 1983, priced at $10,000), which like the even more expensive Xerox Star (1981), possessed a GUI and mouse. The 1984 Apple Macintosh, retailing at $2,500, initiated an early mass market for GUI personal computers—inspiring initiators, most notably, Microsoft Windows 2.0 in 1987.
In early 2020, I conducted in-person oral history interviews with three of HCI’s foremost intellectual and organizational pioneers—the pilot for a continuing ACM/CBI project. These included UCSD Professor Don Norman (SIGCHI Lifetime Research Awardee; Benjamin Franklin Medalist), Xerox PARC Scientist and Stanford Professor Stuart Card (SIGCHI Lifetime Research Awardee; National Academy of Engineering), and Dr. Susan Dray (SIGCHI Lifetime Practice Awardee; UXPA Lifetime Achievement Awardee).
Don Norman is well known both within and outside CS—fame extending from his 1988 book The Psychology of Everyday Things (POET), re-released as the wide-selling The Design of Everyday Things. A student of Duncan Luce (University of Pennsylvania), he was among the first doctorates in mathematical psychology. Early in his career, he joined the UCSD Psychology Department as an associate professor. After stints at Apple and Hewlett-Packard, and at Northwestern, he returned to lead the UCSD Design Laboratory. Norman helped take design from its hallowed ground of aesthetics to establish it in science, and he greatly advanced the understanding and practice of usability engineering.
Norman stressed to me that there is one scientist so consistently insightful that Norman never misses his talks at events he attends: PARC’s Stuart Card. Card was the top doctoral student of Carnegie Mellon Professor of Cognitive Psychology and Computer Science Allen Newell. While those two interviews were in California, my interview with Dr. Susan Dray was in Minneapolis, with the scientist who pioneered the first corporate usability laboratory outside the computer industry (IBM and DEC had them), at American Express Financial Advisors (AEFA).
Dray took a different path after her doctorate in psychology from UCLA—into human factors, on classified Honeywell Department of Defense (DoD) projects. In the early 1980s, Honeywell, a pioneering firm in control systems, computers, and defense contracting, had a problem with ill-adapted computing for clerical staff in its headquarters, which Dray evaluated. This became path-defining for her career, toward computer usability. After pioneering HCI work at Honeywell, Dray left for American Express, and later became a successful and impactful HCI consultant/entrepreneur. She applied observation, ethnographic interviewing, and the science of design to improve interaction, processes, and human-machine symbiosis in cultures globally, from the U.S., South Africa, Egypt, and Jordan to India, Panama, and France.
Earlier, in the late 1980s, at American Express, Dray was seeking funds for a usability lab, and she creatively engaged in surreptitious user feedback. She bought a “carton” of Don Norman’s POET book, had copies delivered to all AEFA senior executives on the top (29th) floor, and rode up and down the elevator starting at 6 am for a couple of hours each morning for weeks, listening to conversations about this mysteriously distributed book on the science of design. Well informed, she pitched successfully, gaining approval for her usability lab.
This essay is informed by the Norman, Card, and Dray oral histories; another HCI interview I recently conducted, with artist Dr. Ernest Edmonds; my earlier interview with Turing Awardee Butler Lampson of Alto fame; preparation for these five interviews; and AI and HCI research in the CBI, MIT, and Stanford University archives.
For AI and HCI, Is There a Season?
Microsoft Research Senior Scientist Jonathan Grudin—in his valuable From Tool to Partner (2017) on HCI’s history—includes a provocative argument that HCI thrives during AI Winters and suffers during AI’s other seasons. The usefulness of the widespread Winter metaphor is debatable (it is based on changing funding levels at elite schools; Mendon-Plasek, 2021, p. 55), but Grudin’s larger point, that only one of the two fields thrives at a time, hints at a larger truth: HCI and AI have major differences. The fields overlap, with some scientists and some common work, but have distinct mindsets. Ironically, AI, once believed to be long on promises and short on deliveries (the rationalized basis for AI Winters), is now delivering more strongly, and likely more harmfully, than ever, given algorithmic and data biases in far-reaching corporate and government systems.
Learning How Machines Learn Bias
More and more of our devices are “smart,” a distracting euphemism obscuring how AI (in ever more interconnected sensor/IoT/cloud/analytics systems) reinforces and extends biases based on race, ethnicity, gender, sexuality, and disability. Recent interdisciplinary scholarship is exposing the roots of discriminatory code (algorithms/software) and codes (laws, policy, culture), including deeply insightful keynotes at the Charles Babbage Institute’s (CBI) “Just Code” Symposium (a major virtual event with 345 attendees in October 2020) by Stephanie Dick, Ya-Wen Lei, Kenneth Lipartito, Josh Lauer, and Theodora Dryer. Their work contributes to an important conversation also extended in important scholarship by Ruha Benjamin, Safiya Noble, Matt Jones, Charlton McIlwain, Danielle Allen, Jennifer Light (MIT; and CBI Sr. Research Fellow), Mar Hicks, Virginia Eubanks, Lauren Klein, Catherine D’Ignazio, Amanda Menking, Aaron Mendon-Plasek (Columbia; and current CBI Tomash Fellow), and others.
AI did not merely evolve from a benevolent past to a malevolent present. Rather, it has been used for a range of different purposes at different times. Geometrically expanding the number of transistors on chips—the (partially) manufactured and manufacturing/fabrication trajectory of Moore’s Law—enabled computers and AI to become increasingly powerful and pervasive. Jennifer Light’s insightful scholarship on the RAND Corporation’s 1950s and 1960s operations research, systems engineering, and AI, created in the defense community and later misapplied to social welfare, counters notions of an early benevolent age. Even if chess is the drosophila of AI (a phrase of John McCarthy’s from the 1990s), its six-decade history is one of consequential games, power contests. Work in computer rooms in the Pentagon’s basement and at RAND harmfully escalated Cold War policies, as DoD contractors simulated and supported notions of the U.S. rapidly “winning” the Vietnam War; earlier, C-E-I-R (founded by ex-RAND scientists) used input/output-economics algorithmic systems to determine optimal bomb targets to decimate the Soviet Union industrially (Yost, 2017).
What helped pull AI out of its first long (1970s) Winter were successes and momentum with expert systems—the pioneering work of Turing Awardee Stanford AI scientist Edward Feigenbaum and molecular biologist and Nobel Laureate Joshua Lederberg on late-1960s Dendral, to advance organic chemistry, and Feigenbaum and others’ early-1970s MYCIN in medical diagnostics and therapeutics. These AI scientific triumphs stood out and lent momentum to expert systems, as did fears of Japan’s Fifth Generation (an early-1980s government and industry partnership in AI/systems). In the 1980s, elite U.S. CS departments again received strong federal support for AI. Work in expert systems in science, medicine, warfare, and computer intrusion detection abounded (Yost, 2016).
Some AI systems are born biased; others learn it—from algorithmic tweaks to expert system inference engines to biased data. Algorithmic bias is just one of the many problematic byproducts of valuing innovation over maintenance (Vinsel and Russell 2020, Yost 2017).
Human Factors and Ergonomics
The prehistory/early history of human-machine interaction dates back many decades, to the control of workers and soldiers to maximize efficiency. The late-1950s-spawned Human Factors Engineering Society grew out of late inter-war-period organizational work of the Southern California aerospace industry. In the first half of the 20th century, human factors had meaningful roots in the scientific management thought, writings, and consulting of Frederick Winslow Taylor. This tradition defined the worker as an interchangeable part, a cog within the forces of production to efficiently serve capital. At Taylorist-inspired and -organized factories, management oppressed laborers, and human factors has a mixed record in its targets, ethics, and outcomes. However, at HCI’s organizational start in the early 1980s, the mantra was not merely efficiency; it was the frequently uttered “know the user.” This, importantly, was a setting of personal computing and GUI idealism, a trajectory insightfully explored by Stanford’s Fred Turner in From Counterculture to Cyberculture.
We’re on a Road to Intertwingularity, Come on Inside
Years before the National Science Foundation (NSF) took the baton as the leading federal funder of basic CS research at universities, ARPA’s IPTO (following founding director J.C.R. Licklider’s 1962 vision) changed the face of computing toward interaction. The well-known philosopher and sociologist Ted Nelson, a significant HCI contributor of the 1960s and 1970s, creatively coined the term “intertwingularity” for symbiosis and everything being intertwined or connected (networking; text, through his term/concept of “hypertext”; the human user with interactive computing)—a term that can aptly describe the multifaceted HCI work of the 1960s IPTO-funded SRI ARC and 1970s Xerox PARC.
The 1970-enacted Mansfield Amendment required a direct and defined defense function for all DoD research funding. It left a federal funding vacuum for years, until NSF could ramp up to become a roughly comparable funder of the basic research in interactive computing that IPTO had started. The vacuum, however, was largely filled by a short golden age of corporate industrial research in interactive computing at Xerox, a firm with a capital war chest (much dry powder) from its past photocopier patent-based monopoly, and one seeking to develop the new, new thing(s). Xerox looked to its 1970-launched PARC to invent the office of the future. It hired many previously IPTO-supported academic computer scientists, and it produced and employed a cadre of Turing Awardees, an unprecedented team far exceeding any single university’s CS department in talent or resources.
Inside the PARC Homeruns
Douglas Engelbart and the earliest work on the first mouse, designed by him and SRI’s Bill English, are addressed by French sociologist Thierry Bardini in Bootstrapping, a biography of Engelbart. Journalists, such as Michael Hiltzik, have covered some major contours of technical innovation at PARC.
Central to Bardini’s and Hiltzik’s and others’ narratives is the important HCI work of Turing Awardees Douglas Engelbart at SRI; and Butler Lampson, Alan Kay, Charles Thacker, and Charles Simonyi at PARC. In this essay I look beyond oft-told stories and famed historical actors in GUIs and mice to briefly discuss a hitherto largely overlooked, highly impressive small PARC research team composed of Newell, Card, and Moran, and a larger team that Card later led. The incredible accomplishments of Lampson and others changed the world with the GUI. They hit the ball out park, so to speak—"a shot heard round the world” (1951 Bobby Thompson Polo Grounds, Don DeLillo immortalized, homerun sense) that very visibly revolutionized interactive computing.
Newell is one of the most famous of the first-generation AI scientists, a principal figure at John McCarthy’s famed Dartmouth Summer 1956 Workshop, at which McCarthy, Newell, Herbert Simon, Marvin Minsky, and others founded and gave name to the field—building upon the earlier work of Alan Turing. On a project launched in 1955, Newell, as lead, co-invented (with Simon and Clifford Shaw) the Logic Theorist in 1956, the first engineered, automated logic or AI program. Many historian and STS colleagues I have spoken with associate Newell solely with AI and are unaware of his PARC HCI work. Unlike Turing and Simon, Newell does not have a major biography documenting the full breadth of his work. Newell’s HCI research has been neglected by historians, as has that of his two top students, Card and Moran. They published many seminal HCI papers in Communications of the ACM and other top journals.
This oversight (by historians; they were revered by fellow scientists), especially the neglect of the career-long contributions of Card and Moran, is a myopic favoring of first-recognized invention over subsequent ones, missing key innovations and devaluing maintenance. It was not merely the dormouse (mouse co-inventors Engelbart and English, the recognized revolution), but multiple dormice (the science and engineering behind optimizing mice for users). Remember(ing) what the dormice said (and with an open ear to historical research), Card and Moran clearly conducted brilliant scientific research spawning many quiet revolutions.
Rookie Card to All-Star Card, Pioneering HCI Scientist Stuart Card
Stuart Card was first author of a classic textbook, The Psychology of Human-Computer Interaction, with co-authors Moran and Newell. Card progressed through various research staff grades and in 1986 became a PARC Senior Research Scientist. Two years later, he became Team Leader of PARC’s User Interface Research Group. The breadth of Card’s and PARC’s HCI research from the 1970s to the 1990s is wide in both theory and practice. The work fell into three broad categories: HCI models, information visualization, and information retrieval; the major contributions in each are breathtaking. One early contribution in HCI models was the team’s analysis of the mouse and its performance using an information-theoretical model of motor movement, Fitts’ Law. Pointing with the mouse showed a processing rate of roughly 10 bits/sec, about the same as the hand’s own rate, demonstrating that performance was limited not by the device but by the hand itself, in effect proving that the mouse was a nearly optimal device for human pointing. This impacted the development of the Xerox Star mouse in 1981 and the earliest computer mice developed by Apple Computer. Card’s, and his team’s, work was equally profound in information visualization, in areas such as the Attentive-Reactive Visualizer and visualizer transfer functions. In information retrieval, they advanced Information Foraging Theory.
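For readers who want the underlying model, here is a minimal sketch of the textbook formulation of Fitts’ Law (the notation is the standard one, not necessarily that of the PARC papers): the time MT to move a pointer a distance D to a target of width W is modeled as

MT = a + b · log₂(2D / W)

where a and b are empirically fitted constants and 1/b is the information-processing rate. The finding that mouse pointing proceeded at roughly 10 bits/sec, approximately the rate of the unaided hand in such aiming tasks, indicated that the mouse was not the limiting factor; the hand was.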
While staying at PARC for decades, Card concurrently served as a Stanford University Professor of Computer Science. He became a central contributor to SIGCHI and was tremendously influential to academic, industrial, and government scientists.
In listening to Card’s interview responses (and deeply influenced by my Norman, Dray, and Butler Lampson interviews also, as well as by my past research), I reflected that many AI scientists could learn much from such a mindset of valuing users, all users—knowing users to help augment, for symbiosis, not to control. AI scientists, especially on large scale systems in corporations and government (much ethical AI research is done at universities), could benefit in not merely technical ways, as Steve Jobs and others did from their day in the PARC, but from Card and his team’s ethos and ethics.
Professionalizing HCI: Latent Locomotion to Blissful Brownian Motion
While SIGCHI unintentionally pushed out many non-scientists in the 1980s, it and the HCI field shed a strictly computer science and cognitive science focus to become ever more inclusive of a wide variety of academic scientists, engineers, social scientists, humanities scholars, artists, and others from the 1990s forward. CHI grew from about 1,000 attendees at the first events in Gaithersburg and Boston to more than 3,600 at some recent annual CHI meetings (and SIGCHI now has more than two dozen smaller conferences annually). The SIGCHI/CHI programs and researchers are constantly evolving and exploring varying creative paths that, from a 30,000-foot vantage, might seem to be many random walks, Brownian motion. The research, designed to better serve users, contributes to many important trajectories. The diversity of disciplines and approaches can make communication more challenging, but also more rewarding, and to a high degree a Galison-like trading zone exists in interdisciplinary SIGCHI and HCI.
One example is the Creativity and Cognition Conference, co-founded by artists/HCI scientists Ernest Edmonds and Linda Candy in 1993, which became a SIGCHI event in 1997. It brings together artists, scientists, engineers, and social scientists to share research and work on human-computer interaction in art and systems design. As Edmonds related to me, communication and trust between artists and scientists take time to build, but are immensely valuable. Edmonds is an unparalleled figure in computer generative and interactive art, and a core member of the Systems Group of principally UK computer generative artists. In addition to many prestigious art exhibitions in the 1970s (and beyond), Edmonds published on adaptive software development, with a critique of the waterfall method. His work—in General Systems in 1974—anticipated and helped to define adaptive techniques, later referred to as agile development. Edmonds, through his artist, logician, and computer science lenses, insightfully saw interactive and iterative processes, a new paradigm in programming technique, art, and other design.
HCI research, and its applications, certainly is not always in line with societal good, but it has an idealistic foundation and values diversity and interdisciplinarity. Historians still are in the early innings of HCI research. Elizabeth Petrick has done particularly insightful scholarship on HCI and disability (2015).
Coding and Codifying, Fast and Slow
Nobel Laureate Daniel Kahneman has published ideas on human cognition that are potentially useful to ponder with regard to AI and HCI. Kahneman studies decision making and judgment, and how different aspects of these arise from how we think—both fast (emotionally, unconsciously, and instinctively) and slow (more deeply and analytically).
Programming projects for applications and implementations of systems are often behind schedule and over budget. Code, whether newly developed or recycled, often is applied without an ethical evaluation of its inherent biases.
HCI often involves multiple iterations with users, usability labs, observation in various settings, ethnographic interviewing, and an effective blend of both inspiring emotional-response fast thinking and, especially, deep reflective slow thinking. This slow, analytical thinking and iterative programming (especially maintenance, and endless debugging) could be helpful in beginning to uproot underlying algorithmic biases. Meanwhile, slow and careful reflection on how IT laws, practices, policies, culture, and data are codified is instructive. All of this involves ethically interrogating the what, how, why, and by and for whom of innovation, and valuing maintenance labor and processes, not shortchanging maintenance in budget, respect, or compensation.
Beyond “Laws” to Local Knowledge
In 1967, computer scientist Melvin Conway noted what became christened Conway’s Law: computer architecture reflects the communication structure of the underlying organization where it was developed (made famous by Tracy Kidder’s The Soul of a New Machine). Like Moore’s Law, Conway’s Law is really an observation, and a self-fulfilling prophecy. Better understanding and combatting biases at the macro level is critical. Also essential is evaluation and action at the local and organizational levels. How does organizational culture structure algorithms/code? What organizational policies give rise to what types of code? What do (end) users, including and especially marginalized individuals and groups, have to say about bias? How do decisions at the organizational level reinforce AI/ML algorithmic and data biases, and reinforce and accelerate societal inequality? These are vital questions to consider through many future detailed case studies in settings globally. The goal should not be a new “law,” but rather a journey to gain local knowledge and learn how historical, anthropological, and sociological cases inform on code and codes, toward policies, designs, maintenance, and structures that are more equitable.
“Why Not Phone Up Robinhood and Ask Him for Some Wealth Distribution”
The lyric above, from the 1978 reggae-inflected song “(White Man) In Hammersmith Palais” by The Clash, might be updated to “why not open a Robinhood app”… (at least until it suspended trading). How historians will later assess the so-called Robinhood/Reddit “Revolution,” a transfer of $20 billion away from hedge funds/banks/asset managers over several weeks in early 2021 (punishing bearish GameStop shorting by bidding up shares to force short covering), remains to be seen. Is it a social movement, and of what demographic makeup and type? For many, it likely is, at least in part, a stand against Wall Street, and thus Zuccotti Park comparisons seem apropos. Eighty percent of stock trading volume is automated—algorithmic/programmed (AI/ML)—contributing to why a 2021 CNBC poll showed 64 percent of Americans believe Wall Street is rigged. Like capitalism, equities markets and computers combine as a potent wealth-concentrating machine—one turbocharged in pandemic times and fueled by accommodative monetary policy. “Smart” systems/platforms in finance, education, health, and policing have all accelerated longstanding wealth, health, and incarceration gaps and divergences to hitherto unseen levels. Not to dismiss volatility or financial risk to the Reddit “revolutionaries,” but the swiftness of regulatory calls by powerful leaders is telling. It begs questions about priorities: regulation for whom, of what, when, and why? U.S. IT giants’ use of AI to surveil, and to dominate with anti-competitive practices, has gone largely unregulated (as has fintech) for years. Given differential surveillance, Black, Indigenous, and People of Color (BIPOC) suffer differentially. The U.S. woefully lags Europe on privacy protections and on taxing corporate use of personal data. U.S. racial violence/murders by police disgracefully dwarf those of other democratic nations, and America stands out for its (police and courts) embrace of racially biased facial recognition technology (FRT) and recidivism-predicting AI—such as Clearview FRT and Northpointe’s (now Equivant) Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).
Meanwhile, the parallel Chinese IT giants Baidu, Alibaba, and Tencent, dominant in search, e-commerce, and social networking respectively, use intrusive AI. These firms (fostered by the government) are, ironically, also contributing to platforms enabling a “contentious public sphere” (Lei 2017).
At times, users can appropriate digital computing tools against the powerful in unforeseen ways. Such historical agency is critical to document and analyze. History informs us that AI/ML, like many technologies, if left unchecked by laws, regulations, and ethical scrutiny, will continue to be a powerfully accelerating tool of oppression.
Raging Against Machines That Learn
The record of U.S.-headquartered, AI-based IT corporate giants on data and analytics policy and practices has garnered increasing critique from journalists, academics, legislators, activists, and others. The New York Times has reported on clampdowns on employees expressing themselves on social and ethical issues. Timnit Gebru, co-leader of Google’s Ethical AI group, tweeted in late 2020 that she was fired for sending an email encouraging minority hiring and drawing attention to bias in artificial intelligence. Her email included: “Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset.” (Metz and Wakabayashi 2020).
On June 30, 2020, U.S. Senators Robert Menendez, Mazie Hirono, and Mark Warner wrote to Facebook CEO Mark Zuckerberg critiquing his company for failing to “rid itself of white supremacist and other extremist content” (Durkee 2020). A subsequent Facebook internal audit called for better AI—a tech fix. Deep into 2019, Zuckerberg (with a lack of clarity, as at Georgetown in October 2019) sought to defend Facebook’s policies on the basis of free speech. More concerning than his inability to execute free speech arguments are the lack of transparency and the power wielded by a platform with 2.5 billion users: a platform with immense power to subvert democracy and to harm differentially. It has a clear record of profits over principles. In mid-2020, The Color of Change, the NAACP, the National Hispanic Media Coalition, and others launched the “Stop Hate for Profit” boycott of Facebook advertising for July 2020; more than 1,200 organizations participated. Pivoting PR in changing political winds, Zuckerberg is seeking to shift responsibility to Congress by asking it to regulate (Facebook’s legal team likely will defend the bottom line).
Data for Black Lives, led by Executive Director Yeshimabeit Milner, is an organization and movement of activists and mathematicians. It focuses on fighting for the possibilities of data use to address societal problems and fighting against injustices, stressing that “discrimination is a high-tech enterprise.” It recently launched Abolish Big Data, “a call to action to reject the concentration of Big Data in the hands of the few, to challenge the structures that allow data to be wielded as a weapon…” (www.d4bl.org). This organization is an exemplar of the vital work for change underway, and also of the immense challenge ahead, given the power of corporations and government entities (NSA, CIA, FBI, DoD, police, courts).
HCI, never the concentrating force AI has become, continues to steadily grow as a field—intellectually, in diversity, and in importance. It has a record of embracing diversity, helping to augment and advance human and computer symbiosis. More historical work on HCI is needed, but it offers a useful mindset.
Given AI historical scholarship to date, we know its record has been mixed from the start. From its first decades in the 1950s and 1960s to today, the DoD, NSA/CIA/FBI, police, and criminal justice systems have been frequent funders, deployers, and users of AI systems plagued with algorithmic biases that discriminate against BIPOC, women, LGBTQIA people, and the disabled. Some of the most harmful systems have been in facial recognition and predictive policing. Yet, properly designed, monitored, and maintained, AI offers opportunities for science, medicine, and social services (especially at universities and nonprofits).
The social sciences, humanities, and arts can have a fundamentally positive role in the design, structuring, and policies of AI/ML. A handful of universities have recently launched interdisciplinary centers to focus on AI, history, and society, including the AI Now Institute at NYU (2017) and the Institute for Human-Centered AI at Stanford (2019). The Charles Babbage Institute has made the interdisciplinary social study of AI and HCI a focus (with “Just Code” and beyond)—research, archives, events, oral histories, and publications. In CS, ACM’s Conference on Fairness, Accountability, and Transparency (FAccT), launched in 2018, offers a great forum. Outside academe, many are doing crucial research, policy, and activist work—a few examples: Data for Black Lives; Blacks in Technology; NCWIT; AnitaB.org; the Algorithmic Justice League; Indigenous AI.Net; and the Algorithmic Bias Initiative (U. of Chicago).
The lack of U.S. regulation to date, discrimination and bias, corporate focus on and faith in tech fixes, inadequate transparency, corporate imperialism, and the overpowering of employees and competitors all have many historical antecedents inside and outside computing. History—the social and policy history of AI and HCI, as well as other labor, race, class, gender, and disability history—has much to offer. It can be a critical part of a broad toolkit to understand, contextualize, and combat power imbalances—to better ensure just code and to ethically shape and structure the ghost in the machine that learns.
Acknowledgments: Deep thanks to Bill Aspray, Gerardo Con Diaz, Andy Russell, Loren Terveen, Honghong Tinn, and Amanda Wick for commenting on a prior draft.
Allen, Danielle and Jennifer S. Light. (2015). From Voice to Influence: Understanding Citizenship in a Digital Age. University of Chicago Press.
Alexander, Jennifer. (2008). The Mantra of Efficiency: From Waterwheel to Social Control. Johns Hopkins University Press.
Bardini, Thierry. (2000). Bootstrapping: Coevolution and the Origins of Personal Computing. Stanford University Press.
Benjamin, Ruha. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
Card, Stuart K., Thomas Moran, and Allen Newell (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates.
Card, Stuart K., Oral History (2020). Conducted by Jeffrey R. Yost, Los Altos Hills, CA, February 17, 2020. CBI, UMN.
Dick, Stephanie. (2020). “NYSIIS, and the Introduction of Modern Digital Computing to American Policing.” Just Code: Power, Inequality, and the Global Political Economy of IT (Symposium presentation: Oct. 23). [Hereafter “Just Code” Symposium]
D’Ignazio, Catherine and Lauren Klien. (2020). Data Feminism. MIT Press.
Dray, Susan, Oral History (2020). Conducted by Jeffrey R. Yost, CBI, Minneapolis, Minnesota, January 28, 2020. CBI, UMN.
Durkee, Alison. (2020). “Democratic Senators Demand Facebook Answer For Its White Supremacist Problem.” Forbes. June 30. (accessed online at Forbes.com).
Dryer, Theodora. (2020). “Streams of Data, Streams of Water: Encoding Water Policy and Environmental Racism.” “Just Code” Symposium.
Edmonds, Ernest. (1974). “A Process for the Development of Software for Non-Technical Users as an Adaptive System.” General Systems 19, 215-218.
Eubanks, Virginia. (2019). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Picador.
Galison, Peter. (1999) “Trading Zone: Coordinating Action and Belief.” In The Science Studies Reader, ed. by Mario Biagioli. Routledge. 137-160.
Grudin, Jonathan. (2017). From Tool to Partner: The Evolution in Human-Computer Interaction. Morgan and Claypool.
Hiltzik, Michael. (2009). Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age. HarperCollins.
Kahneman, Daniel. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Kidder, Tracy. (1981). Soul of a New Machine. Little, Brown, and Company.
Lampson, Butler, Oral History (2014). Conducted by Jeffrey R. Yost, Cambridge, Massachusetts, December 11, 2014. Charles Babbage Institute, UMN.
Lauer, Josh and Kenneth Lipartito. (2020). “Infrastructures of Extraction: Surveillance Technologies in the Modern Economy.” “Just Code” Symposium.
Light, Jennifer S. (2005). From Warfare to Welfare: Defense Intellectuals and Urban Problems in Cold War America. University of Chicago Press.
McIlwain, Charlton. (2020). Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter. Oxford University Press.
Mendon-Plasek, Aaron. (2021). “Mechanized Significance and Machine Learning: Why It Became Thinkable and Preferable to Teach Machines to Judge the World.” In J. Roberge and M. Castelle, eds. The Cultural Life of Machine Learning. Palgrave Macmillan, 31-78.
Menking, Amanda and Jon Rosenberg. (2020). “WP:NOT, WP:NPOV, and Other Stories Wikipedia Tells Us: A Feminist Critique of Wikipedia’s Epistemology.” Science, Technology, & Human Values, May 2020, 1-25.
Metz, Cade and Daisuke Wakabayashi. (2020). “Google Researcher Says She Was Fired Over Paper Highlighting Bias in AI.” New York Times, Dec. 2, 2020.
Noble, Safiya Umoja. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Norman, Don, Oral History (2020). Conducted by Jeffrey R. Yost, La Jolla, California, February 12, 2020. CBI, UMN.
Petrick, Elizabeth. (2015). Making Computers Accessible: Disability Rights and Digital Technology. Johns Hopkins University Press.
Turner, Fred. (2010). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University of Chicago Press.
Vinsel, Lee and Andrew L. Russell. (2020). The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most. Currency.
Yost, Jeffrey R. (2016). “The March of IDES: Early History of Intrusion Detection Expert Systems.” IEEE Annals of the History of Computing 38:4, 42-54.
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry. MIT Press.
Yost, Jeffrey R. (March 2021). “Of Mice and Mentalité: PARC Ways to Exploring HCI, AI, Augmentation and Symbiosis, and Categorization and Control.” Interfaces: Essays and Reviews in Computing and Culture Vol. 2, Charles Babbage Institute, University of Minnesota, 12-26.
About the author: Jeffrey R. Yost is CBI Director and HSTM Research Professor at the University of Minnesota. He has published six books (and dozens of articles), most recently Making IT Work: A History of the Computer Services Industry (MIT Press, 2017) and FastLane: Managing Science in the Internet World (Johns Hopkins University Press, 2016) [co-authored with Thomas J. Misa]. He is a past Editor-in-Chief of IEEE Annals of the History of Computing and current Series Co-Editor [with Gerard Alberts] of Springer’s History of Computing book series. He has been a principal investigator on a half dozen federally sponsored projects (NSF and DOE) on computing/software history totaling more than $2 million. He is Co-Editor [with Amanda Wick] of Interfaces: Essays and Reviews in Computing & Culture.
Paul E. Ceruzzi, National Air and Space Museum, Smithsonian Institution
Abstract: The term “The Cloud” has entered the lexicon of computer-speak along with “cyberspace,” “the Matrix,” “the ether,” and other terms suggesting the immateriality of networked computing. Cloud servers, which store vast amounts of data and software accessible via the Internet, are located around the globe. This essay argues that this “matrix” nevertheless has an epicenter: the former rural village of Ashburn, Virginia. Ashburn’s significance is the result of several factors, including northern Virginia’s historic role in the creation of the Internet and its predecessor, the ARPANET. The Cloud servers located there also exist because of the availability of electric power, including a grid of power lines connected to wind turbines and to gas- and coal-fired plants located to its west, a “networking” of a different type but just as important.
In his recent book, Making IT Work, Jeffrey Yost quotes a line from Joni Mitchell’s famous song “Both Sides, Now”: “I really don’t know clouds at all.” He also quotes the Rolling Stones’ hit “Get Off of My Cloud.” Why should a business or government agency trust its valuable data to a third party whose cloud servers are little understood? No thank you, said the Rolling Stones; not until you can explain to me just what the Cloud is and where it is. Yost gives an excellent account of how cloud servers have come to the fore in current computing. Yet Joni Mitchell’s words still ring true. Do we really know what constitutes the “Cloud”?
A common definition of the Cloud is that of sets of high-capacity servers, scattered across the globe, using high-speed fiber to connect the data stored therein to computing installations. These servers supply data and programs to a range of users, from mission-critical business customers to teenagers sharing photos on their smartphones. What about that definition is cloud-like? Our imperfect understanding of the term is related to our misunderstanding of similar terms also in common use. One is “cyberspace,” whose popularity is attributed to the science fiction author William Gibson and his novel Neuromancer, published in 1984. Another is “the Matrix,” the title of a path-breaking book on networking by John Quarterman, published in 1990 at the dawn of the networked age. The term came into common use after the award-winning 1999 Warner Brothers film starring Keanu Reeves. (Quarterman was flattered that Hollywood used the term, but he is not sure whether the producers of the film knew of his book.) In the early 1970s, Robert Metcalfe, David Boggs, and colleagues at the Xerox Palo Alto Research Center developed a local area networking system they called “Ethernet,” a name suggesting the “luminiferous aether” once believed to carry light through the cosmos.
These terms suggest an entity divorced from physical objects: pure software independent of underlying hardware. They imply that one may take the hardware component as a given, just as we assume that fresh, drinkable water comes out of the tap when we are thirsty. The residents of Flint, Michigan know that a robust water and sewer infrastructure is hardly a given, and Nathan Ensmenger has reminded us that the “Cloud” requires a large investment in hardware, including banks of disk drives, air conditioning, fiber connections to the Internet, and above all, a supply of electricity. Yet the perception persists that the Cloud, like cyberspace, is out there in the “ether.”
Most readers of this journal are aware of the physical infrastructure that sustains Ethernet, cyberspace, and the Cloud. I will go a step further: not only does the Cloud have a physical presence, it has a specific location on the globe, namely Ashburn, Virginia.
A map of northern Virginia prepared by the Union Army in 1862 shows the village of Farmwell, and nearby Farmwell Station on the Alexandria, Loudoun, and Hampshire railroad. The town later changed its name to Ashburn, and it lies just to the north of Washington Dulles International Airport. In the early 2000s, as I was preparing my study of high technology in northern Virginia, Ashburn was still a farming community. By the year 2000, the former Farmwell Station was the modest center of Ashburn: a collection of buildings centered on a general store. The railroad had been abandoned in 1968 and was now the Washington and Old Dominion rail-trail, one of the most popular and heavily traveled rails-to-trails conversions in the country. Thirsty hikers and cyclists could get refreshment at the general store, which had also served neighboring farmers with equipment and supplies.
Cycling along the trail west of Route 28 in 2020, one saw a series of enormous low buildings, each larger than a football field, surrounded by a frenzy of construction, with heavy equipment trucks chewing up the local roads. Overhead ran a tangle of high-tension electrical transmission towers, with large substations along the way distributing the power. The scene suggested what it must have been like in Virginia City, Nevada, after the discovery of the Comstock Lode. The buildings themselves had few or no markings on them, but a Google search revealed that one of the main tenants was Equinix, a company that specializes in networking. The tenants of the server buildings try to avoid publicity, but the local chambers of commerce, politicians, and real estate developers are proud to showcase the economic dynamo of the region. A piece on the local radio station WTOP on November 17, 2020, announced that “Equinix further expands its big Ashburn data center campus,” quoting a company spokesperson saying that “…its Ashburn campus is the densest interconnection hub in the United States.” An earlier WTOP broadcast, reporting on the activities of a local real estate developer, noted that “Northern Virginia remains the ‘King of the Cloud.’” In addition to Equinix, the report mentioned several other tenants, including Verizon and Amazon Web Services.
These news accounts are more than hyperbole from local boosters. Other evidence indicates that, although cloud servers are scattered across the globe, Ashburn is indeed the navel of the Internet.
In my 2008 study of Tysons Corner, Virginia, I identified several factors that led to the rise of what I then called “Internet Alley.” One was the development of the ARPANET at the Pentagon, and later at a DARPA office on Wilson Blvd. in Rosslyn. Another was the rise of the proto-Internet company AOL, headquartered in Tysons Corner. Tysons Corner was also the location of “MAE-East,” a network hub that carried a majority of Internet traffic in the network’s early days. The root servers of the dot-com and dot-org registries were once located in the region, with the “a” root server in Herndon, later moved to Loudoun County. The region thus had a skilled workforce of network-savvy electrical and computer engineers, plus local firms such as SAIC and Booz-Allen that supported networking as it evolved from its early incarnations.
Around the year 2000, while many were relieved that the “Y2K” bug had little effect on mainframe computers, the dot-com frenzy collapsed. The AOL-Time Warner merger was a mistake. But there was an upside to the boom and bust. In the late 19th and early 20th century the nation experienced a similar boom and bust in railroad construction. Railroads went bankrupt and people lost fortunes, but the activity left behind a robust, if overbuilt, network of railroads that served the nation well through the mid and late 20th century. During the dot-com frenzy, small firms like Metropolitan Fiber dug up many of the roads and streets of Fairfax and Loudoun Counties and laid fiber-optic cables offering high-speed Internet connections. After the bust, much of this fiber went unused: “dark fiber,” as it was called. Here was the basis for establishing Cloud servers in Ashburn. By 2010, little land was available in Tysons Corner, Herndon, or Reston, but a little farther out along the W&OD rail-trail was plenty of available land.
That leaves the other critical factor in establishing Cloud servers: the availability of electric power. While some Cloud servers are located near sources of wind, solar, or hydroelectric power, as in the Pacific Northwest, northern Virginia has few of those resources. The nearest large-scale hydroelectric plant, at the Conowingo Dam, lies about 70 miles to the north, but its power primarily serves the Philadelphia region. (That plant figures prominently in the classic work on electric power grids, Networks of Power, by Thomas Parke Hughes.) To answer the question of where Ashburn’s power comes from, we return to the Civil War map and its depiction of the Alexandria, Loudoun, and Hampshire, later known as the Washington and Old Dominion Railroad.
The origins of that line go back to the 1840s, when freight, especially coal, from the western counties of Virginia was being diverted to Baltimore, Maryland over the Baltimore and Ohio Railroad. In response, Virginians chartered a route west over the Blue Ridge to the mineral- and timber-rich areas of Hampshire County. (In 1866, part of Hampshire County became Mineral County in the new state of West Virginia.) The Civil War interrupted construction, and after several challenges to its financial structure, the line was incorporated as the Washington and Old Dominion Railway Company in 1911. It never reached farther than the summit of the Blue Ridge, and the proposed route to the west would have had to cross rugged topography; the line could never have competed with the B&O’s water-level route. The shortened line soldiered on until it was finally abandoned in 1968, making way for the rail-trail conversion. One interesting exception was a short spur in Alexandria, which carried coal to a power plant on the shore of the Potomac. That plant was decommissioned in 2014, thus ending the rail era of the Alexandria, Loudoun, and Hampshire.