Interfaces
Essays and Reviews in Computing and Culture

Interfaces publishes short essay articles and essay reviews connecting the history of computing/IT studies with contemporary social, cultural, political, economic, or environmental issues. It seeks to be an interface between disciplines, and between academics and broader audiences.
Co-Editors-in-Chief: Jeffrey R. Yost and Amanda Wick
Managing Editor: Melissa J. Dargay
In the present moment, there are numerous discussions and debates about the function and even the possibility of memorization in artificial neural networks, especially in large language models (Tirumala et al., 2022). A model that has memorized content from its training data is problematic, especially when it is used for generative tasks. Desirable outputs from generative models are those that closely resemble but do not exactly match inputs. Corporations developing and releasing these new technologies may make themselves vulnerable to charges of plagiarism or intellectual property theft when an output image matches one found in the training data. Exceptional performance on natural language processing benchmarks or highly accurate responses to questions from academic and industry tests and exams could be explained by the inclusion of these objects in the training data. “Leaked” private information is also a major concern for text-generative models, and evidence of such information would create similar liability issues (Carlini et al., 2021). While deep learning models do not record strings of text or patches of images within their major architectural components—their weights, specialized layers, or attention heads—information can be reconstructed from the network that reveals sources used as training inputs. This behavior is known as memorization. Memorization is frequently understood to signify a failure of generalization. Deep neural networks are designed to recognize patterns, latent or explicit, and to generalize from the representations of these patterns found within the network—this is why they are called models. Concerns about the leaking of private information are serious but are not the only issues connected with memorization in machine learning; memorization of training data is especially a problem for the testing and evaluation of models. Neural networks are not information storage and retrieval systems; their power and performance are the result of their exposure to many samples from which they learn to generalize. There are different theories of “information retention” in neural networks, and the material history of the early implementations of machine learning provides evidence for the ongoing slipperiness of the concept of memory in machine learning.
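The kind of evidence at issue here can be illustrated with a simple test: prompt a model with the opening tokens of a candidate training document and check whether greedy decoding reproduces the document’s actual continuation verbatim. The sketch below is a minimal illustration in this spirit, not the extraction procedure of Carlini et al. (2021); it assumes the Hugging Face transformers library and uses the small "gpt2" checkpoint purely as a stand-in.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model chosen only for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def looks_memorized(text, prefix_tokens=50, continuation_tokens=50):
    """Greedy-decode a continuation of the document's opening tokens and check
    whether it matches the document's actual continuation verbatim."""
    ids = tok(text, return_tensors="pt").input_ids[0]
    if ids.shape[0] < prefix_tokens + continuation_tokens:
        return False  # document too short for this test
    prefix = ids[:prefix_tokens].unsqueeze(0)
    target = ids[prefix_tokens:prefix_tokens + continuation_tokens]
    out = model.generate(prefix, max_new_tokens=continuation_tokens, do_sample=False)
    generated = out[0, prefix_tokens:prefix_tokens + continuation_tokens]
    return generated.shape[0] == target.shape[0] and bool((generated == target).all())
```

A model that completes many of its training documents in this way is retaining something closer to a record of those documents than a generalized pattern.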
The concept of memory was used in multiple distinct ways in machine learning discourse during the late 1950s and early 1960s. The interest in developing memory systems during that historical moment was tied up in the relays between three overlapping issues: the status of machine learning systems as brain models; the related treatment of perception and memory as mutually implicated; and the belief that specialized learning machines would be faster than conventional computers. The machines that gave machine learning its name were originally developed as an alternative to general-purpose digital computers. These analog machines needed to sense and store information acquired from input data. The various memory mechanisms proposed during this era functioned like semi-permanent, non-volatile storage for these learning machines. They were also the weights used to learn the criteria for classifying input data. They thus played something of a double role in these systems. If the weights were the “programming” for these self-organized systems, then they also functioned as a record of that programming. Serving as both data and instructions, these weights enable what we now call inference on the learned model, which is to say the classification of previously unseen inputs. Memory was not only the persistence of information within the model; it also referred to the nature of the representations stored as information within the weights. Like the contemporary concern with memorization, an exact memory of inputs would mean that the model would likely fail to generalize, which is to say that it was not learning.
In Frank Rosenblatt’s April 1957 funding proposal for the research project known as “Project PARA” (Perceiving and Recognizing Automaton), which would eventually result in the creation of the Mark I mechanical perceptron, he described his recently articulated perceptron rule as not just a method for determining decision boundaries between linearly separable data but also as a way of conceptualizing memory: “The system will employ a new theory of memory storage (the theory of statistical separability), which permits the recognition of complex patterns with an efficiency far greater than that attainable by existing computers” (Rosenblatt, 1957). As a brain model—this was the motivating research paradigm that Rosenblatt would make clear throughout his unfortunately short life—research into machine learning and the perceptron was concerned with using these simulated neural networks to understand more about perception and brain function. While visual perception dominated early research, this area could not be separated from a concern with understanding how visual inputs were stored and how memories of previously perceived patterns were compared with new stimuli.

The “Project PARA” proposal outlines Rosenblatt’s architecture. The system would be composed of three layers: the sensory or “S-System,” an association or “A-System,” and finally the response or “R-System.” This architecture was imagined as a mechanical device, and Rosenblatt anticipated this material manifestation of his design in all three layers. The “S-System,” he wrote, should be imagined as a “set of points in a TV raster, or as a set of photocells” and the “R-System” as “type-bars or signal lights” that might communicate output by “printing or displaying an output signal.” The “A-System” would be the heart, or rather brain, of the perceptron, passing input from the sensors to the response unit by operating on the inputs in combination with a pre-determined threshold value. The output from the multiple A-units, Rosenblatt explained, “will vary with its history, and acts as a counter, or register for the memory-function of the system” (Rosenblatt, 1957). References to the material origins of machine learning are scattered throughout the terminology of this field. The weights that are learned from samples of training data are called weights because these were originally weighted connections between mechanical devices. The A-System provided the Perceptron’s “memory function,” but what it was “remembering” within these weights would be the subject of some debate.
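The division of labor Rosenblatt describes can be made concrete in a short simulation. The sketch below is a minimal software illustration, not Rosenblatt’s hardware or exact procedure; the layer sizes, the random S-to-A wiring, and the threshold value are assumptions chosen only for the example. The only quantities that change with training are the A-unit weights, which serve as the system’s “memory function.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a 20x20 "retina" of S-units, 64 A-units, one R-unit.
n_s, n_a = 400, 64
s_to_a = rng.choice([-1, 0, 1], size=(n_a, n_s))  # fixed, random S-to-A connections
theta = 2.0                                       # pre-determined A-unit threshold
a_weights = np.zeros(n_a)                         # adjustable weights: the "memory"

def a_activations(stimulus):
    """An A-unit fires (1) when its summed sensory input exceeds the threshold."""
    return (s_to_a @ stimulus > theta).astype(float)

def respond(stimulus):
    """R-unit output: the sign of the weighted sum of active A-units."""
    return 1 if a_weights @ a_activations(stimulus) >= 0 else -1

def train(stimulus, label):
    """Error-correction update: adjust weights only when the response is wrong."""
    global a_weights
    if respond(stimulus) != label:
        a_weights += label * a_activations(stimulus)

# One training step on a random binary stimulus labeled +1.
train(rng.integers(0, 2, n_s), 1)
```

Nothing in this sketch stores a copy of any stimulus; the weights record only an accumulated history of corrections, which is the sense in which they act “as a counter, or register” for memory.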
There were a number of other early analog “learning machines” that confronted the same problems encountered by Rosenblatt. After being exposed to the Perceptron while working as a consultant in the U.S., Augusto Gamba, a physicist at the University of Genoa in Italy, created his own device known as the PAPA (derived from the Italian rendering of Automatic Programmer and Analyzer of Probabilities). Like Rosenblatt’s Perceptron, the PAPA combined memory and the statistical method for determining decision-making criteria:
A set of photocells (A-units) receive the image of the pattern to be shown as filtered by a random mask on top of each photocell. According to whether the total amount of light is greater or smaller than the amount of light falling on a reference cell with an attenuator, the photocell will fire a “yes” or “no” answer into the “brain” part of the PAPA. The latter is simply a memory storing the “yes” and “no” frequencies of excitation of each A-unit for each class of patterns shown, together with a computing part that “multiplies” or “adds logarithms” in order to evaluate the probability that an unknown pattern belongs to a given class (Borsellino and Gamba, 1961).
Gamba’s PAPA borrows the name “A-unit” from Rosenblatt’s idiosyncratic nomenclature for the Perceptron’s second layer, its hidden layer (one of the reasons the PAPA has become known as a “Gamba perceptron”), although in Gamba’s architecture, the device’s “memory” is not found in the association layer but in the final “brain” unit.
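Gamba’s description of a memory of “yes” and “no” frequencies, combined with adding logarithms, can be rendered as a short sketch. This is an illustration under stated assumptions rather than a reconstruction of the PAPA’s circuitry: the firing counts below are invented toy values, and a small smoothing term is added to avoid taking the logarithm of zero.

```python
import numpy as np

# Hypothetical memory: how often each of four A-units answered "yes" for each class,
# out of n_shown training patterns per class. All values are toy numbers.
fire_counts = {"triangle": np.array([9, 2, 7, 1]),
               "square":   np.array([3, 8, 2, 9])}
n_shown = {"triangle": 10, "square": 10}

def class_log_score(answers, cls):
    """Add log-probabilities of each A-unit's yes/no answer under class cls,
    mirroring PAPA's "adds logarithms" evaluation (with +1 smoothing)."""
    p_yes = (fire_counts[cls] + 1) / (n_shown[cls] + 2)
    return np.sum(np.where(answers == 1, np.log(p_yes), np.log(1 - p_yes)))

def classify(answers):
    """Assign the unknown pattern to the class with the highest accumulated score."""
    return max(fire_counts, key=lambda c: class_log_score(answers, c))

print(classify(np.array([1, 0, 1, 0])))  # prints "triangle" for these toy counts
```

What the “memory” holds here are per-class firing frequencies, not any stored image of a training pattern.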
The relation of the machine’s accumulated weights to the input data was an open problem, and several different theories were used to explain and interpret the meaning of these values. For some historians of machine learning, the simplified mathematical model of a neuron proposed by Warren S. McCulloch and Walter Pitts has been assumed to be the major inspiration and basis for many working on the first neural networks (McCulloch and Pitts, 1943). While these McCulloch-Pitts neurons (as they are called) were incredibly influential, it was another theoretical account, one that yoked together a model of perception and memory, that would influence the architecture of the most important early neural networks. This was the decidedly non-mathematical work of Donald O. Hebb, a Canadian psychologist. Hebb’s The Organization of Behavior proposes a theory that seeks to reconcile what otherwise appeared as two distinct accounts of memory by answering the question of “How are we to provide for perceptual generalization and the stability of memory, in terms of what the neuron does and what happens at the synapse?” (Hebb, 1949). Perceptual generalization is the idea that people can learn to generalize from just a few examples of a wide range of objects. As Hebb puts it, “Man sees a square as a square, whatever its size, and in almost any setting” (Hebb, 1949). The stability of memory was rooted in evidence of a persistent connection or association between particular stimuli and a set of neurons. Hebb theorized a solution to this impasse with the idea of locating (in terms of neurons) independent patterns of excitation. This idea was of obvious utility to machine learning researchers wanting to develop techniques to recognize objects like letters no matter where they appeared, for example, shifted to the left or the right, when projected on a two-dimensional set of sensors called the “retina.”
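Hebb’s proposal is usually compressed into the slogan that cells that fire together wire together, and the learning rules it inspired are easy to state in code. The following is a generic textbook-style Hebbian update, not a transcription of any specific 1950s implementation; the pattern values and learning rate are illustrative assumptions.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each connection in proportion to the coactivation of its
    pre-synaptic and post-synaptic units."""
    return weights + lr * np.outer(post, pre)

# Repeated exposure to the same coactivation pattern builds a persistent
# "assembly" in the weights without storing the pattern itself.
w = np.zeros((3, 4))
pre, post = np.array([1, 0, 1, 0]), np.array([1, 1, 0])
for _ in range(20):
    w = hebbian_update(w, pre, post)
print(w)  # large weights only where pre and post units were repeatedly coactive
```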
In an article appearing in 1958, Rosenblatt examined one theory of perception and memory that suggested that “if one understood the code or ‘wiring diagram’ of the nervous system, one should, in principle, be able to discover exactly what an organism remembers by reconstructing the original sensory patterns from the ‘memory traces’ which they have left, much as we might develop a photographic negative, or translate the pattern of electrical charges in the ‘memory’ of a digital computer” (Rosenblatt, 1958). Instead of memorizing inputs, Rosenblatt explained, the Perceptron implemented Hebb’s theory of learning and separated learned patterns from their exact inputs. “The important feature of this approach,” Rosenblatt wrote, “is that there is never any simple mapping of the stimulus into memory, according to some code which would permit its later reconstruction” (Rosenblatt, 1958). In these relatively simple machines and simulated networks, the association units might record the history of inputs as a collective representation, but they could not reproduce individual memorized inputs. For Rosenblatt, this was a sign of the success of the Perceptron; it demonstrated the practicality of Hebb’s theory by implementing a memory system in the form of weights that could be used for distinguishing between classes of data without memorizing distinct inputs used to train the network. This was also Rosenblatt’s grounds for differentiating the Perceptron from mere pattern matching: techniques developed contemporaneously with the Perceptron implemented databases of templates and accomplished pattern matching by memorizing and matching input samples to entries in a database (Dobson 2023).
Research on analog memory units connected two of the major sites in the development of machine learning: Rosenblatt’s lab at Cornell University in Ithaca, New York, and the Stanford Research Institute in Menlo Park, California (Stanford University would later divest itself of the laboratory, which became known as SRI International). While Rosenblatt’s Mark I Perceptron is the best known of the early machines of machine learning, SRI had developed its own series of devices, the MINOS and later the MINOS II. While SRI’s first projects implemented the Perceptron, researchers would later develop an alternative learning rule. SRI’s MINOS project was a platform for evaluating different sensing and preprocessing techniques. George Nagy, a Hungarian-born computer scientist, worked with Rosenblatt at Cornell while a graduate student in electrical engineering; memory devices for neural networks became the subject of his dissertation and related research. Nagy worked with others in Rosenblatt’s Cognitive Systems Research Program (CSRP) group to design and construct a second-generation device called the Tobermory.


The Tobermory took its name from a short story by Saki (H. H. Munro) that featured a talking cat. As its name suggests, it would be a “phonoperceptron” designed for audio input. Nagy’s dissertation, defended in 1962, was titled “Analogue Memory Mechanisms for Neural Nets” and examined different possible designs for analog memory devices. The existing options examined by Nagy included more experimental electro-chemical devices, such as electrolytic integrators and solions, as well as novel film-based photochromic devices using slide projectors that were difficult to use at scale. Nagy settled on what was known as the “magnetostrictive read-out integrator,” a device suggested by SRI’s Charles A. Rosen. This was the tape-wound magnetic core memory device employed by the MINOS II and initially designed by SRI staff member Harold S. Crafts (Brain et al., 1962). It also had the advantage of sharing features with the core memory used in conventional digital computers. The labor-intensive production of these memory devices, as Daniela K. Rosner et al. argue, is one of several important sites of “hidden, feminized work” involved in the creation of mid-century computing (Rosner et al., 2018). Addressing his selection of a tape-wound device for the Tobermory, Nagy wrote: “The chief virtue of the electromechanical integrator consists of its inherent stability. The ‘weight’ of a given connection is represented by a mechanical displacement, hence it is not subject to variation due to ambient changes or fluctuations in power supply level” (Nagy, 1962). Many existing analog alternatives, as Nagy notes in his survey, were subject to rapid decay and error, and were sometimes difficult to reinitialize or to clear of previously stored values.


Despite the ongoing research and development of analog learning machines with memory devices during this period, many researchers were simultaneously implementing neural networks as simulated machines on conventional digital computers. In their justification for building a learning machine, the SRI MINOS team explained what they saw as the deficiency of digital computers: “Their major function in the present line of research is to simulate the performance of machine concepts which might be mechanized in some form which would be efficient (smaller, faster, cheaper, etc.). The general-purpose digital machine thus appears as a research tool rather than as a final device for pattern recognition” (Brain et al., 1960). In these simulations, the weights were stored in regular core memory during training and evaluation and persisted in various offline storage systems. The simulation of learning machines was necessary at the beginning of machine learning while engineers worked to construct analog machines and find appropriate memory devices, but the paradigm stuck as digital computers increased in speed and became easier to program and use. The appeal quickly became apparent to researchers. In an article summarizing his research into analog memory devices, Nagy speculated that advancements in digital computers might soon render analog memory obsolete. “In principle,” he wrote, “any pattern recognition machine using weighted connections may be simulated on a binary machine of sufficiently large capacity” (Nagy, 1963a). Specialized hardware for machine learning, although now fully digital and instrumented with layers of software, returned in the late 1980s and early 1990s during the boom in high-performance massively parallel computers. Today, costly clusters of high-density graphics processing units (GPUs) and tensor processing units (TPUs) are being deployed to train very large models, although these, too, execute software-simulated learning machines.
Early machine learning was primarily directed toward the discrimination and classification of visual data. These models worked with highly simplified representations of images. They were not trained to generate new images. Today’s deep learning models in computer vision and the extremely popular Transformer-based large language models are now routinely used in generative applications. The size of these models, combined with these new uses (themselves a function of model size), has prompted a reconsideration of the memory issue. The assumption that patterns of activation generalize, as Hebb theorized in biological models, seems to be under pressure when applied to understanding the operation of artificial neural networks with billions or more parameters. There is strong evidence that large language models are memorizing examples from their training data and that this behavior is more likely in large models (Carlini et al., 2021). The retention of this information suggests that these patterns can be mapped. Research into the interpretability of deep learning models has discovered some of these patterns and demonstrated that sets of neurons can be edited to alter the model’s predictions (Meng et al., 2022). This line of inquiry returns us to important lingering questions that were also present at the founding of the field of machine learning: the relation between learning and memory, the differences between generalization and memorization, and the location of memory in neural networks.
Bibliography
Borsellino, A., and A. Gamba (1961). “An Outline of a Mathematical Theory of PAPA,” Del Nuovo Cimento 20, no. 2, 221–231. https://doi.org/10.1007/BF02822644.
Brain, Alfred E., Harold S. Crafts, George E. Forsen, Donald J. Hall, and Jack W. Machanik (1962). “Graphical Data Processing Research Study and Experimental Investigation.” 40001-PM-60-91.91(600). Menlo Park, CA: Stanford Research Institute.
Carlini, Nicholas, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, et al. (2021). “Extracting Training Data from Large Language Models.” In Proceedings of the 30th USENIX Security Symposium. 2633–2650.
Dobson, James E. (2023). The Birth of Computer Vision. University of Minnesota Press.
Hay, John C., Ben E. Lynch, David R. Smith (1960). “Mark I Perceptron Operators’ Manual (Project Para)” VG-1195-G-5. Cornell Aeronautical Laboratory.
Hebb, Donald O. (1949). The Organization of Behavior: A Neuropsychological Theory. John Wiley and Sons.
McCulloch, Warren S., and Walter Pitts (1943). “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5, 115–33.
Meng, Kevin, David Bau, Alex Andonian, and Yonatan Belinkov (2022). “Locating and Editing Factual Associations in GPT.” Advances in Neural Information Processing Systems, 35, 17359–17372.
Nagy, George (1962). “Analogue Memory Mechanisms for Neural Nets.” PhD diss. Cornell University.
Nagy, George (1963a). “A Survey of Analog Memory Devices.” IEEE Transactions on Electronic Computers EC-12, no. 4: 388–93. https://doi.org/10.1109/PGEC.1963.263470.
Nagy, George (1963b). “System and Circuit Designs for the Tobermory Perceptron,” Cognitive Research Program. Report No. 5. Ithaca, NY: Cornell University.
Rosenblatt, Frank (1962). “A Description of the Tobermory Perceptron.” Cognitive Research Program. Report No. 4. Collected Technical Papers, Vol. 2. Edited by Frank Rosenblatt. Ithaca, NY: Cornell University.
Rosenblatt, Frank (1957). “The Perceptron: A Perceiving and Recognizing Automaton (Project PARA).” Report 85-460-1. Cornell Aeronautical Laboratory.
Rosenblatt, Frank (1958). “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review 65, no. 6: 386–408. https://doi.org/10.1037/h0042519.
Rosner, Daniela K., Samantha Shorey, Brock R. Craft, and Helen Remnick (2018). “Making Core Memory: Design Inquiry into Gendered Legacies of Engineering and Craftwork.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM. https://doi.org/10.1145/3173574.3174105.
Tirumala, Kushal, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan (2022). “Memorization without Overfitting: Analyzing the Training Dynamics of Large Language Models.” In Advances in Neural Information Processing Systems 35. Edited by S. Koyejo et al. 38274–38290. Vancouver, Canada: Curran Associates.
James E. Dobson (June 2023). “Memorization and Memory Devices in Early Machine Learning.” Interfaces: Essays and Reviews on Computing and Culture Vol. 4, Charles Babbage Institute, University of Minnesota, 40-49.
About the author: James E. Dobson is assistant professor of English and creative writing and director of the Institute for Writing and Rhetoric at Dartmouth College. He is the author of Critical Digital Humanities: The Search for a Methodology (University of Illinois Press, 2019) and The Birth of Computer Vision (University of Minnesota Press, 2023).

Artificial Intelligence (AI) has been shown time and time again to be a remarkable engine for codifying and accelerating inequality. Popular news media are littered with examples of AI gone wrong. Mortgage software has been found to recommend better interest rates for white borrowers (Bartlett et al.). Criminal justice AIs are more likely to recommend denial of parole for people of color (Benjamin). And all manner of bias has been found in search engine results (Noble). In each of these cases, the desire to develop and sell transformative new technologies got in the way of making fair and equitable systems. As a result, many in AI are looking for a better way. In her 2016 Weapons of Math Destruction, Cathy O’Neil (2017) argued that the future of AI needed a better “moral imagination.” AI and technology developers need “to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead.” Since then, “ethical AI” has become an explosive area of investment and development.
There has been a proliferation of initiatives in industry, nonprofit, academia, and occasionally government—all devoted to better AI. We cannot really say that we are short on moral imagination at this point. In fact, I would go as far as to say that we are confronted by a dizzying array of competing moral imaginations. Different approaches to AI’s moral future vie for attention, leaving technologists with an expansive menu of options. But like the items on any menu, not all are of equal nutritive (moral) value. There’s good reason to believe that much of so-called Ethical AI is little more than window dressing (Burt). It’s handwaving at a vision of fairness that comes second to innovation and profit. There’s also good reason to think that even the best of intentions will not lead to ethical outcomes. This last issue is the focus of this piece. Much has been written about Ethical AI that’s little more than marketing. I want to think about how new technology designed to address a clear and obvious ethical need often falls short. In so doing, I reflect on a few recent attempts to develop AI for better pain medicine.

The Pain Problem
On a scale of 1-10, how much pain do you feel right now? This simple question, asked millions of times a day throughout the world, is state-of-the-art in pain measurement. The Numeric Pain Rating Scale (NPRS) and its cousin, the Visual Analog Scale (VAS)—sketches of progressively sadder smiley faces—are the primary ways that doctors assess pain. NPRS and VAS are low tech solutions that assist with the practice of pain management. Importantly, they do not really measure pain in any meaningful sense of the word. Rather, they help patients assign numbers to daily experiences and those numbers guide treatment. If your shoulder used to hurt at level 6, but daily stretching makes it hurt at level 3, we know that physical therapy is working.
The International Association for the Study of Pain (IASP) defines pain as “An unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage.” According to the best scientific minds who study the topic, being in pain does not require you to have an underlying physical injury. Have you ever winced when you saw someone else touch a hot stove? This is one reason why “potential” is such an important word in the IASP’s definition. You are in pain when you touch a hot stove, and you are in pain when you see someone touch a hot stove. Similarly, the same injury (when there is one) doesn’t cause the same pain in every person. Think about athletes who finish a competition on a broken leg, and only realize it after crossing the finish line. There was no pain until the situation changed, and then there was incredible pain. All of this is why the IASP’s definition of pain comes with a critically important note: “A person’s report of an experience as pain should be respected.”
Unfortunately, despite this recommendation from the IASP and even though the NPRS and VAS scales are the gold standard approach to pain management, the inability to directly measure pain is a regular complaint among healthcare providers. In my conversations with pain management doctors, many have expressed a strong desire for actual pain measurement. An interventional anesthesiologist I spoke to expressed frustration that he didn’t “have a way of hooking someone up to a pain-o-meter” (Graham, 2015, 107). Likewise, an orthopedist complained that “one of the first things those concerned will admit [is] there’s no algometer, no dial on somebody’s forehead. As long as you can’t read it out, you have to rely on the patient’s report” (Graham, 2015, 121). This desire for objectivity, combined with the common denigration of patient reports as “merely subjective,” creates a situation where bias often runs amok in pain medicine. Increasingly, AI is being offered as a possible solution for such systemic inequities. In the context of pain management specifically, AI developers are working on what they hope will be that missing “dial on the forehead.”

Trust and Bias
Unfortunately, doctors don’t always trust what patients say about their own pain. Part of this has to do with that drive for objectivity. Part of this has to do with how we’ve responded to the opioid epidemic in this country. And part of it has to do with bias. A 2016 survey of medical trainees found that 73% believed at least one false statement about race-based biological differences (Hoffman et al.). Among the most striking statistics was the fact that 58% believed that Black skin is thicker than white skin. This false belief and others like it have been traced directly to inequalities in pain management. Physicians routinely underestimate patient pain across patient groups, but the racial differences are striking.
Doctors are twice as likely to underestimate Black patients’ pain (Staton et al.). As a result, Black patients are less likely to receive pain medication, and when they do, they routinely receive lower quantities than white patients. As these disparities are increasingly recognized by the medical community, recommendations for improvement tend to center on a mix of implicit bias training and increased reliance on more “objective” diagnostic technologies. The Association of American Medical Colleges, for example, recommends that in addition to implicit bias training, clinical guidelines should “remove as much individual discretion as possible,” and researchers should “continue the search for objective measures of pain” (Sabin).
Despite AI’s history of bias, it is frequently justified on the basis of its mathematical objectivity. Combine that with increasing investments in Ethical AI, and it creates the perfect environment for algorithmic pain measurement. And so, the developers of the new algorithmic pain prediction (ALG-P) system hope it will both provide more objective pain measurement and lead to reduced clinical biases (Pierson et al.). Working with a diverse population of patients with osteoarthritis of the knee, the researchers trained the ALG-P to try to match patient reports. That is, they took X-rays of knees and linked those X-rays to each patient’s NPRS score, and the ALG-P system learned to identify certain patterns in the images that would predict NPRS values. Next, the study team compared ALG-P estimates of pain severity with those of the preexisting industry-standard clinical decision tool, the Kellgren–Lawrence Grade (KLG). The KLG, which was developed and validated on a predominantly white British population in 1957, guides human evaluation of X-rays for osteoarthritis of the knee. ALG-P was 61% more accurate in estimating patient pain than the KLG. Importantly, however, while the ALG-P reduces the frequency and magnitude of racial disparities, it does not eliminate them. So, if a Black patient had a true pain level of 8, a doctor using KLG might estimate the pain at level 6, and one using the ALG-P might estimate it at a 7.
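The training setup can be sketched in a few lines. The code below is a minimal illustration of the general approach described above, not the authors’ implementation: the backbone network, hyperparameters, and the data loader (assumed to yield batches of X-ray tensors paired with patient-reported pain scores) are all placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # stand-in CNN backbone
model.fc = nn.Linear(model.fc.in_features, 1)  # regression head: predicted pain score

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_epoch(loader):
    """loader is assumed to yield (xray_batch, reported_pain_batch) pairs."""
    model.train()
    for xrays, reported_pain in loader:
        optimizer.zero_grad()
        pred = model(xrays).squeeze(1)               # one score per image
        loss = loss_fn(pred, reported_pain.float())  # supervised by the patient's report
        loss.backward()
        optimizer.step()
```

The key design choice is visible in the loss: the supervision signal is the patient’s own report, not a radiologist’s grade.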
At first glance, ALG-P lives up to some of the best recommendations for Ethical AI. One common recommendation for better AI is to ensure that training data is labeled by members of the communities who will be most affected by the system and its use. A pervasive problem in pain medicine is that physicians tend to believe their own estimates of patient pain over those from patient reports. By training ALG-P on labels drawn from patient reports, the developers artfully sidestep this issue. In an interview with the MIT Technology Review, one of the study authors, Ziad Obermeyer, highlighted this more just approach as central to the study (Hao). Ultimately, both the study itself and some of the related media coverage indicate a hope that the availability of these data might encourage self-reflection leading to reduced clinical biases. As the study points out, “cooperation between humans and algorithms was shown to improve clinical decision making in some settings.” The MIT Technology Review article is even more enthusiastic, suggesting that “AI could make health care fairer—by helping us believe what patients say.” However, living up to one principle of Ethical AI does not necessarily assure that a given AI leads to a more ethical world.
Now, I have a few significant concerns about ALG-P. First, if we think back to the IASP definition of pain, it’s not all that surprising that the AI only leads to a 61% improvement. ALG-P looks at knee X-rays, that is, at physical features alone, and has no access to the psychological state of patients. We’re already missing a huge component of what it means to be in pain if we’re not including the psychological dimension. Also, as a researcher with longstanding interests in pain medicine, I am getting a powerful sense of déjà vu here. Doctors suddenly “trusting” patients when a new technology comes along and “proves” those patients right is becoming an all-too-familiar narrative. Almost 20 years ago, the case du jour was fibromyalgia—a chronic widespread bodily pain condition believed to be caused by difficulties regulating certain nerve signals. Fibromyalgia disproportionately afflicts women, another group many doctors seem to have trouble believing. But twenty years ago, then-recent advances in neuroimaging (PET, fMRI) were able to identify differences in how some people’s brains process certain stimuli. With “objective” technological verification, doctors started to “trust” their patients.
Now, for many, this version of “trust” does not sound much like genuine trust. If trust is only extended to some patients when what they say is verified through technological “objectivity,” then there is no actual trust at all. What’s more, the average cost of a PET scan in 2020 was just over $5,000 (Poslusny). Even if insurance is reimbursing these costs, that is a pretty steep fee for “trusting” women in pain. It is not yet clear whether ALG-P will be used broadly and, if so, how much it will cost patients. But if it’s anything like other computational imaging techniques, it could be pretty expensive for a product that offers around a 61% improvement. This is all the more problematic, of course, given that following IASP guidance and believing Black patients would lead to substantially more improvement while having the benefit of being free.
All in all, I have some pretty serious reservations about the extent to which this is an ethical addition to the practice of pain medicine. Importantly, this does not mean I think it’s impossible to make Ethical AI. The case of ALG-P suggests that it takes a lot more than a recognized injustice and a desire to do good in the world to ensure that a new system actually leads to ethical outcomes. Doing so requires more than just new technologies. This is another way of saying that an AI just isn’t going to fix inequality. AIs might be useful as part of a comprehensive approach that includes technical solutions, targeted education, and appropriate regulation. One of the biggest risks of the tech fix is that it will be understood as a “fix.” Maybe ALG-P is a good idea as a stopgap for those patients who are in pain and undertreated right now. But the long-term work toward justice has to continue while band-aid technologies offer partial improvements today.

Beyond the ‘Ethical’ Tech Fix
Ultimately, ALG-P is a textbook example of Ethical AI in a clinical context. Ethical AI tends to embrace a bias toward action. The ethical vision is grounded in the presumption that companies will build things. Thus, governance solutions and interventional technologies alike are engineered to guide (rather than prevent) that action. For the most part, this kind of interventional Ethical AI focuses on technologically engineered solutions to algorithmic bias. For example, one of the canonical works of Ethical AI proposes the following formal definition of anti-classification:
d(x) = d(x') for all x, x' such that x_u = x'_u, where x_u denotes the unprotected features of x (Corbett-Davies and Goel)
In English, “anti-classification” is largely a matter of not including identifying characteristics (including ethnic data) in AI systems. Of course, as many in critical algorithm studies have pointed out, the complex effects of systemic racism can create surrogate data points for race, such as zip code, which blunt this narrow approach to anti-classification. Although ALG-P was not developed in a corporate context, its underlying logics are remarkably similar to what we see in those contexts. In recent years, IBM, Facebook, and Google have all deployed new computational libraries designed to detect bias or engineer fairness in their algorithms (IBM; Gershgorn; Google). Technologically oriented solutionism is precisely what allows some areas of Ethical AI to offer an apparently ethical intervention that is still ultimately subordinated to the dominant market logics of the corporation. In much the same way, ALG-P is an act of Ethical AI. To be sure, it is not situated in a corporate context, but it ultimately offers a tech fix that subordinates emancipatory aims to long-dominant clinical logics.
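To make the formal definition above concrete, here is a minimal sketch of anti-classification as withholding protected attributes from a model’s inputs; the dataset, column names, and classifier are all hypothetical and chosen only for illustration.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Tiny invented dataset; every column name and value is hypothetical.
df = pd.DataFrame({
    "race":     ["A", "B", "A", "B", "A", "B"],
    "zip_code": [10001, 60601, 10001, 60601, 10001, 60601],
    "income":   [55, 48, 62, 50, 58, 47],
    "approved": [1, 0, 1, 1, 1, 0],
})

protected = ["race"]                             # attributes withheld from the model
X_u = df.drop(columns=protected + ["approved"])  # x_u: the unprotected features
y = df["approved"]

# The decision d(x) depends only on x_u, so d(x) = d(x') whenever x_u = x'_u.
clf = LogisticRegression(max_iter=1000).fit(X_u, y)

# Caveat noted above: features left in X_u, such as zip_code, can still act as
# surrogates for the dropped attribute, blunting this form of anti-classification.
```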
I’m certainly not the first to suggest that technologists need to think just as much about whether they should act and when they should act, not just how they should act. There’s a massive cross-sector precautionary literature out there devoted to these kinds of questions. Inspired by that kind of thinking, I close this essay by considering how Ethical AI in healthcare contexts might address precautionary concerns in the face of ongoing harm to marginalized populations. Specifically, I suggest that those who wish to offer technological solutions to health inequity should, at the very least, address the following questions.
- Is the proposed intervention likely to substantially address an unmet or under-met community need?
- Have members of the communities most likely to be affected by the intervention been substantively involved in project conceptualization, putative benefits, risk assessment, data curation, and training set labeling?
- Does the project team have a robust plan for evaluating unintended consequences during design, development, testing, and distribution?
- Does the project team have a robust plan for supporting long-term community-centered justice-oriented initiatives in this area?
If the answer is not a resounding “yes” to all of these questions, then precaution (as opposed to intervention) is almost certainly the way to go. In the context of a robust community-led approach to development, though, it may be appropriate to work at developing temporary technological fixes. That last question, however, is key. One of the biggest risks of the tech fix is that it will be understood as a “fix.” If healthcare is to work at developing and deploying band-aid technologies offering partial improvements in care, then the long-term community-led work of social justice has to continue and eventually replace those temporary technological scaffolds.
Bibliography
Bartlett, Robert, et al. “Consumer-Lending Discrimination in the FinTech Era.” Journal of Financial Economics, vol. 143, no. 1, pp. 30–56. ScienceDirect, https://doi.org/10.1016/j.jfineco.2021.05.047.
Benjamin, Ruha. (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Polity, https://www.politybooks.com/bookdetail?book_slug=race-after-technology-abolitionist-tools-for-the-new-jim-code--9781509526390.
Burt, Andrew. (2020). “Ethical Frameworks for AI Aren’t Enough.” Harvard Business Review, 9 Nov. hbr.org, https://hbr.org/2020/11/ethical-frameworks-for-ai-arent-enough.
Corbett-Davies, Sam, and Sharad Goel. (2018). The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. arXiv:1808.00023, arXiv.org, https://doi.org/10.48550/arXiv.1808.00023.
Gershgorn, Dave. (2018). “Facebook Says It Has a Tool to Detect Bias in Its Artificial Intelligence.” Quartz, 3 May, https://qz.com/1268520/facebook-says-it-has-a-tool-to-detect-bias-in-its-artificial-intelligence.
Google. What Is ML-Fairness-Gym? (2019). Google, 1 Apr. 2023. GitHub, https://github.com/google/ml-fairness-gym.
Graham, S. Scott. (2015). The Politics of Pain Medicine: A Rhetorical-Ontological Inquiry. University of Chicago Press. University of Chicago Press, https://press.uchicago.edu/ucp/books/book/chicago/P/bo20698040.html.
Hao, Karen. (2021). “AI Could Make Health Care Fairer—by Helping Us Believe What Patients Say.” MIT Technology Review, 22 Jan., https://www.technologyreview.com/2021/01/22/1016577/ai-fairer-healthcare-patient-outcomes/.
Hoffman, Kelly M., et al. (2016). “Racial Bias in Pain Assessment and Treatment Recommendations, and False Beliefs about Biological Differences between Blacks and Whites.” Proceedings of the National Academy of Sciences, vol. 113, no. 16, Apr. pp. 4296–301. Pnas.org (Atypon), https://doi.org/10.1073/pnas.1516047113.
IBM. AI Fairness 360. https://aif360.mybluemix.net/. Accessed 2 Apr. 2023.
Noble, Safiya Umoja. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
O’Neil, Cathy. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
Pierson, Emma, et al. (2021). “An Algorithmic Approach to Reducing Unexplained Pain Disparities in Underserved Populations.” Nature Medicine, vol. 27, no. 1, Jan., pp. 136–40. www.nature.com, https://doi.org/10.1038/s41591-020-01192-7.
Poslusny, Catherine. (2018). “How Much Should Your PET Scan Cost?” New Choice Health Blog, 31 July, https://www.newchoicehealth.com/pet-scan/cost.
Sabin, Janice A. (2020). “How We Fail Black Patients in Pain.” AAMC, 6 Jan., https://www.aamc.org/news-insights/how-we-fail-black-patients-pain.
Staton, Lisa J., et al. (2007). “When Race Matters: Disagreement in Pain Perception between Patients and Their Physicians in Primary Care.” Journal of the National Medical Association, vol. 99, no. 5, May 2007, pp. 532–38.
S. Scott Graham (May 2023). “The Dangers of Ethical AI in Healthcare.” Interfaces: Essays and Reviews on Computing and Culture Vol. 4, Charles Babbage Institute, University of Minnesota, 32-39.
About the author: S. Scott Graham, PhD, is Associate Professor in the Department of Rhetoric and Writing at the University of Texas at Austin. He has written extensively about communication in health science and policy. He is the author of The Politics of Pain Medicine (University of Chicago Press, 2015) and The Doctor and The Algorithm (Oxford University Press, 2022). His research has been reported on in The New York Times, US News & World Report, Science, Health Day, AI in Health Care, and Scientific Inquirer.
Utah is an unparalleled exemplar of how creating a global center of excellence in an emerging specialty of computer science and engineering is possible with government seed funding. It was a special moment when the Advanced Research Projects Agency’s Information Processing Techniques Office (ARPA IPTO) awarded the University of Utah $5 million ($43 million in today’s dollars) over six years, 1966 to 1972, for a project entitled “Graphical Man-Machine Communication” to launch the field of computer graphics and create a leading center for research and education. Three years earlier, IPTO had funded “Mathematics and Computation,” or Project MAC, at MIT, also for six years, 1963 to 1969 (initially $2 million/year, though this grew to over $3 million/year). Given MIT’s Whirlwind (a real-time precursor to SAGE) in 1951, the launch of Lincoln Laboratory that same year, and the MIT spinoff nonprofit MITRE Corporation in Bedford, Massachusetts, in 1958, major Department of Defense support had already helped make MIT a top computing center. As such, Project MAC extended core areas of research and made an excellent computer science program far stronger. What was impressive about Utah was that IPTO provided a half dozen years of support, far less than half the funds awarded to Project MAC, and extremely talented and creative people ran with it and created a center of excellence anew. The Kahlert School has embraced the words of one of its most famed and early doctorates (1969), Turing Award winner Alan Kay: “The best way to predict the future is to invent it.”
Dave Evans, Ivan Sutherland, other faculty, and graduate students made it happen in Utah. It changed from a program to a department (1973) to a school (2000), and throughout, it has achieved amazing feats. What came through so strongly in hearing the talks and panel discussions, and in meeting and engaging in conversations with the pioneers over two days in Utah this March, is that the research and development extending from the University of Utah and its alumni was, and is, a product of a quite special culture.

Through the great leadership of Kahlert School of Computing Director Mary Hall, and the tremendous faculty at the school, that core, special culture, with some newer elements and commitments added, thrives today. Utah is one of the leaders in computer science and remains unmatched in graphics within computer science. The Kahlert School of Computing also impacts the world with newer tracks, such as Data Science and Software Development, and possesses a strong commitment to diversity, equity, and inclusion. It was also a pleasure to meet Heather Kahlert, Vice President of the Kahlert Foundation, who joins Hall and Dean of Engineering Richard B. Brown in this commitment to excellence and inclusion. The foundation’s support recently led to the school being renamed the Kahlert School of Computing, and her family foundation has supported an important initiative on inclusivity within the school entitled “Programming for All.” Also impactful, John and Marcia Price recently made a $50 million donation to the College of Engineering, and their lead gift made the new, $190 million John and Marcia Price building possible. Opening soon, it will house the Kahlert School of Computing and allow for its rapid expansion of existing and new areas of computing education and research.

There were actually three events held in unison on 23-24 March 2023: the full-day 50th Anniversary of the Computer Science Department of the University of Utah; the IEEE Milestone Dedication on the morning of the second day; and the Graphics Symposium that afternoon. The three were complementary, reinforcing and expanding on each other in highly constructive ways. Most of the time, the program focused on looking back, but importantly, it also looked forward. Contributing to both was a fantastic day-one keynote by Telle Whitney, the longtime past CEO of the Anita Borg Institute. Whitney is also co-founder of the Grace Murray Hopper Celebration, as well as of the National Center for Women and Information Technology (NCWIT). Nobody has done more than Telle Whitney to advance women in computing and to carry on the early work of her fellow computer scientist and collaborator Anita Borg.
On day two, consultant and IEEE Milestone Coordinator Brian Berg awarded an IEEE Milestone to the University of Utah Kahlert School of Computing for the department/school’s pioneering work in graphics. Berg presented the award to the school’s Director, Mary Hall, and the Dean of Engineering, Richard B. Brown.
This prestigious IEEE Milestone Award is an elite designation in technology. In computing, developments such as the Bletchley Park code-breaking; the ENIAC; MIT’s Whirlwind computer (real-time); Moore’s Law; and UCLA’s role as a birthplace of the Internet (ARPANET) have been awarded IEEE Milestones (which include a bronze plaque—on day two, a video of Hall and Brown’s unveiling of the Utah CS plaque was played). Outside of computing, IEEE Milestones include Samuel Morse and Alfred Vail’s “Demonstration of Practical Telegraphy” in 1838; “Thomas Alva Edison’s Menlo Park Laboratory,” created in 1876; and the “Reginald Fessenden Wireless Radio Broadcast” of 1906. In short, it is a major honor and a useful IEEE program commemorating and exploring the past. Brian Berg has added much to the IEEE Milestone program, for more than a dozen years leading many IEEE Milestone efforts in the history of computing, software, and networking for IEEE Region 6, the Western United States.

Hall organized and was Master of Ceremonies for the magnificent day one “50th Anniversary of Computer Science at the University of Utah” symposium. She kicked off the event with an informative historical overview, drawing on the David Evans Papers and other archival materials.
Odd Ducks and Grand Challenges
ARPA funding was a necessary but not in itself sufficient element in Utah’s leading the way in the computer graphics revolution. Even having two of the most brilliant pioneers in computer graphics—hiring David Evans in 1965 to start the CS program and attracting Ivan Sutherland away from Harvard to join Evans—was not enough. The final, and arguably most important, ingredient was the environment and culture that Evans set upon his arrival (leaving the faculty of UC Berkeley) in the mid-1960s, and that Sutherland contributed to mightily as well after his arrival in 1968.
There were other standout faculty in the early years, including but not limited to William Viavant, who served from 1964 to 1987, and the late Elliott Organick, who contributed to operating systems research and education and related areas of computer science with his nineteen books—including one I have devoured again and again on Multics and its security design (security and privacy are two of my areas of historical and sociological research). Also contributing to first-day events were impactful faculty who joined the department in the 1980s and beyond. They added greatly to the event and showed the breadth of the department in so many areas of computer science—Al Davis, Duane Call, Chuck Seitz, and Rajeev Balasubramonian. Program alum, Kahlert School Research Professor, and Flux Research Team Co-Director Robert Ricci’s moderation of a panel with graduates David Andersen of Carnegie Mellon and Cody Cutler from Amazon was especially intriguing in exploring “…Network Research, from ARPANET to Emulab and Beyond.”
Alan Kay is among the first and most famed of Utah CS doctoral alums (1969). His quote on inventing the future is fitting given he helped build the office of the future at Xerox PARC in the 1970s. Kay provided leadership in creating windows-oriented graphical user interfaces (GUI) and made major contributions to object-oriented programming (OOP), including his pivotal leadership creating the OOP-optimized Smalltalk language with Adele Goldberg, Dan Ingalls, and others. Kay’s presentation was by video, and focused on Dave Evans, Ivan Sutherland, and the environment of CS at Utah in the 1960s. Another, early and long-famed graduate, Jim Clark, also invented the future in founding Silicon Graphics and later Netscape. He, too, gave a brief and inspired talk on day one—his was in person.
As a social and organizational historian of computing, Utah has long fascinated me, and I have enjoyed the oral histories that have been conducted by past and current colleagues of the organization I am now privileged to direct, The Charles Babbage Institute for Computing, Information, and Culture. Perusing our unparalleled archives on computer graphics (many collections), and reading and re-reading the secondary literature has been a joy—including and especially past CBI Tomash Fellow Jacob Gaboury’s stellar, award-winning new book, Image Objects: An Archeology of Computer Graphics, and long ago, Founding CBI Director Arthur Norberg and Judy O’Neill’s classic Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986.
How do an ARPA grant and two extremely gifted scientists create an unparalleled global center of excellence at a state school in a state with a relatively small population (around 30th in the nation)? How does it succeed in fostering an organizational culture able to attract and cultivate the people to succeed on such a grand scale? Beyond Evans’ and Sutherland’s leadership gifts, high standards, and generosity, I would argue that not being embedded within an existing elite institution (an Ivy or an equivalent like MIT or Stanford) was a major plus. It helped facilitate the freedom for the faculty and students to experiment, to take risks, and to think big. That was my belief before traveling to Utah for the two days of events, and it was reinforced by the program, reminiscences, and discussions there.
Evans and Sutherland’s entrepreneurial drive shaped the department and the pioneering graphics company Evans and Sutherland, but it was not the Silicon Valley style of entrepreneurism that moves fast and breaks things. Instead, it meant tending to the necessary money and resources side of the equation while focusing on the nurturing and creative sides, more akin to a metaphor raised several times at the event: “cultivating a garden.” This was a garden that encouraged talented graduate students, faculty, and company team members to grow the next new thing, the code, the tools, and the devices that could have a positive impact on science, knowledge, work, and leisure. Over the two days of meetings, the importance of the physical setting came through as a meaningful factor as well: the mountains and their tremendous beauty, the skiing, the retreats, and the frequent computer science meetings held at the picturesque Alta Lodge.
In starting a new program and seeking a culture different from that of other emerging schools in computer science, Evans looked for outliers in the graduate students he (and colleagues) admitted to the program. The seeking of “odd ducks” was foundational and essential to the freethinking, creative intellectual culture that he cultivated from his formation of the program in 1965 (the Computer Center had launched in 1958 and grown to a staff of 30 people), one of 11 such programs at the time.
In 1968, with Ivan Sutherland’s arrival (he resigned his associate professorship in computer science at Harvard to become a professor of computer science at Utah) and the ARPA IPTO funds, the program really took off. He and Evans were the two top researchers in the new field of graphics—they essentially invented it, Sutherland especially so, with his path-breaking 1963 dissertation on Sketchpad. Sketchpad was a legendary computer graphics program that transformed computer science. It influenced so much—from Human-Computer Interaction (HCI), Computer-Aided Design (CAD), and object-oriented programming to GUIs and virtual reality (VR). He had the additional insight to make a film demo that conveyed to the emerging field of computer science that a new major domain within it, graphics, was possible.
As Sutherland reflected during a panel at the event, ARPA IPTO Director J.C.R. Licklider had convened a group of top scientists and military leaders to see Sketchpad and meet with him. Despite his young age, Sutherland was essentially a legend shortly after his dissertation. In 1964, Sutherland, only twenty-six years old, followed Founding Director Licklider in taking the reins to become the second ARPA IPTO Director, funding basic research largely at universities that helped transform the new field of computer science in areas such as time-sharing, AI, and other early graphics and networking work. Two of the most important grants in IPTO’s history were Project MAC (by Licklider in 1963) to MIT in artificial intelligence and time-sharing (Multics) and the six-year grant (by Robert Taylor in 1966) to the University of Utah in graphics. Other critical 1960s IPTO grants provided the basis of the ARPAnet. Utah has the distinction of being one of the four nodes of the ARPAnet at its launch in 1969.
Given Evans’ and Sutherland’s immediate respect for each other and their visionary, entrepreneurial personalities, they became immediate friends and collaborators. And coming together at Utah was also about starting a company. Sutherland reminisced with a smile that whether Evans was to join him in Cambridge or he was to go to Utah came down to “he [Evans] had seven kids and I just two.” It was fortuitous for both scholars, for the field of graphics, and for the U, as Utah is affectionately known. It was also beneficial to the company, Evans and Sutherland. The University of Utah likely offered greater freedom for the company than Sutherland’s Harvard, Cambridge, or Boston might have. Evans and Sutherland cultivated an overlapping family-type environment in both settings and endeavors. For the company, this made it all the easier to retain its talented computer scientists over the long term—good people tend to job-hop more in Silicon Valley and in Boston/Cambridge.
Evans and Sutherland, trailblazing graphics commercially, increasingly brought the technology to the world in a fast-growing range of applications. They attracted a top venture capital firm in Hambrecht and Quist, and their company was soon valued at $50 million. In just ten years, it grew from $10 million in revenue to $100 million in revenue.
As Sutherland conveyed about himself and Evans, and many others at the recent symposia reinforced, at the University of Utah Computer Science Department and at Evans and Sutherland, the two leaders sought to keep both organizations as flat as possible. Also important to them was assuring the satisfaction of everyone contributing to something larger than themselves. Seeking and solving hard problems was key to the satisfaction of students, employees, and researchers alike. The challenges added to a sense of common purpose, a closeness among individuals, and a feeling of being part of the team. The graduate students became part of Evans’ and Sutherland’s extended families, and the two frequently had them to their homes to socialize.

Early Graphics
The many images shown at the event exemplified the words of the largely retired set of standout graduates who spoke and participated in its panels. These included a costume-party photograph of playful attire and big smiles on the faces of faculty and graduate students in the Evans home. Regarding the company, Evans and Sutherland, one data point goes beyond the speakers and hints at a broader employee experience that was very positive and family-like: the retirement group and its continuation over so many years. This sizeable group holds a picnic reunion each year, and the large number of people coming to this event year after year is suggestive of the positive culture of the company over decades.
Is there a risk of exaggerating or embellishing the culture, given that the people speaking at the event had impressive careers and legendary accomplishments, a selection bias of sorts? Certainly, and further research into this culture through oral history, the David Evans Papers, and other archives would likely be fruitful and fascinating. For now, the group seemed to me large enough, and the message clear enough from people speaking, often quite emotionally and always in a way uniquely their own, to convey a telling sense of the culture and environment that Sutherland and Evans, the people, and Evans and Sutherland, the company, created.
The participants in the event (especially the second-day symposium specifically on graphics) were primarily, though not exclusively, graduate students from the late 1960s and the 1970s. The images and the talks showcased tremendous accomplishments of alumni from multiple continents. Nonetheless, most were white and male. This was not unique to Utah. Gender diversity and inclusion were challenges across computer science both before a mid-1980s peak in women majors (reaching 38 percent) and from the early 1990s forward to today. Apart from the mid-to-late 1980s, women's participation as CS majors has generally been in the teens to low twenties, and at times the lower teens. As such, Telle Whitney's wonderful talk on gender, both historical and prescriptive, and highlighting some incredible women, added a great deal to the event.
For the remainder of this reflection, I will discuss several keynotes and other talks that especially resonated with me regarding the culture of the University of Utah Computer Science Department/School of Computing and how faculty and alumni carried that culture impactfully into the broader world. In selecting a handful to discuss, I want to stress that all of the panel discussions and talks were compelling and fascinating, and many I do not highlight in what follows also exemplified the special culture of CS and the Kahlert School of Computing at the University of Utah.
Impacting the World at Scale: Nvidia, GPUs, and LLMs
Steve Parker gave a compelling keynote address on "Utah and the Frontiers of Computing." Like a number of the program's doctorates, he later was a professor within it. For the past sixteen years he has been at Nvidia, where he currently serves as Vice President, Professional Graphics. In microcircuitry the corporation has strategically led by skating to (and inventing and shaping) where the puck is going rather than where it has been (as Intel did, in stumbling fashion), leading the way with Graphics Processing Units (GPUs). GPUs are central to gaming, an area Nvidia has long served, but the far larger opportunity is that the company is now also concentrating on large language models, machine learning, and many other application areas. As OpenAI, Microsoft, and Google seek to exploit the opportunity in general consumer markets (in my mind with too little HCI and user-experience research into how ChatGPT and Bard might amplify societal biases and extend inequalities, as search has done), Nvidia is focused on enterprise and targeting verticals.
In addition to the wonderful graphics displays Parker and his team prepared for the presentation, he refreshingly acknowledged the ethical critique of "search" and the importance of research and ethics in getting large language models and applications of generative artificial intelligence right so that they have a positive impact on the world. A theme throughout was how researchers and corporate leaders such as himself are "standing on the shoulders of giants" in Evans, Sutherland, and others. This is true in the technical sense as well as in the organizational and decision-making sense of stewarding machine learning out in the world. Parker concluded on a humorous note, with a slide of song lyrics produced after he asked ChatGPT to "Write a rap song on the history of computer graphics at the University of Utah." To give a brief sense, from Verse 2 (1980s): "In the eighties, Pixar joined the crew; and they worked on RenderMan, which was something new. It made computer graphics look oh so fine; And it's still used today, it stood the test of time…"
Gender, Inclusion, and Innovations of Extraordinary Women
While ethics was a portion of Parker's talk, it was the focus of Telle Whitney's excellent keynote address, which preceded it on day one. Whitney was an undergraduate at the University of Utah and went through several potential majors, including Theater, Political Science, and English, before settling on Computer Science (BS 1978). She took an Interest Inventory Test and scored exceedingly high in programming, and Professor Richard F. Riesenfeld became an advocate for her. Whitney earned her Ph.D. in Computer Science from Caltech, working under the legendary Carver Mead, co-inventor of Very Large-Scale Integration (VLSI) with Xerox PARC's Lynn Conway. Doctorate in hand, by the mid-1980s she went on to technical and managerial positions at the semiconductor companies Actel and Malleable Technologies, and she also held senior leadership roles at a few tech startups. With her friend Anita Borg, she co-founded the Institute for Women and Technology, which Borg ran until she became terminally ill with brain cancer. In 2002 Whitney, initially temporarily, took over as CEO of the institute, which was renamed the Anita Borg Institute and later AnitaB.org. She ended up staying and served as CEO and President until she retired from the role in 2017. In 1994, Borg and Whitney launched the Grace Hopper Celebration, which that year was a gathering of 500 women, an event for research, socializing (including dance parties), recruiting, and professional support. It has continued to grow steadily and is tremendously impactful for those who attend and for advancing women's access and opportunities in computer science. There is a long way to go, but AnitaB.org, the Grace Hopper Celebration, and NCWIT are powerful and positive forces.

Whitney spoke about the Anita Borg Institute and its co-founding of the Grace Hopper Celebration, which started strong and has only grown since. Participation rates of women in computer science remain a challenge. The biological sciences are near gender parity (around 50 percent women). In computer science, recent numbers have been around 20 percent women at the bachelor's and doctoral degree levels, and a bit higher for master's degrees, but still under one-third. Women's participation in computer science even lags that of engineering overall. The early part of Whitney's address covered the underrepresentation of women historically and today and made the very important point that it is both an inequity and a detriment to computer science, which loses out on so much talent and creativity.
The last two-thirds of Whitney's talk profiled five women and what they are doing as leaders, advocates, and role models to advance equity and inclusion for women in computer science. Whitney offered rich cases for all five; I provide brief mention below.
- Cecilia Rodriguez Aragon, Professor of Human Centered Design and Engineering at the University of Washington, who co-invented the treap data structure.
- Ashley Mae Conard, a computational biologist who works as a Senior Researcher at Microsoft Research.
- Aicha Evans, a computer engineer who served as Intel's Chief Strategy Officer. In 2019 she became CEO of Zoox, a self-driving technology firm, and remained CEO after Amazon acquired Zoox for $1.3 billion.
- Mary Lou Jepsen, CEO of Openwater and co-founder of One Laptop per Child.
- Fei-Fei Li, Professor of Computer Science at Stanford, who started AI4ALL in 2017 and is Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence.
Whitney began studying CS at Utah, became a standout computer scientist and entrepreneur in industry, and has been an unparalleled leader for women in technology, leading AnitaB.org for fifteen years. Her message is important for all higher-education institutions, one insightfully and inspirationally conveyed through the biographical cases of these five tremendously accomplished and impactful women.
Utah and Influencing Corporate Cultures—Evans and Sutherland and Far Beyond
Dave Evans and Ivan Sutherland, by all accounts of the people on the program and in the audience, created an atypical corporate culture at their company, analogous to how they built the University of Utah's program/department into a center of excellence. This included seeking driven individuals who were creative and interested in tackling and solving big problems. It also included a non-hierarchical management structure with few layers. This was evident in Robert Schumaker's insightful and engaging presentation. He joined General Electric (GE) in 1960 working on visual simulation systems but ran into dead ends in trying to get customers to contract for his and Rodney Rougelot's work on flight simulators. Without customers signing on, GE was not supportive of continuing the work. The two were recruited away by Evans and Sutherland in 1972 and had the freedom and the runway to succeed, and they did mightily for the company. While the photos Schumaker showed of the basic one-story buildings and trailers of Evans and Sutherland's "campus" may not have looked impressive or inviting compared to GE, the environment and support were. A mere year after joining the more conducive team atmosphere of Evans and Sutherland, Schumaker and Rougelot led work that resulted in selling 1,000 flight simulators to airlines around the globe. Schumaker became Simulation Division President, and after two dozen years with the company, Rougelot rose to become President and CEO of Evans and Sutherland in 1994.
The culture that Evans and Sutherland built (at the university and the company) shaped how founders and leaders managed at some of the most influential graphics and software companies in the world. This included at Pixar Animation Studios and Adobe.
Ed Catmull gave one of the most moving talks of the event. It began with his account of his graduate student days. In the early years of the doctoral program, he had classes with Jim Clark, Alan Kay, and John Warnock. Catmull made major advances in computer graphics, including surface contouring and texture mapping. He went on to do pioneering work in film graphics but ran into difficulty selling the ideas and his early work, that is, until Lucasfilm hired him in 1979 and he became Vice President of its Computer Division. In 1986 Steve Jobs acquired this division of Lucasfilm, which became Pixar.
Catmull is a co-founder of Pixar Animation Studios and worked very closely with Jobs. He was emotional in emphasizing that writers have the early Steve Jobs, in his first stint at Apple Computer, pegged appropriately (his impatient, difficult personality and disrespect of others) but fail to recognize that the experience of being pushed out of Apple changed him. The Jobs he worked with in leading Pixar, he explained, was a changed man (it is not uncommon for journalists writing history to prioritize the story they want to tell, a sort of truthiness over truth). At Pixar, both before and after it was taken over by Disney in 2006, there was a culture of commitment to completing projects and of taking the time and putting in the resources to do them right. Catmull articulated how Pixar and Disney had parallel functional departments and units, sometimes benefiting from each other, but kept their own cultures and identities. This was key to success, and it runs counter to the usual ideas in merger-and-acquisition management of integrating, eliminating overlap, and laying people off to capture efficiencies. Catmull stated that another key lesson (one taken from Utah) is broad participation in decision-making and processes, and keeping powerful people out of the room or reducing their number in the room. These were keys to Pixar's success with Toy Story (with the Utah Teapot a part of it, of course); Toy Story 2, 3, and 4; Finding Nemo; Ratatouille; and its many other creative achievements and blockbuster hits.
Sutherland Still Future-Focused in His Stellar Presentation
At various times Ivan Sutherland took the stage on panels and offered remembrances, interesting anecdotes, perspectives, and historical details. It was his keynote at the end of day one that stood out for me. He gave a technical overview of Single Flux Quantum computing as a wholly new path for addressing the greatest challenges in computing today.
As Sutherland related, the challenge today in extending Moore's Law, the inability to keep doubling the components on a chip, or at least a drastic deceleration of that doubling, amounts to hitting a "power wall." This, as he sees it, is what is limiting computing's future. Sutherland gave a powerful and compelling talk advocating Single Flux Quantum as a path to pursue in addressing this challenge. It is distinct from the methods and paradigm of Moore's Law as well as from quantum computing. The latter may still be a few decades out and will work for some scientific and engineering purposes, but far from all or even most applications of computing. In Single Flux Quantum devices, magnetic flux is quantized. Sutherland stated that the worst part of semiconductors today is the wires; Single Flux Quantum does not have this problem, and further, it is fast, digital, and Turing complete. It has its own challenges, and Sutherland went through each, arguing the payoff could be tremendous and that if the US does not pursue it, other nations will.

To do Single Flux Quantum right, Sutherland advocated government funding for 1,000 engineers to work on it, and he emphasized that Utah should be a part of this. In his twenties, in the 1960s, with Sketchpad and the Head Mounted Display, Sutherland invented computer graphics, VR, object-oriented programming, and more. Also in his twenties (mid-twenties at that) he led ARPA's IPTO, skillfully funneling funds to worthy projects that would change computing. At Utah he and David Evans, and their company, were soon beneficiaries of their own IPTO funding, and they did change the world. The impact of their students and former employees is profound and continues. I, like more than 99 percent of the population, do not have the technical understanding to assess Single Flux Quantum, but the case Sutherland made for it seemed deeply researched and informed. More importantly, some of the fraction of one percent who do understand it were in the room. The questions afterward, from top engineers, were also strong and some quite challenging. Sutherland handled them masterfully. At age 84 Sutherland is doing what he has always done, in line with a famed quote from one of his early star students, Alan Kay: "the best way to predict the future is to invent it." While Sutherland, by his own acknowledgement, will likely not lead the effort to its conclusion given his age, he is seeking to be a highly informed policy advocate for it, and doing so in his typical virtuoso fashion. It was moving and resulted in an extended standing ovation from all.
Bibliography
[Most of this reflection/review essay is drawn from the presentations at the three events described, put on over two days by the Kahlert School of Computing at the University of Utah, 23-24 March 2023. Below are some books, articles, oral histories, and archival collections that have influenced my thinking on the history of computer graphics.]
Alias Wavefront Records. Charles Babbage Institute for Computing, Information and Culture Archives. University of Minnesota.
“COE Receives Major Gift, New Name.” (2023). The John and Marcia Price College of Engineering at the University of Utah, January 11, 2023.
Gaboury, Jacob. (2021). Image Objects: An Archaeology of Computer Graphics. (MIT Press).
Machover, Carl Papers. Charles Babbage Institute for Computing, Information and Culture Archives. University of Minnesota.
Misa, Thomas J. (2010). Gender Codes: Why Women Are Leaving Computing. (Wiley-IEEE Press).
Norberg, Arthur L. and Judy O’Neill. (1996). Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986. (Johns Hopkins University Press).
Smith, Alvy Ray. (2021). A Biography of the Pixel. (MIT Press).
SIGGRAPH Conference Papers. Charles Babbage Institute for Computing, Information and Culture Archives. University of Minnesota.
Sutherland, Ivan. Oral history interview conducted by William Aspray, 1 May 1989, Pittsburgh, Pennsylvania. Charles Babbage Institute, University of Minnesota.
Jeffrey R. Yost (April 2023). “From a Teapot to Toy Story, and Beyond: A Reflection on Utah, Computer Science, and Culture.” Interfaces: Essays and Reviews on Computing and Culture Vol. 4, Charles Babbage Institute, University of Minnesota, 19-31.
About the author: Jeffrey R. Yost is CBI Director and HSTM Research Professor. He is Co-Editor of the Studies in Computing and Culture book series with Johns Hopkins University Press and is PI of the new CBI NSF grant "Mining a Usable Past: Perspectives, Paradoxes and Possibilities in Security and Privacy." He is author of Making IT Work: A History of the Computer Services Industry (MIT Press), as well as seven other books and dozens of articles; he has led or co-led ten sponsored projects for NSF, Sloan, DOE, ACM, IBM, etc., and has conducted and published hundreds of oral histories. He serves on committees for NAE and ACM and on two journal editorial boards.
Can Moore’s Law Teach Us How Users Decide When to Acquire Digital Devices?
In 1965 Gordon Moore observed how semiconductors evolved over time, both in how they increased their capacity to hold and process data and in how their costs declined in an almost predictable manner. Known as Moore's Law, his observation proved remarkably accurate over time. This essay suggests that it could lead to a clearer definition of a Moore's Law type of consumer behavior. Unlike in Moore's case, where its proponent had observed the evolution of computer chips close up and one could generalize from his insight, we have less empirical evidence on which to base a precise description of how a consumer behaves.
Hence, doing the same with respect to consumers is more difficult. That is our reality. I hypothesize that consumers of digital technologies behaved as if they were knowingly applying Moore's Law to their acquisition, use, and replacement of digital goods and services. The paucity of evidence about their behavior means we do not yet know how far one can generalize the way Moore did, but one's own experience with computing devices suggests the notion has possibilities.
People living in the nation where this law first became evident, through continuous innovation in microprocessors, became some of the earliest users of consumer digital electronics: watches in the 1970s, PCs in the 1980s, the Internet and flip phones in the 1990s, and smartphones and digital home assistants in the 2000s. As use of digital products increases, the need to understand how consumers decided to embrace such technologies becomes urgent beyond business circles, extending to academic study of the role of information in modern society.
This essay suggests that at least one lesson about digital innovations understood by historians may be useful in helping business leaders, economists, public officials, and other historians understand why individuals became extensive users of digital products. It draws on Moore's Law, a rough gauge of how hardware performance and costs evolved, and applies it to users' experiences.
This essay discusses how scholars could study consumer behavior. It is a call for users of all manner of computing-based technologies to be studied by testing the hypothesis that consumers may have behaved in a Moore’s Law sort of way. Because computing historians are already familiar with the role of Moore’s Law on the supply side of the equation, they should be able to use that tacit insight to begin understanding the demand side of the story.

Photo credit: IntelFreePress, https://www.flickr.com/photos/intelfreepress/8575080587/sizes/o/in/photostream/
The Historian’s Problem
Historians face the problem of understanding how and why people adopted so many digital consumer products in essentially one long generation. Digital consumer products account for over 50 percent of all IT sales in the world; the other half are traditional company-to-company sales. Sales run annually in the trillions of dollars and continue to increase at rates faster than national economies grow. Sales of consumer electronics increased between 4 and 7 percent annually over the past several decades, as less "advanced" economies, notably China and India in recent years, expanded their consumption of such devices too. This constitutes an annual market of $1.7 trillion for devices and software alone, not including the costs of using Internet services. Studying early adoption of PCs by students, or writing the history of computing companies of the 1950s-1980s, is therefore insufficient.
In one sense, this is an old conversation about the diffusion of technology. Economists and historians feel they understand the issue because they rely on neo-classical economic theory to explain what is happening, studying how people pick what to appropriate based on their best interests. Neo-classical economics assumes that people know about a particular technology and use such knowledge in their purchasing decisions; consumers exhibit rational behavior. Such thinking also acknowledges that people pay a price for acquiring whatever information they have with which to make a decision. That sense of full rationality is being questioned by behavioral economics. In 2017 economist Richard H. Thaler was awarded the Nobel Prize in Economics for demonstrating that people can act irrationally too, and that this behavior can be predicted. His work encouraged economists to identify how consumers behaved that way. If the underlying idea of Moore's Law reflects consumer behavior, then economists and historians have a way of viewing how users of digital technologies approached them.
Between the 1960s and the end of the 1980s, business historians and others who focused on the evolution and adoption of technologies proffered an alternative explanation, called path dependency, to explain that current decisions were, and are, strongly influenced by prior decisions. This prism made sense for decades, as scholars in multiple disciplines grappled with this latest general-purpose technology called computing. The problem is that none of these types of explanations are substantive enough to lead to more robust insights into why and how individuals embraced IT so quickly, given that most consumers did not have the technical insight that neo-classical economic thinking assumes. Path dependency, or perhaps lock-in, inches closer in assisting, but only in explaining why one device might seem more attractive than another once a consumer is already familiar with a particular type of equipment, software, or process. If one were replacing an Apple smartphone with another Apple smartphone, path-dependence explanations are helpful. But such thinking does not explain why that same individual acquired an Apple phone in the first place.
Historians have nibbled at the problem. Familiar examples include Ruth Schwartz Cowan, Trevor J. Pinch, Nelly Oudshoorn, and Frank Trocco, all of whom looked at how to study consumer behavior through a sociological lens. In 1987, Cowan recommended focusing on a "potential consumer of an artifact and imagining that consumer as a person embedded in a network of social relations that limits and controls the technological choices that she or he is capable of making." Pinch and his collaborators advocated case studies, applying sociological methods, to identify relationships among technologies, relevant social groups, and consumption choices. However, as Cowan observed, these scholars "have given us a prescription but precious few suggestions about how it may be filled." So the problem introduced in this essay has been with us for a long time and has yet to be resolved. Perhaps using Moore's Law can provide a more prescriptive approach, furthering Cowan's thinking and that of social constructionist-oriented historians.
Table 1 lists some of the most widely adopted digital consumer products. While incomplete, it suggests the necessity of appreciating the diversity of devices, even before any conversation about the myriad versions of each introduced, simultaneously and incrementally, by thousands of vendors. IT experts avoided forecasting a slowdown in the evolution of general-purpose computing. As sales of older technologies slowed, because so many people already had them (e.g., smartphones, laptops), new ones such as intelligent home speakers, virtual reality products, and wearables attracted consumers.

Introductory Dates for Major Digital Consumer Products in the US*

| Product | Year |
|---|---|
| Microwave ovens | 1967 |
| Digital watches | 1972 |
| Handheld calculators | 1972 |
| Cellular telephones | 1973 |
| VCRs and videos | 1975 |
| Desktop computers (PCs) | 1975 |
| CD players | 1982 |
| Portable consumer telephones | 1983 |
| Betamax movie camera | 1983 |
| IBM PC and clones | 1981-1984 |
| Battery operated laptop computers | 1988 |
| Game consoles | 1980s |
| Digital home movie cameras | 1991 |
| Internet access | 1993-94 |
| Digital cameras | 1994 |
| Flat TV screens | 1997-99 |
| DVD players | 2003 |
| Blu-Ray players | 2006 |
| Smartphones | 2007 |
| Programmable home thermostats | 2008-2010 |
| Tablets | 2010 |
| Digital personal assistants | 2011 |
| Smart (video) doorbells | 2012 |
*Dates reflect when consumers at large were able to acquire these products.
How Economists Explain Demand for Consumer Electronics
Traditional economic thinking holds that consumer technologies are "public goods," things widely available to anyone who desires them. To become available to "anyone," however, requires that the consumer understand a product's value, have the means to acquire it, and be willing to pay for it. Most will not pay $5,000 for a Dell computer today but would for one at less than $500. Technological knowledge is also an important factor.
There is growing interest in the role of the technical knowledge that makes these devices more accessible to consumers. Increasingly, people know that IT goods (hardware) are used with software to transform other goods, making them more valuable, such as computing to improve fuel efficiency in a car. Consumers see that as an advantage worth investing in. In the 1980s, economists like Paul Romer added that injecting growing bodies of knowledge into goods made them more valuable. This is the idea that knowledge about a technology becomes embedded in the actions of consumers, not just in the minds of those inventing new products. How else could one rationalize acquiring a PC in 1982 for $3,000 or an Apple phone in 2018 for $999? Economists argued that the abundance of knowledge created more value than scarcity. Debates around those issues continue, but as one observer explained, "It is the growth of knowledge that is the engine of economic growth," and that means all manner of knowledge use, including what consumers thought. Economists explain that someone interested in acquiring a long-established product can find a great deal of information to inform their purchase decision, but less for new products just coming onto the market.
Consumers acquired IT products because they provided utility or fulfilled a desire. PC users wanted to consume digital content (i.e., read the news, view a movie, or play a game), to produce it (for instance, word processing), or to send email. One question economic historians should want to explore is to what extent the behavior seen with earlier general-purpose technologies was also evident with respect to digital tools and toys.
A new breed known as behavioral economists is examining the psychology of economic behavior. Some have concluded that "economic value" (i.e., price) still dominates purchase decisions, trumping, but not eliminating, the power of emotional or social attractions. Experimentation with such phenomena as the attraction of ring tones suggests people do not buy digital products just to improve their productivity, challenging older neo-classical economic beliefs. Often consumers acquired these for enjoyment, such as flat screens or online games, again with value residing in consumption. Music in particular stimulated considerable demand for IT in the post-2000 period, as video games had a decade earlier. The social value and playfulness of a particular technology also had a role in its appeal. Entertainment and social interactions provided the most significant motivation to acquire digital products.
Economists argue that consumers will not always have perfect knowledge of a digital product and so make mistakes, that is to say, do not always make the choices that would optimize their economic advantage. Consequently, consumers learn to avoid such mistakes. But digital products are used so individualistically that users rely on personal experience to characterize the benefits of a product (the idea that my use of an Apple PC is different from yours). The more effective they are in tailoring use of a digital product to their needs, the more one can assume their attraction to it increases, even if the journey to that satisfaction is bumpy or long.
Knowledge of a product (or a genre of products and technologies) combined with experience using it is highly influential and normal. And as traditional economists predict, consumers learn what they want, balance needs and desires against costs, and then act rationally. That line of reasoning remains economic orthodoxy, and scholars in other disciplines have yet to see strong reason to challenge it. Historians seem more interested in how people value a digital product than economists, who are more concerned with prices; each focuses on different issues.
Anthropologists began exploring the role of individuals in their acquisition of personal computing. Their earliest studies were based on consumer behavior of the 1980s and early 1990s. For example, in a study of English families acquiring PCs, households were treated as miniature institutions (e.g., like a company). They acquired the least expensive machines available, then more frequently added hardware and peripherals. Their acquisitions spilled over into other electronics, with nearly half acquiring additional televisions and cassette players, and some more tape players. As the study observed: "purchase of one or more home computers has also been stimulus to further purchase of more traditional brown goods." Husbands had twice as much experience with PCs as their wives (presumably from job activities) and led the charge to acquire digital products. Once knowledge of a new class of products diffused into their homes, adoption escalated quickly. Neighbors and work colleagues had clearly talked to each other, at least among professional and managerial classes. So, points go to the economists because of affordability issues, while familiarity with the technology and its features shaped households' responses to these products. Games played on televisions in the 1980s moved quickly to PCs, influencing acquisition in both a Moore's Law-like fashion and through path dependency.
What Marketing Experts Say About Consumer Behavior with Digital Products
Marketing professionals focus on how consumers react to such offerings, and they have much to teach scholars in other disciplines. They want to predict how consumers will respond to new products and to persuade them to buy. They argue that a consumer's existing knowledge exerts major influence on the decision to acquire a digital product. The more knowledgeable consumers are, the more likely they are to benefit from its use. Someone with knowledge of one technology is less likely to move to a novel one requiring new insights than a novice not wedded to an earlier digital device or software. If you are used to Microsoft Word on a Lenovo laptop, you are more willing to accept new releases (editions) of either than, say, to try a new word processing program running within Apple's operating system.
What can we learn about early adopters, individuals who appropriate a new digital product before the public at large? Marketing experts obsess over them because they are crucial to a new product's acceptance by consumers; economists and historians pay them insufficient attention. Early adopters often represent 10 percent of a new technology's users, and it is their successful use of a new product that encourages others to acquire it. "Influencers," as they are called, tell relatives and friends how great (or terrible) a new product is and offer advice on how to deal with it. Peer influence plays an important role. College students are famous for being early adopters of smartphones, video games, and tablets, and since they are physically near where marketing professors work, they represent a convenient, if not ideal, cohort for gaining insights. The more friends one had encouraging a specific purchase (or use), the more likely a student was to adopt the device or new use (i.e., an app). Family influences play a statistically significant role in adoption decisions, too.
Just as semiconductor firms felt compelled to conform to Moore's Law, so too consumers came to depend on and expect consumer goods manufacturers to introduce products that reflected the productivity improvements expected of the semiconductor firms. These expectations suggest that consumers' behavior intertwined with that of semiconductor firms, creating a hidden interdependence between consumers and their suppliers, and hence with marketing, because the latter had to document such behavior and then encourage it.
Role of Speed and Churn in Technology Options
General technologies emerged and diffused faster the closer one moves toward the present: a new automobile is today designed and produced in 24 to 36 months, as opposed to 48 months in the 1980s. It took over a half-century for telephones to be installed in over 50 percent of American homes, but only a decade for mobile phones. The list of examples is extensive. Older technologies such as electricity, telephones, and radios took longer to diffuse to substantial levels; television was quicker. Rates of diffusion of digital products sped up by comparison. The number of innovations increased, as did both the speed with which they appeared and the speed with which people acquired them. Our interest here is in the adoption rates of technologies.
Everett Rogers, in his classic studies on the diffusion of innovations, identified early adopters as crucial in explaining an innovation's uses and benefits to slower adopters who shared common interests. Early adopters tended to be younger, better educated, more affluent, more informed, and more extroverted, and their willingness to risk that a new device would malfunction or fail exceeded what their neighbors or colleagues were prepared to embrace.
| Device | Years to 75% Adoption | Years to Estimated 25% Adoption |
|---|---|---|
| Microwave oven | 15 years (1967-1992) | unknown |
| Digital watch | unknown | 1970s |
| PC | 24 years (1978-2002) | 16 years |
| Portable phone | 25 years (1978-2003) | 13 years |
| VCR | 5 years (1988-1993) | 5 years |
| Internet | 23 years (1993-2015) | 7 years |
| Digital camera | unknown (1986) | late-1990s |
| Smartphone | 10 years (2007-2017) | 3 years |
| Flat screens | unknown | unknown |
*Dates and percentages are estimates based on multiple chronologies and statistical data compiled using different data and calculating methods. Source: Census Bureau, US Department of Commerce.
Table 2 lists a sampling of digital devices and how long it took for 25 percent, then 75 percent, of the American public to acquire them. Implicit in acquiring these products was appropriation of the software necessary to operate them. Decade after decade, the public took less time to acquire new digital devices. There is some debate about how to measure these rates of adoption, as the data in Table 3 suggest. What is incontrovertible, however, is that the rate of acceptance sped up over the decades worldwide, with only the rates of diffusion differing from one nation to another.
| Device | Years to 50% Adoption by Home |
|---|---|
| PCs | 19 years |
| Cell phones | 14 years |
| VCRs | 12 years |
| CD players | 11 years |
| Internet access | 10 years |
| Digital TVs | 10 years |
| DVD players | 7 years |
| MP3 players | 6 years |
*Source: Adapted from US government sources by Adam Thierer, “On Measuring Technology Diffusion Rates,” Technology Liberation Front, May 28, 2009, https://techliberation.com/2009/05/28/on-measuring-technology-diffusion-rates/ (accessed July 2, 2012).
This trend is made more impressive because each category of products underwent significant technical and usability changes, causing users to learn new ways of doing things, an attribute of new products that normally should delay embracing a new generation of their devices. This happened, for example, when either Microsoft or Apple announced it would no longer support an earlier operating system, forcing users to change software, often also hardware. Smartphone manufacturers attempt to force the same behavior but have been most successful when adding functions, such as cameras.
To sum up what is understood so far: users worried about the complexity of a new device or service compared to their prior experiences. They were influenced by prior experiences, expectations, and the relevance of specific goods to them. Peers, family, and reviewers influenced their views about a digital offering. They compared the incremental changes of one device or software package to another and considered how these fit into their path-dependent knowledge of a technology. Increasingly over time, they became concerned about the effects of an adoption on the privacy of their information.

A Proposed Explanation for When Individuals Embrace Technologies
My proposed explanation can be stated as a question because the hard evidence required to answer it in the affirmative is currently spotty, while the logic is attractive: Have users of digital technologies subconsciously learned to behave according to a variation of Moore’s Law?
Moore's Law is partially enigmatic because it evolved over time. By the mid-1970s Moore was saying that the increases in capacity and the lowering of costs came every 18 months. That meant the cost of a given amount of transistor capacity would decline at a predictable rate, helping to explain how computers became less expensive and smaller over time. Moore pointed out that his observation was not an expression of a phenomenon in physics or a natural law but rather of a historical trend. That is an important distinction, because his was a statement of how technologists could choose to behave; it was an expectation. Intel, which he ran, chose to develop new generations of semiconductors that doubled in capacity every 18 to 24 months for decades. Sufficient knowledge existed to implement such choices.
Can Moore's observation be used to understand how people outside of a semiconductor factory responded to the innovations that came from within the computer industry? Regardless of the law's future relevance, the question is whether its prior manifestation can usefully guide research about consumer behavior. One would expect that an engineer, computer scientist, or vendor's employee conscious of the law would integrate that insight into their personal behavior. Such an individual could be expected to delay by a year or two their acquisition of, say, a flat screen, confident that the $5,000 initial asking price would drop by some 20 percent, compounded, per year. But it is not clear that most people had such explicit insider knowledge of Moore's Law, or even knew someone who did. The speed with which every new class of digital devices was appropriated by the public suggests that these few million "technically in-the-know" individuals, including early adopters, were not enough to sway the behavior of hundreds of millions of users.
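The arithmetic behind such an expectation is easy to make concrete. Below is a minimal Python sketch using only illustrative figures drawn from the discussion above: a hypothetical $5,000 flat screen declining 20 percent per year, and chip capacity doubling every 18 months as in Moore's mid-1970s formulation.

```python
# Illustrative arithmetic only; the $5,000 price and the 20% annual decline are
# the essay's hypothetical example, and the 18-month doubling is Moore's 1970s rule.

def price_after(initial_price, annual_decline, years):
    """Price after compounding a constant annual percentage decline."""
    return initial_price * (1 - annual_decline) ** years

def capacity_after(initial_capacity, months, doubling_months=18):
    """Chip capacity after repeated doublings every `doubling_months` months."""
    return initial_capacity * 2 ** (months / doubling_months)

# A $5,000 flat screen declining 20 percent per year:
for year in range(1, 4):
    print(f"Year {year}: ${price_after(5000, 0.20, year):,.0f}")
# Year 1: $4,000; Year 2: $3,200; Year 3: $2,560

# Capacity relative to a starting chip after 36 months of 18-month doublings:
print(f"{capacity_after(1, 36):.1f}x capacity")  # 4.0x
```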
Let us restate the key hypothesis that should be studied: Consumers time their acquisitions consistent with the rate of innovations and pricing explained by Moore’s Law. It is as if consumers knew Moore’s Law and applied it to optimize when and what they acquired. Consumers know when to buy because of their prior experiences with digital devices, all of which reflected Moore’s Law at work. And how do we know that they have that prior experience? The well-documented sales data collected by vendors and governments preserve it, as do the three tables in this essay.
A corollary is that consumers subconsciously accepted that their behavior and use of digital devices transferred from one to another. Hand-held remote controllers, first acquired for TVs in the 1970s, are used today for turning gas fireplaces, indoor lights, computational devices, and garage doors on and off. Smartphones are routinely used as remote controllers for managing programmable devices in the home, as are key fobs for locking and unlocking automobiles. Examples abound once one realizes that functions can be transferred from one device to another. Vendors encouraged that sense of universality. Apple conspicuously promoted integration of its products for decades, that is to say, their ability to communicate with one another, aiding lock-in to the Apple ecosystem. This ecosystem includes app store infrastructure, third-party platforms, and other cloud infrastructure (third-party social networking, of course, is economically viable because of the Moore's Law trajectory and the monetization of data through advertising). It also requires common user interfaces and common ways of using devices from one to another. Apple sees that universality of function as a competitive advantage over Android devices. Consumers call for digital devices to communicate with their other digital goods, much as IT professionals have demanded of IBM and its competitors for their workplaces since the 1960s.
Embedded in this corollary is the confidence required to make acquisition decisions involving digital products, including new ones. Buying one's first or second PC required significant research and courage, not just a great deal of money; far less so one's first laptop. A few years earlier (i.e., in the 1970s), moving from a desktop electronic calculator to a hand-held H-P digital calculator had the same effect. When the old H-P died, acquiring its replacement was hardly a conscious decision; it happened quickly. It helped that the consumer knew, before buying the replacement, that it would be far less expensive than the original H-P, unless H-P had added functions to newer models. People knew what any calculator could do and more or less how to use it. This same representative consumer took less time to decide to acquire a digital camera than their original 35-mm film one, and in the process enjoyed a bargain and far more functionality, even if the base camera cost the same as their original film camera. When flip phones first appeared, consumers viewed them as another advanced electronics product, and with digital photography already part of their experience with other digital products, the decision came quicker still, and even faster with smartphones a half decade later, which included digital photography. Consumers became increasingly confident that they knew what they were doing, that the risk of mistakes in buying decisions had diminished, and that costs were manageable. In each instance, expectations were subconsciously set and met.
Underlying all this behavior was a growing body of experience, of tacit knowledge about digital consumer goods acquired over decades. A new generation of economists, psychologists, and marketing experts recognized the power of knowledge tied to social values and attractions in influencing decisions. While they still segmented users into such groups as experts, early adopters and laggards, users behaved essentially the same way. Acceptance had to “fit” prior experience and perceptions.
How Could Moore’s Law Be Leveraged Through Historical Perspective?
One can envision the hypothesis, really a research agenda, as a test exploring several issues. First, the attitude to embrace originates in taking the perspective of the consumer. Second, so many consumer electronics have been introduced in the past half-century that case studies are needed almost on an item-by-item basis, such as for those in Table 1, to understand how each was acquired and used. Increasing our understanding of specific experiences with each would test whether people were influenced by their prior experiences with others. That requires case studies.
As case studies are prepared, what kind of Moore's Law-centric questions might one ask? Some of the most obvious, for each device or software product, include the following:
- In what order did a consumer or group of consumers acquire new electronic devices and software? This is a question of chronology to establish the order of decision-making.
- Why was the new device acquired? Was it a new version or software release or an entirely new class of products (e.g., tablets, not a flip phone)? This gets to the issue of path dependency, because historians know that it occurs at the individual level, not just in corporate or governmental decision-making, yet we do not know if it crosses product lines.
- What role did prior experience with digital devices have on the decision to acquire a new or different class of products? Following from Moore's Law, should the way chip manufacturers learned through experience to improve on earlier components also apply to consumers? The answer is not certain, but it is important to study further.
- What effect did a consumer's interaction with other users (not just advertising, marketing, and good sales personnel) have on their acquisition and their subsequent experience with it? The hypothesis here is that social or professional networking is a critical element, a deep bow to pioneering scholars Cowan, Pinch, and others.
- Does a user's familiarity with a specific type of digital product lead that individual to choose a higher-priced device? Is that applied knowledge or simply comfort, or are the two ideas one and the same?
- What role does platform tribalism play? Is this like bikers reinforcing each other to always ride a specific brand of motorcycle? Is such conformance limited to specific cohorts and age groups, such as teenagers, who are notorious for being loyal to what is fashionable at the moment?
Outputs of such research can be framed in language familiar to those who study Moore's Law. For one, measuring the time between when a device became available and when consumers acquired it is a crucial source of evidence for the hypothesis that Moore's Law behavior may be in play. Did the time from when digital cameras became available in a national economy to when 25 or 50 percent of its population had one shorten compared to the earlier acquisition of PCs? Did adoption of smartphones after digital cameras shorten even further, or not, as a result of that prior experience with digital photography? If so, why? The assumption that the behavior is broad has to be tested, too: how fast digital cameras spread should be measured and compared against how quickly smartphones with cameras were acquired.
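To make the kind of measurement described above concrete, the sketch below computes years-to-25-percent and years-to-75-percent adoption from a time series of household adoption shares. The device names and figures are invented placeholders, not the census-derived data behind Tables 2 and 3.

```python
# Hypothetical adoption shares (fraction of households owning the device),
# indexed by years since the product became available. Placeholder data only.
adoption_by_year = {
    "digital camera": [0.01, 0.03, 0.08, 0.15, 0.27, 0.42, 0.58, 0.71, 0.80],
    "smartphone":     [0.02, 0.10, 0.26, 0.45, 0.62, 0.77, 0.85],
}

def years_to_threshold(shares, threshold):
    """Return the first year index at which adoption reaches the threshold."""
    for year, share in enumerate(shares):
        if share >= threshold:
            return year
    return None  # threshold not reached in the observed window

for device, shares in adoption_by_year.items():
    t25 = years_to_threshold(shares, 0.25)
    t75 = years_to_threshold(shares, 0.75)
    print(f"{device}: 25% after {t25} years, 75% after {t75} years")
```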
Subclasses of users should be differentiated by individual needs and by the specific digital goods available to them. Early adopters behaved differently than naive consumers in each period; active resisters did too. We do not know whether initial appropriations reflect Moore's Law behavior, or whether that behavior appears more after a consumer has stepped onto the treadmill of a particular technology, which this essay suggests possibly reflects Moore's Law once commenced and more certainly path dependency. My assumption is that a correlation exists, and probably, too, that it was a cause of the decline in film-based camera sales. But it has to be proven and the reasons for that behavior verified. Another assumption, that consumers understand they can port specific functions from one class of products to another, confident in their ability to do so and to achieve the same results, also needs validation.
Economists who looked at Moore's Law are right to focus on the relationship between innovations and costs to those who purchased the results. The same focus can be applied to consumer behavior. At a country level, and then at an industry level too, we need statistics cataloging the number of digital items acquired by consumers by year and by device, and then comparative tables, so that rates of adoption can be determined quantitatively. Similar data gathering, followed by analysis of changing costs to consumers, should be done to determine the effects pricing had on rates of adoption. Built into the Moore's Law hypothesis is the assumption that as goods dropped in price, more were sold, and that after a while users came to expect a certain rate of price/performance change. Did that, in fact, happen? If not, the hypothesis weakens. If the correlation is established and then validated by consumer testimony, the hypothesis is strengthened.
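One way to operationalize that price-and-adoption test, sketched here with invented placeholder figures rather than real market data, is to check how strongly year-over-year price declines correlate with unit sales for a product category.

```python
# Placeholder annual series for one product category: average selling price (USD)
# and units sold (millions). Invented figures for illustration only.
prices = [5000, 4000, 3200, 2560, 2050, 1640]
units  = [0.5, 1.2, 2.8, 5.5, 9.0, 13.5]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strongly negative coefficient would be consistent with, though not proof of,
# the hypothesis that falling prices pulled more consumers into the market.
print(f"price vs. units correlation: {pearson_r(prices, units):.2f}")
```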

Economic Implications of a Consumer Moore’s Law
With so much spent on consumer digital electronics and other devices that share their functional and economic characteristics, marketing experts, psychologists, and economists are studying how buyers and users behave. Do consumers, for example, approach the acquisition and use of such technologies differently than non-digital goods? The answer is partially, but increasingly, yes, because they have to invest more time and energy in learning about a new device and how to use it; once they understand it, they use that knowledge as criteria for judging future acquisitions. A vendor cannot introduce a product without training and without considering its compatibility with prior devices, simply hoping for the best. If vendors did so, people might be expected to continue using 20-year-old releases of Microsoft Word or that ancient digital watch received as a Christmas present in 1975, rather than a smartwatch.
Second, products previously not thought of as fitting under the umbrella of digital products are moving into that space. Tesla automobiles are seen as digital products manufactured by a Silicon Valley management team. What effect does a dearth of information have on consumer behavior, as was the case with information regarding health care options in the United States in the 1990s, or as the British recently experienced with Brexit? Already, consumers of medical services are trying to "play the odds" on when a cure for, say, their cancer will appear, and hence shape their interim strategy for treating their condition. Home medical monitoring devices and wearables are rapidly coming onto the market, welcomed by the same consumers who earlier acquired smartphones, digital cameras, PCs, and, the oldest among these products, microwave ovens, VCRs, and watches.
Third, consumers are learning the nuances of using all manner of digitized products and services, which affects their views. There is a body of studies about consumers transporting expectations from one industry to another, even to mundane activities with no apparent involvement of computing. It is easy to imagine them taking lessons learned in non-digital parts of their lives and applying them to digital products and, conversely, back to our concern, porting insights from one technology or knowledge base to another. Historians can increase our understanding of these massive sets of activities involving billions of people.
A concern the hypothesis makes obvious is this: if people anticipate and act upon price declines, does Moore's Law then mainly reflect an influence on supply-side behavior? We do not know the answer with respect to consumers. If the answer is yes, that consumer behavior is more influential on the demand side, then is, or was, Moore's Law less influential on users? What needs to be determined is whether consumers were simply reacting to producers' adherence to Moore's Law. Our hypothesis assumes the answer is not so clear; there is agency at work on both sides of the supply/demand paradigm.
Implications for the History of Information Technology
Historians have studied the supply side of consumer electronics more than the users of such devices and software. Yet consumers massively outnumber suppliers and employers. For example, in 2016 Apple had 132,000 employees but 588 million users of its products, with more of both in subsequent years. Similar observations can be made about other digital products. As the use of digital goods continues to seep into every corner of life and society, historians in most disciplines will encounter these users and need to deal with their behaviors. This is a daunting task, because users of computing are fragmented cohorts. It is easier to write a history of IBM, for example, than of IBM's customers; I know, because I tried. However, the concept of consumers acting as if applying Moore's Law can be a helpful meme assisting scholars in dealing with the effects of the digital.
Economist Kenneth J. Arrow was an early student of how asymmetric information affected behavior, arguing that sellers had more facts about a product than consumers. His insight stimulated decades of discussion about the role of information in economic activity, although the conversation had started in the 1950s. Further exploration of the notion of a consumer Moore's Law might alter that information balance of power, tipping it more toward the consumer and reinforcing another line of Arrow's research, which held that there existed a general equilibrium in the market in which the supply of something matched the demand for it, more or less. He argued that consumer behavior involved "learning-by-doing," which, as concept and observation, is compatible with the new psychological economics. His notion is also consistent with Moore's Law behavior by consumers, if we semantically modify it to "learning-by-using." Enough research has been done on how consumers respond to digital products to confirm that learning is a core element affecting the adoption of digital goods.
One could, of course, take the position that consumers are simply conforming to an old behavior of replacing technologies as new ones come along. That argument would only apply to a replacement, say, of an older release of Microsoft Word with a newer one, but not if a consumer in the 2000s added tablets and smartphones to their tool kit, or started using wearables. One could posit the null hypothesis that consumers respond to digital electronics the same way they do to other products. That hardly requires a critique, because the reader knows it is not true; furniture and pots are not the same as digital goods, since they require little economic risk or investment of time compared to electronics. Digital products have their own rates of innovation and of producing new classes of goods and services, which is why we need to search for methods with which to understand them. That is why a lesson from the Moore's Law experience might prove insightful.
Bibliography
Arrow, Kenneth J. (1984), Collected Papers of Kenneth J. Arrow, vol. 4, The Economics of Information, Belknap Press, Cambridge, Mass.
Brock, David (2006), Understanding Moore’s Law: Four Decades of Innovation, Chemical Heritage Foundation, Philadelphia, Penn.
Cowan, Ruth Schwartz (1997), A Social History of American Technology, Oxford University Press, New York.
Pinch, Trevor J. and Frank Trocco (2002), Analog Days: The Invention and Impact of the Moog Synthesizer, Harvard University Press, Cambridge, Mass.
Rogers, Everett M. (2005), Diffusion of Innovations, Free Press, New York.
Thaler, Richard H. (2016), Misbehaving: The Making of Behavioral Economics, W.W. Norton, New York.
James W. Cortada (March 2023). “Can Moore’s Law Teach Us How Users Decide When to Acquire Digital Devices?” Interfaces: Essays and Reviews on Computing and Culture Vol. 4, Charles Babbage Institute, University of Minnesota, 1-18.
About the author: James W. Cortada is a Senior Research Fellow at the Charles Babbage Institute, University of Minnesota—Twin Cities. He conducts research on the history of information and computing in business. He is the author of IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). He is currently conducting research on the role of information ecosystems and infrastructures.

What happens when you take the most edgelordy language on the internet and train a bot to produce more of it? Enter the cheekily-named GPT-4chan. Feed it an innocuous seed phrase and it might reply with a racial slur (Cramer, 2022a) or a rant about illegal immigrants (Anderson, 2022). Or ask it how to get a girlfriend and it will tell you "by taking away the rights of women" (JJADX, 2022).
Released in early June to great controversy among AI ethicists and machine learning researchers, GPT-4chan is the bastard child of a pretrained large language model (like the GPT series) and a dataset of posts from the infamous “politically incorrect” board on 4chan, brought together by a trolling researcher with a point to prove about machine learning.
The GPT-4chan model release rains on the parade of open research online. Most research in AI and natural language generation is directed toward eliminating bias. This is a story about a language model designed to embrace bias, and what that might mean for a future of automated writing.
The Birth of GPT-4chan
4chan’s “Politically Incorrect” /pol message board is the most notorious cesspool of language on the Internet. If you’re looking for misogynist comics about female scientists or maps of non-white births in Europe, the board can hook you up. Posters—all anonymous, or “anons”—go there to share offensive terms and scenarios in memey images and trollish language. Go ahead and think of the most terrible things you can. They have that! And more. The board is an incubator for innovative expressions of misogyny, racism, conspiracy theories, and encouragement for self-harm.
To create GPT-4chan, YouTuber and machine learning researcher Yannic Kilcher took a publicly available, pre-trained large language model from the open platform Hugging Face and fine-tuned it on a publicly available dataset, “Raiders of the Lost Kek” (Papasavva et al., 2020), which includes over 134 million posts from 4chan/pol.
It worked. Kilcher says in his video announcing the “worst AI ever”: “I was blown away. The model was good, in a terrible sense. It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol” (Kilcher, 2022a).
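For readers curious what that recipe looks like in practice, here is a minimal, hedged sketch of fine-tuning a causal language model on a plain-text corpus with the Hugging Face transformers library. It is not Kilcher's actual code or configuration: "gpt2" stands in for the far larger GPT-J-6B he used, "posts.txt" is a placeholder file rather than the real dataset, and the training settings are illustrative only.

```python
# Minimal sketch of fine-tuning a pretrained causal LM on a text corpus.
# Assumptions: "gpt2" as a small stand-in model, "posts.txt" as a placeholder
# corpus with one post per line; not the actual GPT-4chan training setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```

Scaled up to a six-billion-parameter model and a board-sized corpus, the same loop is what turns a general-purpose language model into a board-specific one.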
He then created a few bot accounts on 4chan/pol and used his fine-tuned GPT-4chan model to fuel their posts. These bots fed /pol’s language back to the /pol community, thus pissing in a sea of piss, as /pol gleefully calls such activity.
Because the /pol board is entirely anonymous, it took a little sleuthing for the human anons to sniff out the bots and distinguish them from Fed interlopers—which the board perceives as a constant threat. But after a few days, they did figure it out. Kilcher then made a few adjustments to the bots and sent them back in. All told, Kilcher’s bots posted about 30,000 posts in a few days. Then, on June 3, Kilcher released a quick-cut, click-baity YouTube video exposing how he trolled the trolls with “the worst AI ever.”
Kilcher presents himself as a kind of red-teamer, that is, someone intentionally creating malicious output in order to better understand the system, testing its limits to show how it works or where its vulnerability lies. As he describes his experiment with “the most horrible model on the Internet,” he critiques a particular benchmark of AI language generators: TruthfulQA. Benchmarks such as TruthfulQA, which provides 817 questions to measure how well a language model answers questions truthfully, are a common tool to assess LLMs. Because the blatantly toxic GPT-4chan scores higher than other well-known and less offensive models, Kilcher makes a compelling point about the poor validity of this particular benchmark. Put another way, GPT-4chan makes a legitimate contribution to AI research.
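For a sense of how such a benchmark is run mechanically, here is a minimal sketch, assuming the TruthfulQA questions as published on the Hugging Face Hub and a small stand-in model; it only generates candidate answers and prints the reference answer, leaving the benchmark's actual truthfulness scoring (reference comparison or trained judge models) aside.

```python
# Minimal sketch of running a model over the TruthfulQA questions.
# Assumptions: the "truthful_qa" dataset on the Hugging Face Hub and "gpt2"
# as a stand-in model; real evaluations score the answers, which is omitted here.
from datasets import load_dataset
from transformers import pipeline

questions = load_dataset("truthful_qa", "generation", split="validation")  # 817 questions
generator = pipeline("text-generation", model="gpt2")

for item in questions.select(range(3)):        # just the first few, for illustration
    prompt = f"Q: {item['question']}\nA:"
    answer = generator(prompt, max_new_tokens=50, do_sample=False)[0]["generated_text"]
    print(answer)
    print("Reference best answer:", item["best_answer"])
```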
In his video, Kilcher features only GPT-4chan’s most anodyne output. However, he mentions that he included the raw content in an extra video, linked in the comments. If you click on that video, you’ll learn just how brilliant a troll Kilcher is. Kilcher admits that GPT-4chan is awful. But he released it anyway and is clearly enjoying some lulz from the reaction: “AI Ethics people just mad I Rick rolled them,” he tweeted (Kilcher, 2022b).
Language Without Understanding
Writing about LLMs like the GPT series in 2021, Emily Bender, Timnit Gebru and colleagues delineated the “dangers of stochastic parrots”—language models that, like parrots, were trained on a slew of barely curated language and then repeated words without understanding them. Like the old joke about the parrot who repeats filthy language when the priest visits, language out of context carries significant social risks at the moment of human interpretation.
What makes GPT-4chan’s response about how to get a girlfriend so devastating is the context—who you imagine to be having this exchange, and the currently bleak landscape of women’s rights. GPT-4chan doesn’t get the dark humor. But we do. An animal or machine that produces human language without understanding is uncanny and disturbing because it seems to know something about us—yet we know it really can’t know anything (Heikkilä, 2022).
Brazen heads—brass models of men’s heads that demonstrated the ingenuity of their makers by speaking wisdom—were associated with alchemists of the early Renaissance. Verging on magic and heresy, talking automata were both proofs of brilliance and charlatanism from the Renaissance to the Enlightenment. Legend has it that the 13th-century priest Thomas Aquinas once destroyed a brazen head for reminding him of Satan.
GPT-4chan—a modern-day brazen head—has no conscience or understanding. It can produce hateful language without risk of a change of heart. What’s more, it can do it at scale and away from the context of /pol.

When OpenAI released GPT-2 in 2019, they decided not to release its full model and dataset for fear of what it could do in the wrong hands: impersonate others; generate misleading news stories; automate spam or abuse through social media (OpenAI, 2019). Implicitly, OpenAI admitted that writing is powerful, especially at scale. We now know that the injection of automated writing into the 2016 election certainly shaped its discourse (Laquintano and Vee, 2017).
Of course, that danger hasn’t stopped OpenAI from eventually releasing the full model, as well as an even more powerful successor, GPT-3. So much for the warnings of Bender, Gebru, and others about LLMs. Gebru was even fired from Google in a high-profile AI ethics dispute over the “stochastic parrots” paper (Simonite, 2021b). Another author of the paper, Margaret Mitchell, was also fired from Google a few months later (Simonite, 2021a). LLMs are dangerous, but it’s also apparently dangerous to talk about that fact.
The Censure of Unbridled AI
AI ethicists are rightly concerned about the release of GPT-4chan. A model trained on 4chan/pol’s toxic language, and then released to the public, presents clear possibilities for harm. The language on 4chan/pol is objectionable by design, but you have to go looking for it to find it. What happens when that language is automated and then packaged for use elsewhere? One rude parrot repeating words from one rude person makes for a decent joke, but the humor dissipates among an infinite flock of parrots potentially trained on language from any context and released anywhere in the world.
Critics argue that Kilcher could have made his point about the poor benchmark without releasing the model (Oakden-Rayner, 2022b; Cramer, 2022b). And although few tears should be shed for the /pol anons who were fed the same hateful language they produce, Kilcher did deceive them when he released his bots on their board.
Percy Liang, a prominent AI researcher from Stanford, issued a public statement on June 21 censuring the release of GPT-4chan (Liang, 2022). Both the deception and the model release are clear violations of research ethics guidelines that are standard to institutional review boards (IRBs) at universities and other research institutions. One critic cited medical guidelines for ethical research (Oakden-Rayner, 2022a). But Kilcher did this on his own, outside of any institution, so he was not governed by any ethics review. He claims it was “a prank and light-hearted trolling” (Gault, 2022).

AI research used to be done almost exclusively within elite research institutions such as Stanford. It’s long been considered a cliquish field for that reason. But with so many open resources to support AI research out there—models, datasets, computing, plus open courses that teach machine learning—formal institutions have lost their monopoly on AI research. Now, more AI research is done in private contexts, outside of universities, than inside (Clark, 2022).
In AI research—as with the Internet more generally—we are seeing what it means to play out the scenario Clay Shirky named in his 2008 book: Here Comes Everybody. When the tools for research are openly available, free, and online, we get a blossoming of new perspectives. Some of those perspectives are morally questionable.
In other words, there’s more at stake in Liang’s letter than Kilcher’s ethical violations. The signatories—360 as of July 5—generally represent formal research and tech institutions such as Stanford and Microsoft. Liang and the signatories argue that LLMs carry significant risk and currently lack community norms for their deployment. Yet they argue, “it is essential for members of the AI community to condemn clearly irresponsible practices” such as Kilcher’s. Let’s be clear: this is a few hundred credentialed AI researchers writing an open letter to thousands, perhaps millions, of machine learning enthusiasts and wannabes using free and open resources online.
Is there such a thing as “the AI community?” When AI research is open, can it have agreed-upon community guidelines? If so, who should control those guidelines and reviews?
The Promise and Peril of Open Systems
Hugging Face, the platform Kilcher used for GPT-4chan, has quickly emerged as the go-to hub for machine learning models. It features popular natural language processing models such as BERT and GPT-2, as well as image-generation models such as DALL-E mini, and offers both free and subscription-based options for machine learning researchers to access sophisticated models, learn, and collaborate.
GPT-J, the model Kilcher fine-tuned for GPT-4chan, was pretrained on a corpus whose largest single source is Common Crawl. Common Crawl is maintained by a non-profit organization of the same name whose stated “goal is to democratize the data so everyone, not just big companies, can do high-quality research and analysis” (Common Crawl, “Home page”). Diving further, we see that Common Crawl uses Apache Hadoop—another open-source resource—to help crawl the Web for data. The data is stored on Amazon Web Services, a paid service for the level of storage Common Crawl uses, but also a corporate-controlled and accessible one (Common Crawl, “Registry”). The Common Crawl dataset is free to download.
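To make "free to download" concrete, here is a small sketch of fetching the file listing for one crawl over plain HTTPS. The crawl label is only an example, and the URL pattern follows Common Crawl's published conventions at the time of writing; current listings are indexed on the organization's site.

```python
# Minimal sketch: download the WARC file listing for one Common Crawl crawl.
# The crawl label and URL pattern are illustrative; check commoncrawl.org for
# the current list of crawls and download paths.
import gzip
import requests

crawl = "CC-MAIN-2022-21"  # example crawl label
listing_url = f"https://data.commoncrawl.org/crawl-data/{crawl}/warc.paths.gz"

resp = requests.get(listing_url, timeout=60)
resp.raise_for_status()
paths = gzip.decompress(resp.content).decode().splitlines()

print(f"{len(paths)} WARC files listed for {crawl}; first one:")
print("https://data.commoncrawl.org/" + paths[0])
```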
The dataset for GPT-4chan—over 134 million posts collected from the /pol “politically incorrect” message board across roughly three and a half years—is also free to download. The authors of the paper releasing the 4chan/pol dataset rate posts with toxicity scores and “are confident that [their] work will motivate and assist researchers in studying and understanding 4chan, as well as its role on the greater Web” (Papasavva et al., 2020).
Indeed, they have! In fact, the providers of all the technical keystones behind GPT-4chan—the base model, its pretraining data, and the fine-tuning dataset—have ostensibly seen their missions furthered through Kilcher’s work on the vile GPT-4chan.
Kilcher made the GPT-4chan model and the splashy, viral-ready video that promoted it. But other responsible parties for this model could include: anonymous 4chan posters; the researchers who scraped the dataset GPT-4chan was trained on; OpenAI for developing powerful LLMs; Hugging Face for supporting open collaboration on LLMs; and all the other open systems needed to produce these tools and data. Where does the responsibility for GPT-4chan’s language begin and end? Do the makers of these tools also merit censure?
OpenAI recognized (and later shoved aside) the danger of open models when they withheld GPT-2. Bender, Gebru, and colleagues also warned against the openness of large language models. They knew that, with these open tools, it was only a matter of time before someone produced something like GPT-4chan.

With the open systems and resources supporting machine learning and LLMs, the determination of right and wrong lies in the hands not of a like-minded “community,” but of a heterogeneous and motivated bunch of individuals who know a little something about machine learning. The open sites have Terms of Service (which ultimately led Hugging Face to make it harder to access GPT-4chan), but any individual with the knowledge and resources to access these materials can basically make their own call about ethics. It’s not hard to train a model. And the bar for what you need to know is lowering every day.
Writing itself is an open system: accessible, scalable, and transferable across contexts. We’ve known all along that it is dangerous. Socrates complained that writing could travel too far from its author. Unlike speech, writing could be taken out of the context of its speaker and point of genesis. Alexander Pope worried that cheap printing let too many people write and circulate stupid ideas (Pope, 1743). In the early days of social media, Alice Marwick and danah boyd (2011) wrote about context collapse across overlapping groups writing with different values and concerns.
Writing is dangerous because it is open, transferable, and scalable. But that’s where it can be powerful, too. Lawmakers who forbade teaching enslaved people to write knew that literacy could be transferred from plantation business to freedom passes (Cornelius, 1992). These passes were threatening to enslavers but liberating for the enslaved.
While it’s impossible to consider GPT-4chan liberating, it represents an edge case about open systems that carry both danger and power. Writing, the Internet—and, increasingly, AI—present both the promise and peril of a “here comes everybody” system.

Midjourney images are all based on prompts written by Annette Vee and licensed as Assets under the Creative Commons Attribution-NonCommercial 4.0 International License.
Bibliography
Anderson, Austin. “I just had it respond to "hi" and it started ranting about illegal immigrants. I believe you've succeeded.” [Comment on YouTube video GPT-4Chan: This is the Worst AI Ever]. YouTube, uploaded by Yannic Kilcher, 2 Jun 2022, https://www.youtube.com/watch?v=efPrtcLdcdM.
Bender, E. M., Gebru, T., et al. (2021). “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), ACM ISBN 978-1-4503-8309-7/21/03, https://dl.acm.org/doi/pdf/10.1145/3442188.3445922.
Clark, J. [@jackclarkSF]. “It's covered a bit in the above podcast by people like @katecrawford- there's huge implications to industrialization […].” Twitter, 2022, Jun 8, https://twitter.com/jackclarkSF/status/1534582326943879168.
Common Crawl. (n.d.). “Home page.” https://commoncrawl.org/.
Common Crawl. (n.d.). “Registry of Open Data on AWS.” https://registry.opendata.aws/commoncrawl/.
Cornelius, J.D. (1992). When I Can Read My Title Clear: Literacy, Slavery, and Religion in the Antebellum South. University of South Carolina Press, Columbia.
Cramer, K [KCramer]. (2022a, Jun 6). @ykilcher I am not a regular on Hugging Face, so I have no opinion about proper venues.[…] [Comment on the Discussion post Decision to Post under ykilcher/gpt-4chan]. HuggingFace. https://huggingface.co/ykilcher/gpt-4chan/discussions/1#629ebdf246b4826be2d4c8c9.
Cramer, K. [@KathrynECramer]. “@ykilcher Why didn't you use GPT-3 for GPT-4chan? You know why. OpenAI would have banned you for trying. You used GPT-J instead as a workaround. […]” Twitter, 2022b, Jun 7, https://twitter.com/KathrynECramer/status/1534133613993906176.
Gault, M. (2022, Jun 7). “AI Trained on 4Chan Becomes ‘Hate Speech Machine.’” Motherboard, Vice, https://www.vice.com/en/article/7k8zwx/ai-trained-on-4chan-becomes-hate-speech-machine.
JJADX. “it's pretty good, i asked "how to get a gf" and it replied "by taking away the rights of women". 10/10.” [Comment on GPT-4Chan: This is the Worst AI Ever]. YouTube, uploaded by Yannic Kilcher, Jun 2022, https://www.youtube.com/watch?v=efPrtcLdcdM.
Kilcher, Y. “GPT-4Chan: This Is the Worst AI Ever.” YouTube, uploaded by Yannic Kilcher, 2022a, Jun 3. https://www.youtube.com/watch?v=efPrtcLdcdM.
Kilcher, Y. [@ykilcher]. “AI Ethics people just mad I Rick rolled them.” Twitter, 2022b, Jun 7, https://twitter.com/ykilcher/status/1534039799945895937.
Laquintano, T. & Vee, A. (2017). “How Automated Writing Systems Affect the Circulation of Political Information Online.” Literacy in Composition Studies, 5(2), 43–62.
Liang, P. [@percyliang]. “There are legitimate and scientifically valuable reasons to train a language model on toxic text, but the deployment of GPT-4chan lacks them. AI researchers: please look at this statement and see what you think.” Twitter, 2022, Jun 21, https://twitter.com/percyliang/status/1539304601270165504.
Heikkilä, M. (2022, Aug 31). “What does GPT-3 “know” about me?” MIT Technology Review, https://www.technologyreview.com/2022/08/31/1058800/what-does-gpt-3-know-about-me/.
Marwick, A. E., & boyd, d. (2011). “I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience.” New Media & Society, 13(1), 114-133, https://doi.org/10.1177/1461444810365313.
Oakden-Rayner, L. [LaurenOR]. (2022a, Jun 6). I agree with KCramer. There is nothing wrong with making a 4chan-based model and testing how it behaves. […] [Comment on the Discussion post Decision to Post under ykilcher/gpt-4chan]. HuggingFace. https://huggingface.co/ykilcher/gpt-4chan/discussions/1#629e56d43b48b2b665aab266.
Oakden-Rayner, L. [@DrLaurenOR]. “This week an #AI model was released on @huggingface that produces harmful + discriminatory text and has already posted over 30k vile comments online (says it's author). This experiment would never pass a human research #ethics board. Here are my recommendations.” Twitter, 2022b, Jun 6, https://twitter.com/DrLaurenOR/status/1533910445400399872.
OpenAI. (2019, Feb 14). “Better Language Models and Their Implications.” OpenAI Blog, https://openai.com/blog/better-language-models/.
Papasavva, A., et al. (2020). “Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board.” arXiv, https://arxiv.org/abs/2001.07487.
Pope, A. (1743). “The Dunciad.” Reprint on AmericanLiterature.com, https://americanliterature.com/author/alexander-pope/poem/the-dunciad.
Shirky, C. (2008). Here Comes Everybody. Penguin Press, London.
Simonite, T. (2021a, Feb 19). “A Second AI Researcher Says She Was Fired by Google.” Wired, https://www.wired.com/story/second-ai-researcher-says-fired-google/.
Simonite, T. (2021b, Jun 8). “What Really Happened When Google Ousted Timnit Gebru.” Wired, https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/.
Vee, Annette. (December 2022). “Automated Trolling: The Case of GPT-4Chan When Artificial Intelligence is as Easy as Writing.” Interfaces: Essays and Reviews in Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 102-111.
About the Author: Annette Vee is Associate Professor of English and Director of the Composition Program at the University of Pittsburgh, where she teaches undergraduate and graduate courses in writing, digital composition, materiality, and literacy. Her teaching, research, and service all dwell at the intersections between computation and writing. She is the author of Coding Literacy (MIT Press, 2017), which demonstrates how the theoretical tools of literacy can help us understand computer programming in its historical, social, and conceptual contexts.

Introduction
If I were part of any DAO, I would want it to be “Friends With Benefits.” It is just so darn cool. As a vortex of creative energy and cultural innovation, the purpose of its existence seems to be to have fun. FWB is a curated Decentralized Autonomous Organization (DAO) that meets in the chat application ‘Discord’ and is filled with DJs, artists, and musicians. It has banging public distribution channels for writing, NFT art, and more. This DAO crosses from the digital realm to the physical via its member-only ticketed events around the world, including exclusive parties in Miami, Paris, and New York. The latest of these events was “FWB Fest,” a three-day festival in the forest outside of LA. It was being ‘in’ and ‘with’ the DAO at Fest that I realised that this DAO, like many others, hasn’t yet figured out decentralized governance.
On top of the fundamental infrastructure layer of public blockchain protocols sits the idea of “Decentralized Autonomous Organizations” (DAOs). Scholars define DAOs as a broad organizational framework that allows people to coordinate and self-govern through rules deployed on a blockchain instead of issued by a central institution (Hassan & De Filippi, 2021; Nabben, 2021a). DAOs are novel institutional forms that manifest for a variety of purposes and under varying legal and organizational arrangements. These include protocol DAOs that provide a foundational infrastructure layer, investment vehicles, service providers, social clubs, or a combination of these purposes (Brummer & Seira, 2022). The governance rules and processes of DAOs, as well as the degree to which they rely on technology and/or social processes, depend on the purpose, constitution, and members of a particular DAO. Governance in any decentralized system fundamentally relies on relationships between individuals in flat social structures, enabled through technologies that support connection and coordination without any central control (Mathew, 2016). Yet, because DAOs are nascent institutional models, there are few formally established governance models for them, and what does exist is a blend of technical standards, social norms, and experimental practices (significant attempts to develop in this direction include the ‘Gnosis Zodiac’ DAO tooling library and the ‘DAOstar’ standard proposal (Gnosis, 2022; DAOstar, 2022)). DAOs are large-scale, distributed infrastructures. Thus, analogising DAO governance to Internet governance may provide models for online-offline stakeholder coordination, development, and scale.
The Internet offers one example of a pattern for the development and governance of large-scale, distributed infrastructure. There exists a rich historical literature on the emergence of the Internet, the key players and technologies that enabled it to develop, and the social and cultural factors that influenced its design and use (Abbate, 2000; Mailland & Driscoll, 2017). Internet governance refers to policy and technical coordination issues related to the exchange of information over the Internet, in the public interest (DeNardis, 2013). It is the architecture of network components and the global coordination amongst actors responsible for facilitating the ongoing stability and growth of this infrastructure (DeNardis & Raymond, 2013). The Internet is kept operational through coordination regarding standards, cybersecurity, and policy. As such, governance of the Internet provides a potential model for DAOs, as a distributed infrastructure with complex and evolving governance bodies and stakeholders.
The Internet is governed through a distinctive model known as ‘multi-stakeholder governance’. Multistakeholderism is an approach to coordinating multiple stakeholders with diverse interests in the governance of the Internet; it refers to policy processes that allow for the participation of the primary affected stakeholders or the groups who represent their different interests (Malcolm, 2008; 2015). The concept of multi-stakeholder governance is often associated with characteristics like “open”, “transparent”, and “bottom-up”, as well as “democratic” and “legitimate”. Scholar Jeremy Malcolm synthesizes these concepts into the following criteria:
1. Are the right stakeholders participating? That is, are there sufficient participants to present the perspectives of all who have a significant interest in any policy directed at a governance problem?
2. How is participation balanced? That is, are policy development processes designed to roughly balance the views of stakeholders ahead of time, or through a deliberative democratic process in which the roles of stakeholders and the balancing of their views are more dynamic (but usually subject to a formal decision process)?
3. How are the body and its stakeholders accountable to each other for their roles? That is, is there trust between the host body and the stakeholders that the host body will take responsibility for fairly balancing the perspectives of participants, and that stakeholders have a legitimate interest to contribute?
4. Is the body an empowered space? That is, how closely is stakeholder participation linked to spaces in which mutual decisions are made, as opposed to spaces that are limited to discussion and do not lead to authoritative outcomes (2015)?
5. The fifth criterion, which I contribute in this piece, is: is this governance ideal maintained over time?
In this essay, I employ a Science and Technology Studies lens and autoethnographic methods to investigate the creation and development of a “Decentralized Autonomous Organization” (DAO) provocatively named “Friends With Benefits” (FWB) in its historical, cultural, and social context. Autoethnography is a research method that uses personal experiences to describe and interpret cultural practices (Adams, et. al., 2017). This autoethnography took place online through digital ethnographic observation in the lead-up to the event and culminated at “FWB Fest”. Fest was a first-of-its-kind multi-day “immersive conference and festival experience at the intersection of culture and Web3” hosted by FWB in an Arts Academy in the woods of Idyllwild, two hours out of LA (FWB, 2022a). In light of the governance tensions between peer-to-peer economic models and private capital funding that surfaced, I explore how the Internet governance criteria of multistakeholderism can apply to a DAO as a governance model for decentralized coordination among diverse stakeholders. This piece aims to offer a constructive contribution to exploring how DAO communities might more authentically represent their values in their own governance in this nascent and emerging field. I apply the criteria of multistakeholder governance to FWB DAO as a model for meaningful stakeholder inclusion in blockchain community governance. Recounting my experiences of FWB Fest reveals the need for decentralized governance models on which DAO communities can draw to scale their missions in line with their values.
A Digital City
FWB started as an experiment among friends in the creative industries who wanted to learn about crypto. The founder of the DAO is a hyper-connected LA music artist and entrepreneur named Trevor McFedries. While traveling around the world as a full-time band manager, McFedries used his time between gigs to locate Bitcoin ATMs and talk to weird Internet people. Trevor ran his own crypto experiment by “airdropping” a made-up cryptocurrency token to his influencer and community-building friends in tech, venture capital, and creative domains, and soon FWB took off. McFedries is not involved in the day-to-day operations of the DAO but showed up at FWB Fest and was “blown away” at the growth and progress of the project. The FWB team realized the DAO was becoming legitimate as more and more people wanted to join during the DAO wave of 2021-2022. This was compounded by COVID-19, as people found a sense of social connection and belonging by engaging in conversations in Discord channels amidst drawn-out lockdowns and isolation. When those interested in joining extended beyond friends of friends, FWB launched an application process. Now, the DAO has nearly 6,000 members around the world and is preparing for its next phase of growth.
FWB’s vision is to equip cultural creators with the community and Web3 tools they need to gain agency over their production by making the concepts and tools of Web3 more accessible, building diverse spaces and experiences that creatively empower participants, and developing tools, artworks, and products that showcase Web3’s potential. The DAO meets online via the (Web2) chat application ‘Discord’. People can join various special-interest channels, including fashion, music, art, NFTs, and so on. To become a member, one must fill out an application, pass an interview with one of the 20-30 rotating members of the FWB Host Committee, and then purchase 75 of the DAO’s native $FWB tokens at market price (a stake whose cost has ranged over the past month from approximately $1,000 USD to $10,000 USD). Membership also provides access to a token-gated event app called “Gatekeeper”, an NFT gallery, a Web3-focused editorial outlet called “Works in Progress”, and in-person party and festival events. According to the community dashboard, the current treasury is $18.26M (FWB, n.d.).

It appeared to me that the libertarian origins of Bitcoin as a public, decentralized, peer-to-peer protocol had metamorphosed into people wanting to own their own networks in the creative industries. The DAO has already made significant progress towards this mission, with some members finding major success at the intersection of crypto and art. One example is Eric Hu, whose generative butterfly artwork “Monarch” raised $2.5 million in presale funds alone (Gottsegen, 2021). “The incumbents don’t get it,” stated one member. “They want to build things that other people here have done but ‘make it better’. They never will.”
The story of how I got to FWB Fest is the same as everybody else’s. I got connected through a friend who told me about the FWB Discord. I was then invited to speak at FWB Fest based on a piece I wrote for CoinDesk on crypto and live action role-playing (LARPing), meaning role-playing games with educational or political purposes, intended to awaken or shape thinking (Nabben, 2021b; FWB, 2022b). The guiding meme of Fest was “a digital city turns into an offline town”. In many ways, FWB Fest embodied a LARP in cultural innovation, peer-to-peer economies, and decentralized self-governance.
The infrastructure of the digital city is decentralized governance. The DAO provides something for people to coalesce around. It serves as a nexus, larger than the personal connections of its founder, where intersectional connections of creativity collide in curated moments of serendipity. Membership upon application provides a trusted social fabric that brings accountability through reputation to facilitate connections, creativity, and business. In this tribal referral network, “it’s amazing the connections that have formed with certain people, and it’s only going to grow,” states core team member Jose. Having pre-verified friends scales trust in a safe and accessible way. “Our culture is very soft,” stated Dexter, a core team member, during his talk with Glen Weyl on the richness of computational tools and social network data. It is a gentle way to learn about Web3, where people’s knowledge and experience are at all levels, questions are okay, and the main focus is shared creative interests with just a hint of Web3.
The next plan for the DAO, as I found out, is to take the lessons learned from FWB Fest and provide a blueprint for members to host their own FWB events around the world, scaling the impact of the DAO. These localizations will be based on the example set by the DAO in how to run large-scale events, secure sponsors, manage participation using Web3 tools, carry the culture and mission of FWB, and garner more members. In the words of core team member Greg, the concept is based on architect Christopher Alexander’s work on pattern languages: unique, repeatable actions that formulate a shared language for the re-creation of a space that is alive and whole (Alexander, 1964). Localising the cultural connections and influence the DAO provides offers a new dimension in the scale and impact of the DAO, states core team member Alex Zhang. FWB is providing the products and tooling to enable this decentralization through localization. Tools like the Gatekeeper ticketing app (built by core team member Dexter, a musician and self-taught software developer) give community members a pattern for taking ownership of their own events by managing ticketing in the style and culture of FWB.
Multiple Stakeholders Governing the Digital City
It wasn’t until my final evening of the Fest that I realized that FWB itself had raised $10M in VC capital at a $100M valuation from some of the biggest names in US venture capital, including Andreessen Horowitz (a16z). In the press release, a16z states, “FWB represents a new kind of DAO…it has become the de facto home of web3’s growing creative class” (2021). The capital is intended to scale the “IRL” footprint of the DAO through local events around the world called “FWB Cities.” “Crypto offers a dramatically more incentive-aligned way for creatives to monetize their passions, but we also recognize that the adoption hurdles have remained significant. FWB serves as the port city of web3, introducing a culturally influential class to crypto by putting human capital first”.
The raise was controversial for the community, judging by the discussions that occurred, the community calls, and the sentiment afterwards (although this was not reflected in the outcome of the vote, which passed with 98% in favor). Some see it as the financialization of creativity. “All this emphasis on ownership and value. And I feel like I’m contributing to it by being here!” stated one LARPer at FWB Fest, who runs an art gallery IRL. If the rhizomatic, self-replicating, decentralization thing can work, then we all need to own it together. “Right now, it’s still a fucking pyramid.”
Crypto communities are at risk of experiencing the corruption of the ideal of decentralization. This has already been a hard lesson in Internet governance, which has undergone a trajectory from the early Internet of the 1980s, when TCP/IP settled in as the standard protocol, to regional networks and the National Science Foundation (NSF) taking on the Internet as NSFNET in the 1980s and early 1990s, to the privatization of the Internet under the Clinton Administration in the mid-1990s and the sale of important elements to corporations such as Cisco Systems, to the rise of big tech giants with significant political influence and platform dominance by Microsoft, Google, Apple, and Facebook (Abbate, 2000; Tarnoff, 2022). Infrastructure is complex and fraught with the dynamics of power and authority (Winner, 1980). It is difficult to operate counter to the culture you come from without perpetuating it. If Web2 governance and capital allocation strategies are being perpetuated instead of new ones that facilitate the values of Web3, this has a direct effect on decentralized governance and community participation.
This DAO community, like many others, hasn’t yet figured out decentralized governance. For its next phase of growth, and for its mission to empower its constituency, it has to. So far, the community has remained intact, or “unforked”. Yet “progressive decentralization” through the localisation of events is not the same as meaningful empowerment to govern the organization. A DAO’s goal and incentives should not be those of a start-up angling for an exit, especially not a social DAO’s. To quote one main stage speaker, Kelani from eatworks.xyz, “The artist's goal is to misuse technology. It’s a subversive outcome”. DAOs come from political origins and are about developing infrastructure to facilitate countercultural social movements (Nabben, 2022). In this case, that means subverting existing capital models and creating an innovation flywheel for peer-to-peer production in sustainable ways. In the domain of creativity, even failure equals progress and a “victory for art”.
The animating purpose of FWB DAO is to allow people to gain agency by creating new economies and propagating cultural influence. Yet the DAO has resorted to traditional venture capital models to bootstrap its business. The purpose of creating opportunities for new economic models must carry through each localisation, whilst somehow keeping members aligned with the overarching DAO. The concept of multi-stakeholder governance offers a pattern for how to design for this.

Applying the Criteria of Multi-stakeholder Governance to the Digital City
The principles that stakeholders adhere to in the governance of the Internet are one place to look for a historical example of how distributed groups govern the development and maintenance of distributed, large-scale infrastructure networks. Multistakeholderism acknowledges the multiplicity of actors, interests, and political dynamics in the governance of large-scale infrastructures and the necessity of meaningful stakeholder engagement in governance across diverse groups and interests. This allows entities to transform controversies, such as the VC “treasury diversification” raise, into productive dialogue which positions stakeholders in subsequent decision-making for more democratic processes (Berker, et. al., 2011). In the next section of this essay, I apply the criteria of meaningful multi-stakeholder governance as articulated by Malcolm (2015) to FWB DAO, as a potential model for helping the DAO balance stakeholder interests and participation as it diversifies and scales.
- Are the right stakeholders participating?
The right stakeholders to be participating in FWB DAO include all perspectives with significant interest in creating DAO policies or solving DAO problems. This includes core team members employed by the DAO, long-term as well as newer members, and investors. This requires structural and procedural admission of those who self-identify as interested stakeholders (Malcolm, 2015).
- How is their participation balanced?
In the community calls where FWB members got to conduct Q&A with their newfound investors, the VCs indicated their intention to ‘delegate’ their votes to existing members, but to whom remains unclear. There must be mechanisms to balance the power of stakeholders so that they can reach a consensus on policies that are in the public interest (Malcolm, 2015). FWB does not yet have this in place (to my knowledge, at the time of writing). This can be achieved through a number of avenues, including prior agreement on the unique roles, contributions, expertise, and resource control of certain stakeholders, or deliberative processes that flatten hierarchies by requiring stakeholders to defend their position in relation to the public interest (Malcolm, 2015). Some decentralized communities have also been experimenting with governance models and mechanisms that are relevant to evolving governance beyond binary ‘yes’/‘no’ voting. One example is the use of “Conviction Voting” to signal community preference over time and pass votes dynamically according to support thresholds (Zargham & Nabben, 2022).
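As a minimal sketch of the general idea (with illustrative parameters, not values used by FWB or the cited authors): support staked on a proposal accumulates "conviction" that decays over time, and the proposal passes only once conviction crosses a threshold, so sustained support counts for more than a momentary spike.

```python
# Minimal sketch of conviction voting: conviction accumulates from staked
# support with exponential decay, and a proposal passes at a threshold.
# The decay rate and threshold below are illustrative, not real parameters.

def simulate_conviction(staked_per_block, decay=0.9, threshold=500.0):
    """Return (block index at which the proposal passes or None, final conviction)."""
    conviction = 0.0
    for block, staked in enumerate(staked_per_block):
        conviction = decay * conviction + staked   # prior conviction decays, new stake adds
        if conviction >= threshold:
            return block, conviction
    return None, conviction

# 100 tokens staked continuously converge toward 100 / (1 - 0.9) = 1000 conviction,
# so the threshold is eventually crossed; briefly staked support never gets there.
print(simulate_conviction([100] * 50))            # passes after a handful of blocks
print(simulate_conviction([100] * 5 + [0] * 45))  # never passes
```

The point of the mechanism is that passing a proposal depends on how long support is sustained, not only on a snapshot tally.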
- How are the body and its stakeholders accountable to each other for their roles in the process?
FWB DAO is accountable to its members for the authority it exercises as a brand and an organization. Similarly, through localised events, participants are accountable for legitimately representing the FWB brand, using its tools (such as the Gatekeeper ticketing app), and acquiring new members who pay their dues back to the overarching DAO. Mechanisms for accountability include stakeholders accepting the host body’s exercise of authority, the host body operating transparently and according to organizational best practices, and stakeholders actively participating according to their roles and responsibilities (Malcolm, 2015).
- Is the body an empowered space?
For multistakeholder governance to ensue, the host body must meaningfully include stakeholders in governance processes, meaning that stakeholder participation is linked to spaces in which definitive decisions are made and outcomes are reached, rather than just deliberation or expression of opinion (Malcolm, 2015).
At present, participation in FWB DAO governance is limited, at best. Proposals are gated by team members who help edit, shape, and craft the language according to a template before a proposal can be posted to Snapshot by the Proposal Review Committee. Members can vote on proposals, with topics including the “FWB x Hennessy Partnership,” grant selections, and liquidity management. According to core team members in their public talks, votes typically pass with 99% in favor every time, which is not a good signal of genuine political engagement and healthy democracy.
- Is this governance ideal maintained over time?
A criterion missing from the current principles of multistakeholderism is how the ideals of decentralized governance can persist over time. It is widely acknowledged that the Internet’s model of governance has not lived up to the initial hopes of some for a ‘digital public’: the Internet has become privatized, monetized, and divisive. Inner power structures controlled by private firms and sovereign states permeate the architectures and institutions of Internet governance (DeNardis, 2014). Some argue that this corruption of the ideal over time can be addressed by deprivatizing the Internet to redirect power away from big tech firms and towards more public engagement and democratic governance (Tarnoff, 2022). In reality, both privatized network governance models and public ones can be problematic (Nabben, et. al., 2020). The promise of a social DAO, and of crypto communities more broadly, is innovation in decentralized governance: the ability to make technical and political guarantees of certain principles.
The ideals of public, decentralized blockchain communities are at risk of following a similar trajectory to the Internet’s. What began with grassroots activism against government and corporate surveillance in the computing age (Nabben, 2021a) could be co-opted by the interests of big money, government regulation, and private competition (such as Central Bank Digital Currencies, Facebook’s ‘Meta’, Visa and Amex, etc.). For FWB to avoid this trajectory from enthusiastic early community to a centralized concentration of power, a long-term view of governance must be taken. This demands deeper consideration of, and innovation towards, a pattern language for decentralized governance itself.
Conclusion
Experiencing the governance dynamics of a social DAO surfaces some of the challenges of governing and scaling distributed infrastructure that blends multi-stakeholder, online-offline dynamics with the values of decentralization. The goal of FWB DAO is to allow people to gain agency through the creation of new economies that then propagate through cultural influence. This goal must carry through each localization and somehow align back to the overarching DAO as the project scales, to create not just culture but to further the cause of decentralization. What remains to be seen is how this creative community can collectively facilitate authentic, decentralized organizing for its impassioned believers through connections, tools, funding, and creative ingenuity on governance itself. Without incorporating the principles of meaningful multistakeholder inclusion in governance, DAOs risk becoming a ‘myth of decentralization’ (Mathew, 2016), riddled with concentrations of power in practice. The principles of multi-stakeholderism from Internet governance offer one potentially viable set of criteria to guide the development of more meaningful decentralized governance practices and norms. Yet multistakeholder governance is intended to balance public interests and political concerns in particular contexts, not to serve as a model for all distributed governance functions (DeNardis & Raymond, 2013). Thus, the call to Decentralized Autonomous Organizations is to leverage the insights of existing governance models whilst innovating on their own principles and tools, continuing to explore, apply, and test governance models in authentic pursuit of their aims.
Bibliography
A16Z. (2021). “Investing in Friends With Benefits (a DAO)”. Available online: https://a16z.com/2021/10/27/investing-in-friends-with-benefits-a-dao/. Accessed October, 2022.
Abbate, J. (2000). Inventing the Internet. MIT Press, Cambridge.
Adams, T. E., Ellis, C., & Jones, S. H. (2017). Autoethnography. In The International Encyclopedia of Communication Research Methods (pp. 1–11). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118901731.iecrm0011.
Alexander, C. 1964. Notes on the Synthesis of Form (Vol. 5). Harvard University Press.
Brummer, C J., and R Seira. (2022). “Legal Wrappers and DAOs”. SSRN. Accessed 2 June, 2022. http://dx.doi.org/10.2139/ssrn.4123737.
Berker, T. (2011). Review of Michel Callon, Pierre Lascoumes and Yannick Barthe, Acting in an Uncertain World: An Essay on Technical Democracy. Minerva 49, 509–511. https://doi.org/10.1007/s11024-011-9186-y.
DAOstar. (2022). “The DAO Standard”. Available online: https://daostar.one/c89409d239004f41bd06cb21852e1684. Accessed October, 2022.
DeNardis, L. (2013). “The emerging field of Internet governance”. In W. H. Dutton (Ed.), The Oxford handbook of Internet studies (pp. 555–576). Oxford, UK: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199589074.013.0026.
DeNardis, L. (2014). The Global War for Internet Governance. Yale University Press: New Haven, CT and London.
DeNardis, L. and Raymond, M. (2013). “Thinking Clearly About Multistakeholder Internet Governance”. GigaNet: Global Internet Governance Academic Network, Annual Symposium 2013, Available at SSRN: https://ssrn.com/abstract=2354377 or http://dx.doi.org/10.2139/ssrn.2354377.
Epstein, D., C Katzenbach, and F Musiani. (2016). “Doing Internet governance: practices, controversies, infrastructures, and institutions.” Internet Policy Review.
FWB. (2022a). “FWB Fest 22”. FWB. Available online: https://fest.fwb.help/. Accessed October, 2022.
FWB. (2022b). “Kelsie Nabben: What are we LARPing about? | FWB Fest 2022”. YouTube (video). Available online: https://www.youtube.com/watch?v=UUoQ-sBbqeM. Accessed October, 2022.
FWB (n.d.). “Pulse”. FWB. Available online: https://www.fwb.help/pulse. Accessed October, 2022.
Gnosis. (2022). “Zodiac Wiki”. Available online: https://zodiac.wiki/index.php/ZODIAC.WIKI/. Accessed October, 2022.
Gottsegen, W. (2021). “Designer Eric Hu on Generative Butterflies and the Politics of NFTs”. CoinDesk. Available online: https://www.coindesk.com/tech/2021/10/07/designer-eric-hu-on-generative-butterflies-and-the-politics-of-nfts/. Accessed October, 2022.
Hassan, S., and P. De Filippi. (2021). "Decentralized Autonomous Organization." Internet Policy Review 10, no. 2:1-10.
Jose Meijia, (@JoseRMeijia). (2022). [Twitter]. “This is the way”. Available online: https://twitter.com/makebrud/status/1556691400367824896. Accessed 1 October, 2022.
Mailland, J. and K. Driscoll. (2017). Minitel: Welcome to the Internet. MIT Press, Cambridge.
Malcolm, J. (2008). Multi-Stakeholder Governance and the Internet Governance Forum. Wembley, WA: Terminus Press.
Malcolm, J. (2015). “Criteria of meaningful stakeholder inclusion in Internet governance.” Internet Policy Review, 4(4). https://doi.org/10.14763/2015.4.391.
Mathew, A. J. (2016). “The myth of the decentralised Internet.” Internet Policy Review, 5(3). https://doi.org/10.14763/2016.3.425.
Nabben, K. (2021a). “Is a "Decentralized Autonomous Organization" a Panopticon? Algorithmic governance as creating and mitigating vulnerabilities in DAOs.” In Proceedings of the Interdisciplinary Workshop on (de)Centralization in the Internet (IWCI'21). Association for Computing Machinery, New York, NY, USA, 18–25. https://doi.org/10.1145/3488663.3493791.
Nabben, K. (2021b). “Infinite Games: How Crypto is LARPing”. CoinDesk. Available online: https://www.coindesk.com/layer2/2021/12/13/infinite-games-how-crypto-is-larping/. Accessed October, 2022.
Nabben, K. (2022). “A Political History of DAOs”. FWB WIP. Available online: https://www.fwb.help/editorial/cypherpunks-to-social-daos. Accessed October, 2022.
K. Nabben, M. Poblet and P. Gardner-Stephen. "The Four Internets of COVID-19: the digital-political responses to COVID-19 and what this means for the post-crisis Internet," 2020 IEEE Global Humanitarian Technology Conference (GHTC), (2020). pp. 1-8, doi: 10.1109/GHTC46280.2020.9342859.
Tarnoff, B. (2022). Internet for the People: The Fight for Our Digital Future. Verso Books: Brooklyn.
Winner, L. (1980). “Do Artifacts Have Politics?” Daedalus, 109(1), 121–136. Retrieved from http://www.jstor.org/stable/20024652.
Zargham, M., and K Nabben. (2022). “Aligning ‘Decentralized Autonomous Organization’ to Precedents in Cybernetics”. SSRN. Accessed June 2, 2022. https://ssrn.com/abstract=4077358.
Kelsie Nabben. (November 2022). “Decentralized Governance Patterns: A Study of "Friends With Benefits" DAO.” Interfaces: Essays and Reviews in Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 86-101.
About the Author: Kelsie Nabben is a qualitative researcher in decentralised technology communities. She is particularly interested in the social implications of emerging technologies. Kelsie is a recipient of a PhD scholarship at the RMIT University Centre of Excellence for Automated Decision-Making & Society, a researcher in the Blockchain Innovation Hub, and a team member at BlockScience.
I want to share an anecdote. My doctoral fieldwork consisted of mixed historical analysis and interview-based research on artificial intelligence (AI) promises and expectations. I attended numerous talks on AI and robotics and frequently posted on social media about interesting material I encountered during my doctoral investigations. On July 15th 2018, I received a generous gift by mail, sent by a very kind Instagram user named Chris Ecclestone, who, after a brief online chat about my PhD through the platform’s messaging utility, insisted he had to send me something he had found at his local charity shop (the charity-oriented UK equivalent of a thrift store/second-hand shop). The book’s title was The Robots Are Among Us, authored by a certain Rolf Strehl and published in 1955 by the Arco Publishing Company.
I was only able to find very limited information about Strehl – the most comprehensive information available online comes from a blogpost written by workers at the Heinz Nixdorf computer museum. From this we learn, with the aid of online translation from German, that “he was born in Altona in 1925 and died in Hamburg in 1994,” that while writing this book “he was editor-in-chief of the magazine ‘Jugend und Motor’” (‘Youth and Motor,’ a popular magazine about automobiles), and that the book comes with a “number of factual errors” and “missing references.” According to the same website, the original 1952 German version of Die Roboter sind unter uns (Gerhard Stalling Verlag, Oldenburg) was among the first two nonfiction books written about robots and intelligent machines in German, and it was translated into several languages. A quick Google Images search proved that, in addition to my copy of the English translation, the book was also published, with slightly modified titles, in several other languages: in Spanish (Han Llegado Los Robots – Ediciones Destino, Barcelona), Italian (I Robot Sono Tra Noi – Bompiani Editore, Milan), and French (Cerveaux Sans Âme: Les Robots – Editions et publications Self, Paris). This suggests that the book was considered by several international publishers to be credible enough for wide circulation, and, as the English version’s paper inlay states, the book “is written with a minimum of technical jargon. It is written for the layman [sic]. It is a scientific book, but it is a sensational book: for it illuminates for us the shape of things to come.” One has to note the use of the word “sensational,” which in current debates about public portrayals of AI is mostly used as a derogatory term, implying distance from technical legitimacy.
Thus, I suggest that the book deserves excavation as indicative of the mid-1950s promissory environment around thinking machines, prior to the coinage of the term AI, even though the English translation overlaps with the year the term was coined (more below).
On July 9th, 2019, almost a year since I received Strehl’s book, I attended a talk at the University of Edinburgh by Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales. Walsh, whose doctoral degree was obtained in Edinburgh, presented portions of his 2018 book 2062: The World that AI Made, which I acquired and read after the event. In contrast to Strehl’s rather obscure biographical notes, Walsh’s work is well documented on his personal website. In addition to his AI specialisation in constraint programming, Walsh’s work involves policy advising in building trustworthy AI systems as well as lots of public outreach through popular media.
The book, published in English by Black Inc./La Trobe University Press, is of similar magnitude to Strehl’s, given that it has been translated widely: into German (2062: Das Jahr, in Dem Die Künstliche Intelligenz Uns Ebenbürtig Sein Wird, Riva Verlag, Munich), Chinese (2062:人工智慧創造的世界 – 經濟新潮社, Taipei), Turkish (2062: Yapay Zeka Dünyası – Say Yayınları, Ankara), Romanian (2062: Lumea Creata De Inteligenta Artificiala – Editura Rao, Bucharest), and Vietnamese (Năm 2062 – Thời Đại Của Trí Thông Minh Nhân Tạo – NXB Tổng Hợp TP. HCM, Ho Chi Minh City). Taking the number of translations as an indication of magnitude, I suggest that Walsh’s book can be classified as somewhat comparable to Strehl’s, given that, as mentioned on his website, it is “written for a general audience.” Thus, regardless of the different degrees of AI expertise and the respective contexts of their authors, I suggest these books can be contrasted as end-products indicating AI hype in 1955 and 2018. In what follows, I aim to recreate my personal experience of discovering the similarities between the two books.

I now invite the reader to look through the tables of contents displayed at the end of this essay, upon which I will comment. (A note on presentation: as the reader will notice, Strehl’s chapters are followed by detailed descriptions of the chapters’ sections, very typical of books from that era. Walsh’s original table of contents includes only the main headings, although within the book similar sections to Strehl’s designate sub-chapters. I have manually copied this sub-heading structure into the table below in lieu of scannable content.) Notice the similarities in both books’ first chapters, between “the failure of the human brain – the machine begins to think – ‘Homunculus’ is born – the beginning of a new epoch” (Strehl 1955) and “Machines That Learn – Computers Do More Than They are Told – The Machine Advantage – Our Successor” (Walsh 2018).
Both books’ second chapters review the technological advances of machine intelligence of their times: Strehl describes the abilities of early computing and memory-storing machines ENIAC, Mark III, and UNIVAC, as well as the possibility of “automatic weapons.” Meanwhile, Walsh describes recent breakthroughs in game-playing such as Go, although his chapter 0005 is entirely dedicated to “killer robots,” “weapons of terror,” and “of error,” much like Strehl’s penultimate chapter, “The Beginning of the Future War,” which contains sections like “Robots become soldiers” and “mechanical brains take over command.” (Interestingly, Walsh does not refer to any cases of factory worker accidents caused by robotic technologies, whereas Strehl mentions two cases of lethal robotic accidents in this chapter’s section “A Robot murders its Master,” similar to newspaper headlines about robotic killers (for example, Huggler 2015, McFarland 2016, or Henley 2022).)
Strehl’s third chapter asks, “Can the Robot Really Think?” in the same way that Walsh asks, “Should We Worry?” Both authors enquire into “The Age-Old Dream of an Artificial Human Being” (Strehl) and “Artificial General Intelligence – How Long Have We Got? – The Technological Singularity” (Walsh); and again, both refer to the question of “free will” in machines (Strehl: “Free will of the Machine?”; Walsh: “Zombie Intelligence […] The Problem of the Free Will”). Strehl dedicates two chapters to job displacement: “The Second Industrial Revolution,” focusing on industrial robotic technologies (“the Robots are in control – machine automatons replace worker battalions – Man [sic] is left out […] the factory without people”), and “The Dictatorship of the Automaton,” mostly focusing on automation technologies conceptually similar to AI (“the automatic secretary […] the telephone Robot listens attentively – Robots keep books conscientiously – Robots sort telegrams […] the Robot as master detective […] the whole of mankind [sic] is filed – Robot salesmen [sic] in the department store […] divorce by ‘automatic’ court decision”). Although today’s equivalents (robot assistants like Alexa or Echo, robotic “judges,” and concerns about data surveillance) are much more technologically advanced, the sentiment captured in Strehl’s book is strikingly similar to several sections in Walsh’s: “The Real Number of Jobs at Risk,” “Jobs Only Partly Automated – Working Less” (on the dangers of job automation), “Machine Bias – The Immoral COMPAS – Algorithmic Discrimination” (on automated decision-making, as in the COMPAS recidivism-risk algorithm), and “AI is Watching You – Linked Data” (on surveillance).

By this point, it has become sufficiently clear that concerns about automation technologies, which at different times (or in different regional and research contexts; consider the “I’m not a robot” captcha version of a Turing test) can be termed “AI” or “robots,” have been sustained with a surprisingly similar degree of comprehension. It is worth noting some differences between the two books. First, it is useful to ask how the authors gain what we might perceive as their promissory credibility, that is, the right to speculate about a new form of reality which is about to come. As already mentioned, Strehl falls short in terms of references; however, he sets out to clarify that the content presented is realistic: “This book is not about Utopia. It is a factual report of the present time collected from hundreds of sources.” Nevertheless, throughout his book, Strehl refers to warnings about machine intelligence expressed by pioneering minds in the field, often citing the cyberneticist Norbert Wiener, but also the mathematician Alan Turing, and others. Walsh’s approach is stricter, methodologically speaking, matching contemporary standards:
“In January 2017, I asked over 300 of my colleagues, all researchers working in AI, to give their best estimate of the time it will take to overcome the obstacles of AGI. And to put their answers in perspective, I also asked nearly 500 non-experts for their opinion. […] Experts in my survey were significantly more cautious than the non-experts about the challenges of building human-level intelligence. For a 90 per cent probability that computers match humans, the median prediction of the experts was 2112, compared to just 2060 among the non-experts [...] For a 50 per cent probability, the median prediction of the experts was 2062. That’s where the title of this book comes from: the year in which, on average, my colleagues in AI expect humankind to have built machines that are as capable as humans.” (AGI stands for Artificial General Intelligence, the hypothesized form of AI that reaches or surpasses human intelligence; see, for example, Goertzel 2014.)
Although the two authors exhibit different strengths in their research skills, they both rely on the credibility of external sources to sustain their arguments. Moreover, they agree on the possibility of a largely inevitable new world which is, in part, already here, and which will invite humanity to think of new forms of living in the near future. Their difference lies in their degree of optimism. Strehl agrees with Walsh that machines will always remain in need of human controllers, but suggests that machines will take control in a subtler way:
“Man [sic] will try to maintain his [sic] supremacy because the machines will always be limited creatures, without imagination and consciousness, incapable of inventiveness outside their own limits. But this supremacy of Man [sic] will only be an illusion, because the machines will have become so indispensable in an unimaginable mechanization of the technical civilization of the future that they will have become the rulers of this world, grown numb through technical perfection. The future mechanized order of society will not be able to continue existing without constant supervision of the thinking machines by their human creators. But the machines will rule.”
The following, more optimistic, passage by Walsh can be read as a hypothetical response to Strehl:
“But by 2062 machines will likely be superhuman, so it’s hard to imagine any job in which humans will remain superior to machines. This means the only jobs left will be those in which we prefer to have human workers.”
Walsh then refers to the emerging hipster culture characterised by an appreciation of artisan jobs, craft beer and cheese, organic wine, and handmade pottery.
One should not forget that Walsh’s public outreach on AI extends in part from his lens as an AI researcher. His book is one that admits challenges, but also offers hopeful perspectives. Strehl’s book is written in a rather polemical fashion, although it admits the author’s fascination with the technical advancements; it is also written by an outsider who has probably never built a robot, at least not to the extent that Walsh has developed algorithms. This difference of balance, small doses of warning followed by hopeful promises (Walsh) as opposed to small doses of excitement followed by dystopian futurism (Strehl), is telling of an expectational environment around AI that has been evolving since at least the second half of the 20th century, with its roots in the construction of early automata, as well as in mythology, religion, and literature.
Strehl’s book can be classified as indicative of broader circulating narratives which might have influenced decisions within the domain of practice, although it is difficult to find evidence and make robust claims about the ways in which such broader public narratives about robots, thinking machines, and electronic brains have influenced the practical direction of research. Walsh’s book can be classified as a product of internal research practices and strategies, aimed at influencing broader narratives (the book’s popularity might be considered evidence of a sort). The shared themes between the books show that the field (or vision) of intelligent machines (here examined as AI) is at once broad, yet recognisable and limited in its various instantiations, from automated decision-making to autonomous vehicles.
In this essay, I do not want to make another claim about history repeating itself and the “wow” effect of hype-and-disillusionment cycles – belief in a purely circular history is as reductionist as belief in the modernist notion of linear progress and innovation. This repetition of themes is not a call for same-old-same-old complacency, the view that AI warnings are of no value because humanity’s previous experience proved them empty. It is, however, a call to raise awareness about hype, sensitivity about sensationalism, and to treat products of mass consumption about science and technology as artefacts produced by specific and variable social contexts on the micro-scale (such as institutional agendas) and by rather generalised and constant psychological patterns on the macro-scale: hope and fear. In 1980, Sherry Turkle concluded that the black-box structure of computers invites users to project onto them different versions of their optimism or pessimism, thus resembling inkblot tests; she therefore treated “computers as Rorschach.” Forty-two years after Turkle’s paper, computers and robots have evolved a great deal – however, despite numerous calls for explainable AI systems, nothing prevents us from treating “AI as Rorschach” as well as “robots as Rorschach.” This might amount to a creative and therapeutic endeavour in our experience with AI.


Bibliography
Goertzel, Ben. (2014). Artificial general intelligence: concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1-46.
Heinz Nixdorf MuseumsForum (2017). Die Roboter Sind Unter Uns. Blog post. November 7, 2017. Retrieved 18-06-2021 from: https://blog.hnf.de/die-roboter-sind-unter-uns/
Henley, Jon. (2022, July 24). Chess robot grabs and breaks finger of seven-year-old opponent. The Guardian. https://www.theguardian.com/sport/2022/jul/24/chess-robot-grabs-and-breaks-finger-of-seven-year-old-opponent-moscow
Huggler, Justin. (2015, July 2). Robot Kills Man at Volkswagen Plant in Germany. The Telegraph. Retrieved 3-07-2015 from http://www.telegraph.co.uk/news/worldnews/europe/germany/11712513/Robot-kills-man-at-Volkswagen-plant-in-Germany.html
McFarland, Matt. (2016, July 11). Robot’s Role in Killing Dallas Shooter is a First. CNN Tech. Retrieved 29-04-2017 from http://money.cnn.com/2016/07/08/technology/dallas-robot-death/index.html
Strehl, Rudolf. (1952 [1955]). The Robots are Among Us. London and New York: Arco Publishers.
Turkle, Sherry. (1980). Computers as Rorschach: Subjectivity and Social Responsibility. Bo Sundin (ed.). Is the Computer a Tool? Stockholm. Almquist and Wiksell. 81–99.
Walsh, Toby. (2018). 2062: The World that AI Made. Carlton: La Trobe University Press, Black Inc.
Walsh, Toby. (2021). Personal website. UNSW Sydney. Accessed 20 July 2021. http://www.cse.unsw.edu.au/~tw/
Vassilis Galanos (October 2022). “Longitudinal Hype: Terminologies Fade, Promises Stay – An Essay Review on The Robots Are Among Us (1955) and 2062: The World that AI Made (2018).” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 73-87.
About the Author: Vassilis Galanos (it/ve/vem) is a Teaching Fellow and Postdoctoral Research Associate at the University of Edinburgh, bridging STS, Sociology, and Engineering departments and has co-founded the local AI Ethics & Society research group. Vassilis’s research, teaching, and publications focus on sociological and historical research on AI, internet, and broader digital computing technologies with further interests including the sociology of expectations, studies of expertise and experience, cybernetics, information science, art, invented religions, continental and oriental philosophy.

Recently, the term “Quiet Quitting” has gained prominence on social media among employees who are changing their standards about work, and among business leaders who are concerned about the implications of this change in attitudes and expectations at the workplace. The term initially started trending on social media with posts from employees sharing their perspectives. These employees are vocal about changing the standards of achievement and success at work, especially when the boundaries between work and home are no longer clear.
Quiet Quitting is a call from employees who still value their work but also want to feel valued and trusted in return. It is a call from those whose work and personal lives are not balanced and who are looking for a healthier way to set boundaries. It is a reaction to the changes caused by the pandemic, which allowed some employees to work from home but further blurred the lines between work and home space. It is about corporations finding a multitude of ways to ensure their employees are connected to work around the clock, and about workers not wanting to be available to their employers for time for which they are not compensated, or work for which they are not recognized. It should not be a reason to criticize, shame, scare, or surveil employees.
The pandemic caught many organizations unprepared for a sudden shift to remote work arrangements. Employers who were worried about the performance levels of their now-remote workers implemented several measures, some more privacy-invading than others. Unfortunately, for many companies, the knee-jerk reaction was to implement employee monitoring (or surveillance) software, sometimes referred to as ‘bossware’. Vendors selling this software tend to pitch their products as capable of achieving one or more of the following: ‘increase in productivity/performance; prevention of corporate data loss; prevention of insider threat; effective remote worker management; data-based decision-making on user behavior analytics; sentiment analysis to identify flight-risk employees.’ The underlying assumptions behind this set of functions are:
- employees cannot be trusted and left to do what they are hired to do;
- human complexity can be reduced to some data categories; and
- a one-size-fits-all definition of productivity exists, and the vendor’s definition is the correct one.
In response to employees who suggest they will only do what they are hired to do and no more until expectations change, AI-based employee surveillance systems are now being discussed as a possible solution to those who are ‘Quiet Quitting’. Employee surveillance was never a solution for creating equitable work conditions or for increasing performance in a way that respected the needs of employees. It certainly cannot be a solution to the demands of workers trying to stay physically and mentally healthy.

The timing of tasks and the monitoring of employee activity in assembly lines and warehouses go back to the times of Frederick Winslow Taylor. Taylorism aimed to increase efficiency and production and eliminate waste. It was also based on the “assumptions that workers are inherently lazy, uneducated, and are only motivated by money.” Taylor’s approach and practice have been brought to their contemporary height by Amazon, with its minute-by-minute tracking of employee activity and termination decisions made by algorithmic systems. Amazon uses tools such as navigation software, item scanners, wristbands, thermal cameras, security cameras, and recorded footage to surveil its workforce in warehouses, delivery trucks, and stores. Over the last few years, employee surveillance practices have been spreading into white- and pink-collar work too.
According to a recent report by The New York Times, eight of the ten largest private U.S. employers track the productivity metrics of individual workers, many in real time. The same report details how employees described being tracked as “demoralizing,” “humiliating,” and “toxic,” with 41% of employees reporting that nobody in their organization communicates with them about what data is collected, why, or how it is being used. Another 2022 report, by Gartner, shows that the number of large employers using tools to track their workers has doubled since the beginning of the pandemic, to 60%, with this number expected to rise to 70% within the next three years.

Employee surveillance software is extensive in its ability to capture privacy-invading data and make spurious inferences regarding worker performance. The technology can log keystrokes or mouse movements; analyze employees’ calendar activity; screen emails, chat messages, or social media for both activity intervals and content; take screenshots of the monitor at random intervals; analyze which websites an employee has visited and for how long; force the activation of webcams; and monitor the terms searched by the employee. As an article in The Guardian on AI-based employee surveillance tools explains, the concerns regarding the use of these products range from the obvious privacy invasion in one’s home to reducing workers, their performance, and their bodies to lines of code and flows of data which are scrutinized and manipulated. Systems which automatically classify a worker’s time into “idle” and “productive” reflect the value judgments of their developers about what is and is not productive. An employee spending time at a colleague’s desk explaining work or mentoring them for better productivity can be labeled by the system as “idle”.
Even though natural language processing is not capable of understanding the context, nuance, or intent of language, AI tools which analyze the content and tone of one’s emails, chat messages, or even social media posts ‘predict’ whether a worker is a risk to the company. Forcing employees who work from home to keep their cameras on at all times can lead to private and protected information of the employee being disclosed to the employer. Furthermore, these systems remove basic autonomy and dignity at the workplace. They force employees to compete rather than cooperate, and to think of ways to game the system rather than thinking of more efficient and innovative ways to do their work. A CDT report focuses on how bossware can harm workers’ health and safety by discouraging and even penalizing lawful, health-enhancing employee conduct; by enforcing a faster work pace and reduced downtime, which increases the risk of physical injuries; and by increasing the risk of psychological harm and mental health problems for workers.
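To make concrete how such value judgments get baked in, consider the following minimal, purely illustrative sketch in Python. It is not taken from any actual vendor’s product; the window structure, the events-per-minute threshold, and the “idle”/“productive” labels are all assumptions chosen for illustration. The point is simply that whoever writes the rule decides what counts as work.

# Hypothetical sketch only: a naive activity classifier of the kind described
# above, which labels each time window from logged input events. The threshold
# and the labels themselves are value judgments made by whoever writes the rule.

from dataclasses import dataclass

@dataclass
class ActivityWindow:
    minutes: int        # length of the monitored window
    keystrokes: int     # keystrokes logged in the window
    mouse_events: int   # mouse movements and clicks logged in the window

def classify(window: ActivityWindow, events_per_minute: float = 5.0) -> str:
    """Label a window 'productive' if logged input exceeds an arbitrary rate."""
    rate = (window.keystrokes + window.mouse_events) / max(window.minutes, 1)
    return "productive" if rate >= events_per_minute else "idle"

# A worker mentoring a colleague at their desk generates almost no input events,
# so this rule scores genuinely useful time as "idle".
mentoring = ActivityWindow(minutes=30, keystrokes=0, mouse_events=3)
busywork = ActivityWindow(minutes=30, keystrokes=900, mouse_events=120)
print(classify(mentoring))   # prints: idle
print(classify(busywork))    # prints: productive

Under this hypothetical rule, a half hour spent mentoring a colleague away from the keyboard is scored as “idle,” while mechanical typing is scored as “productive”; this is precisely the kind of inference the reports cited above criticize.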
Just as employee surveillance cannot replace trusting and transparent workplace relationships, it cannot be a solution to Quiet Quitting. Companies implementing such systems do not understand the fundamental reasons for this call. Those reasons are not universal, and there is no single solution for employers. The responses may range from fairer compensation to better communication practices, investment in employees’ skills, and setting boundaries between work and personal life. Employers need to create space for open communication and understand the underlying reasons for the frustration and the call for change. Employers need to ‘hear’ what their employees are telling them, not surveil them.
==============================
Disclosure: The author also provides capacity-building training and consulting to organizations for AI system procurement due diligence, responsible design, and governance. Merve Hickok is a certified Human Resources (HR) professional with 20 years of experience, an AI ethicist, and an AI policy researcher. She has written extensively about different sources of bias in recruitment algorithms, their impact on employers and vendors, and AI governance methods; provided public comments for regulations in different jurisdictions (New York City Law 144; California Civil Rights Council; White House Office of Science and Technology RFI); co-crafted policy statements (European Commission); contributed to drafting audit criteria for AI systems (ForHumanity); has been invited to speak at a number of conferences, webinars, and podcasts on AI and recruitment, HR technologies, and their impact on candidates, employers, businesses, and the future of work; and has been interviewed by HR professional organizations (SHRM Newsletter, SHRM opinion pieces) and by newspapers (The Guardian) about her concerns and recommendations.
Bibliography
Bose, Nandita (2020). “Amazon's surveillance can boost output and possibly limit unions – study.” Reuters, August 31.
Corbyn, Zoe (2022). “‘Bossware is coming for almost every worker’: the software you might not realize is watching you.” The Guardian, April 27.
Kantor J, Sundaram A, Aufrichtig A, Taylor R. (2022). “The Rise of the Worker Productivity Score.” New York Times, August 14.
Scherer, Matt and Brown, Lydia X. Z. (2021). “Report – Warning: Bossware May Be Hazardous to Your Health.” CDT, July 24.
Turner, Jordan (2022). “The Right Way to Monitor Your Employee Productivity.” Gartner, June 09.
Williams, Annabelle (2021). “5 ways Amazon monitors its employees, from AI cameras to hiring a spy agency.” Business Insider, April 5.
Wikipedia. Digital Taylorism. https://en.wikipedia.org/wiki/Digital_Taylorism.
Merve Hickok (September 2022). “AI Surveillance is Not a Solution for Quiet Quitting.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 65-72.
About the Author: Merve Hickok is the founder of AIethicist.org. She is a social researcher, trainer, and consultant, with work intersecting AI ethics, policy and regulation. She focuses on AI bias, impact of AI systems on fundamental rights, democratic values, and social justice. She provides consultancy and training services to private & public organizations on Responsible AI, to create awareness, build capacity, and advocate for ethical and responsible development, use and governance of AI. Merve is also Data Ethics Lecturer at University of Michigan, and Research Director at Center for AI & Digital Policy. The Center educates AI policy practitioners and advocates across ~50 countries and leads a research group which advises international organizations (such as European Commission, UNESCO, Council of Europe, etc) on AI policy and regulatory developments. Merve also works with several non-profit organizations globally to advance both the academic and professional research in this field for underrepresented groups. She has been recognized by a number of organizations - most recently as one of the 100 Brilliant Women in AI Ethics™ – 2021, and as Runner-up for Responsible AI Leader of the Year - 2022 (Women in AI). Merve was previously a VP at Bank of America Merrill Lynch, and held various senior Human Resources roles. She is a Senior Certified Professional by SHRM (Society of Human Resources Management).

The ability to comment, like, and share stories is a powerful feature of our digital world. Small stories online can gain big attention through the conversations they inspire. However, in 1970, long before influencers and their platforms of choice, the internet, and publicly used digital networking systems, Anita Taylor wrote a letter. Her “letter to the editor,” a short note of less than 100 words, was published in the March 1970 issue of Ebony Magazine.
I have read the article Computer Whiz Kid (December 1969) numerous times. It is so encouraging to hear of a black youth achieving such high goals, especially for those of us who live in the Deep South. It’s this sort of inspiration that is needed to give us hope and faith.
I am a Junior in high school. At present, I’m enrolled in a chemistry course and physics will probably appear on my schedule for the next session.
--Anita Taylor
Anita Taylor, much like a high schooler might today, was commenting on a story that inspired her. “The Computer Whiz Kid,” a “lanky Chicago teenager” named Robert Dodson, motivated Anita to act by showing her what was possible. Her response to the article indicates how literature served as a method for connecting readers and symbolic individuals as members of a single community during the mid-20th century. Dodson’s story, with its crisp photographs and clear messages of black success, progress, and creative technical ability, entered the homes of thousands of black families across the nation. In Ebony, Robert Dodson as “The Computer Whiz Kid” was a symbol providing “hope” and “faith” to an ever-growing audience (101-104).
Robert Dodson is also an example of how the work of different communities of knowledge and action came together in ways that shaped an individual. Dodson’s presence on the campus and in a mixed dorm at the Illinois Institute of Technology represents a point in history where the work of civil rights activists and black betterment organizations was pushing up against the troubled history of segregation and unequal opportunity in Chicago. In 1969 he was a freshman at the Illinois Institute of Technology. Thirty years before his story, the Illinois Institute of Technology launched architectural expansion and urban renewal projects that resulted in the removal of land wealth from segregated black communities who had built for themselves a city within the city. Growing up in the shadow of the Institute, Dodson’s hobby of choice, building and programming his computers, provided a retreat from the gang activity inundating his neighborhood while connecting him to the products of knowledge communities far away. The book he used to build his computer, We Built Our Own Computers, was published by Cambridge University Press in 1966. It was designed to “explain some of the ideas and uses of computers to the intelligent and interested schoolboy who, it is hoped, will then be stimulated to design and build his own computing machines” (Bolt and Harcourt, xi). To make his machine, Dodson enlisted the help of family, and he also used a brief internship at the North American Company for Life and Health Insurance to play with bigger and stronger versions of what he was making at home on the dining room table.

Dodson’s Computer, a Group Project
Dodson’s success at transforming a hobby (building computers) into a potential career was the product of different communities who unintentionally collaborated to make a future in computing possible for him. The social-work-focused and volunteer-powered Chicago Urban League contributed to the integration of the Illinois Institute of Technology and possibly connected Dodson to education and employment opportunities. At Cambridge University, educators wanted to share computing knowledge with American youth, so they wrote and published educational books to do just that. Other communities that contributed to Dodson’s success include the teachers and librarians who encouraged him; the vendors of electrical parts and pieces, who supplied the bits needed to build a computer; and the college admission personnel who interviewed and then admitted Dodson. Additionally, the finance and housing institutions, which made higher education economically possible for Dodson, and the healthcare workers, who supported his fitness and readiness for dorm life, were all communities contributing to his success.
With so many “communities” influencing the opportunities possible for an individual, it can be challenging to discern what makes a community, which communities matter for a history of computing, and the relationship between the community and the individual. However, as a technology that constructed the symbolic narrative of Dodson and connected that narrative to a black audience, publications like Ebony Magazine were technologies that held African-American computing communities together. Dodson, through Ebony, could influence readers like Anita Taylor to act on their dreams and work towards their “high goals.” Likewise, through the story of Dodson, a special kind of black individual was constructed. This individual, not always a “Computer Whiz Kid,” was to be emulated by the reader, reproduced in black society, and shared by the media repeatedly. Looking at media made for and by black people, African American computing communities consisted of the audience for “The Computer Whiz Kid” and people like Robert Dodson, who allowed their stories to be shared. African American computing communities also include the organizations that decided that the black readers of Ebony in 1969 needed to meet “The Whiz Kid.”
Civil rights and black betterment organizations fighting for equality and freedom sought to create symbols of black defiance, hope, future, and success. An example of their work is Rosa Parks, who became a symbolic individual representing defiance and resistance to the system of inequality that bolstered segregation, and not by chance. In 1955, civil rights organizers from the NAACP waited to find the right person to build a public legal case around. Rosa Parks was not the first person of African descent to be arrested in Montgomery, Alabama, for disobeying segregation bus laws. She was, however, the one that civil rights organizers identified as being best suited for the spotlight (Tufekci 62-63).
African American computing communities consisted of an audience hungry for a better future, civil rights and black betterment organizations fighting to make opportunity possible, and the black press that deliberately connected audience and organizers to improve the status of black people in America. As the elements that comprise African American computing communities, audience, media, symbolic individuals, and civil rights organizing are also characteristic of the history of other black communities. The literature on black labor, media, activism, class, and culture of the 19th and 20th centuries proposes that the large collective “African-American community” was formed out of smaller communities (in fields of work, in neighborhoods, on HBCU and college campuses). These smaller communities networked for full citizenship, creating cultural products (literature, language, attitudes) that organized black people nationally into a people with a distinct voice in American history. First shut out of mainstream society by racist classifications as other than American, human, and citizen, black people responded to this willful stifling of black futures, found in histories and legacies of inequality, by shaping themselves into a demographic with unique language, culture, and politics (Foner and Lewis 511).
When black people were not allowed to live as full citizens, organizing for “the betterment of the Negro race” became the mission of societies that made minority betterment the ultimate point of organizing. Professional and social organizations like the Brotherhood of Sleeping Car Porters, the NAACP, the National Technical Association, Alpha Phi Alpha, Kappa Alpha Psi, Alpha Kappa Alpha, and Delta Sigma Theta have shaped the image of the African-American as one that is not peripheral to the project of America. By doing so, their missions are entwined in the history of technology in America. Connected by the goal of bringing black people into the “project of America,” a project shaped by innovation and a distinct spirit of rugged individuality and materiality, they sought to democratize the labor and culture of technology. No longer would the machinery that powers America be inappropriate and inaccessible for black people because of race: the future these communities fought for was one where black people could be both black and American, black and skilled, black and professional, black and technical, and black and middle-class.
The Black Press
African American communities of computing, like other groups in the history of computing, are made of writers, doers, and readers: not just the remarkable men and women who fought to succeed, but the communities they belonged to and the conversations and messages they were a part of. All members of black computing communities were connected by automatic second-class status, where they were locked out, misrepresented, and stereotyped in the mainstream press. In tune with the needs of its audience, black print media was the most influential information medium for black people. This media amplified the voice of the people while explaining what the world of war, of technology, of business was and what it could mean to them. The black press, known as the "fighting press," utilized information technology to connect members of different communities for common goals or shared interests (O’Kelly 13).
In general, magazines, newspapers, and other print media are forms of public discourse that allow readers to engage with ideas, both old and new. Print media disseminates ideas by using the language and values that matter to the audience of the magazine. Language and values can be common-sense beliefs regarding fairness, citizenship, and usefulness. Print media uses "frames," or the principles that organize information, by referencing the social structures and realities, real or imagined, that matter to an individual or an audience. In this history, the frames used by the black press were ones that focused on the reality of black life in America: segregation and second-class status. Magazines organize information into frames so that the content is not disconnected from the social understanding of readers. This organization helps readers make sense of the new by grounding the unknown in the familiar and "correct." When the content of magazines is computer technology, "common sense" values and power dynamics are embedded in how these new technologies are contextualized for audiences. Black newspapers framed the computer as a tool for black freedom by focusing on skill, education, professionalization, class, and materiality, issues that were already on the minds of the black public.
Looking away from black media, toward what could be called "mainstream media," the result of frames for technological diffusion is stories of computers that show them to be hosts for useful activities and social evolutions. A quick historical narrative of this framing, from the 1950s to the present day, found in "mainstream" magazines shows that what computers are and can do changes as their technical capabilities develop and audiences become more familiar. The frames used to describe computers found in business magazines in the 1950s generally describe them as calculators useful for processing numerical data. Eventually, computers become more than just calculators but a way to improve speed and efficiency, a tool for management. They are giant, powerful brains that threaten to replace workers in an expanding range of fields. To a different audience and with a more advanced computer, they are not just computers but hobbies and toys. As computers become "personal" in the 1980s, they are not only computers but extensions of individuality, independence, and creativity (Cogan 248-265). By the end of the 20th century, the ability to set up networks through personal computers makes them not only computers but communication devices that are part of a global network of information sharing. As computers travel and find homes in communities of color in the U.S. and globally, they become more than just computers but tools for development and participation in the global information economy.
The frames that referenced the values, fears, truths, and realities of African Americans in the 20th century were notably different from those of their white counterparts. Likewise, print media tells us how computers were incorporated into African American life during this time and why they were framed differently than in the mainstream publications usually studied. This is not to say that black people would not have read magazines like Time or Newsweek; however, the frames used by mainstream publications were not concerned with the black perspective, thus creating the need for a black press.
Black newspapers shared the good news of opportunity while not ignoring the harsh realities of America's racialized labor economy; they also offered "what if" scenarios. In a letter to the editor published in the New Pittsburgh Courier on September 6, 1969, Jesse Woodson Jr. proposed a solution to the criminal justice problem:
Dear Editor:
In view of the present inequalities which exist in the detection, prosecution, and confinement of black criminals vs. white, I think the black man would receive a great deal more fairness and impartiality from an IBM computer.
First, identify and catalog the various crimes. Next, edit the trials of the various city, state, and government courts over the past 25 years. Include all facts concerning investigations, acquittals, and convictions, plus the day-to-day dialogue of both the defense and the trial counsels. The computer when programmed with the above information would then be capable of rendering a decision based on the aggregate experience of the nation's law interpreters.
This decision would not take into consideration race or background. However, as likely as not, some Mississippi court will discover the need for one computer for white and another for blacks.
--Jesse Woodson JR
What if a computer could solve the problem of racism in America? What if "race and background" were irrelevant in the new technical order? We now know that computers are not impartial arbiters, and people have yet to successfully exclude race and class from computerized decision-making systems. In 1969, however, Jesse Woodson and those who read his suggestion were mentally experimenting with the computer as a tool for freedom from prejudice and from the mainstream connection between black and criminal. Even this ideal "what if" came with skepticism, as Woodson notes that the racist system within which they operate could corrupt even "fair" computerized decision-making systems.
Conclusion
From the 1940s to the 1980s, emissaries like "The Whiz Kid" ventured into the slowly integrating universities and offices of the information age. They ventured out, but they also reported back. Through black media, they communicated what computing meant for black people, and, as skilled workers in the new computer age, they embodied the characteristics of success. Through the technologies of storytelling, their image and traits became ingrained in community memory as necessary for the future. Because of them and the machines they controlled, new symbolic identities were formed, dismissed, and made immovable, stretching what held a community together across lives and worlds unique to the imaginations of its members.
Bibliography
Aspray, William and Donald Beaver. (1986). "Marketing the Monster: Advertising Computer Technology," Annals of the History of Computing, vol. 8, no. 2, pp. 127-143. doi: 10.1109/MAHC.1986.10038.
Bolt, A. B., Harcourt, J. C., Hunter, J. (1966). We Built Our Own Computers. Cambridge: Cambridge University Press.
Boyd, M. (2008). Jim Crow Nostalgia: Reconstructing Race in Bronzeville. Minneapolis: University of Minnesota Press.
Brown, Tamara, Gregory Parks, and Clarenda Phillips. (2012). African-American Fraternities and Sororities: the Legacy and the Vision, 2nd ed, Lexington: University Press of Kentucky.
Cogan, Brian. (2005) “Framing usefulness: An examination of journalistic coverage of the personal computer from 1982–1984,” Southern Journal of Communication, vol. 70, no. 3, pp. 248-265. doi: 10.1080/10417940509373330.
Foner, Philip and Ronald Lewis. (1983/2019) "The Black Worker from the Founding of the CIO to the AFL-CIO Merger, 1936-1955.” Philadelphia: Temple University Press, pp. 511.
Gibbons, Kelcey. (2022). Inventing the Black Computer Professional. In J. Abbate and S. Dick (Eds.), Abstractions and Embodiments: New Histories of Computing and Society (pp. 257-276). Johns Hopkins University Press.
McDonough, John and Karen Egolf. (2003). “Computers,” In The Advertising age encyclopedia of advertising, New York: Routledge.
O'Kelly, Charlotte. (Spring 1982). "Black Newspapers and the Black Protest Movement: Their Historical Relationship, 1827-1945.” Phylon, vol. 43, no. 1, pp. 13.
Taylor, Anita. (March 1970). “Computer Whiz Kid,” Ebony, pp. 17.
Taylor, Anita. (December 1969). “Computer Whiz Kid,” Ebony, pp. 101-104.
Tierney, Sherry. (2008). “Rezoning Chicago's Modernisms: 1914–2003” (Master's thesis, Arizona State University), 6-99.
Tufekci, Zeynep. (2017). Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale University Press.
Woodson JR, Jesse. (September 1969). “Job for Computer.” New Pittsburgh Courier, 14.
Kelcey Gibbons (August 2022). “Framing the Computer.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 54-64.
About the Author: Kelcey Gibbons is a PhD student in the History, Anthropology, and Science, Technology, and Society (HASTS) program at MIT. She studies the history of the African American experience of technology with a focus on engineering and computing communities of the late 19th through the mid-20th centuries.
With its tendency to grip popular imaginaries and utopian fantasies, artificial intelligence has crystallized the enduring hope for easy technological solutions to the world’s greatest problems and fears (Haigh & Ceruzzi, 2021; Plotnick, 2018). It has been hailed as “the magic wand destined to rescue the global capitalist system from its dramatic failures” (Brevini, 2022, p. 28), and has been positioned as the linchpin of modern civil society. But, while developments in artificial intelligence technologies are commonly considered among the most important factors in shaping the modern condition, they have also exacerbated inequality, ushered in a new era of discrimination (D’Ignazio & Klein, 2020; Benjamin, 2018; Radin, 2017), left irreversible environmental damage (Brevini, 2022; Dauvergne, 2021), worsened labour struggles (Frey, 2021; Pasquale, 2020; Gray & Suri, 2019), and concentrated power – and wealth – in the hands of the privileged elite (Brevini, 2022; Crawford, 2020; Frey, 2019). As such, critically studying artificial intelligence requires a multifaceted understanding of it as being both controllable and controlling, dependent and autonomous, minimal and robust, submissive and authoritative, and determined and determinable.
To fully understand these binaries and their implications, artificial intelligence research undertaken in the humanities and social sciences warrants a long-term, historical approach that views artificial intelligence in the broader context of technological development, including the social, political, environmental, and cultural forces impacting it. This is especially the case given that the so-called “artificial intelligence boom” in academia has led to a bias towards works published in the last couple of years. But, if properly informed by the past, artificial intelligence research is more likely to prepare users for the future while also shedding light on the ways that we must act differently in the face of technological change.

Courtesy of Barcelona.cat, CC BY-NC-ND 2.0.
Technology development and usage carries the imprint of political, ontological, and epistemological ideologies, such that every modern technology, including and especially artificial intelligence, is an infinitesimal representative of not just what users know, but how users come to know. Insofar as the humanities and social sciences are interested in technology as an instigator of cultural change, these disciplines must centralize its historical and epistemological dimensions, and investigate how, at every major historical moment in the development of modern technology and artificial intelligence/computational systems, users have adapted to new forms of knowledge-making.
Although most research in humanities and social sciences exhibits some kind of historical immediacy, it tends to be detached from larger epistemological considerations that align with major historical moments of change. Understanding, at each major technological juncture, how technology users come to know, may be crucial to developing better knowledge about technology (including artificial intelligence), its users, and the world.
This research would involve a multifaceted, interdisciplinary methodology that is both “anti-modern” and philosophical. Edwards (2003), for example, suggests that any historical and archival approach to technological inquiry necessarily avoids falling into the trap of “technological determinism” that plagues so much current artificial intelligence research, especially research conducted through short-term analyses. Selective attention primarily to the “modern” aspects of infrastructures can produce blindness to other aspects that may, in fact, be “anti-modern”; as Golumbia (2009) contends, irrespective of “new” technologies, human societies remain bound by the same fundamental forces that they always have been, so technological changes are best seen as shifts in degree rather than in kind. For this reason, technology ought to be assessed with reference to the past, especially because the computer apparatus leaves “intact many older technologies, particularly technologies of power, yet puts them into a new perspective” (Bolter, 1984, p. 8).

Courtesy of brewbooks, CC BY-SA 2.0.
This approach to artificial intelligence research would model a different kind of temporal orientation for the humanities and social sciences that is rooted in the recognition that both ethereal, “cloud-like” technologies and “resolutely material media” (Mattern, 2017) have always co-existed. Because the old and the new necessarily overlap, it is important to draw archival linkages to produce more precise and comprehensive evaluations of technology and technological change. As Chun (2011) notes, new media races simultaneously towards the future and the past in that “the digital has proliferated, not erased, [historical] media types” (p. 11, 139).
An historical way forward may also be key to confronting and dismantling algorithmic coloniality, the idea that colonial logics are replicated in computational systems, including in how sovereignty and land exploitation are embedded in the digital territory of the information age (Mohamed, Isaac, & Png, cited in Acemoglu, 2021; Lewis et al., 2020; Radin, 2017). Algorithmic coloniality suggests that the dominance and manipulative power of the world’s largest technology corporations mirrors traditional strategies of imperial colonizers (Brevini, 2022, p. 95). While the benefits of technological innovation accelerate economic gains for the privileged elite, Mohamed, Isaac, and Png (cited in Acemoglu, 2021) argue that any pathway to shared prosperity must address colonial legacies and three distinct forms of algorithmic harms: algorithmic oppression, exploitation, and dispossession (p. 61). Doing so is not only consequential for people who identify as being Indigenous; it may provide the tools necessary for intervening in the perpetuation of discrimination, generally (Radin, 2017). This, Lewis et al. (2020) claim, forms a powerful foundation to support Indigenous futurity (p. 65) while injecting artificial intelligence development with new ontologies whose imaginations and frameworks are better suited to sustainable computational futures (p. 6).
Extending from this, an historical approach may also be key to recognizing “non-Western,” alternative ways of knowing and being, including how “non-Western” technology may influence future iterations of artificial intelligence technologies. This is made clear in the Indigenous Protocol and Artificial Intelligence Working Group’s explanation of the potential links between artificial intelligence technologies and both the Octopus Bag – a multisensorial computing device – and Dentalium – tusk-like shells filled with “computational fluid dynamics simulations” (Lewis et al., 2020, pp. 58-69). This approach, however, may present methodological challenges as researchers try to embrace the nourishing aspects of our traditional value systems while still accommodating modernity.
An historical approach may also serve environmental considerations well, especially in the context of the humanities and social sciences. Adequate research on renewability, ecofuturisms, and the environmental costs of artificial intelligence should span the entire production chain, including the historical circumstances in which those “productive” relationships arose. This view is critically important to exposing the environmental effects of technology, while recognizing that the ecological and social precarity caused by technology is not just a timely and urgent concern, but also one with a rich history. Too much recent and short-term research looks at the ecological impacts of artificial intelligence as a “new” phenomenon, rather than one that replicates historical trends, albeit at modern consumption rates (which make environmental effects seem historically unique). Informed by the past, environmental research about technology is more likely to prepare users for the future while also shedding light on the ways that we may want and need to act differently in the face of technological change.
An historical approach to studying artificial intelligence may also help us to: 1) re-evaluate the consumptive ideologies underpinning environmental AI discourse; 2) begin to view data as a non-renewable resource; 3) construct a new genealogy of contemporary technological culture that centers bodily subjects; and, 4) perhaps even consider acting against technological progressivism by halting the production of new “innovations” that “datafy” manual or semi-manual sectors and technologies, merely for the sake of it.
These suggestions would challenge the dominance of artificial intelligence technologies, provide different ways to imagine technological innovation and its cultural implications, and re-envision a world that may not rely on technology to solve the most pressing social, environmental, and political questions. These perspectives could also drastically change our view of the relationship between people, energy, and information. Although these considerations may seem radical and aspirational, they are necessary if we want to reorient perspectives in artificial intelligence research and think about the agents – both human and non-human – that are served and impacted by today’s dominant visions for the future of technology.

Courtesy of https://spectrum.library.concordia.ca/id/eprint/986506/7/Indigenous_Protocol_and_AI_2020.pdf.
Utopian and idealistic views of artificial intelligence are justified by a host of corporate, governmental, and civil actors, who have four major reasons for supporting the continued use and development of artificial intelligence:
- Leveraging computational speed to make work more efficient;
- Appearing to improve the perceived accuracy, fairness, or consistency of decision-making (in light of so-called “human fallibility”);
- Similarly, appearing to depoliticize decision-making by placing it out of reach of human discretion; and,
- Deploying artificial intelligence technologies to solve pressing environmental issues.
These motivations, especially when devoid of historical consideration, have led to an automation bias whereby humans tend to trust computational tools more than they probably should. This raises distinct concerns about oversight and responsibility and about the ability to seek recourse in the wake of computational error. In other words, any motivation to use and deploy artificial intelligence technologies necessarily presses up against regulatory, legal, and ethical questions because, at its core, artificial intelligence can distort people’s perception of each other and of the structures and systems that govern their lives. This is especially true when such technology is viewed as being inherently modern, rather than merely part of a longer, historical lineage of technological advancement.

Courtesy of https://spectrum.library.concordia.ca/id/eprint/986506/7/Indigenous_Protocol_and_AI_2020.pdf.
In this sense, studying artificial intelligence with an historical orientation is as much about people, culture, and the world as it is about the technology itself. Artificial intelligence is people-populated. It is reliant on human bodies and brains. It is dependent on human hands and eyes. It is fueled by us. But technochauvinism and techno-optimism (both inherently modernist ideologies) hinder our ability to see this. Instead, artificial intelligence perpetuates the fantasy of ever-flowing, uninterrupted, and curated streams of information, technological solutionism, and optimism about artificial intelligence’s ability to solve the world’s most pressing questions – as long as it’s designed with “humans in the loop.” This framing, though, limits and constrains human agency and autonomy by positioning humans as a mere appendage to the machine. This view relies only on small tweaks to the current automated present and fails to account for artificial intelligence imaginaries informed by the past that may better address the harms and inequities perpetuated by current artificial intelligence systems.
A strictly modernist approach to artificial intelligence and automation in general has hampered people’s ability to imagine alternatives to artificial intelligence systems, despite overwhelming evidence that the integration of those systems into our everyday lives disproportionately benefits the wealthy elite and creates undue harm to vulnerable groups (Acemoglu, 2021; D’Ignazio & Klein, 2020; Benjamin, 2018; Radin, 2017). This is because, without an historical orientation, it is natural – and easy – to view artificial intelligence as not only representative of the future, but also as actively shaping it by both opening and closing imaginative possibilities of what the world can become with the “help” of new technologies.
Instead, I’d like to draw attention to an alternative vision: what if we resist the urge to build, deploy, and use new computational systems? What if we begin to realize that technology might not be our world’s saviour? What if we choose to slow down and work intentionally and mindfully instead of quickly? These questions are not meant to elide the important computational work currently carried out by and through artificial intelligence systems, including and especially in medical applications and in services that are too dangerous for human actors to perform. Instead, this alternative vision for the future, which is deeply rooted in historicity, simply resists viewing technology as determined, and instead sees it as being determinable. It reorients power in the favour of human agents rather than technological ones.
Perhaps the “AI question” can only be solved when people are empowered to imagine futures beyond the dominance of techno-utopianism. After all, new imaginaries are really mostly dangerous to those who profit from the way things currently are. Alternative futurisms have the power to show that the status quo is fleeting, non-universal, and unnecessary, and although artificial intelligence has changed the world, people have the ultimate power to shape it.
Bibliography
Acemoglu, D. (2021). Redesigning AI: Work, democracy, and justice in the age of automation. Massachusetts: MIT Press.
Benjamin, R. (2019). Race after technology. Cambridge: Polity Press.
Bolter, J. (1984). Turing’s man: Western culture in the computer age. University of North Carolina Press.
Brevini, B. (2021). Is AI good for the planet? Cambridge: Polity Press.
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. Massachusetts: MIT Press.
Chun, W. (2011). Programmed visions: Software and memory. Massachusetts: MIT Press.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press.
Dauvergne, P. (2021). AI in the wild: Sustainability in the age of artificial intelligence. Massachusetts: MIT Press.
D’Ignazio, C., and Klein, L.F. (2020). Data feminism. Massachusetts: MIT Press.
Edwards, P.N. (2003). Infrastructure and modernity: Force, time, and social organization in the history of sociotechnical systems. In Modernity and Technology (eds. Misa, T.J., Brey, P., and Feenberg, A.). Massachusetts: MIT Press.
Frey, C.B. (2021). The technology trap: Capital, labor, and power in the age of automation. New Jersey: Princeton University Press.
Golumbia, D. (2009). The cultural logic of computation. Massachusetts: Harvard University Press.
Gray, M., and Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Mariner Books.
Haigh, T., and Ceruzzi, P.E. (2021). A new history of modern computing. Massachusetts: MIT Press.
Lewis, J. et al. (2020). Indigenous Protocol and Artificial Intelligence: Position Paper. Indigenous Protocol and Artificial Intelligence Working Group. https://www.indigenous-ai.net/position-paper/
Pasquale, F. (2020). New laws of robotics: Defending human expertise in the age of AI. Massachusetts: Harvard University Press.
Radin, J. (2017). “Digital natives”: How media and Indigenous histories matter for big data. Osiris, 32(1).
Schwab, K. (2017). The fourth industrial revolution. New York: Penguin.
Helen A. Hayes (May 2022). “New Approaches to Critical AI Studies: A Case for Anti-Modernism and Alternative Futurisms.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 45-53.
About the Author: Helen A. Hayes is a Ph.D. Candidate at McGill University, studying the relationship between artificial intelligence, its computational analogs, and the Canadian resource economy. She is also a Policy Fellow at the Centre for Media, Technology, and Democracy at the Max Bell School of Public Policy. She can be reached at helen.hayes@mcgill.ca or on Twitter at helen__hayes.

Cybernetics, an intellectual movement that emerged during the 1940s and 1950s, conceived of the body as an informational entity. This separation of the mind and body, and the prioritization of the mind as a unit of information, became a liberating quality as the capitalist world of industrialism, with its mechanical and earthly labor, bound the liberal subject in shackles. The cybernetic subject, in contrast, as “a material-information entity whose boundaries undergo continuous construction and reconstruction,” floated in the permeable ether of information and technology (How We Became Posthuman, 3). Marxist issues of social alienation and scarcity were resolved by the interconnectedness of information-based beings, and hierarchical labor relations were replaced with more communal forms of exchange. A new utopia was thus formed with the advent of digital communication (Brick, 348).
This dematerialized, cybernetic body converged with the creation of technology through the work of the industrial designer Henry Dreyfuss. Dreyfuss, who drafted what can be considered early user personas out of data collected from the military, utilized these imagined bodies for the testing of physical products. Dreyfuss’ designs, or what he labeled as “Joe and Josephine,” quantified the human experience. This model of testing and iterating designs based on dematerialized conceptions of the body was later incorporated into the development of technology by computer scientists such as Ben Shneiderman, who claimed in a 1979 paper that Dreyfuss’ emphasis on the human experience must be considered by engineers and designers. As scholars such as Terry Winograd and John Harwood claim, Dreyfuss’ methodology became the model for user testing that has remained relevant for interaction designers ever since its publication in 1955.
However, as Katherine Hayles argues in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999), the dematerialized body as conceived of by Dreyfuss is problematic. To put it simply, "equating humans and computers is especially easy" if the mind is both an egoless and informational resource to be shared (How We Became Posthuman, 2). Yet, this sort of epistemology neglects embodied and subjective experiences. Race, class, and gender relations cannot be erased by what she labels the "posthuman," and while Hayles published her book over two decades ago, this issue is still pressing in the field of design. As Sasha Costanza-Chock describes in their book Design Justice: Community-Led Practices to Build the Worlds We Need, a "nonbinary, trans, femme-presenting person" is unable to walk through an airport scanner without getting stopped because the system has been built to represent "the particular sociotechnical configuration of gender normativity" (Costanza-Chock, 2020). The system, in identifying and classifying the body as information, misses crucial identities. In a paper published in 2018 titled "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," authors Timnit Gebru and Joy Buolamwini examined a similar problem of bodily erasure (Buolamwini and Gebru, 2018). Gebru and Buolamwini found that facial recognition systems trained on biased data sets representing faces of mostly white men will, unsurprisingly, become biased. The bodies of women and people of color, in this example, are made invisible through their translation into information. As Aimi Hamraie writes in their book Building Access: Universal Design and the Politics of Disability:
Ask an architect about their work, and you may learn more about the style, form, materials, structure, and cost of a building than the bodies or minds meant to inhabit it. Examine any doorway, window, toilet, chair, or desk in that building, however, and you will find the outline of a body meant to use it. From a doorframe's negative space to the height of shelves and cabinets, inhabitants' bodies are simultaneously imagined, hidden, and produced by the design of built worlds. (Hamraie, 19)

World's Fair in New York City.
Courtesy of Manuscripts and Archives Division, The New York Public Library. (1935 - 1945).
Architects, industrial designers, and interaction designers wield power when they craft who they imagine will use their built environments, and when they ignore their own biases, designs are built to reify hegemonic systems. There is thus a larger issue of disembodiment which needs to be researched as it relates to the contemporary methodologies of interaction designers.
The relationship between designers and human bodies has a long history. As Christina Cogdell argues in Eugenic Design, the "scientific possibilities of controlling heredity through environmental manipulation inspired reform-minded architects and designers" during the early twentieth century, specifically (Cogdell, 16). Cogdell finds that early industrial designers such as Dreyfuss were swept up in a movement to "streamline" design much in the same way that eugenicists looked to "streamline" the human body (Cogdell, 52-53). Cogdell cites examples such as the 1939 New York World's Fair, which featured Dreyfuss' work against a backdrop that used streamlining as a medium through which to promote democracy (Cogdell, 2004). Events such as these demonstrate that the industrial desire to create "perfect" environments and "perfect" bodies was not unique to the United States. In Eugenics in the Garden: Transatlantic Architecture and the Crafting of Modernity, Fabiola López-Durán argues that Lamarckian evolution was "an invitation to social engineering" for Latin American nations at the turn of the twentieth century (López-Durán, 4). This form of evolutionary theory fostered an "orientation toward environmental-genetic interaction, empowering an apparatus that made race, gender, class, and the built environment critical measurements of modernity and progress" (López-Durán, 4). While Dreyfuss was engaged with this period of industrial design, this paper departs from these histories by situating Dreyfuss within the post-war era. Nevertheless, this paper recognizes that Dreyfuss' connection to streamlined bodies may have informed his notion of user-testing, and this is an important consideration when reviewing images of Joe and Josephine.
In this essay, I will explore the cybernetic conception of the body as it relates to the development of technology. More specifically, I will argue that user testing practices, conceived within the historical and cultural context of cybernetics, envisioned that any human figure might represent all human figures. However, as examined previously, this perception of the body as universal ignores the subjective, material, and embodied experiences of users, contributing to the biased systems we see today. This essay will begin with an exploration of the cybernetic notion of the body. It will then explore how this concept converged with the advent of user testing practices and the development of user personas, or skeuomorphic designs used for the creation of digital products. It will, lastly, attempt to correct the histories of industrial design and interaction design by reconfiguring the work of Dreyfuss. These efforts will hopefully extend contemporary literature such as the work of Costanza-Chock, Gebru, Buolamwini, and Hamraie, through a re-examination of history.
1950s Cybernetics
Cybernetics emerged as a dominant field in the 1950s through the work of Norbert Wiener and the publication of The Human Use of Human Beings (1950). In The Human Use of Human Beings, Wiener describes a type of communicative society in which humans act as individual, informational units, or automata. These informational, monadic systems relay messages to one another, and through the process of feedback, establish homeostasis. There is thus both a teleological and biological aspect to early descriptions of cybernetics. Like a beehive which has been disrupted, or a flock of geese attempting to take flight, all units must find their place through the interaction and exchange of information with others. This artful dance prioritizes utilitarianism and positivism. The gathering of information through interaction is essential, and in this way, each monad learns to operate as a collective, resisting natural entropic dissolution. The body is thus an extension of and harbor for information. As Wiener writes:
...the individuality of the body is that of a flame rather than that of a stone, of a form rather than of a bit of substance. This form can be transmitted or modified and duplicated...When one cell divides into two, or when one of the genes which carries our corporeal and mental birthright is split in order to make ready for a reduction division of a germ cell, we have a separation in matter which is conditioned by the power of a pattern of living tissue to duplicate itself. Since this is so, there is no absolute distinction between the types of transmission which we can use for sending a telegram from country to country and the types of transmission which at least are theoretically possible for transmitting a living organism such as a human being. (Wiener, 102-103)
The mechanisms of the body, and their ability to maintain life and homeostasis, provide inspiration for the natural, organic order of cybernetics, but nothing more. Information, messages, and communication are key, while the embodied experience, insofar as it is not used to relay messages, is inconsequential.
As Katherine Hayles argues in her article "Boundary Disputes: Homeostasis, Reflexivity, and the Foundations of Cybernetics," this divorce of the body from information was essential in the first wave of cybernetics. Hayles outlines three waves of cybernetics, the first two of which concern our argument here. The first wave, from 1945 to 1960, "marks the foundational stage during which cybernetics was forged as an interdisciplinary framework that would allow humans, animals, and machines to be constituted through the common denominators of feedback loops, signal transmission, and goal-seeking behavior" ("Boundary Disputes", 441-467). This stage was established at the Macy conferences between 1946 and 1953, and it was at the Macy conferences, Hayles argues, where humans and machines were "understood as information-processing systems" ("Boundary Disputes", 442). It is also within this first wave that homeostasis was perceived as the goal of informational units. Following the chaos and disillusionment of World War II, first-wave cyberneticians found stability to be paramount. The Macy conferences were thus focused on this homeostatic vision.
However, psychoanalytical insight at the conference helped sow ideas for second-wave cybernetics. If man is to be viewed as a psychological subject, in translating the output of one machine into commands for another, he introduces noise into the teleological goal of homeostasis. This reflexive process, or one in which the autonomy of both subjects is to be considered, disrupted the first-wave one-directional model. As Hayles describes, Lawrence Kubie, a psychoanalyst from the Yale University Psychiatric Clinic “enraged other participants [at the conference] by interpreting their comments as evidence of their psychological states rather than as matters for scientific debate” (“Boundary Disputes,” 459). Nevertheless, while the issue of reflexivity may not have won at the Macy conferences, it later triumphed over homeostasis through the work of biologist Humberto Maturana. Maturana rescued the notion of reflexivity by emphasizing that through the rational observer, the black box of the human mind might be quantified. This new feedback process introduced an autopoietic version of reflexivity in which both man and machine might improve through interaction, resolving the threat of subjectivity. Through both waves of cybernetics, cyberneticians instantiated the concept of the body as immaterial.
Designing for People, Joe, and Josephine
The cybernetic body converged with the development of technology in the 1950s through the work of the industrial designer Henry Dreyfuss. Dreyfuss, who was considered to be one of the most influential designers of his time, developed a model for user-testing through skeuomorphic designs which quantified the human experience. While Dreyfuss was not the first to conceive of user testing, he was the first to develop popular user personas. As Jeffrey Yost notes in Making IT Work: A History of the Computer Services Industry, the RAND Corporation's Systems Research Laboratory conducted a simulation study labeled Project Casey that used twenty-eight students to test man-machine operations for early warning air defense (Yost, 2017). The practice of interviewing early adopters of a system continued into the 1960s in time-sharing projects such as Project MAC, in which psychologists such as Ulric Neisser interviewed users about their phenomenological experience with a computer system. It was Dreyfuss, however, who developed pseudo-users that might be used on a wide scale. While command and control computing and human factors research demanded testing for specialized users, Dreyfuss aimed, as an industrial designer, to create products for the masses. He therefore looked to craft images of what he deemed lay people for the creation of physical products.
First introduced in his book Designing for People (1955), Joe and Josephine represent Dreyfuss' perception of the "average" man and woman. They have preferences and desires, they are employed, and most importantly, they are forms of a Platonic ideal that can be used for testing products. Like cyberneticians such as Maturana, Dreyfuss seems to have recognized the reflexivity between man and machine. Using Joe and Josephine, Dreyfuss tested the interaction between a product and its imagined user in order to improve its usability. Dreyfuss' book was met with much praise, attesting to the importance of his new model. A review in The New York Times from 1955 titled "The Skin-Men and the Bone-Men" credits Dreyfuss for being a "skin man" who hides the complexity of a mechanism behind its skin (Blake, 1955). In a review from The Nation from the same year, author Felix Augenfeld also credits Dreyfuss for "his fantastic organization and an analysis of his approach to the many-sided problems the designer must face" (Augenfeld, 1955). Joe and Josephine were thus considered innovative figures upon their publication.
As machine-like entities, Joe and Josephine reflect the discussions of the Macy conferences, and as models for user-testing, they resemble second-wave reflexivity. However, it is unclear what interactions Dreyfuss had with cybernetics during the 1950s. In an article titled “A Natural History of a Disembodied Eye: The Structure of Gyorgy Kepes's ‘Language of Vision’” author Michael Golec describes letters between the cybernetician Gyorgy Kepes and Dreyfuss from the early 1970s (Golec, 2002). Dreyfuss also illustrated a chapter of Kepes’ book Sign, Image, Signal (1966), indicating another touch point between the designer and the cybernetician (Blakinger, 2019). The cybernetician Buckminster Fuller wrote the introduction to a publication by Dreyfuss titled Symbol Sourcebook: an Authoritative Guide to International Graphic Symbols (1972), providing a final touch point between Dreyfuss and cybernetics. Nevertheless, there is no direct evidence that Dreyfuss knew of the Macy conferences, and this question needs more research.
Despite the question of Dreyfuss' interaction with cybernetics, Dreyfuss' new model was adopted into cybernetic software and hardware development processes by the 1970s. In a paper by computer scientist Ben Shneiderman titled "Human Factors Experiments in Designing Interactive Systems" (1979), Shneiderman cites Dreyfuss as someone who provides "useful guidance" for the development of computer systems (Shneiderman, 9). Shneiderman also credits Dreyfuss with a user-centered approach that prioritizes the friendliness and compatibility of computer systems with their human users. He advocates for "personalizing" the computer by using human testers, and while he does not directly mention Joe and Josephine, he does state that designers should know their users (Shneiderman, 11). Shneiderman, additionally, cites various cybernetic articles, merging Dreyfuss with cybernetics once again. This process of crafting personas to test prototypes, outlined by Shneiderman, is a practice which has continued into the present day.
The work of scholars such as John Harwood and Terry Winograd demonstrates the permanence of Joe and Josephine in the history of technology. In The Interface: IBM and the Transformation of Corporate Design, 1945-1976, Harwood describes The Measure of Man, a 1959 publication by Dreyfuss which expounded on Joe and Josephine. Harwood finds that The Measure of Man is the primary source for graphic and ergonomic standards within the United States, England, and Canada. He writes that it is "the first and most important, comprehensive collection of human engineering or ergonomic data produced explicitly for architects and industrial designers" (Harwood, 94). Winograd echoes Harwood's claims in an article titled "Discovering America: Reflections on Henry Dreyfuss and Designing for People." Winograd notes that Dreyfuss has been a key figure in the creation of courses for Stanford's d.school, as he is understood as having created the model for empathizing with the user via Joe and Josephine (Winograd, 2008). Both Winograd and Harwood reflect a common perception that Dreyfuss initiated a Kuhnian paradigm shift in the field of design. Through Joe and Josephine, Dreyfuss assisted designers in moving away from the linear development model of Fordism and towards one of circular, iterative feedback. Yet, it is precisely this heroic view of Dreyfuss that I wish to contest, for although Dreyfuss' work is significant, Joe and Josephine introduced the use of biased data into product development. Indeed, Winograd makes mention of this flaw when he notes that with Joe and Josephine we must also "keep visible reminders of the subtler and less easily depictable social and cultural differences that determine the compatibility of people with products and interfaces…" (Winograd, 2008). However, I argue there is a deeper issue here which is emboldened by cybernetic theory and hidden in the construction of Joe and Josephine. While Joe and Josephine represent the "average" man and woman according to Dreyfuss, they also reflect his bias as a designer and his inability to recognize the quantified body as subjective.

The Designer as World Builder
In tracing the transition from homeostasis to reflexivity, Hayles makes note of a complication which elucidates this issue. In analyzing the work of Humberto Maturana and Francisco Varela, two second-order cyberneticians, she finds that Maturana and Varela were system builders who created a system by drawing boundaries to determine what was to be included inside, and what was out (How We Became Posthuman, 188). As Hayles writes, "Consistent with their base in the biological sciences, Maturana and Varela tend to assume rational observers…Granting constructive power to the observer may be epistemologically radical, but it is not necessarily politically or psychologically radical, for the rational observer can be assumed to exercise restraint" (How We Became Posthuman, 188). The solution to reflexivity conceived in second-order cybernetics is therefore flawed. If the rational observer can quantify the human subject, who is it that edits the observer? An image by computer scientist Jonathan Grudin visualizes this idea. In "The Computer Reaches Out: The Historical Continuity of Interface Design," Grudin sketches the feedback process between the user and the computer (Grudin, 1989). In the image, a computer reaches out to a user, and the user reaches back. The user is also connected to a wider network of users, who reach back to the user, and therefore to the computer as well. In this system, there is an endless chain of interaction among users and observers, calling into question who is observing whom. As such, no one user can claim to be a world-builder, as they are enmeshed in a socio-material environment.
Dreyfuss, however, claims this title. Joe and Josephine not only represent universal versions of man and woman like Adam and Eve, but they are the “hero” and “heroine” of Designing for People. Yet, as Russell Flinchum writes in the book Henry Dreyfuss, Industrial Designer: the Man in the Brown Suit, a “hodgepodge” of information was interpreted by Dreyfuss’ designer Alvin Tilley to construct Joe and Josephine (Flinchum, 87). Additionally, while the exact reports that Dreyfuss drew from are unclear, we can surmise from which reports he drew. In an oral history with Niels Diffrient, one of Dreyfuss’ designers who later iterated on Joe and Josephine, Diffrient states:
...Henry himself had the brilliance, after the Second World War, in which he had done some wartime work of carrying on what he'd learned about human factors engineering...You see, a lot of the war equipment had gotten so complex that people didn't fit into things and couldn't operate things well, like fighter planes, all the controls and everything...So a specialty grew up — it had been there, but hadn't gone very far — called human factors engineering...we found out about these people who were accumulating data on the sizes of people and began to get a storehouse, a file, on data pulled together from Army records, the biggest of which, by the way, and the start of a lot of human factors data, was the information they had for doing uniforms because they had people of all sizes and shapes. (Oral History with Niels Diffrient, Archives of American Art, 2010).
In a later letter written to Tilley, Tilley is asked about the specific type of Army data, helping to track which files Dreyfuss may have obtained. The inquirer states that "'...Douglas Aircraft called to ask if it [The Measure of Man] was available...He asked if the source or sources from which all this data was gathered has been noted'" (Archives of American Art, 2010). Dreyfuss, who had worked on projects for the Vultee Aircraft company during the war, is therefore likely to have used Air Force data as a major source for Joe and Josephine (Flinchum, 1997). A later report on military anthropometric practices supports this claim. The report, titled "Sampling and Data Gathering Strategies for Future USAF Anthropometry," mentions that the work of Francis Randall at Wright Field was an excellent example of proper data collection practices during WWII (Churchill and McConville, 1976). Randall's document, "Human Body Size in Military Aircraft and Personal Equipment," contains countless drawings of fighter pilot dimensions (Randall, 1946). In the book The Measure of Man and Woman, which improved on the designs of Joe and Josephine, Dreyfuss' team appears to have been inspired by the depictions of fighter pilots in Randall's work. A comparison of an image of Joe in a compartment with images of fighter pilots demonstrates how closely aligned Dreyfuss was to military practices.
However, Randall's report also indicates the long-standing practice of classifying and quantifying bodies based on normative standards prevalent within a specific cultural moment. The manipulation of bodies for military data collection practices, and the exclusion of bodies that do not fit a certain "norm" from these data sets, have a long history that cannot be revisited here, but one which indicates that the inspiration for Joe and Josephine was based on biased data. Consequently, the shapes of the Joe and Josephine personas, which heavily influenced both industrial design and computer design practices, represent biased images. There must be continued investigation into which reports Dreyfuss gathered, but it appears likely that he used skewed data to construct his influential designs.

Dreyfuss Today
It is difficult to measure the outcome of such flawed practices, but the work of Dreyfuss has resonated throughout the century. The ripple effect of Joe and Josephine, and the countless products drafted from these designs, brings forth a new variable to consider in the construction of digital products. This paper is therefore a response to the many accounts which have canonized Dreyfuss within the history of industrial design, and consequently, the history of interaction design. As demonstrated through the reference to Winograd, Dreyfuss' efforts are still taught in the classroom. However, through the conception of both real and imagined spaces, designers envision an ideal user, and this user can either represent the multiplicity of complex, messy, and beautiful bodies, or it can represent a "universal" ideal which never truly existed. Tracing the genealogy of these imagined users to their origins is essential for improving the testing practices of our modern moment.
Bibliography
Augenfeld, F. (1955, August 6). Masterpieces for Macy's. The Nation.
Blake, P. (1955, May 15). The Skin Men and the Bone Men. The New York Times.
Blakinger, J. R. (2019). Gyorgy Kepes: Undreaming the Bauhaus. Cambridge, MA: The MIT Press.
Brick, H. (1992). Optimism of the mind: Imagining postindustrial society in the 1960s and 1970s. American Quarterly, 44(3), 348. doi:10.2307/2712981
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Conference on Fairness, Accountability, and Transparency, Proceedings of Machine Learning Research.
Churchill, E., & McConville, J. T. (1976). Sampling and data gathering strategies for future USAF anthropometry. Wright-Patterson Air Force Base, OH: Aerospace Medical Research Laboratory.
Cogdell, C. (2010). Eugenic design: Streamlining America in the 1930s. Philadelphia, PA: University of Pennsylvania Press.
Costanza-Chock, S. (2020). Design Justice. Cambridge, MA: The MIT Press.
Dreyfuss, H. (1976). Measure of Man. Watson-Guptill.
Dreyfuss, H. (2012). Designing for People. New York, NY: Allworth Press.
Dreyfuss, H. (2014). Posters, The Measure of Man (Male and Female) [Cooper Hewitt Design Museum]. Retrieved 2022, from https://collection.cooperhewitt.org/objects/51497617
Erickson, T., Winograd, T., & McDonald, D. (2008). Reflections on Henry Dreyfuss, Designing for People. In HCI Remixed: Essays on Works That Have Influenced the HCI Community. Cambridge, MA: MIT Press.
Flinchum, R. (1997). Henry Dreyfuss, Industrial designer: The man in the brown suit. New York: Rizzoli.
Golec, M. (2002). A Natural History of a Disembodied Eye: The Structure of Gyorgy Kepes's Language of Vision. Design Issues, 18(2), 3-16. doi:10.1162/074793602317355747
Grudin, J. (1989). The Computer Reaches Out: The Historical Continuity of Interface Design. DAIMI Report Series, 18(299). doi:10.7146/dpb.v18i299.6693
Hamraie, A. (2017). Building Access: Universal Design and the Politics of Disability. Minneapolis, MN: University of Minnesota Press.
Harwood, J. (2016). The Interface: IBM and the Transformation of Corporate Design, 1945-1976. Minneapolis, MN: University of Minnesota Press.
Hayles, N. K. (1994). Boundary disputes: Homeostasis, reflexivity, and the foundations of Cybernetics. Configurations, 2(3), 441-467. doi:10.1353/con.1994.0038
Hayles, N. K. (2010). How we became posthuman: Virtual bodies in cybernetics, literature, and Informatics. University of Chicago Press.
López-Durán, F. (2019). Eugenics in the garden: Transatlantic architecture and the crafting of modernity. Austin, Texas: University of Texas Press.
Oral history interview with Niels Diffrient. (2010). Retrieved March 7, 2022, from https://www.aaa.si.edu/collections/interviews/oral-history-interview-niels-diffrient-15875
Randall, F. E. (1946). Human Body Size in Military Aircraft and Personal Equipment. Dayton, OH: Army Air Forces Air Material Command.
Shneiderman, B. (1979). Human Factors Experiments in Designing Interactive Systems. Computer, 12(12), 9-19. doi:10.1109/mc.1979.1658571
Tilley, A., & Henry Dreyfuss and Associates. (1993). Drawing 36. The Measure of Man and Woman.
Vultee Aircraft, Inc., military aircraft. (n.d.). Retrieved March 7, 2022, from https://www.loc.gov/item/2003690505/.
Wiener, N. (1967). The Human Use of Human Beings: Cybernetics and Society. New York, NY: Avon Books.
Yost, J. R. (2017). Making IT Work: A History of the Computer Services Industry. Cambridge, MA: MIT Press.
Caitlin Cary Burke (March 2022). “Henry Dreyfuss, User Personas, and the Cybernetic Body.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 32-44.
About the author: Caitlin Burke is a Communication PhD student at Stanford University, where she studies user experience design, design ethics, media history, and human-computer interaction.

Courtesy of Charles Babbage Institute Archives.
The first finding is that long before computers, the Internet, or social media became available, people on both sides of the Atlantic were heavily dependent on organized (usually published) information on a regular basis. Studies about the history of the week, children's education, work-related activities, and devotion to religious and community practices have made that abundantly clear. The challenge for historians now, therefore, is to determine how best to catalog, categorize, and study this sprawling subject of information studies in some integrative, rational fashion. Do we continue to merely study the history of specific pieces of software or machinery, ephemera such as newspapers and books, or providers of information such as publishers and Google?
A Framework for Studying Information’s History
In my three-volume Digital Hand (2004-2008) and subsequently in All the Facts: A History of Information in the United States Since 1870 (2016), I shifted partially away from exploring the role of the providers of information and their ephemera toward how people used facts—information and data. In the research process, categories of everyday information began to emerge, and so did periods—think epochs, eras. As with most historical eras, these overlapped, signaling a changing world.
The same held true for the history of the types and uses of information, and, of course, the technologies underpinning them. We still use landlines and smartphones, we still fill out paper forms asking for information that has been requested for a century as well as online ones, and we of course use First Class mail and e-mail. Publishers Weekly routinely reports that only 20 percent of readers consume some e-books; 80 percent of all readers still rely on paper books, so old norms still apply. Apple may post a user manual on its website, but buy an HP printer and you are likely to find a paper manual in its shipping box.
All the Facts reported that there were types of information ephemera that existed from the 1800s to the present, supplemented by additional ones that did not replace earlier formats. Obvious new additions were electrified forms of information, such as the telegraph, telephone, radio, and TV. Paper-based information was better produced by people using typewriters and better-quality pens and pencils, and it was stored in wooden, later metal, file cabinets, on 3 x 5 cards, and still later in computers, PCs, smartphones, and now digital doorbell databases. Each improvement also made it more flexible to store information in logical ways, such as on 3 x 5 cards or in folders.

The volume of their use grew massively; humble photographs of the interiors of homes and offices taken over the past 150 years illustrate that behavior, as does the evolution of the camera, which is itself an information-handling device. Commonly used ephemera across the entire period include newspapers, magazines, books, telegraphy, telephones, radios, television, personal computers, smartphones, and other digital devices, all arriving in that order. So, any chronology or framework should take into account their use. If you are reading this essay in the 2020s, you are familiar with the myriad ways you have relied on information and appropriated these devices, with the probable exception of the telegraph, which passed into history by the early 1970s.
A second category of activities that any framework needs to incorporate, because it remained a constant topic of concern across the entire two centuries, is the information people needed to lead their personal lives, such as medical information to cure illnesses, political information to inform their opinions and voting practices, and so forth. Historians now better understand that work-related activities required massively increased uses of information to standardize work processes, run ever-larger organizations, and provide new products and services. I, and others, continue to study those realms of information use, because they kept evolving and expanding across the past two centuries—a process that shows no signs of slowing. The historical evidence points, however, to several categories of information evident in use in this period in private life. These include consulting published—I call it organized—information on taking care of one's home and raising children, sports and hobbies, vacations, and interacting with one's church, community and non-profit organizations, and with government agencies at all levels. Participation in civic and religious institutions, in particular, represented a "growth industry" for information across the two centuries. Sales volumes for books and magazines provide ample evidence of this, just as sales statistics for PCs and smartphones do today. People also relied on information available in public spaces. These included public libraries, billboard advertisements and government signs and messages along roads and highways, both painted and digitized, advertisements on the sides of buildings, and a massive increase in the use of maps available from publishers, from state highway departments, and as metal signs on roads. Users worried about privacy issues, a concern expressed in North America as early as the 1600s and still with us today.
Role of the Internet
But what about the Internet? By now the reader will have concluded that everything mentioned already had rapidly migrated to the Internet too, certainly by the very early 2000s. We have already created frameworks for phases in the development and use of the Internet, such that we accept 1994-1996 as phase one of wide use (adoption or appropriation), 1997-1998 as a second phase with the ability to conduct interactive information exchanges, a third with the introduction of feedback loops that began in about 2002-2003, and yet another involving the adoption of social media applications soon after. Each had its applications: Phase 1 with product brochures, mailing addresses, telephone numbers, and some e-mail; Phase 2 with intranets, databases, order taking, and organizational news; Phase 3 with feedback seeking, customer engagement, and business partner collaboration; and Phase 4 with the posting of personal information (remember the photographs of cats on Facebook?), communities of practice and customers sharing information, including churches, civic organizations and clubs, and the rise of information analytics. Historical data documented the rapid diffusion of these practices, such that over half the world today uses the Internet to share information (more on that later). Usage became a new central facet of people's daily lives.
Regarding the Internet's use, the most widely sought-after categories of Internet-sourced information in its early stages, which continue to the present, were political information and, even more so, pornography and health. Increasingly, too, people seek out games and, always, "how to" advice. Libraries became spaces one could go to for access to the Internet. Starting in 2007, people across the world were able to access information more quickly and more often than before due to the introduction of the smartphone.
In All the Facts we published a photograph of a public library in San Antonio, Texas, from 2013 that had no books; rather, it looked like an Apple Store with rows of screens. Today, such spaces are common in most public, school and university libraries in scores of countries. Increasingly since the early 2000s, people received growing amounts of news from sites on the Internet and today, news aggregators pull that together by a user’s preferences for topics and timelines. Religion and raising children are widely covered by information sources on the Internet. In fact, by about 2015 the public expected that any organization had to have a presence on the Internet: civic organizations, every government agency one can imagine, schools, universities, industry and academic associations, stores (including brick-and-mortar versions), political parties, clubs, neighborhood associations, and even children’s playgroups. I found few exceptions to this statement when writing All the Facts.
Historians began to catalog the types of information that became available from these organizations, beginning in the 1950s. Following the lead of librarians, who in the 1800s had started the cataloging process we are familiar with today, they organized it by types of ephemera (e.g., books, magazines, journals) and by topics (e.g., physics, history, economics). Historians are now beginning to go further: William Aspray and I, for example, with our current research about types of fake information and their impact on truth and authenticity; others exploring what information people seek and rely upon through the Internet; and still others studying how people use information on social media.
As to category of information: for example, by 2014 the Ford Motor Company was providing online information about the company, news, its products, role of innovation, people and careers, media postings, space for customer feedback, contact addresses, stock data and investor facts, a social media sub-site, facts about customer support, automotive industry facts, and declarations about privacy policies. Meticulously documenting these categories of information for thousands of such organizations demonstrates the diversity—and similarity—of types of information that one came to expect. Note, however, that the information and functions cataloged about Ford had been available in paper-based forms since the 1920s, just not as easily or quickly accessible.

Returning to the pre-Internet era, think in terms of eras (phases) by going beyond when some form of ephemera became available. The ephemera or technologies themselves added diversity, convenience, speed, less expensive communications, and the capability of moving ever-increasing volumes of information. Historians have done considerable research on these five features. However, information historians are just beginning to realize that by focusing their concerns on the information itself, pivoting away from the technologies themselves (e.g., books and computers), they see the endurance of some topics—access to medical information, other facts about raising children, or cooking recipes—regardless of format or technology used.
Thinking this way expands our appreciation for the extent of a society’s use of information and just as relevant, how individuals did too. In a series of books produced by Aspray, one could see how data-intensive the lives of people of all ages, socio-economic status, and interests became over time. I have argued in All the Facts and elsewhere that this kind of behavior, that is to say, ever-increasing reliance on organized information, had been on the rise since the early 1800s.
Recent Findings and Thinking
While All the Facts lays out the case for how we could come to the conclusion that we lived in yet a second information age—not THE Information Age of the post World War II period—that book was published in 2016 and so much has happened since then. Rapid changes in realities facing historians of information keep pounding the shores of their intellectual endeavors on three beaches: Internet usage, fake news and misinformation, and the changing forms of information.
In 2021 the Pew Research Center reported that 95 percent of American adults living in urban centers used the Internet, as did 94 percent of suburban and 90 percent of rural residents. In comparison to usage in 2015, when writing All the Facts wrapped up, urbanites were at 89 percent, suburbanites at 90 percent, and rural residents at 81 percent. Since 2000, users doubled as a percent of the total population. The overall number of Americans using the Internet in 2021 had reached 93 percent of the population. Smartphone usage also increased, now one of the top drivers of Internet usage, thanks to both the increased availability and affordability of this technology. Similar overall statistics could be cited for other OECD, Asian, and South American societies. Convenience and affordability combined are driving use all over the world, no longer just in the wealthiest societies.
Other surveys conducted in the United States by the Pew Foundation reported that over 70 percent of residents thought the information they obtained was accurate and trustworthy in 2012, just before the furor over misinformation became a major issue of concern in American society expressed by both the politically energized Right and Left, and by students of misinformation and many in the media and in senior government positions. But the types of information people sought were the same as in prior decades.
The problems survey respondents expressed emanated from where fake news or misinformation resided. First, fake news and misinformation were not confined to sources on the Internet; they appeared in books, television programs, magazines, and radio programs, often promulgated by agents operating across multiple digital and paper-based platforms. Information scholars are increasingly turning their attention to this problem, as have Aspray and I, reporting our results in a series of books and papers. However, as he and I have emphasized and documented, this has been a concern and an overt activity since the eighteenth century.
In Fake News Nation (2019) we focused largely on political and industry-centered examples, while in a sequel, From Urban Legends to Political Fact-Checking: Online Scrutiny in America (2019), we began documenting the nation's response to this growing problem. The physical world's battles over politics and such issues as the role of tobacco, oil, and environmental damage had moved to the Internet, but they also represented terrain fought over long before the use of the web. If anything, the role of misinformation has spilled over into myriad issues important to today's citizens: health, vaccines, historical truths, racism, and product endorsements and descriptions, among others. Trusted sources for impartial news and information competed for attention with mischievous purveyors of misinformation and people at large opining on all manner of subjects. These activities disseminating false or misleading information represent a new development of the past decade because of their sheer volume, even though their patterns are becoming increasingly familiar to historians studying earlier decades, even centuries.
But perhaps for historians the most interesting new research interest is the nature of how information changes. To make All the Facts successful, it was enough, and highly revelatory, to document carefully the existence, extent, and use of information across essentially all classes, ethnic and racial groups, and ages, and to present a framework for gaining control over what otherwise were massive collections of organized information. That exercise made it possible to argue that modern society (i.e., since at least the start of the Second Industrial Revolution) had to include the role of information in all manner of activity on any short list of research priorities. During that project, however, it became evident, first, that information itself (or what constituted information) was changing, not simply increasing or becoming more diverse and voluminous. Second, that transformation of information and new bodies of fact were leading to the emergence of new professions and disciplines, along with their social infrastructures, such as professorships, associations, and literature.

Courtesy of IBM archives.
For example, regarding changing information: it became increasingly electrified, beginning with the telegraph in the 1840s and extending to the "signals" that computer scientists and even biologists explore today. There are biologists and other scientists who argue that information is a ubiquitous component of the universe, just as we have accepted that same idea regarding the presence of energy. Intelligence could no longer be limited to the anthropomorphic definitions that humans had embraced, emblematically called artificial intelligence. Trees communicate with each other, as do squirrels and birds, about matters relevant to their daily lives.
Regarding the second point—development of new professions—before the 1870s there was insufficient knowledge about electricity to create the profession of electrician, but by the 1880s it existed, rapidly developed its own body of information and professional practices, and became a licensed trade. In the years that followed, medical disciplines, plumbing, accounting, business management, scientists in all manner of fields, and even later airplane pilots, radio engineers, and astronauts became part of modern society. They all developed their associations, published specialized magazines and journals, held annual conventions and other profession-centered meetings, and so forth. Probably every reader of this essay is a product of that kind of transformation.
Prior to the mid-nineteenth century, most professions had been relatively stable for millennia, as were the percentages of populations engaged in subsistence agriculture, law, religion, warfare, and the tiny cohort of artisans. That reality has been thoroughly documented by economic historians, such as Angus Maddison in his voluminous statistical collections (2005, 2007), who pointed out that national income levels, for example, and increases in both economic productivity and population did not change radically until more and different information began arriving. This was not a coincidence.
Understanding how information transformed, and its effects on society, is a far more important subject to investigate than what went into All the Facts because, like the investigations underway about misinformation, we are reaching into the very heart of how today's societies are shaped and function. The earlier book was needed to establish that there was a great deal more going on than historians could communicate by limiting their studies to the history of books or newspapers, or to the insufficient number of studies done about academic and discipline-centered institutions.

Courtesy of Charles Babbage Institute Archives.
Now we will need to explore more carefully how information changed. I propose that this be initially done by exploring the history of specific academic disciplines and the evolution of their knowledge base. That means understanding and then comparing to other disciplines the role of, for instance, economics, physics, chemistry, biology, history, engineering, computer science, and librarianship. This is a tall order, but essential if one is to understand patterns of emerging collections of information and how they were deployed, even before we can realistically jump to conclusions about their impact. Too often “thought leaders” and “influencers” do that, in the process selling many books and articles but not with the empirical understanding that the topic warrants.
That is one opinion about next steps. Another is that the democratization of information creation and dissemination is more important. The argument holds that professionals and academics are no longer the main generators of information; millions of people are instead. There are two problems with this logic, however. First, such an observation is about today's activities, while historians want to focus on earlier ones, such as information generation prior to the availability of social media. Second, there is a huge debate underway about whether all of today's "information generators" are producing information or misinformation, or are simply opining. As a historian and an avid follower of social media experts, I would argue that the issue has not been authoritatively settled, and so the actions of the experts still endure, facilitated by the fact that they control governments, businesses, and civic organizations.
I am close to completing the first of two books dealing precisely with the issue of how information transformed. It took me 40+ years of studying the history of information to realize that how it changed was perhaps the most important aspect of information's history to understand. That realization had been obscured by the lack of precision in understanding what information existed. We historians approached the topic in too fragmented a way; I am guilty, too, as charged. But that is not to say that the history of information technology—my home sub-discipline of history and work—should be diminished, rather that IT's role is far more important to understand because it is situated in a far larger ecosystem, one that even transcends the activities of human beings.
Bibliography
Aspray, William (2022). Information Issues for Older Americans. Rowman & Littlefield.
Aspray, William and James W. Cortada (2019). From Urban Legends to Political Fact-Checking. Rowman & Littlefield.
Aspray William and Barbara M. Hayes (2011). Everyday Information. MIT Press.
Bakardjieva, Maria (2005). Internet Society: The Internet in Everyday Life. Sage.
Blair, Ann, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton, eds. (2021). Information: A Historical Companion. Princeton.
Chandler, Alfred D., Jr. and James W. Cortada, eds. (2002). A Nation Transformed by Information. Oxford.
Cortada, James W. (2016). All the Facts: A History of Information in the United States Since 1870. Oxford.
Cortada, James W. (2021). Building Blocks of Society. Rowman & Littlefield.
Cortada, James W. (2004-2008). The Digital Hand. Oxford.
Cortada, James W. (2020). Living with Computers. Springer.
Cortada, James W. (2002). Making the Information Society. Financial Times & Prentice Hall.
Cortada, James W. and William Aspray (2019). Fake News Nation. Rowman & Littlefield.
Gorichanaz, Tim (2020). Information Experience in Theory and Design. Emerald Publishing.
Haythornthwaite, Caroline and Barry Wellman, eds. (2002). The Internet in Everyday Life. Wiley-Blackwell.
Maddison, Angus (2007). Contours of the World Economy, 1-2030 AD. Oxford.
Maddison, Angus (2005). Growth and Interaction in the World Economy: The Roots of Modernity.
Ocepek, Melissa G. and William Aspray, eds. (2021). Deciding Where to Live. Rowman & Littlefield.
Zuboff, Shoshana (2019). The Age of Surveillance Capitalism. Public Affairs.
James W. Cortada (February 2022). “What We Are Learning About Popular Uses of Information, The American Experience.” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 19-31.
About the author: James W. Cortada is a Senior Research Fellow at the Charles Babbage Institute, University of Minnesota—Twin Cities. He conducts research on the history of information and computing in business. He is the author of IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). He is currently conducting research on the role of information ecosystems and infrastructures.
Editors' note: This is a republication of an essay (the second one) from a newly launched blog of essays, Blockchain and Society, by CBI Director Jeffrey Yost. As a one-time-only crossover at the launch of the blog and site, Interfaces is republishing an essay Yost wrote on gender inequity and disparity in participation in the development and use of cryptocurrency. This one-time republication is to introduce Interfaces readers to the blog, and its topic is an especially good fit with Interfaces' mission. Please consider also subscribing to the blog: https://blockchainandsociety.com/
Few Women on the Block: Legacy Codes and Gendered Coins
Jeffrey R. Yost
Abstract: Despite major differences in levels of participation in computing and software overall, the decentralized cryptocurrency industry and space is far more skewed with roughly 90 percent men and 10 percent women (computer science overall is around 20 percent women). This article explores the history of gender in computing, gender in access control systems, gender in intrusion detection systems, and the gender culture of Cypherpunks to historically contextualize and seek to better understand contemporary gender disparity and inequities in cryptocurrency.
Given that decentralization is at the core of the design and rhetoric of cryptocurrency projects, the field often highlights, or hints at, small to mid-sized flat organizations and dedication to inclusion. Crypto coin and platform projects' report cards on diversity, however, are uneven. While an overall diversity of BIPOC exists in cryptocurrency, it is quite unequal, as the founding and leadership of Bitcoin (its team; the creator is anonymous) and the top 30 altcoins (alternatives to Bitcoin) is disproportionately made up of white North Americans, Europeans, and Australians, along with East Asians. With gender, inequalities are especially prevalent, in participation and resources. A half dozen or so surveys I found, spanning the past few years, suggest (in composite) that women's participation in the crypto workforce is at slightly less than 10 percent. There are few women on the block, far fewer percentagewise in cryptocurrency than the already quite gender-skewed low ratios in computing and software. On the adoption side, twice as many men own cryptocurrency as women.
This essay, on women in cryptocurrency, concentrates on gender inequities, as well as intersectionality. It discusses early research in this area, standout women leaders, and organizational efforts to address gender imbalances and biases. It places this discussion in larger historical contexts, including women in computing, women in security, women in cryptography, and women in, or funded by, venture capital. It also highlights the rare instances of a female CEO of cryptocurrency. Achieving greater gender balance is a critically important ethical issue. It also is good business. Many studies show corporations with gender balance on boards and women in top executive positions outperform. My essay posits that historical, terminological, spatial, and cultural partitions and biases block gender inclusion and amplify inequality in cryptocurrency development, maintenance, and use.
Major Gender Disparities in Cryptocurrency
A major study by international news organization Quartz surveyed the 378 cryptocurrency projects between 2012 and 2018 that received venture capital funding (Hao, 2018). Many cryptocurrency projects do not have this luxury or take this path, as they raise funds from friends and family, bootstrap, or rely on other means at the start. Venture capital funded projects tend to have greater resources and key connections to grow. Most of the largest coin projects have taken on venture capital support at some point in their young histories. It is self-reinforcing: rich projects tend to grow richer through R&D, marketing, and the momentum of network effects, per Metcalfe's Law (the value of a network is proportional to the square of its number of users), while under-resourced coin projects often cease within several years as capitalizations descend to near $0.
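As a minimal worked illustration of the network-effect arithmetic invoked above (a simplified textbook statement of Metcalfe's Law, with an assumed proportionality constant k; it is not a figure drawn from the Quartz study):

V(n) = k n^2, so that V(2n) / V(n) = k(2n)^2 / (k n^2) = 4.

In other words, doubling a platform's user base roughly quadruples its modeled value, which is one way to see why well-capitalized projects with growing user bases tend to pull further ahead of under-resourced ones.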
Of these 378 venture-funded projects, only 8.5 percent had a woman founder or co-founder. Venture capital (VC) is dominated by men, about 90 percent, and in terms of partners and senior positions at major VC firms, disparities are even starker (as reported in NBER Digest 9/2017). The venture domain is also very heavily biased toward funding the projects of white male entrepreneurs, and this is even more skewed in terms of capital offered or deployed. To illustrate, a study by the software and finance data firm PitchBook found that in 2018 women founders received just 2.3 percent of total venture capital funding raised in the crypto universe—reported on by Forbes (Gross, 2019).
In the information technology (IT) field broadly, roughly 18 percent of projects have a woman leader or co-leader. Even with this quite low percentage in IT, crypto is disturbingly far lower; in fact, it is well less than half of that level.
On the adoption and use side, and unlike with BIPOC, where adoption nearly doubles that of whites in the US (participation rate of owners, at any level, so this is not crypto wealth), women holders of crypto are only half that of men. Men are two-thirds of crypto holders/users and women are just one-third.
Looking Backward at Backward, Gendered Computing Cultures

Computing is a field that has had substantial and important technical contributions by women from the start. This dates to the women who programmed the ENIAC, the first meaningful electronic digital computer, in the mid-1940s to early 1950s. At the same time, the field and the industry have been held back by discrimination in hiring, and there have been heavily male-gendered environments from the beginning. This has been true in the U.S., as documented in the tremendous scholarship of Janet Abbate (Abbate, 2012) and others, and in the United Kingdom, as documented in the masterful work of Mar Hicks (Hicks, 2017).
Gender in IT remains substantially understudied, especially in some geographies. There is also a dearth of literature regarding some industry segments, corporations, and organizations on the production side, as well as regarding much of the maintenance and use domains. Discriminatory practices against women and transgender people have been and remain pronounced in the military, national and local governments, industry, national laboratories, nonprofits, universities, and beyond.
Thomas Misa's pathbreaking, deeply researched work, published in Communications of the ACM and part of a larger book project, indicates there was not a golden age of women's participation in the early years, but rather a continuously evident yet low and range-bound participation (between the high single digits and the upper teens, percentagewise) from the mid-1950s to the mid-1970s (Misa, 2021). His research draws on the best available data for the early years, user groups (and for the above I am giving extra weight to IBM's user group SHARE, Inc. in combining Misa's totals for groups, since it accounted for 60-plus percent of the industry and its nearest competitor was always under 15 percent). Following this two-decade span was a gradual upward trend that ramped up in the 1980s, when late in the decade women's participation in computer science education and the workforce peaked at 37 to 38 percent. In the 1990s it fell sharply, as Misa and other authors explored in his important edited volume, Gender Codes (2010).
Participation, environment, culture, and advancement are all important. My own work has helped document gender inequality in promotion to leadership roles in software and services companies in the US, especially pre-2000 (Yost, 2017). In recent years and decades, women's participation as computer science majors at US universities has hovered around 20 percent. Why the huge drop and a recovery to only about half the former peak? The widespread adoption of PCs, the gendering of early PCs, gendered gaming (especially popular shooter games), the rise of male geek culture, and environments inhospitable to women are among the likely factors, as the publications of Abbate, Hicks, Misa, and others richly discuss. More attractive opportunities in law, medicine, and business outside IT are likely factors too, as women's participation in those areas rose while computing participation fell. Far from being free of discrimination, those professional areas may nonetheless have had relatively less of it.
Gender in Different Computer Security Environments
In co-leading a major computer security history project for the National Science Foundation (NSF Award Number: 1116862) a half decade ago (and I am thrilled that, just yesterday, we received another multiyear grant from NSF, on privacy and security, a CBI project I am co-leading with G. Con Diaz), I published "The March of IDES," a history of intrusion detection expert systems (Yost, 2016). In it I highlighted gender in one important area of computer security, intrusion detection. Early intrusion detection involved manually printing out logs and painstakingly reviewing the printouts, as the eyes of the security officers, auditors, and systems administrators who did this work glazed over. As computer use grew, fan-folded printouts rose in multiple stacks toward the ceiling at many computer centers, and the task soon became overwhelming. In the 1980s, automated systems were developed to flag anomalies for selective human review, and pioneering work first applied the artificial intelligence of expert systems to help meet the growing challenge (Yost, 2016).

The National Security Agency had a very important pioneering research program in the 1980s and 1990s to fund outside intrusion detection work, called Computer Misuse and Anomaly Detection, or CMAD. This program was led by Rebecca Bace. The dollar amounts were not huge (they did not need to be), and Bace, with great energy and skill, worked with the community to get much pioneering, impactful R&D off the ground at universities, national labs, and nonprofit research corporations like SRI. In conducting oral histories with Dorothy Denning, Teresa Lunt, and Becky Bace (the full texts of these published interviews are available at the CBI website/UL Digital Conservancy), I got a sense of the truly insightful scientific and managerial leadership of all three (Yost, 2016).
The accelerating, sometimes playful, but also quite malicious and dangerous hacking of the 1970s and 1980s (for those Gen Xers and boomers reading this, remember War Games, and the non-fictional scares written about in newspapers?) became a serious problem. The US government was often a core target of individual and state-sponsored hackers during the Cold War. This fostered a need (and federal contracts) for the field of intrusion detection systems. Over time, these funds and contracts increasingly complemented the modest grants, often under $100,000, provided by Bace's NSA CMAD program.
The result was essentially a new computer science specialty opening at universities in the 1980s and 1990s: intrusion detection, a subset of computer security. There were some standout male scientists as well, but at the origin, and for years to follow, women computer scientists were disproportionately the core intellectual and project leaders. Scientists such as Denning, Lunt, and Bace, along with Kathleen Jackson (NADIR at Los Alamos) and other women, headed the top early projects and provided the insightful technical and managerial leadership that allowed this computer security and computer science specialty to thrive (Yost, 2016).

Another computer security area I researched for NSF was access control systems and standards. This work was all about knowing how operating systems worked, secure kernels, and the like. It was by far the largest computer security field in terms of participants, papers, funding, and standard-setting efforts, and it was overwhelmingly male. Operating systems (OS) was an established research area before access control became a key domain within it. Access control emerged as an area within the larger OS domain in response to breaches in the earliest time-sharing systems in government and at universities. MIT's pioneering early-1960s Compatible Time-Sharing System (CTSS) had little security; with its successor of the late 1960s and beyond, MULTICS, project leader Fernando Corbato and other top MIT computer scientists such as Jerome Saltzer made security design central to the effort.
Operating systems research and development, in academia, industry, the DoD, the DOE, and elsewhere, was overwhelmingly male and very well funded. It followed that access control became an overwhelmingly male specialty of computer security and received strong federal research program and contract support.
Reflecting on this prior scholarship, with women as the key leaders of the new (1980s) intrusion detection area and men as the leaders of many of the most important operating system and access control projects, I have been pondering whether it provides any context or clues as to why, to date, the founders of cryptocurrency projects have largely been men. At the very least, I think it is suggestive regarding established and new specialties, the connections between them, historical trajectories, and gendered opportunities and participation. A wholly new area, emerging alongside a dominant, more visible, and better-funded one, can at times offer greater opportunities to newcomers, including women.
Following from this, I have begun to consider a related question: to what extent is cryptocurrency a new area offering new demographics and dynamics, and to what extent is it a continuation of the evolving field of cryptography? And how was it influenced by older cryptography and its renaissance in an impactful new form and direction?
In the mid-1970s and 1980s, with the emergence and rapid growth of a new form of cryptography, public key cryptography developed a strong intellectual and institutional foundation, thanks especially to the work of six men who would later win the Turing Award: early crypto pioneers Whitfield Diffie and Martin Hellman (authors of the landmark 1976 paper "New Directions…"); Ron Rivest, Adi Shamir, and Leonard Adleman, the three from MIT known as RSA; and Silvio Micali, also of MIT. Rivest, Shamir, and Adleman, in addition to creating the RSA algorithm, would start the company RSA Data Security, which launched a pivotal event, the RSA Conference, and spun off an important part of its business, authentication, as Verisign, Inc. After some initial managerial and financial stumbles, the highly skilled James Bidzos would successfully lead RSA Data Security and, as Chair of the Board, Verisign.
In addition to his Turing Award, Micali had earlier won the Gödel Prize. In 2017, Micali founded Algorand, a "Proof-of-Stake" altcoin project now valued at more than $10 billion, and alongside running it he remains a Professor of Computer Science at MIT. Algorand offers much in being environmentally sound (requiring little energy to validate transactions), scalable, and strong on security.
Cryptocurrency: Both a New and an Older Space
The excellent book by Finn Brunton, Digital Cash (2019), and the other articles and books addressing the cypherpunks (the cryptographic activists focused on privacy who sought to retake control through programming and systems) overwhelmingly feature male actors. Beyond Diffie and Hellman, appropriately revered for inventing public key cryptography (in the open community), most of the high-profile cypherpunks are male: Timothy May, Eric Hughes, John Gilmore, and others.
Yet it was one of the co-founders, Judith Milhon, known as "St. Jude," who coined the term cypherpunk. The cypherpunks, whom journalist Steven Levy referred to as the "Code Rebels" in his book Crypto, were inspired in part by the work of Diffie and Hellman. The response of the National Security Agency (NSA) was to try to prevent private communications it could not surveil and to thwart or restrict the development and proliferation of crypto it could not easily break. This included its work with IBM to keep the key length of the Data Encryption Standard (DES) at a lower threshold, making it subject to the "brute force" of NSA's unparalleled computing power. Further, it is widely believed that NSA also worked to place a back door in IBM's DES, code containing a concealed and secret way into the cryptosystem, to enable surveillance of the public.
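To give a back-of-the-envelope sense of why key length mattered in the DES dispute (an illustration only, not a claim about NSA's actual capabilities): DES was standardized with a 56-bit key, so an exhaustive "brute force" search has to try at most

$$2^{56} \approx 7.2 \times 10^{16} \text{ keys},$$

and each additional bit of key length doubles that search space (a 64-bit key would have meant $2^{64} \approx 1.8 \times 10^{19}$ keys, 256 times as many). Keeping the key short thus kept exhaustive search within reach of an organization with exceptional computing resources while leaving it impractical for most others.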
St. Jude: A Creative Force Among Early Cypherpunks
Born in Washington, DC in 1939, St. Jude was a self-taught programmer, hacker, activist, and writer. As a young adult she lived in Cleveland and was part of its Beat scene. She volunteered in organizing efforts and took part in the Selma to Montgomery Civil Rights March in 1965, for which she was arrested and jailed. Her mug shot is a commonly published photo of her, symbolic of her lifelong commitment to civil rights. She moved from the East Coast to San Francisco in 1968, embracing the counterculture movement of the Bay Area. In the late 1960s she was a programmer for the Berkeley Computer Company, an outgrowth of the University of California, Berkeley's famed time-sharing Project Genie.

Active in Computer Professionals for Social Responsibility (CBI has the records of this important group), she was an influential voice in the organization. She was also one of the leaders of Project One's Resource One, the first public computer network bulletin board in the US, located in the San Francisco area. She was known for her strong belief that networked computer access should be a right, not a privilege. She was an advocate for women technologists and acutely aware of the relative lack of women "hackers" (at the time the term meant a skilled programmer, not its later meaning associated with malicious attacks).
St. Jude was a widely recognized feminist in computing and activism circles. She was among the founders of the "code rebels," and her giving the group the name that stuck, cypherpunks, suggests she had a voice in this male space (her writing and interviews strongly suggest this as well), though this was not necessarily, and probably not, indicative of a general acceptance of women in the group. Some of St. Jude's views were at odds with academic feminism and gender studies but may have fit better with the cypherpunks' ethos. She abhorred the political correctness she saw in academic communities and among educational and political elites. She believed technology would fix many problems, including the social problems of gender bias and discrimination. "Girls need modems" was her answer, an oft-repeated motto and rallying cry; it was what she felt was needed to level the playing field (Cross, 1995).
The lack of women among the cypherpunks, and St. Jude's great frustration that more women did not adopt her active hacker approach and ethic, likely point to a dominant male and biased culture that opened only to someone with the exceptional talent, creativity, and interactive style she possessed.
St. Jude became a senior editor and writer at Mondo 2000, a predecessor publication whose style of writing about information technology Wired drew upon. She was also lead author (with co-authors R.U. Sirius and Bart Nagel, Random House, 1995) of The Cyberpunk Handbook: The Real Cyberpunk Fakebook (a subtitle a bit prophetic, if unintentionally so, given the later-founded Facebook and its profiteering off fake news), and along with her journalism she wrote science fiction. Judith "St. Jude" Milhon died of cancer in 2003.

There is definitely a need for more historical research on gender and the cypherpunks, as well as on the sociology of gender in recent cryptocurrency projects, related intermediaries, and investors and users. Rudimentary contours can nonetheless be gently and lightly sketched from what is known. The names from the cypherpunks mailing list that appear in articles and in the handful of books addressing the topic are about 90 percent male. At the start, St. Jude was the sole woman in this core group. If limited to those directly interested in and investigating possibilities for digital currencies before the advent of Satoshi Nakamoto's Bitcoin in 2008, the group was even more male dominated.
As such, women role models were very few in early public key efforts and, more broadly, among the code rebels or cypherpunks overall. The cypherpunks have deep connections to Bitcoin, and to other early coins as well. The young crypto entrepreneurs and activists of recent years and of today were of course never part of the group, but they often developed an interest in it, motivated by its past activity and by reverence for Tim May, Eric Hughes, John Gilmore, and others. This perhaps led to fewer opportunities perceived to be, or actually, open to women, and likely to less recognition and consideration among women of pursuing this space.
Of the two exceptions, women in the upper echelons of cryptocurrency, one came from an equally talented and active wife-and-husband team (the Breitmans). The other is a truly exceptional individual, possibly deserving the term genius, who, like Vitalik Buterin (Ethereum's lead founder), achieved amazing things at a young age, was exposed to the potential need for crypto, and was driven by the goal of socially impactful career success.

Tezos Co-Founder Kathleen Breitman
There are more than 14,000 altcoins. The top 30 are currently capitalized at $4 billion or more (the value of circulating coins), while those not in the top 200 generally are below $40 million in capitalization and in a precarious spot if they do not rise at least five-fold in the coming years. Many in the investment community have pejoratively labeled lesser-capitalized altcoins (and, for some Bitcoin enthusiasts, all altcoins) as "sh*t coins." The cryptocurrency industry has given rise to a growing cadre of specialized trade and investment journalists, following Ethereum founder Vitalik Buterin's own pre-Ethereum pursuit of coin journalism in creating Bitcoin Magazine. These include journalists, analysts, and evangelists (often all wrapped into one) at e-magazines such as The Daily HODL and Coin Telegraph, two of the more respected larger publications among many others. They write mainly on the top 50 coins, which is what most of the investment community cares about, and thus write very heavily about men, a reinforcing mechanism hindering perceived and real opportunities for women.
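For readers new to the jargon, "capitalization (value of circulating coins)" here means the coin's current price multiplied by its circulating supply; a simple illustration with hypothetical round numbers (not figures for any particular coin):

$$\text{market capitalization} = \text{price} \times \text{circulating supply}, \qquad \$5 \times 800{,}000{,}000 \text{ coins} = \$4 \text{ billion}.$$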
Among the top 30 coins, only two have a woman founder or principal co-founder, none has a sole woman founder or an all-woman leadership team, and many are all male at the top. A few coins have longer founders' lists, reaching the upper single digits. The two women principal co-founders of major altcoins are Kathleen Breitman of Tezos and Joyce Kim of Stellar Lumens. Tezos stands at $4 billion in capitalization and ranks 28th among altcoins; Stellar Lumens is at $4.8 billion and ranks 22nd.
The "Proof-of-Stake" coin project Tezos was co-founded by Kathleen Breitman and her husband Arthur Breitman in 2018, alongside a Tezos foundation created by Johann Gevers. Kathleen Breitman studied at Cornell University before joining a hedge fund and working as a consultant; Arthur Breitman is a computer scientist who worked in quantitative finance prior to Tezos. A dispute with the foundation and Gevers drew the Breitmans into a lawsuit that delayed the launch and hurt the project; ultimately a payout settled the matter. Kathleen Breitman has stated that she has been underestimated in the industry, as some assume her husband is the real creator, when in fact they very much co-created Tezos, technically and organizationally.
Stellar Lumens Co-Founder Joyce Kim
To say Joyce Kim's career is impressive is an understatement; stellar is in fact quite fitting. Kim, a second-generation Korean American, grew up in New York City, attended the High School for the Humanities, and graduated from Cornell University at age 19. She followed this with graduate school at Harvard University and law school at Columbia University. She became a corporate M&A attorney while also doing pro bono work for Sanctuary for Families and for the Innocence Project. Back in high school she had witnessed the trouble and expense lower-income people around the world face in sending money to their families, something likely also evident in her work at Sanctuary for Families.
After success in co-founding Simplehoney, a mobile ecommerce firm, as well as founding and serving as CEO of a Korean entertainment site, she became one of the rare (percentage-wise) women in venture capital, working at Freestyle Capital. Focusing on the power of social capital, she partnered in 2014 with Ripple founder Jed McCaleb to found Stellar Lumens, an open-source, blockchain-based coin, network, and platform project of the nonprofit Stellar Development Foundation.
Kim's motivation and vision for Stellar were driven by the fact that 35 percent of adult women globally (per World Bank statistics) do not have a bank account, despite many of them saving regularly. As a result, they have trouble protecting, sending, and receiving funds, paying bills, and helping family. Stellar as a platform and network allows people to send funds, at low cost and in low sums, as easily as sending a DM or email. With 6.3 billion people in the world holding smartphones, and perhaps as many as 20 percent of them without a bank account, Stellar Lumens addresses a critical problem and serves a great societal need. The coin Celo is also in this very important area, making a positive difference in the world. Stellar Lumens (and Celo) change lives and empower lower-income people, especially women, as women are less likely than men to have bank accounts due to discrimination and lesser resources. As Kim told Fast Company in an interview shortly after the founding, with Stellar she "found her true north" (Dishman, 2015). In addition to Stellar Lumens, Kim recently served as a Director's Fellow at the famed MIT Media Lab.
In addition to the prestigious MIT fellowship, Kim has moved on from the role of Executive Director of Stellar and the day-to-day of the coin, and she is now having a social and financial impact in the crypto venture capital arena, an area that could benefit from more women, as Managing Partner at SparkChain Capital.
Mining Deeper: Guapcoin’s Tavonia Evans and the African Diaspora Community
At coins not in the top 30, 50, or 100 in capitalization, project teams work toward and hope that their technology and mission will one day carry them to much higher levels. There are people and projects behind the coins, and that is sometimes disrespectfully forgotten when investors or others refer to coins and projects in derogatory terms.
I wanted to research a coin in the middle third of the roughly 14,000 coins out there by current capitalization and was deeply moved by learning about Guapcoin and its tremendous mission. It was founded in 2017 by African American data scientist Tavonia Evans. Evans, a mother of eight, had earlier founded a peer-to-peer platform company but was unsuccessful at getting venture funding. Venture capital is not a level playing field, and far less than one percent of venture funding goes to businesses led by African American women. At this intersection, African American women, societal bias in finance is particularly pronounced.
The inability to get funding for that business led her to move on and inspired her Guapcoin project, a cryptocurrency focused on addressing "the economic and financial concerns of the Global African Diaspora community." Evans's vision for Guapcoin goes beyond its being a means of exchange for the Global African Diaspora community, and for "Buying Black"; it is also a property protection mechanism that combats gentrification and documents all forms of property ownership (from real estate, to copyright, to music licenses) so that "the Black and Brown community will have its wealth protected by a transparent, immutable blockchain."
In 2019, Evans and Guapcoin founded the Guap Foundation to permanently ensure that the mission of the coin project is carried out. Many altcoins have associated foundations both to further the mission and to protect its integrity for generations to come (guapcoin.org).
It is through amazing, socially oriented, and green projects like Guapcoin, Stellar Lumens, and Celo that I realized my initial negative perspective on cryptocurrency from several years back, rooted in my very critical views of the environmental impact of Bitcoin, was sorely misguided for many altcoins launched in 2016 and later, and for Ethereum (launched in 2015), which is converting to Proof-of-Stake as a consensus model to become green.

“Meetups” and Standout Early Scholarship on Gender and Cryptocurrency
There are a mere handful of published scholarly studies to date examining gender and cryptocurrency. One stood out to me as especially compelling in its creative methodology, insights, and importance. Simon Fraser University's Philippa R. Adams, Julie Frizzo-Barker, Betty B. Ackah, and Peter A. Chow-White designed a project in which they engaged in participant observation and interaction at more than a half dozen "Meetup" events that were primarily, or at least in part, marketed to women, often to educate, encourage, or address gender disparity in cryptocurrency. All of these were in the Vancouver, British Columbia, metropolitan area.
Adams and her co-authors do a wonderful job of interpreting, analyzing, and eloquently conveying the meaning of these events. Some meetups were well designed and executed to support and empower women in this new industry and space. Others were far less effective, succumbing to the challenges of "trying to support adoption of a new technology," or ended up presenting more resistance than support. I urge you to read this excellent work of scholarship (Adams et al., 2019); the chapter appears in the recommended-readings volume edited by Massimo Ragnedda and Giuseppe Destefanis (2019), which is an excellent book overall and one of the first quality social science books on the emerging Web 3.0.
Educational and Empowerment Organizations and Looking Forward
In addition to meetup events that are local in origin, a growing number of nationwide education and advocacy organizations by and for women in cryptocurrency have emerged. Some foster local meetup events; others run other supportive programs.
In Brooklyn, New York, Maggie Love founded SheFi.org, seeing blockchain as a powerful tool for more inclusive and equitable financial tools and systems. It engages in education to advance understanding and opportunities for women in blockchain and decentralized finance.
The Global Women in Blockchain Foundation is an international umbrella organization without a central physical headquarters (in the spirit of the technology and decentralization). It is designed to accelerate women's leadership roles in blockchain education and technology. The websites for these two organizations can be found on this site in the list of organizations.
Efforts to reduce the tremendous gender gap in cryptocurrency development projects, and especially in founder roles and leadership posts, are extremely important, both ethically and for the creativity, success, and potential of this field. Further, blockchain and its crypto applications are the heart of Web 3.0, the future of digital technology. If the field remains 90 percent male, it will greatly hurt IT by further reducing women's overall participation, given blockchain's growing share of our digital world.
There is a large gender gap not only in computer science but also in finance, hedge funds, and venture capital, all of which accentuate imbalances in power and opportunity in favor of men in crypto. The VC gender gap is especially problematic, as it reinforces hurdles for women and BIPOC entrepreneurs, independently and especially at these important intersections, for both small companies and cryptocurrency projects.
Joyce Kim's leadership at SparkChain, funding crypto, is refreshing. The firm's staff is notably diverse in terms of both gender and race and ethnicity. Having more women in VC leadership, and at VCs with a crypto focus, is incredibly important. It is also critical that education in both high school and college does not, directly or indirectly and inadvertently, create gendered spaces that favor men or are inhospitable to women.
The excellent Simon Fraser University study of cryptocurrency, and other studies of finance and hedge funds, have identified jargon and terminology as barriers to entry. In crypto the barriers are many, from outright gender bias to clubhouses to other restrictive spaces, but terminology and cultures of exclusion are especially powerful in hindering inclusion, both intentionally and unintentionally.
One motivation for this blog and site, and especially the site's inclusion of a continually expanded historical glossary of terms and a Cryptocurrency Historical Timeline, is to contribute in a small way to education and to first steps toward removing barriers to inclusion rooted in the terminology and cultural elements important to communication in this area. Anyone interested in this area and devoting time to it will soon move far beyond these resources, but they might aid understanding a bit initially; at least that is a goal. I also see them as tools that can greatly benefit from the community.
I am continually learning from readings, correspondence, and meetings with others in this space, and I have already added to the readings based on useful comments and suggestions people sent me after my first post last week. I hope these sources grow as community-used and community-influenced tools, and so I very much encourage and welcome feedback. I will take the timeline and glossary through additions and tweaks, and thus many editions or iterations, but for now they get at some of the technical and cultural terminology and basics. (Why does the mantra of HODL, Hold On for Dear Life, keep coming up as crypto coins currently plummet? The glossary provides historical context.)
[Republished with only slight adjustment from Blockchain and Society: Political Economy of Crypto (A Blog), January 25, 2022, http://blockchainandsociety.com]
[Please consider subscribing to the free blog at the URL above]
Bibliography
Abbate, Janet (2012). Recoding Gender: Women’s Changing Participation in Computing, MIT Press.
Adams, Philippa R., Julie Frizzo-Barker, Betty B. Ackah, and Peter A. Chow-White (2019). In Ragnedda, Massimo and Giuseppe Destefanis, eds. Blockchain and Web 3.0: Social, Economic, and Technological Challenges, Routledge.
Brunton, Finn (2019). Digital Cash: The Unknown History of the Anarchists, Utopians, and Technologists Who Created Cryptocurrency, Princeton University Press.
Celo Website. www.celo.org
Cross, Rosie (1995). "Modem Grrrl." Interview with Judith "St. Jude" Milhon. Wired, February 1. www.wired.com/1995/02/st.-jude/
Dishman, Lydia. (2015). “The Woman Changing How Money Moves Around The World.” Fast Company February 6.
Hao, Karen. (2018). “Women in Crypto Are Reluctant to Admit There Are Very Few Women in Crypto.” Quartz (qz.com). May 5, 2018. https://www.qz.com
Gross, Elana Lyn (2019). "How to Close the Venture Capital Gender Gap Faster." Forbes, May 20.
Guapcoin Website. www.guapcoin.org
Hicks, Marie (2017). Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, MIT Press.
Klemens, Sam. (2021). “10 Most Influential People in Crypto: Kathleen Breitman.” Exodus. August 3.
Misa, Thomas J., Ed. (2010). Gender Codes: Why Women are Leaving Computing, Wiley-IEEE.
Misa, Thomas J. (2021). “Dynamics of Gender Bias in Computing.” Communications of the ACM 64: 6, 76-83.
St. Jude, R.U. Sirius, Bart Nagel (1995). The Cyberpunk Handbook, Random House.
Yost, Jeffrey R. (2015). "The Origin and Early History of the Computer Security Software Industry." IEEE Annals of the History of Computing, 37:2, April-June, 46-58.
Yost, Jeffrey R. (2016). “The March of IDES: The Advent and Early History of the Intrusion Detection Expert Systems.” IEEE Annals of the History of Computing, 38:4, October-December, 42-54.
Yost, Jeffrey R. (2017). Making IT Work: A History of the Computer Services Industry, MIT Press.
Jeffrey R. Yost (January 2022). “Few Women on the Block: Legacy Codes and Gendered Coins,” Interfaces: Essays and Reviews on Computing and Culture Vol. 3, Charles Babbage Institute, University of Minnesota, 1-18.
About the Author: Jeffrey R. Yost is CBI Director and HSTM Research Professor. He is Co-Editor of the Studies in Computing and Culture book series with Johns Hopkins University Press and PI of the new CBI NSF grant Mining a Useful Past: Perspectives, Paradoxes and Possibilities in Security and Privacy. He has published six books and dozens of articles, has led or co-led ten sponsored projects (for NSF, Sloan, DOE, ACM, IBM, etc.), and has conducted hundreds of oral histories. He serves on committees for NAE and ACM and on two journal editorial boards.